AI and Local Government: How the AI Act is Shaping the Future of Public Services

AI conversational assistants will soon be able to answer a wide range of questions from public service users—whether it’s to request a change in garbage truck schedules, find out how many times a day a bus passes by the school, or learn which municipalities in France use biometric cameras. However, for these systems to function effectively, local authorities must ensure that the correct configurations are in place from the start and that regular monitoring is conducted. The successful adoption of AI by local governments will also require thoughtful decision-making and comprehensive training to ensure they choose the right tools and promote beneficial uses while avoiding potential pitfalls. The AI Act regulation, which entered into force on August 1, 2024, provides the legal framework for this adoption.

 

Thaima Samman, Partner, and Anca Caruntu, Public Affairs Director at Samman, published an article on this matter in the French magazine for local authorities Zepros in September, summarizing the AI Act’s main provisions of interest to local governments: 

 

The Regulatory Framework Following the Adoption of the AI Act

 

The EU has established a comprehensive regulatory framework for AI through the adoption of the AI Act regulation, effective from August 1, 2024, with two primary goals: 

  • Regulating the introduction, deployment, and use of AI systems based on their risks to health, safety, fundamental rights, the environment, democracy, and the rule of law. 
  • Imposing rules for general-purpose AI (GPAI) models, with stricter rules for GPAI models posing “systemic” risks. 

 

The AI Act takes a risk-based approach with:  

  • Prohibited AI Practices: These include social scoring, mass facial recognition, manipulation of vulnerable groups, and prediction of criminal behavior. Some exceptions apply for medical or security purposes.  
  • High-risk AI Practices: These are authorized but highly regulated, particularly for product-safety-related systems (e.g., toys, cars, healthcare) and systems listed in Annex III. Local authorities will have to exercise caution when using high-risk AI, particularly in areas such as access to essential public services and social benefits, biometric AI systems, AI in education and vocational training, and AI in employment, workforce management, and recruitment.  
  • Limited-risk AI Practices: These are subject to lighter regulation, mainly transparency obligations, such as for public authorities using chatbots or generative AI. 


A Division of Obligations Between Providers and Deployers That Raises Questions

 

The AI Act outlines obligations for providers, importers, distributors, and deployers (those using or modifying AI systems). When a public authority uses a high-risk AI system, it is classified as the deployer and must: 

  • Ensure staff competency and training for operating the AI system;  
  • Implement appropriate technical and organizational measures to ensure proper system use;  
  • Provide necessary skills, training, and support to those overseeing the AI system;  
  • Maintain automated logs when exercising control over the high-risk AI system.  

Additional obligations specific to public authority deployers include verifying that the high-risk AI system is registered in the EU database and conducting data protection and fundamental rights impact assessments. 

 

Depending on the context of use, a deployer may be reclassified as a provider, significantly increasing its responsibilities. While these obligations may seem daunting, understanding and implementing the appropriate procedures can be manageable and ultimately beneficial for both elected officials and their constituents.

 

Article on p. 30

 