Preparing for Compliance: EU AI Act Privacy Essentials
- Advisori
- Nov 14, 2024
- 6 min read
Updated: Mar 12

The Purpose of the AI Act.
The European Commission (EC) first proposed the European Artificial Intelligence Act (AI Act) in April 2021, and the European Parliament and the Council reached political agreement on it in December 2023. It officially came into effect on August 1, 2024. Much like the General Data Protection Regulation (GDPR), the AI Act is anticipated to establish a global benchmark, likely becoming the “gold standard” for AI regulation worldwide.
In essence, the AI Act aims to provide a comprehensive framework that governs the development, deployment, and utilization of AI technologies by establishing the following standards:
Safeguarding Fundamental Rights and Values
At its core, the AI Act is designed to protect individuals’ fundamental rights and freedoms. By setting stringent requirements for AI systems, particularly those deemed high-risk, the AI Act prohibits practices that could infringe on privacy, non-discrimination, and other essential human rights. This commitment to human rights reflects the EU’s dedication to upholding its foundational values in the face of technological advancement.
Promoting Trustworthy and Transparent AI
Trust is a critical component of AI adoption. The AI Act seeks to foster trust in AI technologies by mandating transparency and accountability. The AI Act empowers users to make informed decisions and is intended to build public confidence in AI applications by requiring clear communication about AI interactions and ensuring that AI systems are subject to rigorous oversight.
Encouraging Innovation and Competitiveness
While the AI Act imposes regulatory requirements, it also aims to create an environment conducive to innovation. The AI Act reduces business uncertainty and encourages investment in AI research and development by providing a clear and harmonized regulatory framework. This balance between regulation and innovation is designed to enhance the EU’s competitiveness in the global AI landscape.
Ensuring Safety and Reliability
The safety and reliability of AI systems are paramount concerns addressed by the AI Act. By implementing a risk-based classification system, the AI Act ensures that AI systems are subject to appropriate levels of scrutiny based on their potential impact. High-risk AI systems, in particular, must adhere to rigorous safety standards, minimizing the likelihood of harm to individuals and society.
Facilitating a Single Market for AI
The AI Act aims to harmonize AI regulations across member states, facilitating the creation of a single market for AI technologies. This harmonization reduces barriers to entry for businesses operating across borders and ensures a consistent regulatory environment. By fostering a unified market, the AI Act supports the EU’s goal of becoming a global leader in AI.
Key Players under the EU AI Act.
The EU AI Act provides several definitions that are crucial for understanding and applying its provisions, including:
Provider: A provider is any natural or legal person, public authority, agency, or other body that develops an AI system or has an AI system developed and places it on the market or puts it into service under its name or trademark.
Deployer: A deployer is an entity that integrates AI systems into its operations and uses such systems under its authority.
Distributor: A distributor is an entity, other than the provider or the importer, that makes an AI system available on the Union market.
Prohibited AI Practices
The AI Act delineates specific AI practices that are prohibited due to their potential to cause significant harm or infringe on fundamental rights. These prohibitions are designed to protect individuals and groups from manipulative, exploitative, or unjust AI systems such as:
Manipulative Techniques: AI systems that use subliminal or deceptive techniques to distort behavior and impact decision-making, leading to significant harm, are banned.
Exploitation of Vulnerabilities: AI systems that exploit vulnerabilities related to age, disability, or social/economic situations to distort behavior and cause harm are prohibited.
Social Scoring: AI systems that evaluate or classify individuals based on social behavior or characteristics, leading to unjustified or disproportionate treatment, are not allowed.
Predictive Policing: AI systems used solely for predicting criminal behavior based on profiling or personality traits are banned, except when supporting human assessments based on objective facts.
Facial Recognition Databases: The creation or expansion of facial recognition databases through untargeted scraping of images from the internet or CCTV footage is prohibited.
Emotion Recognition in Sensitive Areas: AI systems that infer emotions in workplaces or educational institutions are banned, except for medical or safety purposes.
Biometric Categorization: AI systems categorizing individuals based on biometric data to deduce sensitive attributes like race or political beliefs are prohibited, with exceptions for lawful data processing in law enforcement.
Real-Time Biometric Identification: The use of real-time remote biometric identification in public spaces for law enforcement is prohibited, except under strict conditions for specific objectives like locating missing persons or preventing imminent threats.
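Because every prohibited practice is a hard stop rather than a risk to be weighed, an internal compliance workflow can treat the list above as a simple screening gate. The sketch below is purely illustrative; the enum labels and the `screen_use_case` helper are assumptions for this example, not terminology from the Act, and the Act's narrow exceptions (such as medical or safety uses of emotion recognition) are not modeled.

```python
from enum import Enum, auto

class ProhibitedPractice(Enum):
    """Illustrative labels for the practices banned under the AI Act."""
    MANIPULATIVE_TECHNIQUES = auto()
    EXPLOITATION_OF_VULNERABILITIES = auto()
    SOCIAL_SCORING = auto()
    PREDICTIVE_POLICING = auto()
    UNTARGETED_FACE_SCRAPING = auto()
    EMOTION_RECOGNITION_WORK_EDU = auto()
    BIOMETRIC_CATEGORIZATION_SENSITIVE = auto()
    REALTIME_REMOTE_BIOMETRIC_ID = auto()

def screen_use_case(flags: set) -> tuple:
    """Return (compliant, findings) for a proposed AI use case.

    Any flagged practice fails the screen outright; exceptions under the
    Act would need a documented legal basis and are not modeled here.
    """
    findings = [f"Prohibited practice detected: {p.name}" for p in flags]
    return (not findings, findings)

# Example: a use case flagged as social scoring fails the screen.
ok, findings = screen_use_case({ProhibitedPractice.SOCIAL_SCORING})
```

The point of the gate design is that prohibited practices are never balanced against benefits: one flag means the use case cannot proceed in that form.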
Designated “High-Risk AI Systems”
A key concept of the AI Act is “high-risk AI systems,” defined as AI systems that pose a significant risk to the health, safety, or fundamental rights of individuals due to their potential impact on society. The high-risk categories include:
Biometric Categorization: AI systems used for categorizing individuals based on sensitive or protected attributes or characteristics inferred from biometric data are considered high-risk. This includes systems that classify individuals according to attributes such as race, gender, or other protected characteristics.
Emotion Recognition: AI systems intended for recognizing and interpreting human emotions are also classified as high-risk. These systems analyze facial expressions, voice tones, or other biometric indicators to infer emotional states.
Critical Infrastructure: AI systems used to manage critical infrastructure, such as energy or transportation, where malfunctions could endanger public safety.
Education and Vocational Training: AI systems that influence access to education or determine the success of educational outcomes.
Employment and Worker Management: AI systems used in hiring processes, employee evaluation, or task allocation, which can affect individuals’ livelihoods.
Access to Essential Services: AI systems that determine eligibility for essential services, such as credit scoring or social benefits.
Law Enforcement: AI systems used in predictive policing, risk assessments, or biometric identification, which can impact individuals’ rights and freedoms.
Migration and Border Control: AI systems used in managing migration, asylum applications, or border security.
Administration of Justice: AI systems that assist in legal decision-making or influence judicial outcomes.
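In practice, the first compliance question for any system is whether its application area falls into one of the categories above. A minimal triage sketch, assuming hypothetical area labels chosen for this example (the Act itself uses the detailed use-case descriptions in its annexes, not keywords):

```python
# Hypothetical first-pass triage: map a system's application area to the
# high-risk categories listed above. Area labels are this example's own.
HIGH_RISK_AREAS = {
    "biometric_categorization",
    "emotion_recognition",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def is_high_risk(application_area: str) -> bool:
    """Keyword triage only; an actual classification needs legal review
    against the Act's full use-case descriptions."""
    return application_area.lower() in HIGH_RISK_AREAS
```

A positive triage result would then trigger the full set of high-risk obligations described in the next section.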
Regulatory Requirements for High-Risk AI Systems
Given their potential impact, high-risk AI systems are subject to more stringent regulatory requirements under the EU AI Act. These include:
Risk Management: Developers must implement robust risk management systems to identify, assess, and mitigate potential risks associated with any high-risk AI system.
Data Governance: High-quality data is crucial for training AI models. The Act mandates strict data governance practices to ensure data sets are relevant, representative, and free from bias.
Technical Documentation: Comprehensive documentation must be maintained to demonstrate compliance with the AI Act’s requirements, facilitating oversight and accountability.
Transparency and Information Provision: Clear instructions and information must accompany the AI system, enabling users to understand its capabilities and limitations.
Human Oversight: Mechanisms must be in place to ensure effective human oversight, allowing for intervention when necessary to prevent harm.
Accuracy, Robustness, and Cybersecurity: High-risk AI systems must be designed to perform consistently and securely, with measures in place to protect against errors, faults, and cyber threats.
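Teams tracking these obligations often maintain them as an explicit checklist so that gaps are visible before deployment. A minimal sketch of such a record, with field names invented for this example (they are not terms from the Act):

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """One flag per obligation listed above; field names are illustrative."""
    risk_management_system: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    transparency_information: bool = False
    human_oversight: bool = False
    accuracy_robustness_cybersecurity: bool = False

    def gaps(self) -> list:
        """Return the obligations not yet evidenced."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]
```

Keeping the checklist as structured data makes it easy to report outstanding gaps alongside the technical documentation the Act already requires.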
Fundamental Rights Impact Assessment for High-Risk AI Systems
The AI Act outlines the critical requirements for conducting a Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems. The FRIA is mandatory for deployers of high-risk AI systems, particularly deployers that are bodies governed by public law or private entities providing public services.
Components of the Fundamental Rights Impact Assessment
Process Description: Deployers must describe how the high-risk AI system will be used, ensuring alignment with its intended purpose.
Usage Timeline: The assessment should include the expected duration and frequency of the system’s use, providing a clear understanding of its operational context.
Affected Individuals and Groups: Deployers must identify the categories of individuals and groups likely to be impacted by the system. This includes considering the specific context in which the AI system will operate.
Risk Identification: Deployers must assess specific risks of harm to the identified individuals or groups, leveraging information provided by the AI system’s provider.
Human Oversight: A description of the human oversight measures in place, as per the system’s instructions for use, must be included to ensure effective monitoring and control.
Risk Mitigation Measures: The assessment should outline measures to be taken if risks materialize, including internal governance arrangements and complaint mechanisms.
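The six components above map naturally onto a structured FRIA record, which helps deployers check completeness before sign-off. A minimal sketch, with field names invented for this example rather than taken from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """One field per FRIA component listed above (illustrative names)."""
    process_description: str          # how the system will be used
    usage_period: str                 # expected duration and frequency of use
    affected_groups: list             # categories of impacted individuals
    identified_risks: list            # specific risks of harm to those groups
    oversight_measures: list          # human oversight per instructions for use
    mitigation_measures: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # Every narrative field filled and at least one entry per list.
        return all([
            self.process_description.strip(),
            self.usage_period.strip(),
            self.affected_groups,
            self.identified_risks,
            self.oversight_measures,
            self.mitigation_measures,
        ])
```

Treating the FRIA as structured data also makes it straightforward to revisit the assessment when the system's context of use changes.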
Conclusion
The AI Act represents a monumental step forward in the regulation of artificial intelligence, setting a robust framework that balances innovation with the dual imperatives of safeguarding privacy and fundamental rights. By categorizing certain AI systems as “high-risk” and by imposing stringent compliance requirements on them, the AI Act ensures that AI systems are developed and deployed responsibly, with a keen focus on ethical considerations and societal impact.