Digital Transformation Blog

Demystifying the EU AI Regulation

Written by Rob Farrell | Jan 2, 2024 5:42:26 PM

The European Union (EU) has introduced the world's first comprehensive regulation on artificial intelligence (AI), known as the EU AI Act. The regulation aims to ensure the safe and ethical use of AI while promoting innovation and investment in the technology. Let's briefly explore the EU AI Act, its implications for organizations, and its impact on society.

 

Overview of the EU AI Act

The EU AI Act establishes obligations for providers and users of AI systems based on the level of risk involved. The regulation covers AI systems that are "placed on the market, put into service, or used in the EU". The act defines AI systems as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".

 

Because of this scope, the AI Act applies not only to developers and deployers in the EU, but also to global vendors selling or otherwise making their systems, or their systems' output, available to users in the EU.

There are three exceptions:

  • AI systems exclusively developed or used for military purposes, and possibly defense and national security purposes more broadly, pending negotiations; 

  • AI developed and used for scientific research; and

  • Free and open source AI systems and components (a term not yet clearly defined), with the exception of foundation models which are discussed below.

 

 

A Risk-Based Approach

The regulation classifies AI systems into four categories based on their level of risk: unacceptable, high, limited, and minimal. The higher the risk, the more stringent the requirements for transparency, human oversight, and other obligations. Systems posing unacceptable risk are prohibited outright, while minimal-risk systems face few or no mandatory requirements.

High-risk AI systems fall into one of two categories:

  1. System is a safety component or a product subject to existing safety standards and assessments, such as toys or medical devices; or, 

  2. System is used for a specific sensitive purpose. The exact list of these use cases is subject to change during the negotiations, but they are understood to fall within the following eight high-level areas:

    • Biometrics

    • Critical infrastructure

    • Education and vocational training

    • Employment, workers' management, and access to self-employment

    • Access to essential services

    • Law enforcement

    • Migration, asylum, and border control management

    • Administration of justice and democratic processes
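As a rough illustration only, the risk-based tiering described above can be sketched as a simple triage function. The names, category strings, and flags below are hypothetical shorthand for this post, not the Act's legal definitions, and any real classification requires legal analysis of the final text:

```python
# Illustrative triage sketch of the EU AI Act's four risk tiers.
# Category names here are informal labels, not legal terms from the Act.

UNACCEPTABLE_USES = {"social scoring", "subliminal manipulation"}

# The eight high-level areas understood to mark sensitive-purpose high-risk uses.
HIGH_RISK_AREAS = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration and border control", "justice and democratic processes",
}

def risk_tier(use_case: str, is_safety_component: bool = False,
              interacts_with_people: bool = False) -> str:
    """Return a rough risk tier for a described AI use case."""
    if use_case in UNACCEPTABLE_USES:
        return "unacceptable"   # prohibited outright
    if is_safety_component or use_case in HIGH_RISK_AREAS:
        return "high"           # risk management, data quality, human oversight
    if interacts_with_people:
        return "limited"        # transparency obligations apply
    return "minimal"            # voluntary codes of conduct

print(risk_tier("law enforcement"))   # high
print(risk_tier("spam filtering"))    # minimal
```

Note how the two high-risk paths from the list above appear as separate conditions: being a safety component of a regulated product, or falling within one of the eight sensitive areas.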

 

Implications for Organizations

The EU AI Act requires organizations to comply with new rules related to transparency, provision of information to users, human oversight, and other issues. All AI systems, even those not deemed high-risk, must meet baseline requirements.

Organizations that develop or deploy high-risk AI systems must comply with additional requirements, such as risk management, data quality, and human oversight. The regulation also establishes a European Artificial Intelligence Board to oversee compliance with the regulation.

 

The EU AI Act sets a standard for the harmonized, ethical use of AI in all its forms while strengthening the uptake of, investment in, and innovation around AI in the EU. It aims to ensure that AI systems in the EU are safe and respect fundamental rights and values, to foster trustworthy AI innovation, and to prevent AI systems from causing harm to individuals or society. Notably, the Act applies to both public and private actors inside and outside the EU, as long as the AI system is placed on the Union market or its use affects people located in the EU.

Beyond the mandatory obligations for high-risk systems, the EU AI Act also creates new EU-wide rules for voluntary codes of conduct that can govern lower-risk AI uses. Organizations that want to operate AI systems within the EU should therefore comply with the regulation to avoid legal uncertainty, mitigate risk, and build trust with businesses and citizens.

 

Impact on Society

The EU AI Act seeks to protect fundamental rights, such as privacy and non-discrimination, and to prevent AI systems from causing harm to individuals or society, while still facilitating trustworthy AI innovation within the EU.

People should care about the EU AI Act because it is designed to protect their health, safety, and fundamental rights while promoting responsible innovation in AI technology. Because the Act reaches public and private actors inside and outside the EU whenever an AI system is placed on the Union market or its use affects people located in the EU, its protections follow EU residents regardless of where a system is built.

Since every AI system must meet baseline requirements, and high-risk systems must satisfy stricter ones covering risk management, data quality, and human oversight, people can have greater confidence that the AI systems they interact with are safe and respect their rights.

 

Key Takeaways

  1. The EU AI Act is the world's first comprehensive regulation on artificial intelligence.

  2. The regulation establishes obligations for providers and users of AI systems based on the level of risk involved.

  3. All AI systems, even those not deemed high-risk, must meet baseline requirements related to transparency, provision of information to users, human oversight, and other issues.

  4. Organizations that develop or deploy high-risk AI systems must comply with additional requirements, such as risk management, data quality, and human oversight.

  5. The EU AI Act aims to ensure the safe and ethical use of AI while promoting innovation and investment in the technology.

 

In short, the EU AI Act establishes risk-based obligations for providers and users of AI systems, with the twin goals of ensuring the safe and ethical use of AI and fostering trustworthy AI innovation and investment within the EU.

 

Read More

The EU AI Act: A Primer

https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/

What is the EU Artificial Intelligence Act?

https://www.europeanmovement.ie/what-is-the-eu-ai-act/

Regulatory framework proposal on artificial intelligence

https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai