EU AI Act

What is the EU AI Act?

The EU AI Act is the first comprehensive law to regulate artificial intelligence (AI). It is designed to ensure that AI systems in the EU are used safely, reliably and ethically. The law classifies AI systems according to their level of risk and sets appropriate requirements for each category.

Main objectives of the EU AI Act:

  • Promoting innovation: Creating a trustworthy environment for the development and use of AI in the EU.
  • Protection of fundamental rights: Protecting citizens from the risks that AI systems can pose.
  • Harmonization of the internal market: Creating uniform rules for AI systems across all EU member states.

Categorization of AI systems by risk level:

The EU AI Act divides AI systems into four categories:

  • Unacceptable risk: AI systems that pose a clear threat to people's safety, livelihoods and rights are prohibited (e.g. social scoring systems).
  • High risk: AI systems used in sensitive areas (e.g. healthcare, transport, energy) are subject to strict requirements for transparency, traceability and human oversight.
  • Limited risk: Low-risk AI systems (e.g. chatbots) must meet certain transparency obligations.
  • Minimal or no risk: Most AI systems fall into this category and are not subject to specific requirements of the AI Act.

Important aspects of the EU AI Act:

  • Prohibition of certain AI practices: e.g. real-time remote biometric identification, social scoring systems.
  • Requirements for high-risk AI systems: e.g. conformity assessment, technical documentation, risk management.
  • Transparency obligations: e.g. labeling of AI-generated content.
  • Sanctions for violations: Non-compliance with the AI Act can result in substantial fines.

Further information:

You can find detailed information about the EU AI Act, including the full legal text and frequently asked questions, on the European Commission's website.