
EU begins policing artificial intelligence


European Commission President Ursula von der Leyen (Associated Press/Photo by Jean-Francois Badias)


The law establishes the first artificial intelligence-focused regulatory system in the world, according to the European Union. It took effect Thursday and is meant to ensure that AI systems respect human dignity, safety, and fundamental rights, the EU said. AI systems classified as high risk, such as those used in education, hiring, or law enforcement, are subject to strict requirements before they can be put on the market. Limited-risk AI systems must meet transparency obligations, such as telling users when they are talking to a chatbot or labeling certain kinds of AI-generated content.

Although the law is in effect, its rules for companies will not apply until well into next year, according to the EU, and some requirements will not take effect for at least three years.

What does the law do? The European AI Office is responsible for overseeing the law’s implementation in member states, the EU said. The law requires AI systems to be tested before being listed in an EU database, and a system may not appear on the market until it is approved. If an AI system is changed after its initial approval, it must go back through the conformity assessment and database registration to be approved again.

The law classifies AI systems as unacceptable, high, limited, or minimal risk.

  • Unacceptable: AI systems that pose a clear threat to people’s livelihoods, rights, and safety are illegal under the law. Examples include toys that use voice messages to encourage bad behavior and systems that governments could use for social scoring.

  • High risk: These systems must operate with appropriate human oversight, keep activity logs that help authorities trace their results, and be fed only what the EU defines as high-quality data so they do not discriminate against certain groups.

  • Limited risk: These systems must inform people who interact with them that they are AI and must identify the content they produce as AI-generated.

  • Minimal risk: People can use these systems freely; examples include AI-powered spam filters and some video games.

Dig deeper: Read Brad Littlejohn’s column in WORLD Opinions about how smartphone access has harmed many kids who are growing up in what is becoming an anxious generation.


Josh Schumacher

Josh is a breaking news reporter for WORLD. He’s a graduate of World Journalism Institute and Patrick Henry College.

