The European Union will seek to thrash out an agreement on sweeping rules to regulate artificial intelligence on Wednesday, following months of difficult negotiations in particular on how to monitor generative AI applications like ChatGPT.

ChatGPT wowed users with its ability to produce poems and essays within seconds from simple prompts.

AI proponents say the technology will benefit humanity, transforming everything from work to health care, but others worry about the risks it poses to society, fearing it could thrust the world into unprecedented chaos.

Brussels is bent on bringing big tech to heel with a powerful legal armory to protect EU citizens’ rights, especially those covering privacy and data protection.

The European Commission, the EU’s executive arm, first proposed an AI law in 2021 that would regulate systems based on the level of risk they posed. For example, the greater the risk to citizens’ rights or health, the greater the systems’ obligations.

Negotiations on the final legal text began in June, but a fierce debate in recent weeks over how to regulate general-purpose AI like ChatGPT and Google’s Bard chatbot threatened to derail the talks at the last minute.

Negotiators from the European Parliament and EU member states began discussions on Wednesday and the talks were expected to last into the evening.

Some member states worry that too much regulation will stifle innovation and hurt the chances of producing European AI giants to challenge those in the United States, including ChatGPT’s creator OpenAI as well as tech titans like Google and Meta.

Although there is no real deadline, senior EU figures have repeatedly said the bloc must finalize the law before the end of 2023.

Chasing local champions

EU diplomats, industry sources and other EU officials have warned the talks could end without an agreement as stumbling blocks remain over key issues.

Others have suggested that even if there is a political agreement, several meetings will still be needed to hammer out the law’s technical details.

And should EU negotiators reach agreement, the law would not come into force until 2026 at the earliest.

The main sticking point is over how to regulate so-called foundation models—designed to perform a variety of tasks—with France, Germany and Italy calling to exclude them from the tougher parts of the law.

“France, Italy and Germany don’t want a regulation for these models,” said German MEP Axel Voss, who is a member of the special parliamentary committee on AI.

The parliament, however, believes it is “necessary… for transparency” to regulate such models, Voss said.

Late last month, the three biggest EU economies published a paper calling for an “innovation-friendly” approach for the law known as the AI Act.

Berlin, Paris and Rome do not want the law to include restrictive rules for foundation models, but instead say they should adhere to codes of conduct.

Many believe this change in view is motivated by their wish to avoid hindering the development of European champions—and perhaps to help companies such as France’s Mistral AI and Germany’s Aleph Alpha.

‘Not scared to walk away’

Another sticking point is remote biometric surveillance—basically, facial identification through camera data in public places.

The EU parliament wants a full ban on “real time” remote biometric identification systems, which member states oppose. The commission had initially proposed allowing exemptions, for example to search for potential victims of crime, including missing children.

There have been suggestions that MEPs could concede on this point in exchange for concessions in other areas.

Brando Benifei, one of the MEPs leading negotiations for the parliament, said he saw a “willingness” by everyone to conclude talks.

But, he added, “we are not scared of walking away from a bad deal”.

France’s digital minister Jean-Noel Barrot said it was important to “have a good agreement” and suggested there should be no rush for an agreement at any cost.

“Many important points still need to be covered in a single night,” he added.

Concerns over AI’s impact and the need to supervise the technology are shared worldwide.

US President Joe Biden issued an executive order in October to regulate AI in a bid to mitigate the technology’s risks.
