Generative AI stalls EU legislation talks


‘Foundation models’, or generative AI, have become the main hurdle in talks over the European Union’s proposed AI Act, as negotiators met on Friday for crucial discussions ahead of final talks scheduled for 6 December.

The AI Act is a flagship bill to regulate Artificial Intelligence based on its capacity to cause harm. The file is at the last phase of the legislative process, so-called ‘trilogues’, whereby the EU Commission, Council, and Parliament negotiate the regulatory provisions.

According to sources, EU lawmakers cannot agree on how to regulate systems like ChatGPT, threatening landmark legislation aimed at keeping artificial intelligence (AI) in check.

Foundation models like the one built by Microsoft-backed OpenAI are AI systems trained on large sets of data, with the ability to learn from new data to perform various tasks.

After two years of negotiations, the bill was approved by the European Parliament in June. The draft AI rules now need to be agreed through meetings between representatives of the European Parliament, the Council and the European Commission.

EU policymakers aim to finalise an agreement at the next political trilogue on 6 December. Ahead of this crucial appointment, Spain, negotiating on behalf of EU countries, needs a revised mandate.

If experts from EU countries cannot agree on a position on foundation models, access to source code, fines and other topics, the act risks being shelved for lack of time before next year’s European parliamentary elections.

Challenges

While some experts and lawmakers have proposed a tiered approach for regulating foundation models, defined as those with more than 45 million users, others have said smaller models could be equally risky.

But the biggest challenge to getting an agreement has come from France, Germany and Italy, who favour letting makers of generative AI models self-regulate instead of having hard rules.

Other pending issues in the talks include the definition of AI, fundamental rights impact assessments, and law enforcement and national security exceptions.


Lawmakers have also been divided over the use of AI systems by law enforcement agencies for biometric identification of individuals in publicly accessible spaces, and could not agree on several of these topics in a meeting on 29 November.

Spain, which holds the EU presidency until the end of the year, has proposed compromises in a bid to speed up the process.

The Spanish presidency shared a revised mandate to negotiate with the European Parliament on the thorny issue of regulating foundation models under the upcoming AI law.

If a deal is not reached in December, the next presidency, Belgium, will have only a couple of months to secure one before the legislation is likely shelved ahead of the European elections.

“Had you asked me six or seven weeks ago, I would have said we are seeing compromises emerging on all the key issues. This has now become a lot harder,” said Mark Brakel, director of policy at the Future of Life Institute, a nonprofit aimed at reducing risks from advanced AI.

Self-regulation

European parliamentarians, EU Commissioner Thierry Breton and scores of AI researchers have criticised self-regulation.

In an open letter this week, researchers such as Geoffrey Hinton warned self-regulation is “likely to dramatically fall short of the standards required for foundation model safety.”

France-based AI company Mistral and Germany’s Aleph Alpha have also criticised the tiered approach to regulating foundation models, winning support from their respective countries.
