EU countries endorse landmark AI rules


Europe has moved a step closer to adopting rules governing the use of artificial intelligence and of models such as Microsoft-backed OpenAI's ChatGPT, after EU countries approved a political agreement reached in December.

EU industry chief Thierry Breton said the Artificial Intelligence (AI) Act is historic and a world first.

“Today member states endorsed the political agreement reached in December, recognising the perfect balance found by the negotiators between innovation and safety,” he said in a statement.

The rules, first proposed by the European Commission three years ago, aim to set a global standard for a technology used across a wide range of industries, from banking and retail to automotive and aviation. They also set parameters for the use of AI for military, crime and security purposes.

The agreement reached on Friday was expected after France, the final holdout, dropped its opposition to the AI Act. France secured conditions that strike a balance between transparency and the protection of business secrets, while reducing the administrative burden on high-risk AI systems.

The objective is to foster the growth of competitive AI models within the bloc, EU diplomatic officials said on Friday, speaking on condition of anonymity because they were not authorized to comment publicly on the issue.

Deepfakes

Experts are concerned that generative AI has fueled a proliferation of deepfakes: realistic-looking but artificially generated videos produced by AI algorithms trained on vast amounts of online content. These often circulate on social media, blurring the line between fact and fiction in public life.

Margrethe Vestager, the EU digital chief, said the recent spread of fabricated sexually explicit images of pop singer Taylor Swift on social media underscored the need for the new rules.


“What happened to @taylorswift13 tells it all: the #harm that #AI can trigger if badly used, the responsibility of #platforms, & why it is so important to enforce #tech regulation,” she said on X social platform.

French AI start-up Mistral, founded by former Meta and Google AI researchers, and Germany's Aleph Alpha have been actively lobbying their respective governments on the matter, according to sources.

Earlier this week, Germany also threw its support behind the rules. However, the tech lobbying group CCIA, which includes members like Google, Amazon, Apple, and Meta Platforms, cautioned about potential obstacles ahead.

“Many of the new AI rules remain unclear and could slow down the development and roll-out of innovative AI applications in Europe.

“The Act’s proper implementation will therefore be crucial to ensuring that AI rules do not overburden companies in their quest to innovate and compete in a thriving, highly dynamic market,” CCIA Europe’s Senior Policy Manager Boniface de Champris said.

The next steps towards the AI Act becoming law are a vote by a key committee of EU lawmakers on February 13 and a European Parliament vote in March or April. The legislation is expected to enter into force before the summer and to apply from 2026, with certain provisions taking effect earlier.
