Italy Fines OpenAI €15 Million for Data Privacy Breach


Italy’s data protection agency announced on Friday that it has fined OpenAI, the maker of ChatGPT, €15 million following an investigation into its use of personal data in training its generative artificial intelligence application.

The agency concluded that OpenAI processed users’ personal data without a sufficient legal basis, violating transparency principles and failing to meet information obligations towards users. OpenAI has described the decision as “disproportionate” and confirmed plans to appeal the ruling.

The investigation, which began in 2023, also revealed that OpenAI lacked an effective age verification system to prevent children under 13 from accessing inappropriate AI-generated content. As part of the ruling, the regulator ordered OpenAI to conduct a six-month media campaign in Italy to educate the public about ChatGPT’s data collection practices, covering both users and non-users.


Italy’s data protection authority, known as Garante, has been one of the EU’s most active regulators in scrutinising AI platforms for compliance with the bloc’s data privacy laws. In 2023, Garante briefly banned ChatGPT over alleged breaches of EU privacy regulations, later reinstating the service after OpenAI addressed concerns, including users’ rights to refuse data usage for algorithm training.

OpenAI criticised the fine, arguing that it significantly exceeded the company’s revenue in Italy during the relevant period and claiming the decision undermines the country’s AI aspirations.

However, Garante stated that the fine took OpenAI’s cooperative stance into account, implying it could otherwise have been higher.

Under the EU’s General Data Protection Regulation (GDPR), organisations can face fines of up to €20 million or 4% of their global turnover for non-compliance.

Source: Reuters
