India Mandates Approval for Unreliable AI Tools
India has asked tech companies to seek its approval before publicly releasing artificial intelligence (AI) tools that are “unreliable” or under trial, adding that such tools should also carry labels warning users that they may return incorrect answers to queries.
The use of such tools, including generative AI, and their “availability to the users on Indian Internet must be done so with the explicit permission of the Government of India,” the country’s IT ministry said in an advisory issued last Friday to the platforms.
Countries across the world are racing to draw up rules to regulate AI. India has been tightening regulations for social media companies, which count the South Asian nation as a top growth market.
The advisory came a week after a top minister lambasted Google’s Gemini AI tool on February 23 over a response stating that Indian Prime Minister Narendra Modi has been accused of implementing policies characterised as “fascist.”
A day later, Google said it had quickly worked to address the issue and that the tool “may not always be reliable,” in particular for current events and political topics.
“Safety and trust are platform legal obligations. ‘Sorry, Unreliable’ does not exempt from the law,” deputy IT minister Rajeev Chandrasekhar said on the social media platform X in response to Google’s statement.
Reuters