OpenAI outlines AI safety plan
Microsoft-backed OpenAI has introduced a comprehensive framework aimed at addressing safety concerns related to its most advanced AI models.
The framework includes several measures such as allowing the board to review and possibly reverse safety-related decisions made by the company.
OpenAI says it will only deploy its latest technology if it is deemed safe in specific areas such as cybersecurity and nuclear threats.
We are systemizing our safety thinking with our Preparedness Framework, a living document (currently in beta) which details the technical and operational investments we are adopting to guide the safety of our frontier model development. https://t.co/vWvvmR9tpP
— OpenAI (@OpenAI) December 18, 2023
The company is also creating an advisory group to review safety reports and send them to the company’s executives and board. While executives will make the decisions, the board can reverse them.
The move is a notable step towards ensuring the safe and responsible development of AI technology, which has the potential to transform many aspects of our lives.
Since ChatGPT’s launch a year ago, the potential dangers of AI have been top of mind for both AI researchers and the general public.
Generative AI technology has dazzled users with its ability to write poetry and essays, but also sparked safety concerns with its potential to spread disinformation and manipulate humans.
In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society.