OpenAI Forms Safety Committee to Oversee AI Model Training

OpenAI has formed a Safety and Security Committee, led by board members including CEO Sam Altman, to oversee the training of its next artificial intelligence (AI) model.

OpenAI said on its blog that directors Bret Taylor, Adam D’Angelo, and Nicole Seligman will also lead the committee.

Microsoft-backed OpenAI’s generative AI chatbots, capable of human-like conversations and creating images from text prompts, have raised safety concerns as AI models grow more powerful.

Former Chief Scientist Ilya Sutskever and Jan Leike, leaders of OpenAI’s Superalignment team, which worked to ensure AI systems stay aligned with their intended objectives, left the company earlier this month.

The Superalignment team was disbanded in May, with some members reassigned to other groups, CNBC reported following the high-profile departures.

The new committee will recommend safety and security decisions for OpenAI’s projects and operations.

Its first task will be to evaluate and enhance OpenAI’s existing safety practices over the next 90 days, after which it will share recommendations with the board.

After the board’s review, OpenAI will publicly share an update on the recommendations it adopts, the company said.

Other committee members include newly appointed Chief Scientist Jakub Pachocki and Matt Knight, head of security. The company will also consult experts such as Rob Joyce, former U.S. National Security Agency cybersecurity director, and John Carlin, former Department of Justice official.

OpenAI did not provide further details on the new “frontier” model it is training, except that it aims to bring its systems to the “next level of capabilities on our path to AGI.”

Source: Reuters