OpenAI to create democratic processes for AI software

OpenAI has announced plans to form a “Collective Alignment” team that will work towards implementing democratic processes to govern how its artificial intelligence (AI) software is developed and regulated, with the aim of ensuring it is fair and unbiased.

The team is a continuation of a grant program aimed at funding experiments in the democratic process, which the San Francisco-based firm announced in May 2023. The program concluded recently, providing a foundation for the team’s ongoing work.

“As we continue to pursue our mission towards superintelligent models that potentially could be seen as integral parts of our society, it’s important to give people the opportunity to provide input directly,” said Tyna Eloundou, a research engineer and founding member of OpenAI’s new team.

The goal is to create a more inclusive and diverse AI community that addresses the needs of all stakeholders, including developers, users, and society as a whole.

The new OpenAI team is actively looking to hire a research engineer and research scientist, Eloundou said. The team will work closely with OpenAI’s “Human Data” team, which builds infrastructure for collecting human input on the company’s AI models, and other research teams.

To verify that voters are human, OpenAI might collaborate with Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman that offers a way to distinguish between humans and AI bots, suggested Teddy Lee, a product manager and the second member of the two-person team.

Lee noted that the team has not made any concrete plans yet to integrate Worldcoin.

Since OpenAI launched ChatGPT in late 2022, generative AI technology that can spin uncannily authoritative prose from text prompts has captivated the public, making the program one of the fastest-growing apps of all time.

The issue of AI-generated “deepfake” images and misinformation has become a growing concern, particularly with the upcoming 2024 U.S. election campaign.

Critics have also pointed out that AI systems like ChatGPT can exhibit inherent bias due to the inputs used to develop them, which can lead to outputs that are racist or sexist in nature.

Source: Reuters