U.S. lawmakers seek regulation of AI vendors to government


A bipartisan group of United States lawmakers has introduced a bill that would require federal agencies and their artificial intelligence vendors to adopt best practices for handling the risks posed by AI.

The move comes as the U.S. government is slowly recognizing the need to regulate rapidly developing AI technology. The bill also reflects a growing concern among lawmakers about the potential risks that AI poses to privacy, security, and civil liberties.

The proposed bill, sponsored by Democrats Ted Lieu and Don Beyer alongside Republicans Zach Nunn and Marcus Molinaro, is modest in scope but has a chance of becoming law, since a Senate version was introduced last November by Republican Jerry Moran and Democrat Mark Warner.

If passed, this legislation would require federal agencies to adopt AI guidelines unveiled by the Commerce Department last year.

The legislation would additionally require the Commerce Department to develop specific standards for AI suppliers to the U.S. government. It calls on the head of the Office of Federal Procurement Policy to devise contract language requiring these suppliers to grant "appropriate access to data, models, and parameters" for thorough testing and evaluation, according to the bill.


Generative AI, capable of producing text, photos, and videos in response to open-ended prompts, has generated both enthusiasm and apprehension in recent months. It could render certain jobs obsolete and disrupt elections by blurring the line between fact and misinformation, and there is heightened concern that, in extreme cases, AI might enable malicious actors to compromise critical infrastructure.

While the United States has made only preliminary strides in AI regulation, Europe has advanced significantly further in establishing comprehensive frameworks for governing artificial intelligence.

In October of last year, President Joe Biden signed an executive order aimed at enhancing the safety of AI. The directive requires developers of AI systems that pose potential risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the U.S. government before releasing those systems publicly.

Source: Reuters
