White House and Tech Giants Unite to Tackle AI Safety: Voluntary Pledge to Mitigate Risks
The Biden White House has taken a major step to address AI safety amid rapid advances in the technology. Seven influential tech companies, including Google and OpenAI, have voluntarily pledged to mitigate AI risks, marking a deeper administration role in the debate over AI regulation.
President Biden noted that the coming years could bring more technological change than the past 50 years combined. The White House is seeking bipartisan AI legislation to address the technology's challenges. Policymakers and AI ethicists want new laws to govern the technology, but industry lobbying and competing priorities have hampered previous efforts.
The companies pledged to let independent security experts test their AI systems before release, to share safety data with the government and academics, and to develop watermarking tools that alert the public when images, video, or text have been generated by AI.
A White House official noted that the Federal Trade Commission (FTC), the government's main tech industry watchdog, could enforce the pledge: under consumer protection law, a company that breaks a public commitment can be treated as having engaged in a deceptive practice.
President Biden said the commitments would help the industry fulfill its fundamental obligation to Americans to develop secure and trustworthy technology. Congress is still weighing bipartisan AI rules, and Senate Majority Leader Charles E. Schumer has formed a bipartisan AI legislation group to build on the administration's efforts.
The European Union is moving to regulate AI directly, while U.S. agencies examine how existing laws apply to the technology. The E.U. AI Act is still being negotiated and is expected to become law by year-end; in the meantime, European officials have asked tech companies to voluntarily comply with an "AI Pact" in preparation for it.
The White House’s voluntary AI safety pledges address tech sector power and influence concerns. Given tech companies’ inconsistent commitments, policymakers and consumer advocates stress the need for more comprehensive AI safety measures.
Still, the White House pledge is a meaningful first step toward AI safety, security, and trust, pressing the industry to maintain high standards while realizing AI's potential.
Our Reader’s Queries
What is the White House Executive Order for AI safety?
The Executive Order sets out fresh guidelines for the safety and security of AI, safeguards the privacy of Americans, promotes fairness and civil rights, supports the rights of consumers and workers, fosters innovation and competition, advances American leadership globally, and more.
Which tech giants are at the White House today to talk about the risks of AI?
In July, seven major companies signed on to the White House's voluntary measures: Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection. Their commitments include conducting internal and external security testing of AI systems before release and sharing information about known risks within and beyond the industry.
What is the White House pledge for AI?
The President’s commitment to advancing AI development and usage in a safe and responsible manner is evident, as demonstrated by the signing of a significant Executive Order on October 30. The Biden-Harris Administration is prioritizing the use of AI to enhance health outcomes for Americans while also ensuring their protection.
What is the government doing to regulate AI?
The Federal Government has committed to training its employees to understand the benefits, risks, and limitations of AI in their roles. It also aims to modernize the Government's IT infrastructure, cut red tape, and ensure that AI technologies are deployed safely and effectively.