Biden administration releases plan for ‘responsible AI,’ meets with Big Tech

May 4, 2023

After meeting Thursday with several Big Tech executives about artificial intelligence, the White House unveiled new initiatives meant to promote “responsible AI” innovation and protect people’s rights and safety.

Vice President Harris and senior administration officials met with the CEOs of Alphabet (Google), Anthropic, Microsoft and OpenAI to discuss the need for “responsible, trustworthy and ethical” innovation with “safeguards that mitigate risks and potential harms” to society, including ensuring their products are safe before they are deployed or made public.

According to Reuters, the meeting included Google's Sundar Pichai, Microsoft's Satya Nadella, OpenAI's Sam Altman and Anthropic's Dario Amodei, along with Vice President Kamala Harris and senior administration officials: White House Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan, National Economic Council Director Lael Brainard and Secretary of Commerce Gina Raimondo.

The Biden-Harris administration announced an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles, on an evaluation platform developed by Scale AI.

“This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts,” the White House says.

Many have called on federal lawmakers to pass legislation regulating AI research and products, although it's unclear whether Congress currently understands the technology well enough to regulate it effectively.

At least two national efforts are underway to slow down or place a moratorium on AI research until more is learned about its capabilities and the dangers it may pose, although other countries would face no obligation to halt their own research.

Biden’s administration says it has taken steps to promote responsible innovation, including the Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.

Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission and Department of Justice’s Civil Rights Division issued a joint statement saying they were committed to leveraging their existing legal authorities to protect the American people from AI-related harms.

On Tuesday, the National Science Foundation announced $140 million in funding to launch seven new National AI Research Institutes, bringing the total number in the U.S. to 25. The institutes facilitate collaborative efforts across higher education, federal agencies, industry and other stakeholders to pursue “transformative AI advances that are ethical, trustworthy, responsible, and serve the public good.”

The Office of Management and Budget (OMB) also announced it will release, for public comment, draft policy guidance on the use of AI systems by the U.S. government. The guidance “will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety,” the White House says.

The Biden-Harris administration’s announcements Thursday drew reaction from data security firms.

“There’s no putting the AI genie back in the bottle,” says Craig Burland, CISO for Inversion6. “Two years ago, if your product didn’t have AI, it was considered last-generation. From SIEM to EDR, products had to have AI/ML. Now ChatGPT is evoking fears pulled from science fiction movies.

“Generative AI (GAI) has tremendous potential and troubling downsides. But the government will be hard-pressed to curtail building new models, slow expanding capabilities or ban addressing new use cases. These models could proliferate anywhere on the globe. Clever humans will find new ways to use this tool – for good and bad. Any regulation will largely be ceremonial and practically unenforceable.”

Ani Chaudhuri, CEO of Dasera, says the government’s actions are commendable but emphasizes that data security plays a vital role in ensuring AI’s responsible and ethical use.

“AI developers must be held accountable for the security of their products, emphasizing their responsibility to make their technology safe before deployment or public use,” Chaudhuri says. “This includes proper data management, secure storage and measures to prevent unauthorized access to sensitive information.” 

About the Author

John Dobberstein | Managing Editor/SecurityInfoWatch.com

John Dobberstein is managing editor of SecurityInfoWatch.com and oversees all content creation for the website. His decorated 34-year journalism career has included stops at a variety of newspapers and B2B magazines, most recently as senior editor of the Endeavor Business Media magazine Utility Products.