Q&AI
An uncertain future for US AI safety efforts.
Shirin Ghaffary

The future of AI safety efforts in the US is uncertain under the incoming Trump administration. But first…

Three things to know:

• Nvidia said its new chip will ship this quarter
• OpenAI CEO received $76,001 in pay last year, filing shows
• Apple readies more conversational Siri in bid to catch up in AI

In limbo

An A-list group of venture capitalists, business leaders and government officials from around the world gathered in San Francisco this week to talk through the risks that artificial intelligence poses to society.

The inaugural convening of the international network of AI safety institutes was a banner moment for the US in asserting its leadership on the issue. But the re-election of Donald Trump — and the likelihood he will rethink AI policy — loomed large over the event.

“I don’t want to talk about politics,” said Gina Raimondo, the US commerce secretary who has been called “America’s point woman on AI.” But during her remarks on stage Wednesday, she acknowledged that politics was on “everybody’s mind.” Indeed, the fate of the key US government group leading the meeting, the US AI Safety Institute, is now uncertain.

A little over a year ago, the Biden administration tasked the National Institute of Standards and Technology (NIST), which operates under Raimondo’s Department of Commerce, with overseeing the rigorous work of evaluating AI’s capabilities and risks. NIST then created the US AI Safety Institute, which secured voluntary agreements from OpenAI and Anthropic to test their models. Several other countries, including the UK, created similar AI safety institutes.

But President-elect Donald Trump’s party pledged in its platform to undo Biden’s executive order on AI, saying that it “imposes Radical Leftwing ideas” on the field. If the Trump administration acts on that, it could muddy NIST’s mandate to oversee AI standards and throw the future of the US AI Safety Institute into question. 

Representatives for Trump’s transition team did not respond to a request for comment. 

Two bipartisan bills to authorize the institute are currently before Congress, but neither has moved forward so far. Tech industry groups such as TechNet and the Information Technology Industry Council, which are backed by companies including OpenAI, Google and Anthropic, have called on Congress to pass the bills.

At the event on Wednesday, the institute made the case for its relevance. Elizabeth Kelly, the institute’s director, touted a series of recent accomplishments. The group released a joint statement of priorities for the international institutes and shared the results of the first safety evaluations of an AI company’s model — this one from Anthropic. The event’s organizers also played recorded remarks from legislators on both sides of the aisle speaking about their support for the institute’s work.

In a conversation on the sidelines of the event, Kelly told me the institute is ready to work with the new administration as part of NIST, a nonpartisan group that has existed for over a century and is not a regulatory agency. 

“The work that we are doing is very consistent with NIST’s longstanding process of doing testing and evaluations in partnership with industry on topics like facial recognition,” she said. “We view ourselves as continuing that legacy.”

Kelly also stressed that the US should not forfeit its leadership role on the issue of AI safety, especially considering many of the most prominent artificial intelligence developers are based in the country.  

“We are very aware that these are American companies,” she said. “We want to make sure that the US is leading the conversation and able to shape what that looks like, and not ceding the ground to other countries that may not have American companies’ best interests.”

Elizabeth Kelly. Photographer: Ting Shen/Bloomberg

During the event, Anthropic Chief Executive Officer Dario Amodei praised the institute’s work. He said government expertise is needed to test for specific national security threats that AI companies like his aren’t well-versed on. 

“There’s a good role in which you all work together,” said Amodei, directing his comments to the various international AI safety institutes in the room. “You have expertise in different languages, in different subject matters, working with different companies. If you’re able to pool your expertise in one global brain that’s evaluating these models, that can be very efficient.”

Whether or not the US continues to invest in the institute, other countries are set to move ahead. In February, France will host its own international AI summit, in what is expected to be one of the biggest government convenings on the topic yet. Meanwhile, the UK AI Safety Institute has received $100 million in funding, ten times the amount that Congress approved for the US AI Safety Institute this year.

But supporters of the US institute are holding out hope that Trump won’t gut the organization. At the event, Raimondo also stressed the importance of AI safety work, regardless of who is in power in Washington. 

“I don’t care what political party you’re in. This is not in Republicans’ interest or Democrats’ interest,” Raimondo said. “It’s frankly in no one’s interest anywhere in the world, with any political party, for AI to be dangerous or for AI to get in the hands of malicious non-state actors that want to cause destruction and sow chaos.”

Got a question about AI? Email me, Shirin Ghaffary, and I’ll try to answer yours in a future edition of this newsletter.

Human quote of the week

“I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it.”
— Dario Amodei, Anthropic CEO
In response to a question at the AI safety summit, Amodei said artificial intelligence companies, including his own, should be subject to mandatory testing requirements to ensure their technologies are safe for the public before release. “I think just public attention and the fact that employees care has created some pressure, but I do ultimately think it won’t be enough,” he said. 

One to watch

Navrina Singh, CEO of Credo AI, details the company's latest partnership with Microsoft and the future of AI under a second Trump administration. She joins Caroline Hyde on Bloomberg Technology to discuss.

