AI Regulation Takes Baby Steps on Capitol Hill


At an unprecedented closed-door meeting that brought together most of the U.S. Senate and the country’s top tech leaders on Wednesday, Senate Majority Leader Chuck Schumer tried to start simple. “I asked everyone in the room, 'Is government needed to play a role in regulating AI?'” he told reporters after the meeting. “And every single person raised their hands.”

The much-hyped forum on artificial intelligence, which was closed to the press and the public, was meant to set the tone for collaboration between the world’s biggest tech companies and Congress as it seeks to pass bipartisan AI legislation within the next year. But the six-hour meeting highlighted the current state of play in Washington when it comes to AI: it has become much easier to hold high-level discussions about the “existential risks” posed by the rapidly evolving technology than to agree on any specific constraints or plan of action.

That rhetorical disconnect was clear as some of the richest men in America, including Tesla and SpaceX CEO Elon Musk, Meta CEO Mark Zuckerberg, Google CEO Sundar Pichai and OpenAI CEO Sam Altman, filed out of the room. Musk told reporters that the meeting “may go down in history as being very important for the future of civilization.” Others, including some of the roughly 40 Senators who didn’t attend, took a less grandiose view of the proceedings. Sen. Josh Hawley, R-Mo., skipped the event, saying he refused to participate in what he called a “giant cocktail party for big tech.”

Wednesday's meeting came amid a week of frenetic activity on AI legislation. Three other Congressional hearings also brought tech executives and AI experts to Capitol Hill to debate oversight, transparency measures, and the risks that the adoption of AI tools could pose to federal agencies. Legislators put forward a series of overlapping proposals for everything from an independent federal office to oversee AI and licensing requirements for these technologies, to legal liability for civil rights and privacy violations and a ban on deceptive AI-generated content in elections.

So far, however, most proposals for legislation have been light on details, laying out rules for transparency and legal liability in very broad strokes. While there may be general agreement on a high-level framework that checks all the boxes (AI should be safe, effective, trustworthy, privacy-preserving, and non-discriminatory), “what that really means is that regulatory agencies will have to figure out how to give content to such principles, which will involve tough judgment calls and complex tradeoffs,” says Daniel Ho, a professor who oversees an artificial intelligence lab at Stanford University and is a member of the White House’s National AI Advisory Committee.

Not that regulating AI is easy. Any AI legislation will have to address a dizzying range of problems, from the environmental costs of training large models to concerns over privacy, surveillance, medical applications, national security, and misinformation. This would likely leave understaffed, resource-strapped federal regulatory agencies with the task of figuring out how to implement or enforce these rules. “That's what makes it very hard,” says Ho.

There are also concerns that the recent hype over generative AI could obscure the risks posed by other AI technologies, experts say. With top tech executives like Musk, Zuckerberg and Altman spending much of their time on Capitol Hill being questioned about the speculative “civilizational risks” of these technologies, less attention has been paid to the day-to-day harms when these systems go astray, like the documented cases of facial recognition software misidentifying people and leading to wrongful arrests.

In another hearing this week, tech executives urged a Senate Judiciary subcommittee to set up an emergency brake for AI systems that control critical infrastructure to ensure that they can’t cause harm. “If a company wants to use AI to, say, control the electrical grid or all of the self-driving cars on our roads or the water supply… we need a safety brake, just like we have a circuit breaker in every building and home in this country,” Microsoft President Brad Smith said on Tuesday. “Maybe it’s one of the most important things we need to do so that we ensure that the threats that many people worry about remain part of science fiction and don’t become a new reality.”

Last week, the two leaders of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, Sens. Richard Blumenthal, D-Conn., and Hawley, released a blueprint for “real enforceable AI protections” that includes the creation of an independent oversight agency with which AI companies would have to register. It also proposes that AI companies bear legal liability “when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms.”

Adding to the sense of urgency is widespread agreement among lawmakers that Congress acted too slowly in the past when it came to regulating emerging technologies like social media platforms. “We don’t want to do what we did with social media,” Senate Intelligence Committee Chairman Mark Warner, D-Va., told reporters after Wednesday’s meeting, “which is let the techies figure it out, and we’ll fix it later.”

The slow pace of the debate in Washington has led several state lawmakers to take matters into their own hands. California state Sen. Scott Wiener introduced a bill on Wednesday proposing that “frontier” AI systems, those that require more than a certain amount of computing power to train, be subject to transparency requirements.


Wiener says California, home to Silicon Valley and the headquarters of most of the world’s top AI companies, has a significant role to play in setting the guardrails for the industry. “In an ideal world we would have a strong federal AI regulatory scheme,” Wiener told TIME in an interview on Tuesday. “But California has a history of acting when the federal government is moving either too slowly or not acting.”

Most Americans support the recent push for action. More than half of U.S. adults, including 57% of Democrats and 50% of Republicans, “agree that the development of AI technologies should be heavily regulated by the government,” according to a Morning Consult poll conducted in June.

While lawmakers overwhelmingly agree on the need to regulate AI, a partisan split has also emerged, with some Republicans accusing their colleagues of using the issue to justify expanding federal regulation. “More than fearmongering and fanciful speculation are required by law,” Sen. Ted Cruz, a Texas Republican, said in a letter to Federal Trade Commission Chair Lina Khan on Monday demanding answers about her agency’s stance on AI regulation. He echoed some of his colleagues’ concerns about following in the footsteps of the European Union’s “heavy-handed regulation.”

“To me, the biggest existential risk we face is ourselves,” Cruz wrote in the letter. “At this point, Congress understands so little about AI that it will do more harm than good… let’s pause before we regulate.”


Write to Vera Bergengruen at vera.bergengruen@time.com