When it Comes to AI, Let’s Move Fast and Fix Things 

Steyer is the founder and CEO of Common Sense Media, a global nonprofit helping families navigate media and technology.

Today, the White House was proud to announce that it has received “voluntary commitments” from tech companies including Microsoft, Meta, and OpenAI to support forthcoming regulation of artificial intelligence. At first blush, it’s a reassuring gesture from companies that, by their own admission, hold the risk of “human extinction” in the palms of their hands, but Americans should treat this lip service with a healthy dose of skepticism.

More than a decade after Mark Zuckerberg coined the mantra “move fast and break things,” the public is finally reckoning with the serious damage that social media platforms have done to youth mental health, and with the harm to our democracy and public health that Big Tech has left in its wake.

Now, Big Tech wants to “launch and iterate” a new lab experiment on society writ large, this time with artificial intelligence. Leaders in the field agree that “smart regulation” is needed to avoid serious harm to humanity, but AI is already woven into the fabric of most people’s daily lives. These calls for cooperation and heartwarming pledges focus on future risks without sufficient recognition of the real and proximate harms happening to children today.

The time to move fast and fix things is now. Kids are using AI tools, in many cases without their parents’ or teachers’ knowledge, according to a Common Sense study, and many already use them more than they use Google.

Broad AI adoption among children presents many real risks, data privacy chief among them. Popular chatbots like Snapchat’s “My AI” can quickly extract and process vast amounts of personal data, potentially exposing children to cyber threats, targeted advertising, and inappropriate content.

AI-driven surveillance of children could also be destructive. Because AI is only as good as the decisions of the programmers who design it, the use of facial, vocal, and emotion detection can cause disproportionate harm to marginalized groups. Studies have shown that facial recognition algorithms are markedly less accurate on darker skin tones, an error gap that could result in unfair suspensions and disciplinary actions for students of color. Even well-intentioned AI solutions can perpetuate unfair bias, especially if they fail to reflect the full diversity of students and to put equity at the center of their design.


This means that transparency from AI technologists should be non-negotiable. ChatGPT can’t perceive the context of a prompt, and it usually doesn’t cite reliable references or links that let users explore the underlying sources, which can lead to the spread of misinformation, or worse. The public should have access to known model limitations and to measures of AI system performance, expressed in standard evaluation metrics such as accuracy, precision, and recall. That way, users aren’t misled into believing results that are incorrect, incomplete, or inappropriate.
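For readers unfamiliar with those metrics, a rough sketch of the standard definitions may help (these are general machine-learning terms, not language drawn from the White House commitments). Precision is the share of a system’s positive answers that are actually correct; recall is the share of true cases the system manages to catch:

precision = true positives / (true positives + false positives)
recall = true positives / (true positives + false negatives)

A system tuned for high recall but low precision, for instance, flags nearly everything and gets much of it wrong, exactly the kind of limitation the public deserves to see disclosed.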

If we’ve learned anything from the explosion of social media, it’s that the government will not move fast to establish tech policies that benefit children. Powerful tech lobbyists are fighting to safeguard companies’ rights to design products for adults, even though some of those products will likely later be deemed harmful to children. These lobbyists have an impressive track record of wins, despite clear evidence that company profits come before child protections, and as a result the pace of legislative change has been tragically slow.

Now, there is hope that policymakers may know better. European lawmakers are demonstrating impressive leadership by collaborating with regulators to develop guardrails that protect children. The UK Online Safety Bill will establish a duty of care and transparency for AI, and the EU AI Act includes specific requirements for “foundation models” such as the one underlying OpenAI’s ChatGPT, all of which have caused technologists like Sam Altman to think twice about moving fast into Europe. This kind of push and pull breeds a more cautious environment that, in the end, will benefit children and help us navigate AI’s widespread risks.

All told, there is enormous risk in inaction, despite the many promising possibilities that AI has to offer. Rather than asking “can AI be used for this task?” the first question must always be “should AI be used for this task?” When it comes to children, we should be absolutely sure that the benefits outweigh the risks. We must apply the lessons of social media and urge policymakers and tech companies to protect children’s privacy, safety, and mental health. If we let them move fast and break things with AI, kids will be the biggest losers.
