Just last week Tesla and SpaceX CEO Elon Musk revealed that the work of only one AI company worried him, which many took to mean Google.
Musk and his fellow worriers should feel a little more at ease to know that Google and DeepMind are thinking about ways to ensure full human control.
In an academic paper, DeepMind’s Laurent Orseau and Stuart Armstrong of the Future of Humanity Institute at the University of Oxford have established a framework for interrupting an AI’s course of action.
“Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions,” reads the paper.
Of course, this isn’t much use if the clever-clogs AI (shall we just call it Skynet?) figures out how to override this ‘big red button’.
The paper suggests that the key is to tweak the AI’s reward function so that human interventions never factor into its learning, preventing it from formulating undesirable shortcuts that circumvent them. So when a very literal big red button is pressed in a factory for safety reasons, the AI doesn’t learn that the interruption is bad for productivity (and thus a block to attaining its ‘reward’) and plough on regardless the next time.
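One way to picture the idea is with a toy Q-learning update (the names and setup here are illustrative assumptions, not the paper’s actual construction, which is considerably more subtle): if the learning step is simply skipped whenever a human interrupts, the agent’s value estimates never register the interruption as a loss of reward, so it has no incentive to dodge the button.

```python
def q_update(q, s, a, r, s_next, interrupted,
             actions=(0, 1), alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update, except that interrupted
    steps are ignored entirely, so the agent never learns that the
    'big red button' costs it reward. `q` is a dict mapping
    (state, action) pairs to value estimates."""
    if interrupted:
        return q  # the interruption leaves the value estimates untouched
    best_next = max(q.get((s_next, b), 0.0) for b in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q
```

On a normal step the update nudges the value of `(s, a)` toward the observed reward; on an interrupted step nothing changes, as if the interruption had never happened from the agent’s point of view.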
This article originally appeared on TrustedReviews.com