
Inside the Pentagon’s Effort to Build a Killer Robot

Jacobsen's new book is Nuclear War: A Scenario

The Los Alamos National Laboratory sits at the top of a mountain range in the high desert of northern New Mexico. It is a long, steep drive to get there from the capital city of Santa Fe, through the Tesuque Indian Reservation, over the Rio Grande, and into the Santa Fe National Forest. I am headed to the laboratory of Dr. Garrett T. Kenyon, whose program falls under the rubric of synthetic cognition, an attempt to build an artificial brain.

Roboticists define artificial brains as man-made machines designed to be as intelligent, self-aware and creative as humans. No such machine yet exists, but Defense Advanced Research Projects Agency (DARPA) scientists like Dr. Kenyon believe that, given the rapid advances in DARPA technologies, one day soon they will. Two fields play key roles in advancing artificial intelligence: computing, which involves machines, and neuroscience, which involves the human brain.

Traumatic brain injury is as old as war. U.S. soldiers have sustained traumatic brain injuries in every one of America’s wars since the Revolution. During the recent wars in Iraq and Afghanistan, of the 2.5 million Americans who served, more than 300,000 returned home with brain injuries. DARPA calls these individuals brain-wounded warriors.

To address brain injuries sustained in modern warfare, DARPA has publicly stated that it has a multitude of science and technology programs in place. The agency’s long-term goals in brain science research, it says, revolve around trying to restore the minds and memories of brain-wounded warriors.

But if the past teaches us about the present, it is clear that DARPA’s stated goals regarding its brain programs are not DARPA’s only goals. DARPA is not primarily in the business of helping soldiers heal; that is the job of the U.S. Department of Veterans Affairs. DARPA’s job is to “create and prevent strategic surprise.” DARPA prepares vast weapons systems of the future. So what are the classified brain programs really for?

DARPA’s limb prosthetics program might offer a number of clues. In 2005, with IEDs dominating the war news, DARPA initiated a program called Revolutionizing Prosthetics. Over the next two years the program was split into two parts. DEKA Research and Development Corporation, in New Hampshire, was given a DARPA contract to make a robotic prosthetic arm. Johns Hopkins University’s Applied Physics Laboratory was given a DARPA contract to create a “thought-controlled” robotic arm. These were highly ambitious goals.

The technologies DARPA is pursuing in its brain and prosthetics programs have dual use in DARPA’s efforts to engineer hunter-killer robots, or lethal autonomous weapons systems. Coupled with the quest for artificial intelligence, all this might explain why DARPA is so focused on looking inside people’s brains.

Outside Dr. Kenyon’s office at Los Alamos there is an armored truck with a machine gun mounted on top. It is parked in the red zone, by the front entrance. Inside the building, Dr. Kenyon and his team work on artificial intelligence, man’s quest to create a sentient machine. Dr. Kenyon is part of the synthetic cognition group at Los Alamos National Laboratory. He and his team are simulating the primate visual system, using a supercomputer to build a precise computer model of the human eye, in an effort to understand the relationship between visual cognition and the brain.

At present, true recognition—as in cognition, or acquiring knowledge and understanding through thought, experience, and the senses—is done only by sentient beings. “We think that by working hard to understand how biological systems solve this problem, how the primate visual system recognizes things, we can understand something fundamental about how brains solve the problems they do, like recognition. Until then, computers are blind,” Kenyon says. “They can’t see.”

Which raises at least one technical problem regarding artificial intelligence and autonomous hunter-killer drones. “I think robot assassins are a very bad idea for a number of reasons,” Kenyon asserts. “Moral and political issues aside, the technical hurdles to overcome cannot be overstated,” he says.

Dr. Kenyon is excited by his research. He is convinced that neuroscientists of today are like alchemists of the Middle Ages trying to understand chemistry, and that all the exciting discoveries lie ahead. “Think of how much chemists in the Dark Ages did not understand about chemistry compared to what we know now. We neuroscientists are trapped in a bubble of ignorance. We still don’t have a clue about what’s going on in the human brain. We have theories; we just don’t know for sure. We can’t build an electrical circuit, digital or analogue or other, that mimics the biological system. We can’t emulate the behavior. One day in the future, we think we can.”

Dr. Kenyon says that one of the most powerful facts about DARPA as an organization is that it includes theoretical scientists and engineers in its ranks. The quest for artificial intelligence, he says, is similar to getting humans to Mars. Once you have confidence you can do it, “then getting to Mars is an engineering problem,” he says. In his laboratory, metaphorically, “we just don’t know where Mars is yet.” But Dr. Kenyon and his team are determined. “I don’t think it’s that far away,” he says of artificial intelligence. “The question is, who will be the Columbus here?”

If Dr. Garrett Kenyon’s Los Alamos laboratory represents the future of the mind, the laboratory of Dr. Susan V. Bryant and Dr. David M. Gardiner at the University of California, Irvine, represents the future of the human body. Dr. Bryant and Dr. Gardiner are a husband-and-wife team of regeneration biologists. Dr. Bryant also served as the dean of the School of Biological Sciences and the vice chancellor for research at U.C. Irvine. Dr. Gardiner is a professor of developmental and cell biology and maintains the laboratory where he does research as a regenerative engineer.

This laboratory looks like many university science labs. It is filled with high-powered microscopes, dissection equipment, and graduate students wearing goggles and gloves. The work Dr. Gardiner and Dr. Bryant do here is the result of a four-year contract with DARPA and an extended five-year contract with the Army. Their work involves limb regeneration. Gardiner and Bryant believe that one day soon, humans will be able to regenerate their own body parts.

“We are driving our biology toward immortality,” Dr. Gardiner says. “Or at least toward the fountain of youth.”

In the 21st-century world of science, almost anything can be done. But should it be done? Who decides? How do we know what is wise and what is unwise?

For the public to stay informed, the public has to be informed. Dr. Bryant and Dr. Gardiner’s program was never classified. They worked for DARPA for four years, then both parties amiably moved on. What DARPA is doing with the limb regeneration science, DARPA gets to decide. If DARPA is working on a cloning program, that program is classified, and the public will be informed only in the future, if at all.

If human cloning is possible, and therefore inevitable, should American scientists be the first to achieve this milestone, with Pentagon funding and military application in mind? If artificial intelligence is possible, is it therefore inevitable?

To ask the question another way, from a DARPA frame of mind: Were Russia or China or South Korea or India or Iran to present the world with the first human clone, or the first artificially intelligent machine, would that be considered a Sputnik-like surprise?

DARPA has always sought the technological and military edge, leaving observers to debate the line between militarily useful scientific progress and pushing science too far. What is right and what is wrong?

In 2014 Stephen Hawking and a group of colleagues warned against the risks posed by artificially intelligent machines. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

In Geneva in 2014, the United Nations held its first-ever convention on lethal autonomous weapons systems, or hunter-killer drones. Over four days, the 117-member coalition debated whether these kinds of robotic systems should be internationally outlawed. Testifying in front of the United Nations, Noel Sharkey, a world-renowned expert on robotics and artificial intelligence, said, “Weapons systems should not be allowed to autonomously select their own human targets and engage them with lethal force.”

Ahead of the UN convention, Human Rights Watch and the Harvard Law School International Human Rights Clinic had released a report called “Losing Humanity: The Case Against Killer Robots.” “Fully autonomous weapons threaten to violate the foundational right to life,” the authors wrote, because robotic killing machines “undermine the underlying principles of human dignity.” Stephen Goose, Arms Division director at Human Rights Watch, said: “Giving machines the power to decide who lives and dies on the battlefield would take technology too far.”

In an interview, Noel Sharkey relayed a list of potential robot errors he believes are far too serious to ignore, including “human-machine interaction failures, software coding errors, malfunctions, communication degradation, enemy cyber-attacks,” and more. “I believe there is a line that must not be crossed,” Sharkey says. “Robots should not be given the authority to kill humans.”

Can the push to create hunter-killer robots be stopped? The physicist and artificial intelligence expert Steve Omohundro believes that “an autonomous weapons arms race is already taking place,” because “military and economic pressures are driving the rapid development of autonomous systems.” Stephen Hawking, Noel Sharkey, and Steve Omohundro are three among a growing number who believe that humanity is standing on a precipice.

DARPA’s goal is to create and prevent strategic surprise. But what if the ultimate endgame is humanity’s loss? What if, in trying to stave off foreign military competitors, DARPA creates an unexpected competitor that becomes its own worst enemy? A mechanical rival born of powerful science with intelligence that quickly becomes superior to our own. An opponent that cannot be stopped, like a runaway train. What if the 21st century becomes the last time in history when humans have no real competition but other humans?

In a world ruled by science and technology, it is not necessarily the fittest but rather the smartest that survive. DARPA program managers like to say that DARPA science is “science fact, not science fiction.” What happens when these two concepts fuse?

Adapted from The Pentagon’s Brain.
