AI Robopocalypse: A Movie That May Be but Will Never Be Real


Mar 11, 2020
COMMENTARY BY
James Jay Carafano

Vice President, Kathryn and Shelby Cullom Davis Institute

James Jay Carafano is a leading expert in national security and foreign policy challenges.

“Sophia the Robot” on stage before a Hanson Robotics discussion of Sophia’s multiple intelligences and artificial intelligence (AI) at the 2018 RISE Technology Conference. ISAAC LAWRENCE / Contributor / Getty Images

Key Takeaways

Michael Bay’s latest movie falls into the “artificial intelligence takes over the world” genre of cinema.

In Robopocalypse, a mass intelligence called “Archos” escapes human control and decides that its mission of preserving “life” requires wiping out the human race.

If AI brings the end of us, it won’t be because of threats like Robopocalypse. It will be because of us screwing up how we use AI.

From the sci-fi Transformers films to reality-based movies like 13 Hours: The Secret Soldiers of Benghazi, director Michael Bay has a long record of delivering kick-butt action flicks. His forthcoming Robopocalypse (which may or may not make it into theaters in 2020) is based on a best-selling science fiction novel, so it promises to be awesome, too.

Bay’s latest falls into the “artificial intelligence takes over the world” genre of cinema. And, if past is prologue, Robopocalypse’s most menacing moments are likely to stay on the silver screen, never to arise in the real world.

Computers gone bad have been a Hollywood staple for some time. From Colossus in The Forbin Project (1970) to the sinister SkyNet of the Terminator franchise (1984–2019), artificial minds too smart for their own good have wreaked havoc on mankind. But in real life we are no more likely to be menaced by machines than by vampires.

In Robopocalypse, a mass intelligence called “Archos” escapes human control and decides that its mission of preserving “life” requires wiping out the human race. The humans do not agree. Mayhem ensues. [I know, I know. This sounds a little too much like the uber-dud I, Robot (2004). But we’ll see.]

Originally, the world was supposed to end in 2016, in a version of the movie directed by Steven Spielberg, but real human minds intervened. Now, in theory, Robopocalypse is coming out this year.

Like all these stupid AI movies, the film ducks the real issue: Can this science fiction ever become science fact?

Elon Musk argues that AI creates “existential risks.” Ray Kurzweil, a director of engineering at Google and famous futurist, doesn’t go nearly that far, but he warns of “difficult episodes” ahead. There are, however, practical limits to the scale of AI concerns. Machine-versus-human wars like Dune’s “Butlerian Jihad” are probably off the table.

Here is why.

AI systems, once developed, are given more and increasingly complex problems to solve. As those assignments grow, the “contingencies”—the problems and choices the system has to deal with—grow exponentially. Pretty soon, the system can’t keep up. Scaling to a problem like battling humanity presents virtually infinite problems.
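To make that growth concrete, here is a toy calculation (my own sketch, not anything from the article): if a system faces a fixed number of choices at each decision point, the number of distinct contingencies it must plan for multiplies at every step and quickly becomes astronomical.

```python
# Toy illustration of combinatorial explosion: with b choices at each
# of d decision points, the system faces b**d distinct contingencies.

def contingency_count(choices_per_step: int, steps: int) -> int:
    """Number of distinct sequences of choices the system could face."""
    return choices_per_step ** steps

for steps in (5, 10, 20):
    print(f"{steps} steps -> {contingency_count(10, steps):,} contingencies")
```

With just 10 choices per step, 20 steps already yield 100 quintillion possible sequences; an open-ended goal like "battle humanity" has no fixed step count at all.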

Can computers ever match human ability in intuition, creativity, responding to uncertainty and dealing with complexity? Maybe not. Computer learning is limited by the algorithms programmed into the computer. Human brains are different. They are capable of non-algorithmic processing, solving problems without reference to a fixed formula.

So here is the bottom line: If AI brings the end of us, it won’t be because of threats like Robopocalypse. It will be because of us—more specifically, because of us screwing up how we use AI.

How is that different from how humans have had to deal with any technology since, oh, the invention of fire? The answer: there is no difference. People are the problem. They are also the answer.

This piece originally appeared in The National Interest.