How Humans Can Keep Superintelligent Robots From Murdering Us All

Ultron, an artificially intelligent robot (Marvel)

While Kevin Drum is focused on getting better, we’ve invited some of the remarkable writers and thinkers who have traded links and ideas with him from Blogosphere 1.0 to this day to contribute posts and keep the conversation going. Today, we’re honored to present a post from Bill Gardner, a health services researcher in Ottawa, Ontario, and a blogger at The Incidental Economist.

This weekend, you, I, and about 100 million other people will see Avengers: Age of Ultron. The story is that Tony Stark builds Ultron, an artificially intelligent robot, to protect Earth. But Ultron decides that the best way to fulfill his mission is to exterminate humanity. Violence ensues.

You will likely dismiss the premise of the story. But in a book I highly recommend (Superintelligence: Paths, Dangers, Strategies), Oxford philosopher Nick Bostrom argues that sometime in the future a machine will achieve “general intelligence,” that is, the ability to solve problems in virtually all domains of interest. Because one such domain is research in artificial intelligence, the machine would be able to rapidly improve itself.
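
A back-of-the-envelope way to see why that recursion matters (a toy sketch of my own, not an argument from Bostrom's book): let I(t) stand for a machine's capability at time t. If only human engineers improve the machine, capability grows at a roughly fixed rate, dI/dt = c, and I(t) rises linearly. But once the machine can apply its own capability to AI research, the rate of improvement scales with capability itself, dI/dt = kI, whose solution I(t) = I(0)e^(kt) grows exponentially: every gain in capability speeds up the next round of gains.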

The abilities of such a machine would quickly surpass our own. The difference, Bostrom believes, would not be like that between Einstein and a cognitively disabled person. The difference would be like that between Einstein and a beetle. When this happens, machines could, and likely would, displace humans as the dominant life form. Humans may be trapped in a dystopia, if they survive at all.

Competent people, including Elon Musk and Bill Gates, take this risk seriously. Stephen Hawking and physics Nobel laureate Frank Wilczek worry that we are not thinking hard enough about the future of artificial intelligence. They write:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here—we’ll leave the lights on”? Probably not—but this is more or less what is happening with AI…little serious research is devoted to these issues…All of us…should ask ourselves what can we do now to improve the chances of reaping the benefits and avoiding the risks.

There are also competent people who dismiss these concerns. University of California-Berkeley philosopher John Searle argues that intelligence requires qualities that computers lack, including consciousness and motivation. This doesn’t mean that we are safe from artificially intelligent machines. Perhaps in the future killer drones will hunt all humans, not just Al Qaeda. But Searle claims that if this happens, it won’t be because the drones reflected on their goals and decided that they needed to kill us. It will be because human beings have programmed drones to kill us.

Searle has made this argument for years, but has never offered a reason why it will always be impossible to engineer machines with autonomy and general intelligence. If it’s not impossible, we need to look for possible paths of human evolution in which we safely benefit from the enormous potential of artificial intelligence.

What can we do? I’m a wild optimist. In my lifetime I have seen an extraordinary expansion of human capabilities for creation and community. Perhaps there is a future in which individual and collective human intelligence can grow rapidly enough that we keep our place as free beings. Perhaps humans can acquire cognitive superpowers.

But the greatest challenge of that future will not be the engineering of this commonwealth of enhanced minds, but rather its governance. So we have to think big, think long-term, and live in hope. We need to cooperate as a species and steer our technological development so that we do not create machines that displace us. At the same time, we need to protect ourselves from the expanding surveillance and control of our current governments (such as China's Great Firewall or the NSA). I doubt we can achieve this enhanced community unless we also find a way to make sure the superpowers of enhanced cognition are available to everyone. Maybe the only alternative to dystopia will be utopia.
