
The Singularity Might Redefine What It Means to Be Human and Machine

Someday soon, computers could attain superintelligence and revolt. But futurist Nick Bostrom sees a future that's far less grim.

By Nathaniel Scharping | Feb 8, 2021 3:33 PM
(Credit: 13_Phunkod/Shutterstock)


This article appeared in the March/April issue of Discover magazine as "Embracing the Singularity."


Ever since computers took shape — first filling rooms, then office desks, then pockets — they have been designed by human minds. Over the years, plenty of people have asked: What would happen if computers designed themselves?

Someday soon, an intelligent computer might create a machine far more powerful than itself. That new computer would likely make another, even more powerful, and so on. Machine intelligence would ride an exponential upward curve, attaining heights of cognition inconceivable to humans. This, broadly speaking, is the singularity.

The term dates back more than 50 years, to when scientists were just beginning to tinker with binary code and the circuitry that made basic computing possible. Even then, the singularity was a formidable proposition. Superintelligent computers might drive leaps forward in everything from nanotechnology to immersive virtual reality to superluminal space travel. Instead of being left behind with our puny, cell-based brains, humans might merge with AI, augmenting our brains with circuits, or even digitally uploading our minds to outlive our bodies. The result would be a supercharged humanity, capable of thinking at the speed of light and free of biological concerns.

Philosopher Nick Bostrom thinks this halcyon world could usher in an entirely new age. “It might be that, in this world, we would all be more like children in a giant Disneyland — maintained not by humans, but by these machines that we have created,” says Bostrom, the director of Oxford University’s Future of Humanity Institute and the author of Superintelligence: Paths, Dangers, Strategies.

Depending on where you stand, this might sound like a utopian fantasy, or a dystopian nightmare. Bostrom is well aware of this. He’s been thinking about the emergence of superintelligent AI for decades, and he’s intimately familiar with the risks such creations entail. There’s the classic sci-fi nightmare of a robot revolution, of course, where machines decide they’d rather be in control of the Earth. But perhaps more likely is the possibility that the moral code of a superintelligent AI — whatever that may be — simply doesn’t line up with our own. An AI responsible for fleets of self-driving cars or the distribution of medical supplies could cause havoc if it fails to value human life the same way we do.

The problem of AI alignment, as it’s called, has taken on new urgency in recent years, due in part to the work of futurists like Bostrom. If we cannot control a superintelligent AI, then our fate could hinge on whether future machine intelligences think like us. On that front, Bostrom reminds us that there are efforts underway to “design the AI in such a way that it would in fact choose things that are beneficial for humans, and would choose to ask us for clarification when it is uncertain what we intended.”

There are ways we might teach human morality to a nascent superintelligence. Machine learning algorithms could be taught to recognize human value systems, much like they are trained on databases of images and texts today. Or, different AIs could debate each other, overseen by a human moderator, to build better models of human preferences.
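The first of those approaches can be made concrete with a toy example. What follows is a minimal sketch, in Python with NumPy, of learning a preference model from pairwise human judgments; the three-feature dilemma, the simulated judge, and the hidden value vector are all hypothetical illustrations, not a real alignment system or anything Bostrom specifically proposes.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features for a toy dilemma, e.g.
# [lives protected, property saved, rules broken].
# A hidden "human" value vector generates the training labels.
true_values = np.array([3.0, 1.0, -2.0])

def human_prefers(a, b):
    # Simulated human judge: picks the option with the higher true value.
    return a @ true_values > b @ true_values

# Collect pairwise comparisons, the same shape of data a real
# preference-learning pipeline gathers from human raters.
pairs = []
for _ in range(500):
    a, b = rng.normal(size=3), rng.normal(size=3)
    pairs.append((a, b, 1.0 if human_prefers(a, b) else 0.0))

# Fit a Bradley-Terry-style model:
# P(a preferred over b) = sigmoid(w . (a - b)).
w = np.zeros(3)
for _ in range(200):
    grad = np.zeros(3)
    for a, b, label in pairs:
        p = 1.0 / (1.0 + np.exp(-w @ (a - b)))
        grad += (label - p) * (a - b)
    w += 0.1 * grad / len(pairs)  # gradient ascent on the log-likelihood

# The learned direction recovers the hidden values, which is all a
# preference model needs to rank options it has never seen.
print("learned:", np.round(w / np.linalg.norm(w), 2))
print("hidden: ", np.round(true_values / np.linalg.norm(true_values), 2))

The Bradley-Terry setup here, where the probability of preferring one option over another depends on the difference in their scores, is the same basic shape used in real preference-learning pipelines, just with far larger models and actual human raters in place of the simulated judge.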

But morality cuts both ways.

There may soon be a day, Bostrom says, when we’ll need to consider not just how an AI feels about us, but simply how it feels. “If we have machine intelligences that become artificial, digital minds,” he continues, “then it also becomes an ethical matter [of] how we affect them.”

In this age of conscious machines, humans may just have a newfound moral obligation to treat digital beings with respect. Call it the 21st-century Golden Rule.
