When it comes to the dangers of artificial intelligence (AI) and now artificial superintelligence (ASI) (a step beyond what is often called artificial general intelligence), I feel as if we've been transported onto the set of the 1983 film "WarGames."
In the film, teenage hacker David Lightman stumbles onto the military's most sensitive war scenario planning computer while believing he has simply found a soon-to-be-released game called "Global Thermonuclear War" on the server of a computer game company. Lightman activates the game, which ultimately makes personnel at the North American Air Defense Command (NORAD) mistakenly believe that the Soviet Union is preparing for an attack. On big screens throughout the war room, Soviet movements and preparations become ever more threatening by the hour. As we are told later, the object of the game is to win, and so the computer sets out to win a thermonuclear war.
When Lightman realizes what he's done, he seeks out the one person he believes can stop the madness. (I'm skipping a lot of steps here.) He catches up with the architect of that war planning computer system, Stephen Falken, who is living a solitary, anonymous existence (under a different name) in a home that he says is near a primary nuclear target. Falken explains to the young hacker: "A millisecond of brilliant light and we're vaporized. Much more fortunate than the millions who'll wander sightless through the smouldering aftermath. We'll be spared the horror of survival."
Lightman pleads with Falken to call his former associates at NORAD to tell them what is happening. Falken refuses, saying that the world might gain a few years if he makes the call, "but humanity planning its own destruction, that a phone call won't stop."
Like the fictional Stephen Falken, the computer industry's geniuses are now playing games with very complex systems (with literally trillions of parameters) called AI, systems that have emergent properties. Emergent properties are ones you don't program in and that you don't expect—not unlike the computer in "WarGames" making its users think that a simulation is the real thing. That's why we are now treated to a constant barrage of reports about so-called "hallucinations" emitted by AI programs, that is, information that is incorrect or simply nonexistent. See this listing for some interesting and disturbing "hallucinations." AI chatbots have also been known to counsel teenagers on how to commit suicide, and at least one teenager succeeded.
Right now what is called AI is primarily based on what are called large language models. This type of AI hoovers up huge amounts of text, typically from the web (often violating copyright), and "trains" on that text. What it really does when it responds to an inquiry is predict, based on statistical analysis, what the next word on a particular topic should be. It doesn't have "knowledge," just statistical inclinations based on its training, which is why it is prone to mistakes.
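To make the "predict the next word" idea concrete, here is a toy sketch in Python. It is a vastly simplified stand-in for a real language model: instead of a neural network with trillions of parameters, it just counts which word follows which in a tiny made-up corpus and predicts the most frequent successor. The corpus and function names are my own invention for illustration.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the web-scale text a real model trains on
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# "Training": count how often each word follows each other word
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# In this corpus "cat" follows "the" more often than any other word,
# so the model "predicts" it -- with no understanding of cats at all.
print(predict_next("the"))
```

The model has no knowledge of cats or mats; it only knows frequencies. Ask it about a word it never saw in training and it has nothing to offer, which is the toy-scale analogue of why statistical predictors confidently produce wrong or nonexistent "facts."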
The many minor mistakes that current chatbots make may seem amusing or merely inconvenient, unless the AI is used for critical purposes such as surgery, where mistakes can be, and already have been, very damaging. In a piece I wrote in 2024 I noted: "Now think about the mess AI will make if used without respect for its limitations in the fields of medicine and law where honed judgment from seasoned professionals who know the subject matter extremely well is crucial." The mess is upon us.
We are seeing troubling results in other fields as well. "Futurism" magazine contemplates the damage to the food system if ordering and delivery systems now increasingly powered by AI go on the fritz or if such systems are intentionally disabled by a cyberattack. No one working in the grocery stores would have any idea how to fix them, at least not in any timely manner. Unlike Stephen Falken, grocery workers may not be able to solve such a problem by simply picking up the phone to order more supplies. And a phone call will certainly do nothing to put a dent in the long-term vulnerability.
Think about what happens when airline computer systems go down. If that happens to the food system—say, through a nationwide cyberattack—we will be in a race against time since, due to increased logistics efficiency, most communities in the United States have only a three-day supply of food in grocery stores and restaurants.
Okay, you say. But we are in the shakedown period for AI. AI is really going to supercharge the economy, so we should keep working on it. Maybe that's true. But maybe it's not. "Futurism" magazine again reports: "In a new analysis of a survey published by the National Bureau of Economic Research and highlighted by Fortune, around 90 percent of the nearly 6,000 interviewed CEOs, chief financial officers, and other top executives at firms across the US, UK, Germany, and Australia, said that AI has had no impact on productivity or employment at their business."
With a record like that I'm not concerned that AI as it is currently configured will destroy human civilization unless a bunch of idiots decide to allow it to run critical infrastructure autonomously without human supervision. I suppose that could happen, but I think the accumulating disasters on the way there would slow and then stop such a trajectory.
What may be of even more concern is ASI. This refers to AI that acquires human-like general intelligence and then surpasses it. There is no agreed definition of "intelligence," let alone "general intelligence" or "superintelligence," apart from saying that the latter two refer to the kind of intelligence that humans have. So, it's not clear how we'd know if we humans have engineered such intelligence. In fact, I'm not convinced that this kind of machine intelligence is even possible in an entity that does not have a body with the same biological apparatus as a human.
But that doesn't mean that AI couldn't become much more powerful and have much more power over us if we allow it to. With increasingly human-like capabilities unconstrained by human values and institutions, it's just possible that such an intelligence could pursue its objectives—say, maximizing production of computer chips—by draining all the water from a city's reservoir.
Of course, we wouldn't allow that, would we? Even assuming good motives, with AI or ASI we cannot be assured of good outcomes. The very structure of how these systems work does NOT allow us to put in high-level restraints such as "You must follow all of Isaac Asimov's three laws of robotics." We can try, but these systems are designed in ways that create novel output and novel trajectories that cannot be foreseen. By the way, Asimov's three laws of robotics are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a fourth law, which he called the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." This, of course, was the supreme law above all the others.
But as the co-author of "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" points out in this interview, much of the excitement in Asimov's science fiction comes from robots that do NOT follow the laws of robotics.
The co-author, Nate Soares, notes that people who are developing AI and hoping soon to create ASI believe that ASI has a 10 to 25 percent chance of destroying civilization. Those are people inside the industry, not the critics! He explains that even if we accept the industry insiders' estimates of catastrophe, we should ask ourselves the equivalent of whether we'd get on a plane that has a 10 to 25 percent chance of crashing on the way to our destination.
In fact, any technology that has a nonzero chance of wiping out human civilization as a whole, that is, that carries such huge systemic risk, should by definition be abandoned. There is no amount of benefit that can outweigh even a small risk of destroying the entirety of the human cultural project.
Soares thinks it's possible for ASI to be made safe, but he doesn't think politicians understand the gravity of the threat. And no one in the industry wants to be the first to put restrictions on development without the entire industry adopting similar restrictions. So, we are now locked in a potential suicide pact from which no one who is profiting has incentive to escape.
Soares imagines a day when robots mine and process the materials used to make more of them and then assemble more robots in a factory and deliver them to other places where they are needed for various tasks, all without human intervention. Such robot factories could engage in innovation without bothering to tell us humans. And that innovation may turn out to be great for robot efficiency and performance, but maybe not great for us because those "innovations" so degrade the environment that they make it increasingly uninhabitable for humans. (I've added to Soares' example just a little to draw out the possible bad conclusions.)
With robots doing most everything we need done, what would that do to human agency? That turns out to be an important question when seen in the context of other important questions as Soares writes in a separate article in The Atlantic magazine:
This all adds up to a worrying picture, where companies are racing to build a kind of AI that would be very dangerous. How could machines possibly do something other than what we ask? Why would they wind up with drives of their own that we didn’t put there on purpose? Because nobody puts much of anything into AIs on purpose in the first place. AIs aren’t like traditional software, where every piece was put there by some programmer who knows precisely what it means. All sorts of weird drives and behaviors get trained into them, for reasons nobody entirely understands. They can and do act in ways other than their creators intended, and we’re already seeing the warning signs.
The fictional Stephen Falken gave up on designing war games for the military. The military was trying to figure out how to win a nuclear war. But Falken said he could never teach his war planning computer program the most important lesson: "Futility. That there's a time when you should just give up." He meant that there is really no way to win a nuclear war.
It's hard to imagine the titans of AI all turning into Stephen Falkens. But it is not impossible for an informed public to say no to technology that might just wipe us out. The keepers of the so-called Doomsday Clock now include AI as a serious threat to civilization alongside nuclear war, climate change, and biological research that threatens catastrophe if it is turned into a weapon or if a mistake releases a deadly novel micro-organism. Given the added dangers of AI, it shouldn't be surprising that the clock is set to its closest-ever position to midnight, which represents global cataclysm: just 85 seconds away.
It shouldn't be lost on us that these dangers are all of our own making. That means there's a possibility that we could unmake them if we have the will to do so.
Kurt Cobb is a freelance writer and communications consultant who writes frequently about energy and environment. His work has appeared in The Christian Science Monitor, Resilience, Common Dreams, Naked Capitalism, Le Monde Diplomatique, Oilprice.com, OilVoice, TalkMarkets, Investing.com, Business Insider and many other places. He is the author of an oil-themed novel entitled Prelude and has a widely followed blog called Resource Insights. He can be contacted at kurtcobb2001@yahoo.com.