The general narrative in the media regarding artificial intelligence (AI) is binary. AI is either the greatest human invention ever, so powerful it will catapult us into an age of abundance for all and solve our environmental problems as well, OR it is the most momentous human invention ever, one so dangerous that it could wipe out human civilization (and therefore must be tightly controlled or abandoned altogether). The trouble with binary narratives is that they fail to incorporate the hundreds or even thousands of pieces of information that would lead to a much more nuanced understanding of AI.
Henry Farrell is a professor of international affairs at Johns Hopkins University School of Advanced International Studies who recently published an article (along with three co-authors) entitled "Large AI models are cultural and social technologies." The article appeared in Science magazine earlier this month. So what could a professor of international affairs possibly have to say about AI? Well, it turns out quite a lot. And, in a related article, Farrell shows how our societal understanding of AI sheds light on the activities of the so-called Department of Government Efficiency (DOGE) that seeks to tear out and discard large portions of the U.S. federal government. More on that later.
First, it's worth understanding what AI really is, and Farrell provides a surprisingly succinct explanation of the variant most in the news today, Large Language Models:
But behind their agent-like interfaces and anthropomorphic pretensions, Large Language Models (LLM) and Large Multi-modal Models are statistical models that take enormous corpora of text produced by humans, break them down into particular words, and estimate the probability distribution of long word sequences. This is an imperfect representation of language but contains a surprisingly large amount of information about the patterns it summarizes. It allows the LLM to predict which words come next in a sequence, and so generate human-like text. Large Multi-modal Models do the same with audio, image, and video data.
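To make the quoted description a bit more concrete, here is a deliberately simplified sketch in Python of the "predict the next word" idea. It is a toy bigram counter over a made-up corpus, not an actual LLM (real models use neural networks over subword tokens and vastly more text); the corpus, function names, and sample outputs are purely illustrative.

```python
# Toy illustration of next-word prediction: count which words follow which,
# turn the counts into probabilities, then sample a continuation.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_distribution(word):
    """Estimated probability of each word that follows `word` in the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, length=6, seed=0):
    """Extend a sequence by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        dist = next_word_distribution(words[-1])
        if not dist:
            break
        choices, probs = zip(*dist.items())
        words.append(rng.choices(choices, weights=probs)[0])
    return " ".join(words)

print(next_word_distribution("the"))  # e.g. {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(generate("the"))                # e.g. "the dog sat on the mat ."
```

The point of the sketch is only that the "knowledge" here is a statistical summary of patterns in the training text, which is exactly the framing Farrell and his co-authors use.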
Farrell warns that we shouldn't mistake this for what is called artificial general intelligence (AGI), that is, the machine version of the kind of intelligence that humans have, something we are not even close to achieving. Rather, LLMs should be regarded as extensions of our current capabilities which "combine the features of [existing] cultural and social technologies in a new way" and which "generate summaries of unmanageably large and complex bodies of human-generated information."
Therefore, these models "are analogous to such past technologies as writing, print, markets, bureaucracies, and representative democracies" which gather and process information to make it more useful and available to us. Farrell and his fellow authors believe that LLMs will likely have a substantial impact on society, but will augment rather than replace human intelligence.
Which brings us to Farrell's understanding of the DOGE phenomenon. In his other piece mentioned above, titled "Should AGI-preppers embrace DOGE?", Farrell explains that many technologists believe AGI is coming soon. He cites one who predicts its arrival in 2026. And many of those who believe in the near-term advent of AGI also believe it will eclipse most, if not all, human decision-making in speed and quality. This would mean that the American federal bureaucracy—and, in fact, every bureaucracy in any country that can afford to deploy AGI—is vastly oversized and will have to be cut down considerably as AGI algorithms take over day-to-day operations and even policy-making at higher and higher levels.
Obviously, Farrell doesn't buy into this view and neither, as it turns out, do three quarters of AI researchers. But this makes it clear why AGI believers think slashing people from government and corporations should commence immediately.
Readers may be wondering whether we should take a wait-and-see attitude before hacking bureaucracies to pieces; no one has clearly defined or demonstrated what will take their place. The idea that such bureaucracies could be more efficient is not new. But clearly what's happening now is not about efficiency; otherwise, there would be careful consideration of the tasks to be done and the qualified people needed to do them.
If the idea behind the cuts is simply to cripple government so that the public will support radical downsizing, then the jig is up. The cost-cutters are intentionally injuring the capacity of government to function in order to engineer support for its downsizing without taking responsibility for causing the very injuries that are creating the problems. The argument that government doesn't work evaporates when you are the one intentionally making it not work.
Even if AI does help government become more effective and efficient, it will still need people to use it, people who understand its capabilities and limits. The well-known arguments about those limits are worth repeating. First, the map is not the territory. Images are representations of reality, not reality itself. So, too, words are representations rather than reality itself, and they are not lived experience. Lived experience is what gives people judgment. LLMs cannot apply judgment to any situation because they do not and cannot draw from lived experience.
Second, LLMs are constructed using computer code, and code is a vastly more limited subset of language; it cannot capture everything that everyday language can.
Third, there is the garbage-in, garbage-out problem. LLMs are only as good as the data they are trained on, and much of that data may be inaccurate, outdated, or biased by the viewpoints and experiences of its contributors. LLMs are also known to "hallucinate," for example, by citing legal documents and academic articles that do not exist.
Fourth, so far LLMs are only useful if you as a user have enough knowledge to spot the hallucinations. In other words, you have to know the discipline within which you are seeking information well enough to exercise independent judgment. And, that suggests that for now LLMs will have more limited usefulness than advertised.
Fifth, AI-generated content has the unfortunate side effect of pushing the very people who create the content AI trains on out of their professions. AI is already replacing human-written stories in some media organizations. When all the reporters are gone, who will be left to do the actual reporting upon which AI bases its articles? And if AI music and AI art gain wide appeal, who will provide the new tunes and paintings to feed AI once the musicians and artists have been driven into some other pursuit to make a living?
If you find dealing with an AI-generated bot on a government website frustrating now, think of what it will be like when there are no humans to talk to in order to straighten out your benefits or to understand whether your situation constitutes an emergency. AI may "understand" your questions in some fashion, but it almost certainly will not be able to judge whether you are asking the wrong questions in your situation and what the right ones might be.
There is the cautionary tale of the bank that decided it would no longer allow its customers to use teller window services for routine deposits and withdrawals under a certain amount. I forget the cutoff, but I remember the incident because it made big news when a competing bank came out with an advertising campaign with the headline: Our tellers love people. That was the end of the transaction limit.
Will citizens faced with the ongoing encroachment of AI in their interactions with companies and now government revolt? Maybe researchers will someday create "personal" AI and I'll be able to work out a problem with my bank or an online retailer by saying: "I'll have my bot talk to your bot." I'm not at all certain that that conversation would lead to a satisfactory result.
Kurt Cobb is a freelance writer and communications consultant who writes frequently about energy and environment. His work has appeared in The Christian Science Monitor, Resilience, Common Dreams, Naked Capitalism, Le Monde Diplomatique, Oilprice.com, OilVoice, TalkMarkets, Investing.com, Business Insider and many other places. He is the author of an oil-themed novel entitled Prelude and has a widely followed blog called Resource Insights. He can be contacted at kurtcobb2001@yahoo.com.