The general narrative in the media regarding artificial intelligence (AI) is binary. Either AI is the greatest human invention ever, so powerful that it will catapult us into an age of abundance for all and solve our environmental problems along the way, or it is the most momentous human invention ever, one so dangerous that it could wipe out human civilization (and must therefore be tightly controlled or abandoned altogether). The trouble with binary narratives is that they leave out hundreds or even thousands of pieces of information that could lead to a far more nuanced understanding of AI.
Henry Farrell is a professor of international affairs at the Johns Hopkins University School of Advanced International Studies who, along with three co-authors, recently published an article entitled "Large AI models are cultural and social technologies." It appeared in Science magazine earlier this month. So what could a professor of international affairs possibly have to say about AI? Quite a lot, as it turns out. And, in a related article, Farrell shows how our societal understanding of AI sheds light on the activities of the so-called Department of Government Efficiency (DOGE), which seeks to tear out and discard large portions of the U.S. federal government. More on that later.
First, it's worth understanding what AI really is, and Farrell provides a surprisingly succinct explanation of the variant most in the news today, the Large Language Model: