A scene from the 1992 film "The Player" seems to foretell the current dispute between studios and production companies and the writers who feed them content over the role of artificial intelligence. The dispute is one among many that have led to an ongoing strike by the writers. In "The Player," a studio executive, in a meeting with colleagues, says:
I've yet to meet a writer who could change water into wine and we have a tendency to treat them like that.
Then, the executive asks for headlines from a newspaper lying on the conference table and shows how those headlines can instantly be fashioned into recognized Hollywood movie formulas.
When another executive who reviews script submissions from writers finally comments, he says:
I was thinking what an interesting concept it is to eliminate the writer from the artistic process. If we can just get rid of the actors and directors, maybe we've got something.
What was a thought experiment in 1992 has become a reality today. Here is a short science fiction film written by AI. Here is one both written and directed by AI; ironically, it's about AI taking over the world. AI is now simulating actors' voices. And, at some point, virtual actors (NOT digital copies of real actors) could make film and television acting obsolete... or will it?
I can imagine a world in which people will come to crave a real live actor over an AI substitute. The advent of recorded music did not end the desire for live music performances. The advent of film and then television did not eliminate live theater performances. Strangely, recent vinyl record sales have exceeded CD sales—though downloads of music almost certainly outpace both. Old technologies and live performances (or recorded ones by human artists) are probably going to hang on much longer than anticipated.
The big question is whether AI can imagine the way human writers imagine. Nassim Nicholas Taleb, author of The Black Swan and Antifragile, suggested on Twitter that the answer is probably not. He noted that AI source material comes from the internet, where soon much of the posted material will itself come from AI. Taleb calls it "[p]rogressively a self-licking lollipop."
In addition, Taleb notes, "ChatGPT is the modern version of Flaubert's 'Dictionary of Received Ideas' (Dictionnaire des idées reçues), that is, a powerful cliché parroting engine." If you want entertainment and information that is entirely constructed to represent prevailing attitudes and understanding, then perhaps artistic and nonfiction output from AI will satisfy you.
But the problems with AI are actually much deeper. Arvind Narayanan, a Princeton computer scientist, calls AI a "bullshit generator." It provides a narrative answer to questions without having any idea about whether the narrative is accurate.
Tech investor Roger McNamee says that the so-called race to master AI is a huge mistake:
Those who support the current approach to AI argue that we cannot afford to slow down because we are in a race with China. Really? How does flooding the information ecosystem with bad answers, disinformation, and civil rights violations help us compete with China?
I have an even more basic critique. AI is and must be based on language and on a subset of our language, computer code. It is a conceit in tech circles that language can capture experience and so ultimately, not only will AI imitate humans, but it will exceed our abilities because of its vast information collecting and processing abilities.
But language is always limited, ambiguous, ever-evolving and so open-ended. Words mean different things in different contexts at different times. They never have definite, fixed meanings because those meanings continue to evolve with the society around them. That's why we have difficulty understanding the meaning of words we think we know when they were written 100, 500 or 1,000 years ago. Then there is the problem of trying to map all of the languages of the world to reality over many time periods.
Beyond language, human intonations and gestures are not always easy to describe and categorize, and they, too, have different meanings in different contexts. The often-quoted idea that more than 90 percent of human communication is nonverbal itself requires context and qualifiers. But clearly, accurate understanding of human communications depends heavily on perceiving and understanding tone of voice, gestures, body posture and clues about emotional state (such as crying or biting one's nails).
Yet, no matter how closely one observes all of this, it cannot be translated into mere language, and even less so into code. Human understanding goes well beyond language to feelings, images (real, imagined and dreamt), and intuitions that often cannot be expressed in language. Computer code barely touches this range of experience and reflects all the prejudices, limitations and ignorance of the people who write it.
And, once we go beyond the human realm to all of the natural world, we have a machine trying to describe, in nothing more complicated than code, the very system it sits inside, unable to observe that system in its entirety because the machine is embedded within it. Modeling the things we are trying to understand always leaves out a lot; otherwise, we end up with a model that is the thing itself, and of no use as a model, since models are useful precisely because they are compact, less expensive to create and far less information-dense than the things they represent.
For this reason I do not believe that autonomous vehicles operating outside of a closed course will be safe. The world we live in and the humans who inhabit it are ever-changing and not always predictable. And AI works primarily by predicting what comes next from a very limited knowledge of the past, a knowledge that is entirely language-based and thus woefully incomplete, unable to anticipate novelty that reaches beyond the mere statistical mixing of incident descriptions from the past.
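To see how narrow that kind of prediction is, consider a deliberately tiny sketch of next-word prediction, the statistical mechanism at the core of today's language models. This toy bigram model is my own illustration, not any actual production system; it can only remix word pairs it has already seen, and anything outside its training text stops it cold.

```python
import random
from collections import defaultdict, Counter

# A toy bigram model: it predicts the next word purely from counts of
# what followed each word in its training text. Real language models are
# vastly larger and more sophisticated, but the underlying move is the
# same: statistical prediction from past text.
training_text = "the studio hired the writer and the writer wrote the script"

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Pick a likely next word; return None for any word never seen in training."""
    candidates = follows.get(word)
    if not candidates:
        return None  # genuine novelty: the model has nothing to offer
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation from a seed word.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the writer wrote the script"
```

Every continuation this sketch produces is a reshuffling of pairs it has already encountered, which is the incompleteness described above, scaled down to a dozen lines.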
I'm certain that many readers have been able to predict the ending of a film or television show they are watching. That's because those stories are often based on formulas that don't allow for truly surprising endings. Sometimes this doesn't matter to us because we enjoy the characters and their paths to the conclusion anyway.
But I wonder how many AI-generated films or television shows will ever be able to surprise us with their plots, characters or endings. And, how could they possibly contain insights regarding emerging realities and trends if they are derived from the already stale information on the world wide web either posted as language or translated into it with no intuitive process to make novel associations and connections?
As for predictions of doom based on notions that AI will become more intelligent and powerful than humans, those predictions will only come true if we are foolish enough to turn over critical (rather than noncritical routine or trivial) decisions to the technology. I think the innovators who are first to hand over critical operations to AI will illuminate the folly for the rest of us. We should pay careful attention to their missteps.
P.S. For regular readers, I would be remiss if I did not acknowledge that AI depends on the hugely energy-intensive internet, which is growing in its energy consumption every day. Without a sufficient supply of energy, the internet will not function and therefore will not support the aspirations of the AI promoters. As fossil fuel depletion enters a stage that will lead not merely to our failure to keep up with demand, but to actual declines in energy consumption, the dreams of AI may very well evaporate along with cornucopian fantasies of plenty forever.
Kurt Cobb is a freelance writer and communications consultant who writes frequently about energy and environment. His work has appeared in The Christian Science Monitor, Resilience, Common Dreams, Naked Capitalism, Le Monde Diplomatique, Oilprice.com, OilVoice, TalkMarkets, Investing.com, Business Insider and many other places. He is the author of an oil-themed novel entitled Prelude and has a widely followed blog called Resource Insights. He can be contacted at kurtcobb2001@yahoo.com.
3 comments:
QUOTE: ***It is a conceit in tech circles that language can capture experience and so ultimately, not only will AI imitate humans, but it will exceed our abilities because of its vast information collecting and processing abilities.***
I wonder if this conceit might not be shared by those we (used to) call 'men of letters' as well. There are those who untiringly speak of literature and the 'Great Books' as if they were manna from Heaven. Thus Homer and Shakespeare not only use language to capture experience, they are veritable gods on earth who give life to language. (Wow!) This sort of conceit (and conceit it is indeed) turns me off no end, because it amounts to telling me that I'm a deficient human being if I don't like Homer or Shakespeare.
Well, maybe, just maybe, one day AI will produce literary masterworks that will make Homer and Shakespeare pale into insignificance, that will move anyone to tears regardless of the time or place s/he may live in. Hey, sci-fi author Stanislaw Lem played with that idea in his The Cyberiad.
Intelligence and wisdom are two different things. How would this post (and the rest of the world) be different if artificial wisdom were being developed?
I was half expecting this to end with the words 'This week's article was written using an AI chat bot' :)