Plato, through the words of the central character in his famous dialogues, the philosopher Socrates, tells us that the invention of writing severely impaired human memory. The impairment resulted in part from disuse. We humans no longer had to commit to memory important information that could now be rendered on the page. Socrates insists that living memory is far better and far more responsive to inquiry than the written word.
Human learning has not, however, disappeared or even diminished in the age of the written word, but rather prospered, as the wisdom of the ages can be readily passed down to each generation. The invention of moveable type in the 15th century spread the written word across the world, making it accessible as never before. Plato must have realized the irony that he was preserving Socrates' argument against writing for future generations by writing it down. Plato could not have guessed, however, that 2,500 years later his writings would be part of the canon of Western philosophy and that moveable type and modern transportation and communications would make his writings available practically anywhere.
Modern communications devices and adjuncts to learning and investigation such as artificial intelligence (AI) programs bid us remember how writing itself was once critiqued and how that critique was in large part dispelled by subsequent events. But ought we to be so sanguine about our reliance on such devices as cellphones, computers, and the emerging AI programs? Do they aid us or dull our abilities? Do they allow us to inquire more deeply into the human world or cause us to become more divorced from it?
I suppose the general answer is: It all depends on how we use these tools.
I'm old enough to remember when handheld electronic calculators were just being introduced and only a few of my fellow high-school students chose to spend about $150 to get one (the equivalent of paying almost $1,000 today). I learned manual methods in my math classes for calculating answers and found one of these methods so useful that I still use it today for adding a written column of numbers; it is quicker than entering all the numbers into a spreadsheet or calculator.
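That "almost $1,000" figure is just an inflation adjustment, a ratio of consumer price indexes. A quick sketch in Python, using CPI values that are my own rough assumptions rather than figures from any official table:

    # Rough inflation adjustment via a consumer-price-index ratio.
    # Both CPI values below are approximations assumed for illustration.
    cpi_mid_1970s = 46.0   # assumed CPI-U for the mid-1970s
    cpi_today = 310.0      # assumed recent CPI-U
    price_then = 150.00
    price_today = price_then * cpi_today / cpi_mid_1970s
    print(f"${price_then:.0f} then is roughly ${price_today:.0f} today")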
So, when a grandfather who is also a computer scientist recently asked his grandchildren—who were already holding their cellphones—what one-third of nine is, those grandchildren immediately went to their cellphones to find the answer. I'm used to doing math in my head to get a quick estimate of the answer to a real-world problem, an estimate I can confirm later with written or computer calculations. Will these young people never face a situation where estimating the answer to a math problem in their heads would be useful? I cannot foresee such a time arriving before the fall of our technical civilization.
But just as Plato explained, this grandfather believes that "our cognitive abilities weaken when we let technology think for us." Defaulting to Google for every answer we don't know weakens our minds.
Richard Murphy, an accountant by trade but now a critic who writes extensively on public finance, explained in a recent piece that accounting firms are no longer seeking to train employees in taxation because tax questions will be answered by AI programs, or so the firms believe. But Murphy counters that "[t]he way you become good at tax is by reading a lot about it; by writing a lot about it (usually to advise a client); and by having to correct your work when someone superior to you says you have not got it right. There is a profoundly iterative process in human learning."
When I was a freshman in college, my advisor explained that in whatever profession I chose, I should seek experience in all the jobs from the bottom up. That way, when I became a manager, the people under me would not be able to fool me. The accounting firms do NOT understand that the managers they are now creating won't know whether the firm's AI tax program has answered a tax question correctly. The program will become the equivalent of "the people under me," and the new managers will be easily fooled by an authoritative-seeming piece of software.
Murphy adds that AI programs answer only the question they are given. They cannot know whether the question is the right question under the circumstances. In other words, AI cannot detect a wrong question and reorient the user to find the right one. It turns out that the only way to detect a wrong question is extensive experience with the subject matter and with the people you serve.
Nassim Nicholas Taleb, the self-styled student of risk and author of The Black Swan, summed up this problem very succinctly in a recent post on X (formerly Twitter): "VERDICT ON ChatGPT: It is ONLY useable if you know the subject very, very well. It makes embarrassing mistakes that only a connoisseur can detect." So, the AI programs that accounting firms are counting on to answer tax questions will only be useful to someone who is already thoroughly trained in tax law and tax accounting. Who knew?
Now think about the mess AI will make if used without respect for its limitations in the fields of medicine and law where honed judgment from seasoned professionals who know the subject matter extremely well is crucial.
One psychology professor explained AI this way: "It's a machine algorithm that's really good at predicting the next word. Full stop." The psychologist added that humans learn best in situations that include meaning, emotion, and social interaction. AI only learns from data that people give to it.
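To see what "predicting the next word" means in practice, here is a minimal sketch of the idea: a toy bigram model in Python. This is my own illustration, not the professor's; real large language models use neural networks trained on vast corpora, but the underlying task is the same: given the words so far, guess a likely next one.

    from collections import Counter, defaultdict

    # A tiny corpus; real models train on trillions of words.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count which word follows each word.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        """Return the most frequently observed next word, if any."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("sat"))  # 'on' -- plausible, yet nothing is "understood"

Notice that the model produces plausible continuations without any notion of meaning, emotion, or social context, which is exactly the psychologist's point.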
This raises the question: Where will all the expert data and words come from if no one is being trained to be an expert because "AI will take care of that"? We are once again back to having to become experts ourselves to know whether AI is giving us correct information.
It's worth noting that expertise does not actually reside on the page. It resides in the minds of a community of interacting experts who are constantly debating and renewing their expertise by evaluating new information, insights and data from experiments and real-world situations.
So, it turns out we never really abandoned the mind as a repository of memory. These communities of experts rely on a sort of common mind which they create to hold evolving information and views among the community members. Socrates would be pleased. But would AI be able to explain WHY he was pleased?
Kurt Cobb is a freelance writer and communications consultant who writes frequently about energy and environment. His work has appeared in The Christian Science Monitor, Resilience, Common Dreams, Naked Capitalism, Le Monde Diplomatique, Oilprice.com, OilVoice, TalkMarkets, Investing.com, Business Insider and many other places. He is the author of an oil-themed novel entitled Prelude and has a widely followed blog called Resource Insights. He can be contacted at kurtcobb2001@yahoo.com.
I was an engineering student when computers were large, slow and expensive (remember card decks?). Calculators were available, but they were also large, slow and expensive.
The joke at the time was that an engineer is someone who multiplies 2 by 2 on a slide rule, gets an answer of 3.9 and approximates it to 4.
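The joke has a real technical basis: a slide rule multiplies by adding logarithms read off a physical scale, so every reading carries a small error. A sketch of my own in Python, simulating reading each logarithm to only two decimal places:

    import math

    # Slide-rule multiplication: add the base-10 logs of the factors,
    # "reading" each log off the scale to limited precision.
    def slide_rule_multiply(a, b, decimals=2):
        log_sum = round(math.log10(a), decimals) + round(math.log10(b), decimals)
        return 10 ** log_sum

    print(slide_rule_multiply(2, 2))  # about 3.98 -- call it 4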
Throughout my career, I've had a great deal of exposure to the learning process, particularly for specialized knowledge. Some skills are definitely acquired along the way, and we get better with practice. Conversely, if we decide we can do without the mental gymnastics, we stop doing the work in our heads, and that skill atrophies. At one time, I could perform some fairly complex financial calculations in my head, but coming back to it now, it would take some time to regain that skill.
Today we have a lot of tools for that sort of thing, and people who formerly kept those skills are surrendering them, especially for more mundane tasks. There are also somewhat stricter legal requirements. For example, I was in a bank recently, talking about mortgage payments. The bank officer went to his computer to create a payment schedule. When I pointed out that I just wanted a "ballpark" payment amount, he said they are no longer allowed to provide informal quotes; everything MUST be done via an approved program.
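For what it's worth, the ballpark figure the officer couldn't quote comes from the standard fixed-rate amortization formula. A sketch in Python, with loan numbers invented for illustration:

    # Ballpark monthly payment on a fixed-rate loan using the standard
    # amortization formula. All inputs below are made-up examples.
    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12   # monthly interest rate
        n = years * 12         # total number of payments
        return principal * r / (1 - (1 + r) ** -n)

    # e.g., $300,000 at 6% over 25 years -> about $1,933 a month
    print(f"${monthly_payment(300_000, 0.06, 25):,.2f}")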
I noticed this effect perhaps over 30 years ago. If everyone relies on computers for this sort of task, after a short time only a small minority will still know how to do it, or understand how these things work at all. Organizations become heavily reliant on such people whenever a transaction or a whole process doesn't fit the fairly standard mold defined by a set of approved programs. Moreover, those who don't really know what these tools are doing are more prone to errors and less creative, which translates into being less competitive. I call this sad state, in which huge organizations depend by default on a few knowledgeable individuals, an "inverted pyramid": an inherently unstable condition.
Writing is valuable; it allows us to integrate the knowledge of those who lived before us. I think of writing as freeze-drying knowledge. It's all there, but it requires someone to assimilate it, re-animate the dry words and symbols back into understanding and insight, and re-embody them as one or more skills. Only then can the contribution of the original author be appreciated and applied to the challenges at hand.
Finally, we have the advantage of the fruits of more than one person's labor here. A typical physics student, for example, now has the life's work of the greatest and most insightful researchers at his fingertips, something almost none of his predecessors could claim. This integration of our understanding is essential for making progress.
In a century or so, we will be in a much better position to assess how AI has contributed to that knowledge base. I am skeptical, but we are still in the early days of including AI in these activities.