Socrates ushered in a new age in philosophy. Academics categorize what came before as "pre-Socratic". He thought about thinking, argued about arguing, and taught about teaching. My type of fella. (Let's acknowledge his exposure to Eastern philosophy as well; nothing human happens in a vacuum.) Author of a new age, yes; author of any texts at all, no. Socrates took issue with the written word.
Everything we now know about this person's thoughts comes from the writings of his students, and their students. But to Socrates himself, the written word felt like a trap. Yes, we can write our words down, but once on the page, they are dead. Knowledge and wisdom are living things -- practiced, unfinished, and inextricable from the minds which carry and share them.
I don't need to entirely agree (I write, using words) but as I gain some small living wisdom myself, I concede that he has a point, and it's an important one.
Human animals do a lot of neat tricks. We're perhaps not as novel as we think, but in some ways we're also a bigger deal than we give ourselves credit for. Let's keep looking at "the written word" but expand the scope a bit to include written music. Staves, bars, notes, any and all markings on a page you might see in front of a musician.
Even to someone experienced and skilled enough to sight-read any piece of music, there is a difference between interpreting the symbols and hearing the music. The experience of the sound waves dancing their way across the nerves in our heads is something more rich and more alive than can be adequately captured in written glyphs. That difference can be staggering -- a lay person may perceive only gibberish of dots and lines on the page, but be moved to tears when hearing a performance of those same notes.
It is a marvelous human trick to translate dormant markings into living music. When the notes were first scribed, they served as a record, however incomplete, of a human's ideas and experience. It's shorthand. It takes practiced work to capture music onto paper, and it takes more practiced work to turn it back into music. Human minds inject a bit of themselves into each step of the process. Tuning systems and instrument craft refine over time; what we hear today is not identical to what Bach heard, but nonetheless we get to overlap to a meaningful degree with that specific human experience.
Words in any non-musical language are no different.
The words themselves are indeed dead, but if we take a leap where Socrates refused, we might trust that those who care to read them may breathe life in again. The original intent may be distorted; it may even become unrecognizable due to translation mistakes or cultural shifts or typos. Those of us who embrace the written word trust that some future human animal (even simply future versions of ourselves) will be able to either bring what is written alive again, or leave it behind in favor of other thoughts and ideas, which themselves will also ultimately decay and/or find new life.
So, what do poor old Socrates and the potentially-hollow written word have to do with "Artificial Intelligence"?
Large Language Models and various related techniques dominate AI research (and its funding) at the moment. You might have heard these things called "autocorrect on steroids" or "overgrown predictive text" or something to that effect. Those words feel hollow to AI engineers, but they're still fair comparisons. These models ingest as much written information as the tech companies can collect (steal) and run it through loads of statistical analysis. Some percent of the time, the word "quite" is followed by the word "often". Writings which contain the word "Socrates" are more likely to also contain the word "Plato" than "Florida". Sentences which contain the word "sentence" are sometimes near sentences which are "examples".
A neat parlor trick these programs can do with all those rules about words is to synthesize a coherent-ish conversation. When a human animal reads the "AI-generated" words (or hears a machine turn text into speech) we do what we always do when we read or hear: we receive and imbue the words with meaning from our own experiences, and breathe life into otherwise-inert material. It's a convincing trick, because it is tailor-made for a human recipient's brain to assemble the pieces into something real. After all, the words aren't generated out of thin air, they're made of human writing. Chopped-up human words, statistically puzzle-piece matched up and Frankensteined back together, ransom-note style.
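To make the trick concrete, here's a toy sketch of "overgrown predictive text": count which word follows which in a corpus, then chain random next-word choices together. The corpus and the `babble` function here are invented for illustration, and real models use vastly larger statistics and neural networks rather than a lookup table -- but the ransom-note spirit is the same.

```python
import random
from collections import defaultdict

# A toy corpus standing in for "as much written information as can be collected".
corpus = (
    "socrates taught plato and plato taught aristotle and "
    "aristotle taught alexander and socrates wrote nothing down"
).split()

# The "loads of statistical analysis", shrunk to a table of
# which words have followed which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def babble(start, length=8, seed=0):
    """Chain statistically plausible next words together, ransom-note style."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: this word never appeared mid-corpus
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(babble("socrates"))
```

Every word it emits came from the source text, and every two-word sequence it emits actually occurred somewhere in that text; any coherence a reader finds in the whole is supplied by the reader.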
It's worth noting that this goes well beyond written words. Different models chop up and rearrange our sounds and images and program code too, among other things. But it's conceptually the same trick, trained on and working with different data.
It's smoke and mirrors. We blew the smoke, and we glimpse ourselves in the mirrors.
Which leads nicely to the next point: all of this is computationally expensive. Ridiculously computationally expensive. Computers got smaller and faster and cheaper for decades, and they continue to do so, in a manner of speaking. But in order to simulate a manner of speaking, it takes a whole lot of computers using a whole lot of electricity, sourced principally from a whole lot of burning fossil fuels. These are incredibly wasteful tricks. The smoke and mirrors produce a lot of actual smoke.
A rational decision-maker might pause and think, maybe this isn't the best use of those vast amounts of energy. Maybe our resources ought to be spent making sure we don't kill ourselves off by polluting and altering the climate, here on that one world where we all live. Maybe we should direct our brightest problem-solving minds toward any of the many pressing problems the planet and its inhabitants are facing.
But that's rational thinking. Instead, we tech folks imagine a boot so big that we feel compelled to start licking it now. We pretend that the smoke and mirrors are real thoughts. A chat bot can pass a standardized test; we might as well entrust it with our life decisions! We suppose that being good at rearranging words is somehow the same as thinking through plans of action which will have consequences, intended and otherwise. And we pour all this treasure and energy into what could be the next big thing with the next big profits. Any other consequence is not worth chasing or even contemplating, apparently.
With 24-hour news cycles, on through social media, and now on through generative content, we've offloaded far too much of our thinking. We refrain from embracing our own humanity. Socrates saw the danger right away: the written word truly has gotten us into trouble, letting us shirk our own minds' responsibilities. It's gone on for so long that human intelligence seems to have fallen out of fashion.
None of this is automatic; everything I've babbled about here is a choice. Any of us could at any moment turn all this new noise off, and work at rebuilding the living knowledge we've allowed ourselves to lose. Perhaps ironically, the written word would be one of our best allies in such efforts.
When an interesting question enters my head, I do as many of us do and reach for a search engine. If it's real knowledge or wisdom I'm after, though, I'm honestly a lot better off if instead I go to a library and do some research the old-fashioned way. Turning dead words back into human knowledge, without the screens of capitalism running interference, feels novel and refreshing at this point. Maybe that could catch on. Much crazier things have happened.