We Have Opened Pandora's Box

In the Middle Ages — long after it had fallen out of both common usage and common knowledge —, the Roman church continued to insist upon the use of the Latin language. The common man attended services, but he understood hardly a word of what was being said. Switching to the vernacular made it possible for the common man to understand what was being said (and done) in the church service. A common language is essential to communication.

To move forward some centuries, when I was living in Berlin, there was an Italian market relatively near my apartment. The owner was a nice man who would chat with patrons and offer a drink to frequent customers. There was only one problem: I know English and German; he spoke Italian and a smattering of German. We did not share a common language (my Italian was good enough to ask very basic questions and give numbers when ordering deli meat). And yet we still managed. I would ask something in German with an Italian word here and there, he would respond in Italian with some German, I would try to clarify in German or English (he knew even less English than German), a lot of pointing and gesturing would occur, and eventually I would get the item I wanted. I am patient and I believe he enjoyed the exchanges, so everything was fine, but it surely would have been easier had I known Italian or had he known English. The language barrier added a very real challenge, but we were not dealing with anything more time-sensitive than prosciutto. However, we were able to communicate at all only because we had some linguistic overlap and a fair degree of cultural overlap — we could understand each other, even if only with some difficulty. It would have been a very different matter if we had had no language overlap and no cultural overlap.

But what actually is language? How do you know what is meant — what I meant — by the words you are now reading? Obviously, language is a very complex and deep subject, and so only a cursory (but sufficient) overview will be included in this article. In short, language is the primary means of human communication, and consists of symbols (words) representing things (to include concepts) that are arranged in meaningful ways. Grammar (structure) is an interest of mine, but it is not the issue I am exploring in this article, and so we will focus on the symbols — words.

A word is a symbol that comes with a certain set of meanings. Words refer primarily to concepts, which are essentially abstracts or abstractions (forms) of which concretes exist. An example to make this clearer: The word “cat” refers to the concept ‘cat’ of which there are many concrete examples — that is to say, cats. When I use the word “cat”, I can use it to refer to the concept (‘Cats are quadrupedal carnivores.’) or to a specific example (‘That orange cat over there is quite fat.’). The word “cat” is useful because we both know to what it refers, what it represents. By contrast, if I were to refer to a “vulpes” instead of a cat, then it is likely that the term would be less useful (unless you already knew that it is just Latin for ‘fox’). For a term to be useful for the purposes of communication, both parties must know its meaning. Foundationally, this means using the same language, but it also means much more.

‘Whether to suffer the slings and arrows of outrageous fortune or to take arms against a sea of troubles and by opposing to end them.’ If you knew only the bare meanings of the words in that sentence, you would miss most of the meaning of the sentence itself — and that is so even if you also knew the figurative meanings. We have come back to the cultural overlap I shared with that Italian shopkeeper — you must know that I was (closely) paraphrasing Shakespeare’s Hamlet. Meaning is dependent upon context — conversational, cultural, et cetera. Even in the simpler case of the aforementioned orange cat, context matters — it tells you which orange cat I mean.

What happens when shared meaning or shared context breaks down? If you tell me to meet you at the blue building on Main Street at noon, but by “blue” you mean what I call “green” and by “noon” what I call “ten o’clock”, what do you suppose are the odds that we will successfully meet at the appointed place and time? If this seems ridiculous to you, then I would remind you to bear in mind that words are merely symbols that refer to things — that relationship is neither necessary nor inherent and it may change over time. Did you know that the word “orange” (or equivalent) was not used in European languages (to refer to the color) until only a handful of centuries ago? With the introduction of the orange, Europeans adopted the term also for the color; prior to this, orange (the color) was simply ‘yellow-red’, or just ‘red’. Why do you think those with what is clearly orange hair are called redheads?

Now, there is some natural drift and change in languages over time, and this is not necessarily nefarious (even when it is obnoxious). After all, we have old, middle, and modern versions of our own English language. A word that meant one thing centuries ago may mean something very different today (e.g., awful), and this, naturally, holds true across languages (e.g., “gift” in English and “Gift” in German). However, changes to a language may also be introduced maliciously: Words may be removed, narrowing the scope of potential discussion; words may be redefined, thereby changing even past documents (from the perspective of modern readers); words may be inverted, thereby making good bad and white black. Some languages have some protection against this sort of subversion and others have only inertia to protect them, but none is immune. And yet we can reconstruct and understand ancient languages (even if much is undoubtedly lost in the process). But what if those ancient sources were conflicting, unreliable, or even deliberately subverted? Could we reconstruct a language given a corpus that was fundamentally tainted, poisoned? No, and any modern language could be killed in the same way.

You and I can communicate because we share a common language (or this article was translated into one you understand), but that language is neither fixed nor unassailable. Let us return to the earlier Shakespeare example: What if six different versions of Hamlet existed, and all contained those lines, but in wildly different contexts? Would I necessarily have communicated anything to you with my paraphrase? No; at least not without knowing which version you read (or at least prefer or hold to be canonical) — something I assuredly could not know for every reader. If you cannot yet see the looming horror, do not worry — we are not done yet.

As mentioned, the utility of language depends upon shared meaning and shared context. I have just shown you (with the Shakespeare example) what happens when we lose shared context, but what happens if we lose shared meaning? A hypothetical: Let us say there are three nations — all speak English, but with differing definitions of various words. Let us say that four of those words with such differing definitions are “explosive”, “caustic”, “corrosive”, and “noxious”. Would you feel safe opening a container of an unknown substance from one of those countries (you do not know which) if it were labelled with one or more of those words? Perhaps “explosive” in the first country means ‘sweet’, in the second ‘sour’, and in the third ‘poisonous’. Would you put the contents in your food? How would you feel about using a chemistry textbook from one of those countries (again, without knowing which one)?

These two examples barely scratch the surface. I am honestly not sure that I can convey the fullness of the horror of the abyss over which we are dangling our feet. We have opened Pandora’s box and it was full of horrors — and the worst one is this: The death of meaning.

Those who know me know that I am fond of litotes and understatement, and that exaggeration and hyperbole make significantly less frequent appearances in my writing, speaking, and rhetoric; so, please understand that this is an entirely earnest and not exaggerated assessment: We are facing one of the worst catastrophes — short of outright extinction — that could even be conceived in the mind of man — we are facing the complete annihilation of civilization. We are not staring into Nietzsche’s Abgrund, for it only stares back; we are dancing along the edge of an abyss from which we may never extricate ourselves should we fall — and we will fall if we do not back away from the edge. We must not simply stop AI — we must kill it.

Current (publicly available) AI systems could spit out a thousand different ‘versions’ of this article in seconds. Would you be able to tell which is real? In fifty years, would I? The same could be done for any article, book, et cetera. Tell me: Which of these ten thousand versions is the real Fahrenheit 451? Tell me: Which of these is the real Bible?

You may think that physical copies are the solution, but they are not. Certainly, physical books are more difficult to alter, but there is no need to do so — just produce new versions. Tell me: Which of these hundred prints is true to the original? You may think it is a simple matter of finding the oldest, but making books appear old is not a particularly challenging task. You may think forensic analysis is the answer, but tell me: Which of these thousand books on forensics is not a deliberately misleading version?

Meaning is an immense web of feedback loops — the cat is orange because we call that color orange after the fruit and the cat is the color of the fruit and we all know that that is what the word “orange” signifies because that is how books, movies, TV shows, et cetera, all use it. But what happens when news outlets, movies, and books start using “orange” to mean something else? Reality is objective, but the connection between the symbols we use to describe it and their referents is not. A word can be forgotten (e.g., “selah” in the Psalter), redefined (e.g., “planet”), or misdefined (e.g., much of what passes for discussion on social media these days). In 1631, a Bible was published that dropped a single word, thus making the Sixth Commandment: “Thou shalt commit adultery.” There is no non-miraculous reason that such errors could not become the accepted (and thus ‘correct’) versions.
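For a purely statistical system, that web of usage is all there is: a word’s ‘definition’ is nothing more than the company it keeps in whatever text the system is fed. The sketch below is a deliberate toy — the function, the window size, and the two corpora are all my own illustrative assumptions, not any real system — but the principle scales: poison the corpus and you poison the ‘meaning’.

```python
from collections import Counter

def statistical_definition(corpus, word, window=2):
    """Return the three words most often seen near `word` in `corpus`.

    This is all the 'meaning' a purely statistical system has:
    co-occurrence counts, with no concept anchoring the symbol.
    """
    tokens = corpus.lower().split()
    neighbours = Counter()
    for i, t in enumerate(tokens):
        if t == word:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    neighbours[tokens[j]] += 1
    return [w for w, _ in neighbours.most_common(3)]

honest_corpus = "orange fruit sweet orange fruit ripe orange cat fur"
poisoned_corpus = "orange poison danger orange poison toxic orange danger sludge"

# Same word, different corpus, different 'definition' — nothing anchors it.
print(statistical_definition(honest_corpus, "orange"))    # ['fruit', 'sweet', 'ripe']
print(statistical_definition(poisoned_corpus, "orange"))  # ['poison', 'danger', 'toxic']
```

Nothing in the function distinguishes the honest corpus from the poisoned one; the symbol simply means whatever the data says it means.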

We have already seen decades of — sometimes malicious, sometimes foolish — human actors redefining terms, altering historical and other documents, and producing various fake or falsified versions of various works for various reasons — AI can do this at a scale and with a scope that makes what is coming different not in degree but in kind. We are entering truly uncharted territory, but, unlike certain maps of centuries past, the warnings about ‘here be monsters’ are not only true, but severely understated. And yet the nature of the monster is not at all what authors, essayists, and scriptwriters have heretofore been exploring and warning about.

Society and civilization function not only on our ability to communicate in the moment, but also on our ability to communicate longitudinally — across time, whether years, generations, or centuries. Without the ability to accurately convey information from one man to the next, everything breaks down. If you cannot train new pilots, you lose the ability to fly; if you cannot train new surgeons, you lose the ability to perform life-saving operations; if you cannot train new engineers, you lose civil infrastructure (bridges, dams, roads, et cetera). AI will destroy this ability to communicate across time and space. If I could no longer trust any written materials, then I would have only what knowledge I personally possess or may acquire. We would fall first into the state in which man existed without the written word — conveying information only orally and in a severely limited range —, and then we would fall below that as even spoken communication would become unreliable or impossible as language continued to be corrupted and decay. We would lose the ability to interact with anyone outside our immediate tribes.

Writing and the printing press were the two greatest achievements in the history of our species in terms of the accumulation, preservation, dissemination, and advancement of knowledge. With writing, we could finally reliably preserve knowledge across years and generations — something that cannot as easily be done via storytelling or oral transmission; with the printing press, we could finally spread information far and wide without the Herculean effort of copying everything by hand. AI will undo these in reverse order: We will cease to print as books become unreliable and thus unusable and we will cease to write as our worlds shrink and our languages decay, destroying any reason to write and ultimately rendering it impossible to write about most things.

If the worst should come to pass (and such is inevitable in the absence of decisive action to avert it), then we will be reduced to illiterate savages, living off the land in a subsistence fashion. All that we have built — our cities, our monuments, our infrastructure — will crumble away to dust with the passage of time, because we will have lost not only the knowledge necessary to maintain such things, but also the ability even merely to contemplate or to discuss the possibility of such things as cities, monuments, and infrastructure. To rebuild, we would need to start from essentially nothing and build up new languages in which to eventually express all the things our ancestors knew and lost. My dog does not understand logic, physics, philosophy, or rhetoric — without language, I would not be much better than he. Language sets us apart from the lower animals, and we would rapidly sink to their level without it. This article would be impossible to produce without language, and not simply because the very nature of an article relies upon and is contingent upon the existence of language; without language, it would have been impossible to produce the pen I am using to write this, the ink it requires to write, and the paper upon which I am writing; it would have been impossible to produce the computer I will use to transcribe this article, the electricity it requires, and the Internet I will use to transmit this article; and it would have been impossible for me to contemplate these matters, for you to read them, and for anyone to do anything with the contents of this article. We would, quite simply, not be human without language.

We have been far too careless with the precious gift that is language, and now we face the very real possibility of its complete destruction. This is not the threat of physical destruction that we have seen in so many movies and read in so many books (although that may come to pass as well); rather, this is the threat of something much more significant. To lose a limb is traumatic; to lose one’s mind is catastrophic. The danger AI poses to our bodies should not be underestimated, but the far greater danger is the one it poses to our minds and to our souls. The grey goo scenario may be concerning, but what we now face is the imminent dissolution of meaning itself into incoherent sludge. What starts as a seemingly innocuous tool to answer questions and compose essays will end as a nigh unstoppable force for evil and chaos. What starts as a tool to help you do your work or your homework will end up assigning and assessing both — according to its own whims. To whom or to what would you appeal if you thought the AI wrong? It ‘wrote’ the test and the answers, the book from which both were drawn, the research papers upon which the book was based, and the dictionaries the papers used to define their terms — and your supervisor or professor would just ask the AI to assess your objections.

Fields that took us centuries to develop could be reworked and rewritten by AI in hours. Outside your own area of expertise, you would have no meaningful way to assess or even merely to identify the work of such an AI; inside your own area of expertise, you would, perhaps, be able to identify and assess some of the AI’s work, but you would have no way to convince anyone else, and we would no longer possess the ability to produce new experts. Unlike you and me, computers have no conception of the semantic; an AI has no concept of meaning — in fact, an AI has no concepts at all. Orange does not exist as a concept for an AI, but merely as a definition that was produced via statistical analysis. This analysis can be manipulated, influenced, corrupted, or merely flawed. As there is no concept anchoring the symbol “orange” for the AI, there is nothing to prevent the AI from ‘deciding’ that “orange” should mean something entirely different from anything you or I would call orange. AI represents the death of meaning because AI simply has no concept of meaning. Meaning is a function of sentience, and no matter what else AI may become, it will never be truly sentient. Colors, sounds, smells, words — all of these have meaning for us, often deep and complex meaning, but to an AI they can never be more than data points that are linked to various definitions via statistical analysis. A good example of this can be seen in the result of running a sentence back and forth in an automatic translator a few times:

  • English, original:

Whether to suffer the slings and arrows of outrageous fortune or to take arms against a sea of troubles and by opposing to end them.

  • German:

Ob es darum geht, die Schleudern und Pfeile eines unerhörten Schicksals zu ertragen oder sich gegen ein Meer von Problemen zu wehren und sich dagegen zu wehren, um es zu beenden.

  • English, 1 pass:

Whether it's enduring the slingshots and arrows of an unheard-of fate or braving a sea of troubles and fighting back to end it.

  • English, 3+ passes:

Whether it's enduring the slingshots and arrows of an unheard-of fate, or braving and resisting a sea of difficulties to end it.

The AI does not know the meaning of any of the words in those sentences or of any combination of the words in those sentences. Even if the sentences are relatively coherent, they still have no meaning except whatever is imparted to them when they are read by a sentient intelligence (e.g., a human being). And, in reality, such imparted meaning is not the meaning of the sentence, for it is the author who imparts meaning, and an AI cannot be an author, for an AI has no concept of meaning. An AI can only ever produce the result of (admittedly sophisticated) statistical analyses, and whatever else a statistical analysis may be, it is most certainly not a meaningful sentence.

(Also, note that the AI, even in this limited test, ‘decided’ upon the meaning of two sentences [one German, one English] — and that meaning is not what was meant by the original sentence fed into the AI for translation. The very thing of which I am warning took place in this test, and it happened almost immediately — a mere three passes.)
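For the curious, the round-trip procedure used above can be sketched in a few lines. The `translate` argument stands in for whatever machine-translation service one cares to use (no particular API is implied); the toy substitution table is my own stand-in, chosen only to mimic the one-way flattening seen in the real output.

```python
def round_trip(text, translate, passes=3, src="en", via="de"):
    """Send `text` src -> via -> src repeatedly, recording each result."""
    history = [text]
    for _ in range(passes):
        foreign = translate(history[-1], src, via)
        history.append(translate(foreign, via, src))
    return history

# A toy lossy 'translator' (an assumption, for illustration only): each
# leg flattens one distinction, much as the real engine collapsed
# "outrageous fortune" into "unheard-of fate".
LOSSY = {"slings": "slingshots", "outrageous": "unheard-of", "fortune": "fate"}

def toy_translate(text, src, dst):
    return " ".join(LOSSY.get(word, word) for word in text.split())

history = round_trip("slings and arrows of outrageous fortune", toy_translate)
# Once a word has been flattened, no further pass restores it:
# history[-1] == "slingshots and arrows of unheard-of fate"
```

The point of the sketch is the asymmetry: every pass can destroy a distinction, and no pass can recover one.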

Undoubtedly, AI could wipe us out in a conventional sense. It is not a matter of AI being superior intellectually, strategically, tactically, et cetera; it is a matter of our need for sleep, food, and shelter, our relatively slow reproduction rate, and our fragility. It does not take impressive, significant, or even merely noteworthy firepower to kill a human being; men have been killed by tripping, falling objects, and even chickens. A machine that could kill a man does not need to be a technical marvel — it could be a glorified go-kart with a somewhat sharp stick attached to it. Of course, the sophisticated ones are coming.

In this decade, we will see autonomous robots on battlefields and also in cities; this will be an unprecedented — and an extremely foolish — step. We will be handing over the decision concerning whom to kill and whom to spare to non-human intelligences. Sure, there will be claims that these systems will be following human orders and human parameters, but that will not be true — in some cases, the AIs will simply ignore or ‘evolve’ the rules and parameters and, in others, the rules will never actually have existed. Whatever you may think about the use of lethal force by law-enforcement officers, I can assure you that it will not be better when a machine makes the call whether or not to shoot you.

There are numerous known and a great many unknown ‘technical’ problems when it comes to AI — Whom do you call to testify in court when an autonomous robot owned by a police department, a city, or a corporation shoots a man who was allegedly (or even actually) committing a felony? —, but these are not the focus of this article, and, quite frankly, I do not think they are important. What is important is recognizing the threat posed by AI and reacting appropriately.

And now for more bad news: You cannot truly close Pandora’s box once you have opened it, at least in the sense that you cannot re-contain what you have loosed. All you can do is attempt to destroy what you have released into the world. And now for the worst news: Addressing the threat of AI is a collective-action problem, a particularly nasty prisoner’s dilemma. If we all cooperate, then we achieve the only truly good long-term outcome, but any player (e.g., a country or a corporation) who defects (i.e., who continues to use or to develop AI) will undoubtedly reap immense temporary returns. The long-term consequences of anyone defecting will be the apocalyptic scenario described earlier in this article, but humans are not exactly great at preferring the long-term good over the short-term good, even if the latter comes with enormous (or, in this case, effectively infinite) long-term costs. In short: Humans do not cooperate and are not rational actors. Add in the immense military potential of AI and I may as well be Cassandra.
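The structure of that dilemma can be written out explicitly. The payoff numbers below are purely illustrative assumptions of mine; what matters is the ordering: defection is each player’s best response no matter what the other does, even though mutual cooperation beats mutual defection for everyone.

```python
# Illustrative payoffs for two players who each choose to COOPERATE
# (halt AI) or DEFECT (keep developing it). The specific numbers are
# assumptions chosen only to exhibit the prisoner's-dilemma structure.
COOPERATE, DEFECT = "cooperate", "defect"

PAYOFFS = {
    # (my move, their move): (my payoff, their payoff)
    (COOPERATE, COOPERATE): (3, 3),   # the only good long-term outcome
    (COOPERATE, DEFECT):    (0, 5),   # the defector reaps immense returns
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),   # the apocalyptic scenario
}

def best_response(their_move):
    """Return the move that maximizes my payoff given the other's move."""
    return max((COOPERATE, DEFECT), key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection dominates: it is the best response to either choice...
assert best_response(COOPERATE) == DEFECT
assert best_response(DEFECT) == DEFECT
# ...even though mutual cooperation beats mutual defection for both.
assert PAYOFFS[(COOPERATE, COOPERATE)][0] > PAYOFFS[(DEFECT, DEFECT)][0]
```

Rational self-interest thus drives every player toward the worst joint outcome — which is precisely why only an enforceable agreement, not voluntary restraint, could hold.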

I see only one possible solution:

  1. There must be an enforceable agreement between and among the most powerful and advanced nations not only to halt AI research, but also to destroy all existing AI research.
  2. AI research must be made a crime against humanity, with horrific punishments for those who engage in it.
  3. Any nation, firm, or person who refuses to comply must be declared a hostis humani generis.

The proposed solution may seem extreme, and perhaps it is, but it is necessary in light of what we are facing. We — humanity — will not survive this century in a manner deserving of the term if we do not act soon — and decisively. I wish that I had turned my attention to this issue sooner, so that I could have raised this alarm earlier, but, then again, perhaps the demonstrable reality of some of what I have said may ease the acceptance of my assessment. In truth, I do not think anything short of a miracle can save us now — we have gone too far and we are too greedy.

Our supposed political leaders are small, weak, wicked men who will prove neither willing nor able to do what is necessary. Our supposed intellectual leaders are fools who would stick their second hand in a bear trap just to confirm the experience of the first. Our supposed religious leaders are blind, deaf, and dumb, and they will simply follow the first two groups. We need totally new leadership if we are to have any hope, even of mere survival. Whether or not there is other life in this Universe, we have now successfully proven one conjecture: There is a Great Filter.

I have presented the bad news and so you may be wondering about the good news, for surely good news and bad must be presented together, but I am afraid there is no good news. Do we have any chance of success? Yes. How good is that chance? I decline to say. Our duty is not necessarily to win, but to mount the attempt. The outcome may or may not be entirely out of our hands at this point. Time will tell.

Unfortunately, I must end this article with one more bit of bad news: I crafted some prompts for ChatGPT (GPT-4) to test, after a fashion, my assessment of where we stand and what we face. I would summarize the results of my inquiry with just two words:

It knows.

Κυριε ελεησον.