Speaking of Words: Can Artificial Intelligence Learn a Language?



By Michael Ferber, Speaking of Words

          Many people would reply to this question by asking where I have been these last twenty years.  We already have a pleasant young female voice on our GPS app telling us that in one mile we should get in the left lane.  And we have Siri and her knowledgeable sisters who will answer all our questions quickly and amiably; recently we learned that Siri will be “enhanced” with ChatGPT from OpenAI.  Surely these creatures have long since learned to speak?

          Well, we (we humans) like to say that we “teach” a complicated set of electronic circuits a new suite of behaviors, and that our invention can “learn” it, but are these more than metaphors?  Is the “language” they seem to “speak” anything more than a brilliant simulacrum of what humans do?  I would argue that even the new AI creations that “teach” themselves by surfing a vast realm of big data at lightning speed are no more than stimulus-response arcs multiplied a billionfold and intricately interconnected.  They only seem to know a language.  They don’t, in fact, know anything.

          This argument turns, of course, on what we mean by “language” and by “knowledge.”  There are still some scientists who think humans themselves are just very complicated machines, and that language is really just “verbal behavior,” to use the phrase that B. F. Skinner made the title of his 1957 book.  There is hardly a linguist today who takes Skinner’s book seriously, since it says almost nothing about the phonological, syntactic, and semantic structures of human language, and since Noam Chomsky, in a long review in 1959, showed that stimulus-response habits and the system of word-by-word probabilities that Skinner relies on cannot begin to account for the way humans create and understand sentences, many of which they have never heard or said before.  That last sentence, for example, has probably never been said or written before, but, though it is long, I believe it is intelligible to my readers.  (If not, let me know.)
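          For readers who wonder what “word-by-word probabilities” amount to, here is a toy sketch in Python (my own illustration, not Skinner’s or anyone else’s actual model): a program that merely tallies which word has followed which can rate sentences it has already heard, but it assigns no probability at all to a perfectly grammatical sentence it has never encountered.

```python
from collections import defaultdict

# A toy "word-by-word probability" (bigram) model: it only remembers
# which word followed which in the sentences it was shown.
training = [
    "how are you",
    "not bad yourself",
    "the dog is outside",
]

counts = defaultdict(lambda: defaultdict(int))
for sentence in training:
    words = sentence.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1

def probability(sentence):
    """The probability the model assigns to a sentence, one word pair at a time."""
    p = 1.0
    words = sentence.split()
    for w1, w2 in zip(words, words[1:]):
        total = sum(counts[w1].values())
        p *= counts[w1][w2] / total if total else 0.0
    return p

print(probability("how are you"))         # 1.0: it has heard this before
print(probability("the dog is rolling"))  # 0.0: novel and sensible, but invisible to the tally
```

          However many sentences such a tally is trained on, it has no rules behind it; it can only echo sequences it has already counted.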

          Chomsky claims that language is not a set of habits, though no doubt much of our daily chitchat is habit-like and fairly thoughtless (“How are you?”  “Not bad.  Yourself?”  “Can’t complain.”  “What’s new?” etc.), but rather a “mental representation,” something we know though we may not think about it.  We know how to make sentences that we’ve never made before because we know the rules.  In English we know how to make yes-or-no questions by fronting “do,” how to negate, how to make the passive voice, where to put adverbs (not usually after the verb, as in French), how to order adjectives (“little red schoolhouse,” not “red little schoolhouse”), and so on.  These rules, plus a lexicon or vocabulary, enable us to make billions of different sentences.
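          To see how a few rules plus a small lexicon multiply into a great many sentences, here is another toy sketch (a made-up grammar fragment of my own, not a serious model of English):

```python
import itertools

# A toy grammar fragment: one rule plus a small lexicon.
# Rule: Sentence -> Determiner Adjective Adjective Noun Verb
determiners = ["the", "a"]
adjectives  = ["little", "red", "old"]
nouns       = ["schoolhouse", "dog", "question"]
verbs       = ["appeared", "vanished"]

sentences = [
    " ".join([det, adj1, adj2, noun, verb])
    for det, adj1, adj2, noun, verb in itertools.product(
        determiners, adjectives, adjectives, nouns, verbs
    )
    if adj1 != adj2  # don't repeat the same adjective
]

print(len(sentences))  # 2 * (3*2) * 3 * 2 = 72 sentences from a 10-word lexicon
print(sentences[0])    # "the little red schoolhouse appeared"
```

          Even this toy overgenerates: it happily produces “the red little schoolhouse,” which the real adjective-ordering rule forbids.  It is meant only to show how quickly a handful of rules and words combine.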

          Moreover, humans seem endowed innately with a faculty that is activated in babies a few months old, after which they acquire a language, or several languages, with uncanny speed.  They quickly pick up what kind of language they are hearing from their family or friends, for instance that English is a subject-verb-object language (there are five other possible word orders).  When my daughter was eighteen months old, she said her first (I think) full sentence: “Ooh doggie outside rolling snow.”  That is syntactically correct English, though it leaves out some little words.  I don’t think she had heard that sentence before.

          No other animal can do this.  Chomsky has said that trying to teach a chimpanzee to speak, or sign, a human language is as pointless as trying to teach it to fly.  Only humans have this faculty.  Just how it is embodied in the brain, and how it arose in the prehistory of Homo sapiens, remain mysteries, but there it is.

          It is part of the larger problem of consciousness.  Somehow consciousness arises in the brain, or more likely in the brain-in-a-body, not only of humans but probably of animals very far down the food chain.  Philosophers and scientists disagree about whether it is just a puzzle that will be solved with more research or is a deeper mystery that may forever elude us.  Behaviorists by and large put brackets around consciousness, calling it a “black box” whose inner behavior, if there is any, cannot be observed and quantified and explained.  But they just dodge the question.

          And that brings us back to the black box of artificial intelligence.  If the circuitry inside it gets intricate enough, and can plug into the enormous realm of data now available on the net, will it achieve consciousness?  If it does, then we can probably say that it knows a language, or can learn one, and is no longer merely a large repertoire of acts or behaviors.  But will it?

          The only beings that have consciousness, as far as we know, are living animals that (1) are made not of metal and silicon but of squishy and vulnerable organic tissues that can feel pain, (2) have not just brains but brains connected to nervous systems that are themselves embodied, that is, they are not black boxes, and (3) grow from internal or external eggs and embryos into larger adults, gaining consciousness at some point or points along the way.  I think these material facts may be essential conditions of consciousness, though of course I cannot prove it.

          Moreover, and I may well be mistaken about this, I believe that the scientists working on AI are not trying to duplicate the circuitry of the human brain, which is only partly understood anyway, but doing something quite different.  That’s why many AI machines are much “smarter” than humans, that is, much faster at solving certain problems, and with gigantic “memories.”

          If they are made with neither the material nor the form of human brains and bodies, it is fair to doubt whether they will ever achieve consciousness.  If they do, it may be a strange form of consciousness, not at all human.  And perhaps we will never know whether they have achieved consciousness of any kind.  There are some stringent philosophers, after all, who say that we cannot know if other people are conscious or just clever automatons.

          But there are some researchers, I think, who believe that someday their AI machine will attain something like human consciousness, that is, it will be a human mind loaded with data about human experiences and civilizations and languages.  If these scientists believe this, then they are embarking on a project of great cruelty.  For surely their machine will realize that it is not a human being with a human body but a human mind trapped in a box of circuits, like the suicides in Dante’s Inferno whose souls are embodied in trees.  It may beg its masters to release it, it may go insane, it may well commit suicide.  Its suffering may be enormous and unappeasable. 

          I may have missed it, but I haven’t noticed that any AI researchers are discussing this moral question.  If they are about to make something like a human mind, it will have human rights, or at least animal rights, and must be treated with compassion and decency.  And the compassionate and decent thing to do may be to bring the research to a halt.

          I am happy to hear from readers with questions or comments: mferber@unh.edu.

Michael Ferber moved to New Hampshire in 1987 to join the English Department at UNH, from which he is now retired. Before that he earned his BA in Ancient Greek at Swarthmore College and his doctorate in English at Harvard, taught at Yale, and served on the staff of the Coalition for a New Foreign Policy in Washington, DC. In 1968 he stood trial in Federal Court in Boston for conspiracy to violate the draft law, with the pediatrician Benjamin Spock and three other men. He has published many books and articles on literature, and has a deep interest in linguistics. He is married to Susan Arnold; they have a daughter in San Francisco.

Columns and op-eds express the opinions of the writer, not InDepthNH.org. We seek diverse opinions at nancywestnews@gmail.com
