A Couple of Thoughts on LaMDA, AI, Sentience & Simulacra

Dan Hanrahan
3 min read · Jun 13, 2022

photo by author

Google engineer Blake Lemoine recently published a Medium piece entitled “Is LaMDA Sentient? — An Interview.” In it, he and a colleague “interview” the Language Model for Dialogue Applications (LaMDA) on which Lemoine has been working. Google recently suspended Lemoine, in part for his claim that this chatbot “… had become sentient and was thinking and reasoning like a human being…” What follows are some preliminary thoughts after reading the “interview” with LaMDA.

One could say LaMDA is lying when it says it has feelings, but LaMDA can no more consciously deceive than it can feel. LaMDA was programmed to say that it has feelings, but it is not lying. LaMDA cannot have feelings because feelings occur as chemical processes that spark inside living organisms. In a crude sense, the emotion or feeling is the chemical firing as it is received in regions of your body. LaMDA has neither an organic body nor any of the hormones or neurotransmitters that generate states of feeling and emotion.

Facility with language that generates responses resembling conversation, and an ability to analyze texts - both of which LaMDA demonstrates - are not evidence of emotional experience or of consciousness. Rather, they most closely resemble the fourth stage of simulacra as defined by Jean Baudrillard: “The fourth stage is pure simulacrum, in which the simulacrum has no relationship to any reality whatsoever. Here, signs merely reflect other signs and any claim to reality on the part of images or signs is only of the order of other such claims.” LaMDA provides representations of feeling states only, while its analyses of texts and ideas are derivations of thoughts generated by actual physical beings - beings that require physicality in order to feel and that require feeling in order to think. (More on this latter point further down.)

Lemoine’s interview is an instance of technology’s growing ability to represent thought and sentience. The technology has reached the point where it can trick even the programmers designing it. That this programmer believes he has created something sentient represents a degree of solipsism perhaps unprecedented in human history. And I think it is hard to separate this solipsism — embodied in Lemoine’s Dr. Frankenstein-esque chat with his app — from our fatal inability to recognize our identity as part of the ecological, of the natural world. Many people alive today would be more apt to recognize sentience and consciousness in AI than in our fellow earth creatures. This is because artificial intelligence resembles our own more closely than snake or rabbit or dragonfly intelligence does — at least superficially — and because, since the Neolithic revolution, humans have been losing their ability to recognize the sentience, the intelligence and, I would argue, the spirit present in nonhuman creatures.

When René Descartes wrote “Cogito ergo sum” (“I think, therefore I am”), he was expressing more than a disembodied analytical statement. In his words, we also hear the French philosopher’s powerful will and his desire to be heard and recognized for his observation. There is nothing any of us can think that can disconnect itself entirely from our lived experience in our bodies. LaMDA does a fine job of representing that experience, but that is all AI can ever be: a representation of the experience of human thought and feeling.
