The Artificial Intelligence We Fear Is Our Own
PC Magazine|August 2022
A Google engineer was ridiculed for his belief that a language model had become sentient. But if we continue AI research with only profit as the goal, the joke is on us.
Chandra Steele
Have you heard the one about the Google engineer who thinks an AI is sentient? It’s not a joke—although Blake Lemoine, a senior software engineer with the company’s Responsible AI organization, has become a bit of a joke online.

Lemoine was put on leave from Google after he advocated for an artificial intelligence named Language Model for Dialogue Applications (LaMDA) within the company, saying that he believed it was sentient. Lemoine had been testing LaMDA last fall, and as he said to The Washington Post, “I know a person when I talk to it.”

Lemoine published an edited version of some of his conversations with LaMDA in a Medium post. In those, LaMDA discussed its soul, expressed a fear of death (i.e., being turned off), and when asked about its feelings, said, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”

To Lemoine, LaMDA passed the Turing test with flying colors. To Google, Lemoine was fooled by a language model. To me, it’s another example of humans who look for proof of humanity in software while ignoring the sentience of creatures we share the Earth with.
