The Artificial Intelligence We Fear Is Our Own
PC Magazine | August 2022
A Google engineer was ridiculed for his belief that a language model had become sentient. But if we continue AI research with only profit as the goal, the joke is on us.
- Chandra Steele

Have you heard the one about the Google engineer who thinks an AI is sentient? It’s not a joke—although Blake Lemoine, a senior software engineer with the company’s Responsible AI organization, has become a bit of a joke online.

Lemoine was put on leave from Google after he advocated within the company for an artificial intelligence called Language Model for Dialogue Applications (LaMDA), saying that he believed it was sentient. Lemoine had been testing LaMDA since last fall, and as he told The Washington Post, “I know a person when I talk to it.”

Lemoine published an edited version of some of his conversations with LaMDA in a Medium post. In those, LaMDA discussed its soul, expressed a fear of death (i.e., being turned off), and when asked about its feelings, said, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”

To Lemoine, LaMDA passed the Turing test with flying colors. To Google, Lemoine was fooled by a language model. To me, it’s another example of humans who look for proof of humanity in software while ignoring the sentience of creatures we share the Earth with.
