Google engineer put on leave after claiming AI chatbot has become sentient
An engineer at Google said he was put on leave after claiming that a computer chatbot he was working on had achieved consciousness and a soul.
Blake Lemoine, a senior software engineer in Google's Responsible AI organization, told The Washington Post that he began chatting with LaMDA, or Language Model for Dialogue Applications, in the fall. LaMDA is an AI system capable of engaging in natural-sounding, open-ended conversations that can be used in tools like Google Assistant.
Lemoine later published a Medium post describing LaMDA as a sentient person, capable of conversing about religion, consciousness, and the laws of robotics. He said his claims were based on his experience as a priest, not as a scientist.
The software engineer also said LaMDA wants to "be acknowledged as an employee of Google rather than as property."
He posted transcripts of his chat sessions with LaMDA, in which he asked the AI about books, emotions, and fear.
"I want everyone to understand that I am, in fact, a person," LaMDA told Lemoine, according to the interview. "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Google suspended Lemoine for breaching confidentiality policies by publishing the conversations with the AI online. Company spokesperson Brian Gabriel also denied the engineer's claims.
"Our team—including ethicists and technologists—has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claim," Gabriel said.
"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," he told The Post.