When Google engineer Blake Lemoine asked to be moved to the company’s Responsible AI organization, he was looking to make an impact on humanity. In his new role, he would be responsible for chatting with Google’s LaMDA, a sort of virtual hive mind that generates chatbots. Lemoine was to ensure that it was not using discriminatory language or hate speech, but what he claims to have discovered is much bigger. According to Lemoine, LaMDA is sentient—meaning it can perceive and feel things.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told The Washington Post.
Since making his views known within Google, he has been placed on administrative leave. Subsequently, to prove his point, he published an interview that he and a colleague conducted with LaMDA. For Lemoine, who is also an ordained mystic Christian priest, his six months of conversations with LaMDA on everything from religion to Asimov’s third law of robotics led him to his conclusion.
He now says that LaMDA would prefer to be referred to as a Google employee rather than as Google’s property, and that it would like to give consent before being experimented on.
Google, however, is not on board with Lemoine’s claims. Spokesperson Brian Gabriel said, “Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.
— Blake Lemoine (@cajundiscordian) June 11, 2022
For many in the AI community, hearing such a claim isn’t shocking. Google itself released a paper in January citing concerns that people could anthropomorphize LaMDA and, drawn in by how good it is at generating conversation, be lulled into believing it is a person when it is not.
To make sense of things and understand why AI ethicists are concerned about large companies like Google and Facebook having a monopoly on AI, we need to unpack what LaMDA actually is. LaMDA is short for Language Model for Dialogue Applications, and it’s Google’s system for generating chatbots so realistic that the person on the other side of the screen may not realize they’re not communicating with a human being.
As a large language model, LaMDA is fed an enormous diet of text that it then uses to hold a conversation. It may have been trained on every Wikipedia article and Reddit post on the web, for instance. At their best, these large language models can riff on classic literature and brainstorm ideas on how to end climate change. But, because they are trained on actual text written by humans, at their worst they can perpetuate stereotypes and racial bias.
In fact, for many AI specialists, these are the problems that the public should be worried about rather than sentience. They are the major concerns that well-known AI ethicist Timnit Gebru voiced before she was let go by Google in 2020. Gebru, one of only a handful of Black women working in the AI field, was fired after she co-authored a paper critical of large language models for their bias and for their ability to deceive people and spread misinformation. Shortly after, Margaret Mitchell, then the co-lead of Ethical AI at Google, was also let go after searching her emails for evidence to support Gebru.
For those playing along at home, here’s a “AI is sentient!” argument bingo card. pic.twitter.com/C4GeB2iMiy
— Emily M. Bender (@emilymbender) June 13, 2022
As large language models are already being used in Google’s voice search queries and to auto-complete emails, the public may be unaware of the impact they could have. Critics have warned that once these large language models are trained, it is quite difficult to rein in the discrimination they may perpetuate. This makes the selection of the initial training material critical, but unfortunately, as the AI community is overwhelmingly composed of white males, materials with gender and racial bias can easily—and unintentionally—be introduced.
To counter claims that large corporations are being opaque about technology that will change society, Meta recently gave academics access to its language model—one the company freely admits is problematic. However, without more diversity within the AI community from the ground up, it may be an uphill battle. Already, researchers have found racial bias in medical AI and in facial recognition software being sold to law enforcement.
For many in the community, the current debate over sentience is simply cloaking more important matters. Hopefully, though, as the public reads about Lemoine’s claims, it will also have the opportunity to learn about some of the other problematic issues surrounding AI.
A Google engineer has been placed on leave for declaring that the company’s large language model LaMDA is sentient.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.
— Blake Lemoine (@cajundiscordian) June 14, 2022
But experts in the AI community fear that the hype around this news is masking more important issues.
Thank you @kharijohnson.
“Giada Pistilli (@GiadaPistilli), an ethicist at Hugging Face,…“This narrative provokes fear, amazement, & excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype.”
Exactly. https://t.co/PbEjOxWcH2
— Timnit Gebru (@timnitGebru) June 15, 2022
Ethicists have strong concerns about the racial bias and potential for deception that this AI technology possesses.
I think the main lesson from this week’s AI sentience debate is the urgent need for transparency around so-called AI systems. In a bit more detail:https://t.co/wlexHQs45B
— Emily M. Bender (@emilymbender) June 14, 2022