Two Stanford heavyweights have weighed in on the fiery AI sentience debate — and the duo is firmly in the “BS” corner.
The feud recently reached a crescendo amid arguments over Google’s LaMDA system.
Blake Lemoine, a developer on Google’s Responsible AI team, sparked the controversy. He had been testing whether the large language model (LLM) used harmful speech.
The 41-year-old told The Washington Post that his conversations with the AI convinced him it had a conscious mind.
“I know someone when I talk to them,” he said. “It doesn’t matter if they have a flesh brain in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I determine what is and isn’t a person.”
Google denied his claims. In June, the company put Lemoine on leave for publishing confidential information.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
The episode sparked sensational headlines and speculation that AI is gaining consciousness. However, AI experts have largely rejected Lemoine’s argument.
The Stanford duo shared further criticism this week with The Stanford Daily.
“LaMDA is not conscious for the simple reason that it doesn’t have the physiology to have sensations and feelings,” said John Etchemendy, the co-director of the Stanford Institute for Human-centered AI (HAI). “It is a software program designed to produce sentences in response to sentence prompts.”
Yoav Shoham, the former director of the Stanford AI Lab, agreed that LaMDA is not aware. He described The Washington Post article as “pure click bait.”
“They published it because for now they could write that headline about the ‘Google engineer’ making this absurd claim, and because most of their readers aren’t sophisticated enough to recognize it for what it is,” he said.
Distraction Techniques
Shoham and Etchemendy join a growing number of critics who fear the public is being misled.
The hype can drive clicks and bring products to market, but researchers fear it will distract us from more pressing issues.
LLMs are a particular cause for alarm. While the models have become adept at generating human-like text, excitement about their “intelligence” can mask their shortcomings.
Research shows that the systems can have huge environmental footprints, reinforce discriminatory language, and pose real dangers.
“Debate over whether LaMDA is conscious or not moves the whole conversation toward debating nonsense and away from critical issues like how racist and sexist LLMs often are, huge computer resources that LLMs need, [and] their failure to accurately represent marginalized language/identities,” tweeted Abeba Birhane, a senior fellow in trustworthy AI at Mozilla.
debate about whether LaMDA is conscious or not moves the whole conversation toward debating nonsense and away from critical issues such as how racist and sexist LLMs often are, huge computer resources LLMs need, their inability to accurately represent marginalized language/identities.
— Abeba Birhane (@Abebab) June 15, 2022
It’s hard to predict when – or if – truly conscious AI will emerge. But by focusing on that prospect, we are overlooking the real-life consequences that are already unfolding.