
Stanford AI experts call BS on claims that Google’s LaMDA is sentient


Two Stanford heavyweights have weighed in on the fiery AI sentience debate, and the duo is firmly in the “BS” corner.

The feud recently reached a crescendo amid arguments over Google’s LaMDA system.

Developer Blake Lemoine sparked the controversy. Lemoine, who worked on Google’s Responsible AI team, had been testing whether the large language model (LLM) produced harmful speech.


The 41-year-old told The Washington Post that his conversations with the AI convinced him it had a conscious mind.

“I know someone when I talk to them,” he said. “It doesn’t matter if they have a flesh brain in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I determine what is and isn’t a person.”

Google denied his claims. In July, the company put Lemoine on leave for publishing confidential information.

The episode sparked sensational headlines and speculation that AI is gaining consciousness. However, AI experts have largely rejected Lemoine’s argument.

The Stanford duo shared further criticism this week with The Stanford Daily.

“LaMDA is not conscious for the simple reason that it doesn’t have the physiology to have sensations and feelings,” said John Etchemendy, the co-director of the Stanford Institute for Human-Centered AI (HAI). “It is a software program designed to produce sentences in response to sentence prompts.”

Yoav Shoham, the former director of the Stanford AI Lab, agreed that LaMDA is not sentient. He described The Washington Post article as “pure clickbait.”

“They published it because, for now, they could write that headline about the ‘Google engineer’ making this absurd claim, and because most of their readers aren’t sophisticated enough to recognize it for what it is,” he said.

Distraction Techniques

Shoham and Etchemendy join a growing number of critics who fear the public is being misled.

The hype can drive clicks and bring products to market, but researchers fear it will distract us from more pressing issues.

LLMs cause particular alarm. While the models have become adept at generating humanlike text, excitement about their “intelligence” can mask their shortcomings.

Research shows that systems can have huge environmental footprints, reinforce discriminatory language and pose real dangers.

“Debate over whether LaMDA is conscious or not moves the whole conversation toward debating nonsense and away from critical issues like how racist and sexist LLMs often are, the huge compute resources LLMs need, [and] their failure to accurately represent marginalized language/identities,” tweeted Abeba Birhane, a senior fellow in trustworthy AI at Mozilla.

It’s hard to predict when – or if – truly conscious AI will emerge. But by focusing on that prospect, we are overlooking the real-life consequences that are already unfolding.

