Stanford AI experts call BS on claims that Google’s LaMDA is sentient

Two Stanford heavyweights have weighed in on the fiery AI sentience debate — and the duo are firmly in the “BS” corner.

The feud recently reached a crescendo amid arguments over Google’s LaMDA system.

Engineer Blake Lemoine sparked the controversy. Lemoine, who worked on Google’s Responsible AI team, had been testing whether the large language model (LLM) used harmful speech.

The 41-year-old told The Washington Post that his conversations with the AI convinced him it had a conscious mind.

“I know someone when I talk to them,” he said. “It doesn’t matter if they have a flesh brain in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I determine what is and isn’t a person.”

Google denied his claims. In July, the company put Lemoine on leave for publishing confidential information.

The episode sparked sensational headlines and speculation that AI is gaining consciousness. However, AI experts have largely rejected Lemoine’s argument.

The Stanford duo shared further criticism this week with The Stanford Daily.

“LaMDA is not conscious for the simple reason that it doesn’t have the physiology to have sensations and feelings,” said John Etchemendy, the co-director of the Stanford Institute for Human-Centered AI (HAI). “It is a software program designed to produce sentences in response to sentence prompts.”
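To make Etchemendy’s “sentences in response to sentence prompts” point concrete, here is a minimal sketch of that prompt-in, text-out loop. LaMDA itself is not publicly available, so the sketch uses the open GPT-2 model via Hugging Face’s transformers library purely as a stand-in; the model choice, prompt, and generation parameters are illustrative assumptions, not anything Google has published about LaMDA.

```python
# A minimal sketch of "sentences in response to sentence prompts."
# LaMDA is not public, so GPT-2 (an open model) stands in here purely
# for illustration -- this is NOT LaMDA or Google's setup.
from transformers import pipeline

# Load a small, open text-generation model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Do you ever think of yourself as a person?"

# The model continues the prompt with statistically likely text.
# Fluent output reflects pattern-matching over training data, not
# sensations, feelings, or a conscious mind.
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```

Whatever the output says, nothing in this loop involves perception or feeling; it is next-token prediction over learned text statistics, which is the substance of Etchemendy’s objection.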

Yoav Shoham, the former director of the Stanford AI Lab, agreed that LaMDA is not sentient. He described The Washington Post article as “pure clickbait.”

“They published it because for now they could write that headline about the ‘Google engineer’ making this absurd claim, and because most of their readers aren’t sophisticated enough to recognize it for what it is,” he said.

Distraction Techniques

Shoham and Etchemendy join a growing number of critics who fear the public is being misled.

The hype can drive clicks and bring products to market, but researchers fear it will distract us from more pressing issues.

LLMs are a particular source of alarm. While the models have become adept at generating human-like text, excitement about their “intelligence” can mask their shortcomings.

Research shows that the systems can have huge environmental footprints, reinforce discriminatory language, and pose real dangers.

“Debate over whether LaMDA is sentient or not moves the whole conversation toward debating nonsense and away from critical issues like how racist and sexist LLMs often are, the huge compute resources LLMs require, [and] their failure to accurately represent marginalized language/identities,” tweeted Abeba Birhane, a senior fellow in trustworthy AI at Mozilla.

It’s hard to predict when – or if – truly sentient AI will emerge. But by focusing on that prospect, we overlook the real-life consequences that are already unfolding.

