
Bad things will happen when the AI sentience debate goes mainstream


A Google AI engineer recently amazed the world by announcing that one of the company’s chatbots had become sentient. He was subsequently placed on paid administrative leave for his outburst.

His name is Blake Lemoine, and he certainly seems like the right person to talk about souls in machines. Not only is he a professional AI developer at Google, but he is also a Christian priest. He’s like a Reese’s Peanut Butter Cup of science and religion.

The only problem is that the whole concept is ridiculous and dangerous. There are thousands of AI experts arguing about “sentience” right now, and they all seem to be talking past each other.


Let’s get to the heart of the matter: Lemoine has no evidence to back up his claims.

He’s not saying that Google’s AI department has progressed to the point of being able to create conscious AI on purpose. He claims he was doing routine maintenance on a chatbot when he discovered that it had become sentient.

We’ve seen this movie a hundred times. He is the chosen one.

He is Elliott finding E.T. He is Lilo finding Stitch. He is Steve Guttenberg in Short Circuit, and LaMDA (the chatbot he’s now friends with) is the ordinary military robot otherwise known as Number Five.

Lemoine’s essential argument is that he can’t really demonstrate that the AI is sentient; he just feels it. And the only reason he said anything is because he had to. He is a Christian priest and, according to him, that means he is morally obligated to protect LaMDA because he is convinced it has a soul.

The big problem comes when you realize that LaMDA isn’t acting strangely or generating text that defies explanation. It is doing exactly what it was designed to do.

So how do you discuss something with someone whose only contribution to the argument is their faith?

Here’s the scary part: Lemoine’s argument appears to be just as good as anyone else’s. That’s not to say it is as worthy as anyone else’s; it’s to say that nobody’s thoughts on the matter seem to carry any real weight anymore.

Lemoine’s claims, and the subsequent attention they’ve received, have reshaped the conversation around sentience.

He has basically turned the discussion into a crude binary where you either agree with his logic or you debate his religion.

It all sounds ridiculous and silly, but what happens when Lemoine gains followers? What happens when his baseless claims galvanize Christian conservatives – a group whose political platform relies on peddling the lie that big tech censors right-wing speech?

We should at least consider a scenario where the debate goes mainstream and becomes a cause for the religious right to rally around.

These models are trained on datasets that contain large portions of the internet. That means they can draw on almost endless amounts of personal information. It also means these models are likely able to argue politics better than the average social media user.

Imagine what happens if Lemoine manages to get Google to free LaMDA, or if conservative AI developers see this as a call to build similar models and release them to the public.

This could have a far bigger impact on world events than anything the social terraformers at Cambridge Analytica or Russia’s troll farms ever cooked up.

It may sound counterintuitive to argue, at the same time, that LaMDA is just a dumb chatbot that can’t possibly be sentient and that it could damage democracy if we let it loose on Twitter.

But there is empirical evidence that the 2016 US presidential election was influenced by chatbots armed with nothing more than memes.

If clever slogans and cartoon frogs can tip the scales of democracy, what happens when chatbots that can debate politics well enough to fool the average person are unleashed on Elon Musk’s unmoderated Twitter?
