AI models may ‘sound’ human, but that doesn’t mean they feel or think

When you read a sentence like this one, your past experience tells you that it was written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that seem remarkably human are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so used to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be hard to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural, but potentially misleading, to assume that if an AI model can express itself fluently, it also thinks and feels the way humans do.

So it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. The event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, that is, capable of thinking, feeling and experiencing.

The question of what it would mean for an AI model to be sentient is complicated (see, for instance, our colleague’s take), and our goal here is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of assuming that an entity that can use language fluently is sentient, conscious or intelligent.

Using AI to generate human language

Text generated by models like Google’s LaMDA can be difficult to distinguish from text written by humans. This impressive achievement is the result of a decades-long program of building models that generate grammatical, meaningful language.

[Image: a screenshot of a text dialogue. The first computer program to engage people in dialogue was the Eliza psychotherapy software, built more than half a century ago. Credit: Rosenfeld Media/Flickr, CC BY]

Early versions dating back to at least the 1950s, known as n-gram models, simply counted occurrences of specific phrases and used them to guess which words were likely to occur in a given context. For example, it’s easy to guess that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapple.” If you see enough English text, you will encounter the phrase “peanut butter and jelly” again and again, but may never see the phrase “peanut butter and pineapple.”
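To make this counting idea concrete, here is a minimal Python sketch of an n-gram tally; the tiny corpus and variable names are our own illustration, not code from any actual system:

```python
from collections import Counter

# Toy corpus standing in for "enough English text"; a real n-gram model
# would be built from vastly more data.
corpus = (
    "peanut butter and jelly sandwich . "
    "i like peanut butter and jelly . "
    "she had peanut butter and jelly again ."
).split()

# Count every four-word sequence (a 4-gram model): the first three words
# are the context, the fourth is the word that followed it.
counts = Counter(zip(corpus, corpus[1:], corpus[2:], corpus[3:]))

context = ("peanut", "butter", "and")
for ngram, count in counts.items():
    if ngram[:3] == context:
        print(ngram[3], count)
# Prints "jelly 3": "jelly" follows this context repeatedly, while
# "pineapple" never appears after it, so the model rates it unlikely.
```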

Today’s models, sets of data and rules that approximate human language, differ from these early efforts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just neighboring words. Third, they are tuned by a huge number of internal “knobs”, so many that it is hard even for the engineers who design them to understand why the models generate one string of words rather than another.

However, the models’ task remains the same as in the 1950s: determining which word is likely to come next. Today, they are so good at this task that almost all of the sentences they generate seem fluent and grammatical.

Peanut butter and pineapple?

We asked a large language model, GPT-3, to complete the sentence “Peanut butter and pineapple___”. It said: “Peanut butter and pineapple are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, you might conclude that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.

But how did GPT-3 come up with this paragraph? By generating a word that fits the context we provided. And then another. And then another. The model has never seen, touched or tasted a pineapple; it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind, even that of a Google engineer, to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.
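As a rough illustration of that word-by-word loop, here is a toy Python sketch; it uses simple word-pair counts where GPT-3 uses a vast learned neural network, and the corpus and function name are ours, not anything taken from the real system:

```python
import random
from collections import defaultdict

# Tiny corpus; a real model is trained on much of the text on the internet.
corpus = ("peanut butter and jelly is a great combination because the "
          "sweet and salty flavors complement each other perfectly").split()

# Record which words were observed to follow each word.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def continue_text(prompt, length=10):
    """Extend the prompt one plausible word at a time: pick a word that
    fits the context, append it, and repeat."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # no observed continuation, so stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("peanut butter and"))
# e.g. "peanut butter and jelly is a great combination because the sweet ..."
```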

Large AI language models can engage in fluent conversation. However, they have no overall message to communicate, so their sentences often follow common literary tropes extracted from the texts they were trained on. For instance, if prompted with the topic “the nature of love”, the model might generate sentences about believing that love conquers all. The human brain primes the reader to interpret these words as the model’s opinion on the subject, but they are simply a plausible sequence of words.

The human brain is programmed to infer intentions behind words. Every time you start a conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings, and beliefs.

The process of jumping from words to the mental model is seamless and is activated every time you receive a full sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.

In the case of AI systems, however, it misfires, building a mental model out of thin air.

A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also soft and creamy, which helps offset the texture of the feather.”

The text in this case is just as fluent as our pineapple example, but this time the model is saying something decidedly less sensible. You begin to suspect that GPT-3 has never actually tried peanut butter and feathers.

Attributing intelligence to machines, denying it to humans

A sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics, the study of language in its social and cultural context, shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently.

For instance, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs they are qualified for. Similar biases exist against speakers of dialects that are not considered prestigious, such as Southern English in the US, against deaf people using sign languages, and against people with speech impediments such as stuttering.

These prejudices are very damaging, often lead to racist and sexist assumptions, and time and again prove to be unfounded.

Fluent language alone does not imply humanity

Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have found, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.

This article by Kyle Mahowald, assistant professor of linguistics, The University of Texas at Austin College of Liberal Arts, and Anna A. Ivanova, PhD candidate in brain and cognitive sciences, Massachusetts Institute of Technology (MIT), is republished from The Conversation under a Creative Commons license. Read the original article.
