Stop debating whether AI is ‘conscious’ – the question is whether we can trust it

Over the past month, there has been a torrent of articles, interviews, and other types of media coverage about Blake Lemoine, the Google engineer who told The Washington Post that LaMDA, a large language model created for conversations with users, is ‘sentient’.

After reading a dozen different takes on the subject, I have to say that the media has become (a bit) disillusioned with the hype surrounding current AI technology. Many of the articles discussed why deep neural networks are not ‘sentient’ or ‘conscious’. This is an improvement over a few years ago, when news outlets ran sensational stories about AI systems inventing their own language, taking over every job, and accelerating toward artificial general intelligence.

But the fact that we are talking about sentience and consciousness again underscores an important point: we are at a stage where our AI systems, namely large language models, are becoming increasingly convincing while still suffering from fundamental flaws that scientists have pointed out on different occasions. And I know that “AI fools people” has been discussed since the ELIZA chatbot in the 1960s, but today’s LLMs really are on another level. If you don’t know how language models work, Blake Lemoine’s conversations with LaMDA seem almost surreal, even if they were cherry-picked and edited.

However, the point I want to make here is that ‘sentience’ and ‘consciousness’ are not the most useful terms for discussing LLMs and current AI technology. A more important discussion would be one about human compatibility and trust, especially as these technologies are being prepared for integration into everyday applications.

Why large language models don’t speak our language

The workings of neural networks and large language models have been discussed at length this past week (I highly recommend reading Melanie Mitchell’s interview with MSNBC for a balanced view of how LaMDA and other LLMs work). I’d like to give a more zoomed-in view of the situation, starting with human language, the yardstick against which LLMs are compared.

For humans, language is a means of communicating the complicated and multidimensional activations happening in our brains. For example, when two brothers talk to each other and one of them says “mom,” the word is associated with many activations in different parts of the brain, including memories of her voice and face, feelings, and various experiences from the distant past to (possibly) recent days. In fact, there can be a huge difference between the kinds of representations the brothers hold in their heads, depending on the experiences each of them has had. Yet the word “mom” provides a compressed, well-understood approximation that helps them agree on the same concept.

When you use the word “mom” in a conversation with a stranger, the gap between experiences and memories becomes even wider. But again, you manage to reach an understanding based on the shared concepts you hold in your minds.

Think of language as a compression algorithm that helps transfer the enormous amount of information in one brain to another person. The evolution of language is directly tied to the experiences we have had in the world, from physical interactions with our environment to social interactions with fellow human beings.

Language is grounded in our shared experiences of the world. Children know about gravity, dimension, the physical consistency of objects, and human and social concepts such as pain, sadness, fear, family, and friendship even before uttering their first word. Without those experiences, language has no meaning. This is why language usually omits the commonsense knowledge and information that interlocutors share. On the other hand, the degree of shared experience and memory determines the depth of the conversation you can have with another person.

Large language models, on the other hand, have no physical or social experience. They are trained on billions of words and learn to respond to prompts by predicting the next sequence of words. This is an approach that has yielded impressive results in recent years, especially after the introduction of the transformer architecture.
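As a concrete illustration of next-word prediction, here is a minimal sketch using the publicly available GPT-2 model through the Hugging Face transformers library; the model choice and prompt are assumptions made for illustration, since LaMDA itself is not publicly available.

```python
# Minimal sketch: an LLM responds to a prompt by repeatedly predicting the
# next token. Model choice (GPT-2) is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Language is a means of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The scores at the last position rank every vocabulary entry as a candidate
# for the next token; taking the argmax gives the most likely continuation.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```

Repeating this step, feeding each predicted token back into the model, is how an LLM produces entire paragraphs of text.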

How do transformers manage to make such convincing predictions? They turn text into ‘tokens’ and ‘embeddings’, mathematical representations of words in a multidimensional space. They then process the embeddings to add other dimensions, such as the relationships between the words in a span of text and their roles in the sentence and paragraph. With enough examples, these embeddings can provide a good approximation of how words should appear in sequence. Transformers have become popular mainly because they are scalable: their accuracy improves as they grow and are fed more data, and they can mostly be trained through unsupervised learning.
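To make the token-and-embedding step concrete, here is a minimal sketch using BERT through the same Hugging Face transformers library; the model and input sentence are illustrative assumptions, and production LLMs use their own tokenizers and much larger embedding spaces.

```python
# Minimal sketch: text -> tokens -> contextual embeddings. Model choice
# (bert-base-uncased) is an illustrative assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("My mom called me yesterday.", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()))
# e.g. ['[CLS]', 'my', 'mom', 'called', 'me', 'yesterday', '.', '[SEP]']

with torch.no_grad():
    outputs = model(**inputs)

# Each token is mapped to a 768-dimensional vector, contextualized by the
# words around it in the sequence.
embeddings = outputs.last_hidden_state
print(embeddings.shape)  # torch.Size([1, number_of_tokens, 768])
```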

But the fundamental difference remains. Neural networks process language by turning it into embeddings. For humans, language is the embedding of thoughts, feelings, memories, physical experience, and many other things we have yet to discover about the brain.

Therefore, it is fair to say that despite their enormous progress and impressive results, transformers, large language models, deep neural networks, etc. are far from speaking our language.

Sentience vs. Compatibility and Trust

Much of today’s discussion revolves around whether we should assign attributes such as sentience, consciousness, and personhood to AI. The problem with these discussions is that they focus on concepts that are vaguely defined and mean different things to different people.

For example, functionalists might argue that neural networks and large language models are conscious because they exhibit (at least in part) the same kind of behavior you’d expect from a human, even though they’re built on a different substrate. Others might argue that a biological substrate is a requirement for consciousness and conclude that neural networks will never be conscious. You can argue about qualia, the Chinese room experiment, the Turing test, and so on, and the discussion can go on forever.

However, a more practical question is: how “compatible” are current neural networks with the human mind, and how far can we trust them with critical applications? And this is an important discussion to have, because large language models are mostly developed by companies that are striving to turn them into commercial applications.

With enough training, for example, you can teach a chimpanzee to drive a car. But would you put it behind the wheel on a road where pedestrians will be crossing? You wouldn’t, because you know that however smart they are, chimpanzees don’t think the way humans do and can’t be given responsibility for tasks that involve human safety.

Likewise, a parrot can be taught many phrases. But would you trust it to be your customer service representative? Probably not.

Even among humans, some cognitive impairments disqualify people from taking on certain jobs and tasks that require human interaction or involve people’s safety. In many cases, these people can read, write, speak fluently, and remain consistent and logical in long conversations. We don’t question their sentience, consciousness, or personhood. But we know that their decisions can become inconsistent and unpredictable because of their condition (see the case of Phineas Gage, for example).

What matters is whether you can trust the person to think and decide as an average person would. In many cases, we entrust people with tasks because we know that their sensory system, common sense, feelings, goals, and rewards are largely compatible with ours, even if they don’t speak our language.

What do we know about LaMDA? For starters, it doesn’t sense the world the way we do. Its ‘knowledge’ of language is not grounded in the same kinds of experiences as ours. Its commonsense knowledge stands on shaky foundations, because there is no guarantee that large amounts of text will cover all the things we leave unsaid in language.

Given this incompatibility, how far can we trust LaMDA and other large language models, however good they are at producing text? A friendly and entertaining chatbot program might not be a bad idea, as long as it doesn’t steer the conversation into sensitive topics. Search engines are also a good area of application for LLMs (Google has been using BERT in Search for a few years). But can you trust them with more sensitive tasks, such as an open-ended customer service chatbot or a banking advisor (even if they’ve been trained or fine-tuned on a ton of relevant conversation transcripts)?

I think we need application-specific benchmarks to test the consistency of LLMs and their compatibility with human common sense in different areas. When it comes to real applications, there should always be clearly defined boundaries that determine where the conversation becomes off-limits to the LLM and must be handed over to a human operator.
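As a rough, hypothetical sketch of such a boundary (the topic names, classifier, and routing logic below are illustrative assumptions, not anything specified in the article), the hand-off could look like this:

```python
# Hypothetical sketch of a hand-off boundary: topic names and the classifier
# are assumptions for illustration, not from the article.
from typing import Callable

SENSITIVE_TOPICS = {"medical", "legal", "fraud", "account_closure"}

def route_turn(user_message: str, classify_topic: Callable[[str], str]) -> str:
    """Return 'llm' for routine turns, 'human' when the topic is off-limits."""
    topic = classify_topic(user_message)
    if topic in SENSITIVE_TOPICS:
        return "human"  # escalate the conversation to a human operator
    return "llm"        # let the language model draft a reply

# Trivial stand-in classifier, for demonstration purposes only.
demo_classifier = lambda msg: "fraud" if "stolen" in msg.lower() else "billing"
print(route_turn("My card was stolen yesterday", demo_classifier))  # -> human
print(route_turn("How do I update my address?", demo_classifier))   # -> llm
```

Application-specific benchmarks would then measure how reliably the model stays consistent and sensible within the territory it is allowed to handle.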

The Problem Solver’s Perspective

A while back I wrote an essay about “problem seekers” and “problem solvers”. Basically, what I said is that human intelligence is about finding the right problems and artificial intelligence (or the AI we have today) is about solving those problems in the most efficient way.

We have seen time and again that computers can find shortcuts to solving complicated problems without acquiring the cognitive abilities of humans. We’ve seen it with checkers, chess, Go, programming contests, protein folding, and other well-defined problems.

Natural language is in some ways different from, and in other ways similar to, all those other problems AI has solved. On the one hand, transformers and LLMs have shown that they can produce impressive results without going through the language-learning process a typical human does, which is to first explore the world and understand its basic rules, and only then acquire language in order to interact with other people on the basis of this shared knowledge. On the other hand, they lack the human experience that comes with language learning. They can be useful for solving well-defined language-related problems. But we must not forget that their compatibility with human language processing is limited, and we should therefore be careful about how far we trust them.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
