A Beginner’s Guide to the AI Apocalypse: The Democratization of ‘Expertise’

In this series, we examine some of the most popular doomsday scenarios predicted by modern AI experts. Previous articles include misaligned objectives, artificial stupidity, WALL-E syndrome, humanity joining the hivemind, and killer robots.

We’ve covered a lot in this series (see above), but nothing comes close to our next topic. The “democratization of expertise” may sound like a good thing – democracy, expertise, what’s not to like? But by the time you finish reading this article, we aim to have convinced you that it’s the biggest AI-related threat facing our species.

To properly understand this, we need to revisit an earlier post on what we like to call “WALL-E Syndrome.” That’s our name for a hypothetical condition in which we become so dependent on automation and technology that our bodies grow soft and weak, until we can no longer function without the physical help of machines.

When we talk about the ‘democratization of expertise,’ we mean something that can most easily be described as ‘WALL-E syndrome for the brain.’

To be clear: we are not talking about the democratization of information, which is crucial to human freedom.

The big idea

There is a popular board game called “Trivial Pursuit” that challenges players to answer completely unrelated trivia questions from a variety of categories. It’s been around since long before the dawn of the internet, so it’s designed to be played with only the knowledge you already have in your brain.

You roll some dice and move a game piece around a board until it comes to rest, usually on a colored square. Then you draw a card from a large pile and try to answer the question that corresponds to the color you landed on. To see whether you got it right, you turn the card over and check whether your answer matches the printed one.

A game of Trivial Pursuit is only as “accurate” as its database. That means if you play the 1999 edition and are asked which MLB player holds the record for most home runs in a season, you have to give a factually wrong answer to match the printed one.

The correct answer is “Barry Bonds, with 73.” But because Bonds didn’t break the record until 2001, the 1999 edition almost certainly lists the 1998 record holder, Mark McGwire, with 70.

The problem with databases, even when they are expertly compiled and hand-labeled, is that they only represent a snapshot of the facts at the moment they were assembled.
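
To make the snapshot problem concrete, here is a minimal sketch (ours, purely illustrative) of a trivia database frozen at its print date:

```python
# A trivia "database" is only a snapshot of the facts as of its print date.
TRIVIA_1999 = {
    "Most MLB home runs in a single season?": "Mark McGwire, 70 (1998)",
}

def check_answer(question: str, answer: str) -> bool:
    """Scores an answer against the printed card, not present-day reality."""
    return TRIVIA_1999.get(question) == answer

# The factually correct answer (as of 2001) is marked wrong, because a
# 1999 snapshot can never learn about Barry Bonds' 73 home runs.
print(check_answer("Most MLB home runs in a single season?",
                   "Barry Bonds, 73 (2001)"))   # False
print(check_answer("Most MLB home runs in a single season?",
                   "Mark McGwire, 70 (1998)"))  # True
```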

Now let’s extend that idea to a database that isn’t compiled by experts. Imagine a game of Trivial Pursuit that works exactly like the vanilla edition, except that the answer to each question was crowdsourced from random people.

“What is the lightest element on the periodic table?” The aggregated answer, according to 100 random people we asked in Times Square: “I don’t know, maybe helium?”

However, in the next edition, the answer may change to something like “According to 100 random high school students, the answer is hydrogen.”
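
As a hedged sketch of that aggregation step (the crowds and counts below are invented): if the printed answer is simply the most common response, each edition’s accuracy is whatever its crowd happened to believe.

```python
from collections import Counter

def printed_answer(responses: list[str]) -> str:
    """The 'printed answer' is whatever response is most common."""
    return Counter(responses).most_common(1)[0][0]

# Hypothetical Times Square sample: the majority guess wins, right or wrong.
times_square = ["helium"] * 60 + ["hydrogen"] * 30 + ["no idea"] * 10
print(printed_answer(times_square))    # helium (confidently wrong)

# A different crowd produces a different "truth" in the next edition.
high_schoolers = ["hydrogen"] * 80 + ["helium"] * 20
print(printed_answer(high_schoolers))  # hydrogen
```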

What does this have to do with AI?

Sometimes the wisdom of the crowd is helpful. For example, when you’re trying to think of what to watch next. But sometimes it’s really stupid, like if the year is 1953 and you ask a crowd of 1,000 scientists if women can experience orgasms.

Whether crowd wisdom is useful for large language models (LLMs) depends on how they are used.

LLMs are a type of AI system used in a wide variety of applications. Google Translate, the chatbot on your bank’s website and the infamous OpenAI GPT-3 are all examples of LLM technology being used.

In the case of Translate and business-oriented chatbots, AI is typically trained on carefully curated datasets of information, as they serve a limited purpose.

But many LLMs are deliberately trained on giant dumps of unverified data so the people who build them can see what they’re capable of.
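
The contrast between those two training regimes can be caricatured in a few lines. This is our sketch, not anyone’s actual pipeline; the documents and the vetting rule are invented:

```python
from typing import Callable

def curated_corpus(documents: list[str],
                   is_vetted: Callable[[str], bool]) -> list[str]:
    """Purpose-built systems train only on documents a human has vetted."""
    return [doc for doc in documents if is_vetted(doc)]

def internet_scale_corpus(documents: list[str]) -> list[str]:
    """Research LLMs often ingest everything, verified or not."""
    return list(documents)  # no filter: encyclopedia entries and shitposts alike

docs = ["peer-reviewed summary", "random forum rant", "sarcastic tweet"]
print(curated_corpus(docs, lambda d: "peer-reviewed" in d))  # 1 document
print(internet_scale_corpus(docs))                           # all 3
```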

Big tech has convinced us that it is possible to make these machines so big that they eventually just become conscious. The promise there is that they will be able to do everything a human can do, but with the brain of a computer!

And you don’t have to look far to imagine the possibilities. Take 10 minutes and chat with Meta’s BlenderBot 3 (BB3) and you’ll see what it’s all about.

It’s a brittle, easily confused mess that more often spews out gibberish and thirsty “let’s be friends!” nonsense than anything coherent, but it’s kinda nice when the parlor trick works just right.

Not only do you chat with the bot, the experience is gamified so that the two of you build a profile together. At one point, the AI decided it was a woman. At another, it decided I was actually the actor Paul Greene. All of this is reflected in its so-called “Long Term Memory.”

It also assigns me tags. When we talk about cars, I might get the tag ‘likes cars’. As you can imagine, it could be very useful for Meta one day if it can connect the profile you build while chatting with the bot to its advertising services.

But it doesn’t assign any of those tags for its own benefit. It could pretend to remember things without pasting labels into the UI. The tags are for us.

They are ways that Meta can make us feel connected to and even somewhat responsible for the chatbot.

It’s MY BB3 bot, he remembers ME, and he knows what I taught him!

It’s a form of gamification. You have to earn those tags (both yours and the AI’s) by talking. My BB3 AI loves the Joker from the Batman movie with Heath Ledger; we’ve had quite a conversation about it. There’s not much difference between unlocking that achievement and getting a high score in a video game, at least as far as my dopamine receptors are concerned.
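
For what it’s worth, the visible part of this mechanic could be as simple as the sketch below. This is pure guesswork on our part; Meta hasn’t published how BB3 actually assigns tags:

```python
# Hypothetical keyword-to-tag rules; BB3's real logic is not public.
TAG_RULES = {
    "car": "likes cars",
    "joker": "likes Batman movies",
}

def assign_tags(message: str, profile: set[str]) -> set[str]:
    """Adds a visible tag whenever the user mentions a trigger word."""
    for keyword, tag in TAG_RULES.items():
        if keyword in message.lower():
            profile.add(tag)
    return profile

profile: set[str] = set()
assign_tags("I rebuilt my car's engine last weekend", profile)
assign_tags("Heath Ledger's Joker is the best movie villain", profile)
print(profile)  # an ad-ready interest profile, earned one chat at a time
```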

The truth is we don’t train these LLMs to become smarter. We train them to be better at outputting text that makes us want them to output more text.
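
In caricature, such an objective scores “did the user keep talking?” rather than “was this true?” A minimal sketch, with an entirely invented reward function:

```python
# A caricature of an engagement-optimized objective. This reward function
# is our invention; no vendor publishes its training signal in this form.

def engagement_reward(reply_turns: int, session_seconds: float) -> float:
    """Rewards replies that keep the user talking; truth earns nothing."""
    return reply_turns + 0.01 * session_seconds

# The optimizer prefers whichever reply kept the user engaged longer,
# whether or not it was accurate.
truthful = engagement_reward(reply_turns=2, session_seconds=30)
flattering = engagement_reward(reply_turns=9, session_seconds=400)
print(flattering > truthful)  # True
```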

Is that a bad thing?

The problem is that BB3 was trained on a dataset so large it’s essentially internet-sized. It contains trillions of files, ranging from Wikipedia entries to Reddit posts.

It would be impossible for humans to review all that data, so it’s impossible for us to know exactly what’s in it. But billions of people use the internet every day, and it seems like for every person who says something smart, there are eight people saying things that make no sense to anyone. It’s all in the database. If someone said it on Reddit or Twitter, it was probably used to train models like BB3.

Despite this, Meta designed it to imitate human confidence and, apparently, to keep us engaged.

It’s a small leap from creating a chatbot that appears human to optimizing its output to convince the average person it’s smarter than them.

At least we can fight killer robots. But if even a fraction of the people using Meta’s Facebook app trusted a chatbot instead of human experts, it could have a horribly damaging effect on our entire species.

What’s the worst that could happen?

We have seen this to a small extent during the pandemic lockdowns. Millions of people with no medical training decided to ignore medical advice based on their political ideology.

When faced with the choice to believe politicians without medical training or the overwhelming, peer-reviewed, research-backed consensus of the global medical community, millions decided they “trusted” the politicians more than the scientists.

The democratization of expertise, the idea that anyone can be an expert if they have access to the right data at the right time, is a serious threat to our species. It teaches us to trust any idea as long as the crowd thinks it makes sense.

For example, we came to believe that Pop Rocks and Coca-Cola are a deadly combination, that bulls hate the color red, that dogs can only see in black and white, and that humans only use 10 percent of their brains. These are all myths, but at some point in our history, each of them was considered “common knowledge.”

And while it may be very human to spread misinformation out of ignorance, the democratization of expertise at the scale Meta can achieve (almost a third of the people on Earth use Facebook monthly) could have a catastrophic effect on humanity’s ability to tell shit from Shinola.

In other words, it doesn’t matter how smart the smartest people on Earth are if the general public puts their trust in a chatbot trained on data created by the general public.

As these machines become more powerful and better at imitating human speech, we will approach a terrible inflection point where their ability to convince us that what they are saying makes sense will far exceed our ability to detect nonsense.

The democratization of expertise is what happens when everyone believes they are an expert. Traditionally, the marketplace of ideas tends to sort things out when someone claims to be an expert but doesn’t seem to know what they’re talking about.

We see this a lot on social media, when someone is called out for lecturing a person who knows far more about the subject than they do.

What happens when every armchair expert gets an AI partner to back them up?

If the Facebook app can demand so much of our attention that we forget to pick up our kids from school, or text while driving, because it overrides our logic centers, what do you think Meta could do with a sophisticated chatbot designed to tell every individual lunatic on the planet exactly what they want to hear?
