Thursday, October 6, 2022

Why ‘facial expression recognition’ AI is a total scam

Shreya Christina (londonbusinessblog.com)
Shreya has been with londonbusinessblog.com for 3 years, writing copy for client websites, blog posts, EDMs and other mediums to engage readers and encourage action. By collaborating with clients, our SEO manager and the wider londonbusinessblog.com team, Shreya seeks to understand an audience before creating memorable, persuasive copy.

A team of researchers from Jilin Engineering Normal University in China recently published a paper claiming they had built an AI model capable of recognizing human facial expressions.

I’m going to save you some time here: they certainly didn’t. This is currently not possible.

The ability to accurately recognize human emotions is what we would call a “deity level” achievement here at Neural. The only people who really know how you feel at any given moment are you and all the potential omnipotent beings out there.


But you don’t have to take my word for it. You can come to the same conclusion with your own critical thinking skills.

Up front: The research is fundamentally flawed because it conflates facial expressions with human emotion. You can falsify this premise by performing a simple experiment: Assess your current emotional state, then force yourself to make a facial expression that is diametrically opposed to it.

If you feel happy and can ‘act’ sad, you have personally debunked the whole premise of the study. But, just for fun, let’s keep going.

Background: Don’t be fooled by the hype. The researchers aren’t training the AI to recognize expressions. They’re training the AI to beat a benchmark. There is absolutely no conceptual difference between this system and one that tries to determine whether an object is a hot dog or not a hot dog.

What this means is that the researchers have built a machine that tries to guess labels. They basically show their AI model 50,000 photos, one at a time, and force it to choose from a set of labels.

For example, the AI might have six different emotions to choose from (happy, sad, angry, scared, surprised, and so on) and no option to say “I don’t know.”
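That forced-choice setup can be sketched in a few lines. Everything here (the label set, the scoring function, the inputs) is invented for illustration; it is not the researchers’ actual model:

```python
import random

# A hypothetical six-way "emotion classifier": the label set is fixed up
# front, and argmax always returns one of them. There is no way to abstain.
LABELS = ["happy", "sad", "angry", "scared", "surprised", "disgusted"]

def classify(scores):
    """Return the label with the highest score; "I don't know" is not an option."""
    best = max(range(len(LABELS)), key=lambda i: scores[i])
    return LABELS[best]

# Even random, meaningless scores produce a confident-looking answer.
random.seed(0)
noise = [random.random() for _ in LABELS]
print(classify(noise))  # prints one of the six labels, never "unsure"
```

Whatever input arrives, the output is drawn from the predetermined label set; the model literally cannot say anything else.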

From there, AI developers run hundreds of thousands or even millions of “training iterations” to train the model. The machines don’t figure anything out with logic; they just try every possible combination of labels and adapt to the feedback.

It’s a bit more complicated than that, but the big important idea here is that the AI doesn’t care about or understand the data it parses or the labels it applies.
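A toy version of that feedback loop, reduced to a single parameter. The data and the update rule here are made up for illustration, but the shape is the same: guess, get feedback, adjust, repeat until the guesses match the labels:

```python
# Each "image" is a single feature; the correct label is 1 if the
# feature exceeds 0.5, else 0. The model never learns that rule --
# it just nudges a number until its guesses stop being wrong.
data = [(0.1, 0), (0.9, 1), (0.2, 0), (0.8, 1)]

threshold = 0.0  # the model's only parameter

for _ in range(1000):  # "training iterations"
    for x, label in data:
        guess = 1 if x > threshold else 0
        if guess != label:
            # Feedback: shift the threshold away from the mistake.
            threshold += 0.01 if guess == 1 else -0.01

# The model now reproduces the labels, with no notion of what they mean.
print(all((1 if x > threshold else 0) == y for x, y in data))  # True
```

Nothing in the loop represents what the labels mean; the parameter simply drifts until the error signal goes quiet.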

You could show it images of cats and force it to “predict” whether each image was “Spiderman in disguise” or “the color yellow expressed in visual poetry” and it would apply one label or another to each image.

The AI developers would tweak the parameters and rerun the model until it was able to determine which cats were which with enough accuracy to hit a benchmark.

And then you could swap the data back to images of human faces, keep the stupid “Spiderman” and “color yellow” labels, and retrain the model to predict which labels match which faces.

The thing is, the AI doesn’t understand these concepts. These prediction models are essentially just machines standing in front of buttons, pushing them randomly until someone tells them they got it right.

The special thing about them is that they can press tens of thousands of buttons in seconds and never forget the order in which they pressed them.

The problem: This all sounds harmless, because prediction models are great when it comes to outcomes that don’t affect people.

When AI models try to predict something objective, such as whether a particular animal is a cat or a dog, they aid human cognition.

You and I don’t have the time to go through every picture on the internet when we’re trying to find pictures of a cat. But Google’s search algorithms do.

That’s why you can search for “cute cats” on Google and get thousands of relevant photos back.

But AI cannot determine whether a label is really appropriate. If you label a circle with the word “square” and train an AI on that label, it just assumes that anything that looks like a circle is a square. A five-year-old would tell you that you mislabeled the circle.
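A minimal sketch of that failure mode, assuming a toy lookup-style “model” (the feature names and labels here are invented):

```python
# "Train" on a single mislabeled example: a circle tagged as "square".
training_data = {("round", "no_corners"): "square"}

def predict(features):
    # The model can only echo the labels it was trained on; it has no
    # independent notion of what a square actually is.
    return training_data.get(features, "unknown")

print(predict(("round", "no_corners")))  # prints "square": the mislabel is reproduced
```

The five-year-old checks the label against reality; the model checks it against nothing.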

Neural’s take: This is a total scam. The researchers present their work as useful in “fields such as human-computer interactions, safe driving… and medicine,” but there is absolutely no evidence to support that claim.

The truth is that human-computer interactions don’t depend on reading human emotions, safe-driving algorithms are more effective when they focus on attention rather than emotional state, and there is no room in medicine for weak, prediction-based assessments of an individual’s circumstances.

The bottom line is simple: You can’t teach an AI to identify human sexuality, politics, religion, emotion, or any other non-intrinsic quality from a photo of a face. What you can do is perform prestidigitation with a prediction algorithm in the hope of exploiting human ignorance.
