Monday, September 26, 2022

Microsoft shuts down controversial facial recognition tool that claims to identify emotions

Shreya Christina, londonbusinessblog.com

Microsoft is cutting public access to a number of AI-powered facial analysis tools, including one that claims to identify a subject’s emotion from videos and images.

Such “emotion recognition” tools have been criticized by experts. Not only do facial expressions that are often assumed to be universal differ across populations, they argue, but it is also unscientific to equate external expressions of emotion with internal feelings.

“Companies can say whatever they want, but the data is clear,” Lisa Feldman Barrett, a professor of psychology at Northeastern University who conducted a review of AI-powered emotion recognition, told The Verge in 2019. “They can detect a frown, but that’s not the same as detecting anger.”

The decision is part of a larger overhaul of Microsoft’s AI ethics policies. The company’s updated Responsible AI Standards (first set out in 2019) emphasize accountability for knowing who uses its services, and greater human oversight of where these tools are applied.

In practice, this means that Microsoft will restrict access to some features of its facial recognition service (known as Azure Face) and remove others entirely. Users must apply to use Azure Face for facial identification, telling Microsoft exactly how and where they will deploy its systems. Some use cases with less harmful potential (such as automatically blurring faces in images and videos) remain openly accessible.
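Microsoft has not published how its blurring feature works, but the general idea behind face anonymization can be sketched simply: given a detected bounding box, replace each pixel inside it with an average of its neighborhood so the face is no longer recognizable. The function below is a minimal, illustrative sketch using a plain grayscale pixel grid; the box format and blur radius are assumptions for the example, not Azure Face’s actual API.

```python
def box_blur_region(image, box, radius=1):
    """Anonymize one region of an image with a simple box blur.

    image: 2D list of grayscale pixel values (list of rows of ints)
    box:   (top, left, bottom, right) bounding box; bottom/right exclusive
    radius: neighborhood half-width used for the averaging window
    Returns a new image; the input is left unmodified.
    """
    top, left, bottom, right = box
    height, width = len(image), len(image[0])
    blurred = [row[:] for row in image]  # copy so the original survives
    for y in range(top, bottom):
        for x in range(left, right):
            total, count = 0, 0
            # Average over the (2*radius+1)^2 window, clipped to the image.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < height and 0 <= nx < width:
                        total += image[ny][nx]
                        count += 1
            blurred[y][x] = total // count
    return blurred
```

In a real pipeline the bounding box would come from a face detector; repeating the blur several times, or using a large radius, smears the region enough that identity cues are lost while the rest of the frame stays sharp.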

In addition to removing public access to its emotion recognition tool, Microsoft is also discontinuing Azure Face’s ability to identify “characteristics such as gender, age, smile, facial hair, hair, and makeup.”

“Experts inside and outside the company have pointed to the lack of scientific consensus on the definition of ’emotions’, the challenges in how inferences generalize across use cases, regions and demographics, and the heightened privacy concerns surrounding these types of capabilities,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in a blog post announcing the news.

Microsoft says it will stop offering these features to new customers starting today, June 21, while revoking access from existing customers on June 30, 2023.

But while Microsoft is retiring public access to these features, it will continue to use them in at least one of its own products: an app called Seeing AI that uses machine vision to describe the world to people with visual impairments.

In a blog post, Sarah Bird, Microsoft’s lead product manager for Azure AI, said tools such as emotion recognition “can be valuable when used across a range of controlled accessibility scenarios.” It is not clear whether these tools will be used in other Microsoft products.

Microsoft is also introducing similar restrictions to its Custom Neural Voice feature, which allows customers to create AI voices based on recordings of real people (also known as an audio deepfake).

The tool “has exciting potential in the areas of education, accessibility and entertainment,” Bird writes, but notes that it’s “also easy to envision how it could be used to inappropriately impersonate speakers and to mislead listeners.” In the future, Microsoft says it will restrict access to the feature to “managed customers and partners” and “ensure the speaker’s active participation in creating a synthetic voice.”
