
Microsoft is phasing out public access to some artificial intelligence facial analysis tools, including one that claims to be able to identify a subject's emotions from videos and pictures.


This "emotion recognition" tool has been criticized by experts. They say it's widely believed that facial expressions differ among different people, and that it's unscientific to equate external displays of emotion with internal emotions. Because a machine can detect a scowl, but that's not the same thing as detecting anger.

This decision is part of a broader overhaul of Microsoft's AI ethics policies. The company's updated standard for responsible AI, first proposed in 2019, emphasizes accountability for knowing who uses its services and greater human oversight of where these tools are applied.

In practice, this means Microsoft will limit access to some features of its facial recognition service, known as Azure Face, and remove others entirely. Users must apply to use Azure Face for facial recognition, for example, by telling Microsoft how and where they will deploy their system. Some less harmful use cases, such as automatic blurring of faces in images and videos, will remain open access.

In addition to removing public access to its emotion recognition tool, Microsoft is also retiring Azure Face's ability to identify "attributes such as gender, age, smile, facial hair, hair and makeup."
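For context, the kind of request being retired looked roughly like the sketch below: a call to the classic Azure Face v1.0 "detect" endpoint asking for the attributes named above. This is an illustrative example only; the endpoint URL, key, and image URL are placeholders, and these attribute options are no longer available to new public users.

```python
# Illustrative sketch of a classic Azure Face "detect" request asking for the
# attributes Microsoft is retiring. Endpoint, key, and image URL are placeholders.
import requests

FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
FACE_KEY = "<subscription-key>"                                        # placeholder

response = requests.post(
    f"{FACE_ENDPOINT}/face/v1.0/detect",
    params={
        # Attribute names from the classic Face API; emotion, gender, age, smile,
        # facialHair, hair, and makeup are among those being withdrawn from public access.
        "returnFaceAttributes": "age,gender,smile,facialHair,hair,makeup,emotion",
    },
    headers={
        "Ocp-Apim-Subscription-Key": FACE_KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/photo.jpg"},  # placeholder image URL
    timeout=10,
)
response.raise_for_status()
for face in response.json():
    # Each detected face carries the requested attribute estimates.
    print(face.get("faceAttributes", {}))
```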

Natasha Crampton, Microsoft's chief responsible AI officer, wrote in a blog post announcing the news: "Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of 'emotion,' the challenges in how to extrapolate across use cases, regions, and demographics, as well as heightened privacy concerns around such features."

Starting June 21, Microsoft will stop offering these features to new users. Access rights for existing users will be revoked on June 30, 2023.

Some of Microsoft’s AI apps will still offer emotion recognition.

Although Microsoft will no longer make these features available to the public, it will continue to use them in at least one of its products: Seeing AI, an app that uses machine vision to describe the world for people with visual impairments.


Sarah Bird, a product manager on Microsoft's main Azure AI team, said tools such as emotion recognition are "valuable when used in controlled accessibility scenarios." It is unclear whether these tools will be used in other Microsoft products.

Microsoft has introduced similar restrictions on its Custom Neural Voice feature, which allows customers to create artificial intelligence voices based on recordings of real people (sometimes called deepfake audio).

The tool "has exciting potential for education, accessibility, and entertainment," but Byrd also noted that "it can also be easily used to inappropriately imitate speakers and deceive listeners." Microsoft says it will limit access to the feature to "managing customers and partners" in the future and "ensure the active participation of the original speaker when creating synthesized speech."
