Amazon has unveiled an experimental Alexa feature that allows the AI assistant to mimic the voices of users’ deceased relatives.
The company demonstrated the feature at its annual re:MARS conference, showing a video in which a child asks Alexa to read a bedtime story in his late grandmother’s voice.
“As you’ve seen in this experience, instead of Alexa’s voice reading the book, it’s the child’s grandmother’s voice,” said Rohit Prasad, Amazon’s chief scientist for Alexa AI. Prasad introduced the clip by saying that adding “human attributes” to AI systems has become increasingly important “in these times of the ongoing pandemic, when so many of us have lost someone we love.”
“While AI can’t take away that pain of loss, it can certainly make their memories lasting,” Prasad said.
Amazon has not given any indication of whether this feature will ever be made public, but says its systems can learn to imitate someone’s voice from just one minute of recorded audio. In an age of abundant videos and voice memos, that means it’s well within the reach of the average consumer to clone the voices of loved ones — or anyone else they like.
While this particular application is already controversial, with social media users calling the feature “creepy” and a “monstrosity,” such AI voice mimicry has become more common in recent years. Often known as “audio deepfakes,” these imitations are already regularly used in industries such as podcasting, film and TV, and video games.
For example, many audio recording suites allow users to clone individual voices from their recordings. That way, if, say, a podcast host stumbles over a line, a sound engineer can edit what they’ve said simply by typing in a new script. Generating seamless lines of speech requires a lot of work, but very small edits can be made with a few clicks.
The same technology has also been used in film. It was revealed last year that a documentary about the life of chef Anthony Bourdain, who died in 2018, used AI to clone his voice to read quotes from emails he sent. Many fans were disgusted by the application of the technology, calling it “creepy” and “deceitful.” Others defended the technology’s use as similar to other reconstructions used in documentaries.
Amazon’s Prasad said the feature could enable customers to have “lasting personal relationships” with the deceased, and it’s certainly true that many people around the world are already using AI for this purpose. People have already created chatbots that imitate deceased loved ones, for example, by training AI on saved conversations. Adding accurate voices to these systems — or even video avatars — is entirely possible using current AI technology and will likely become more widespread.
Whether customers want their deceased loved ones to become digital AI dolls, however, is another matter.