The new Alexa Guard feature coming later this year is an example of that. To activate it, you’d call out “Alexa, I’m leaving” or a similar phrase to an Echo or other device on your way out the door. If an Echo device in a home hears the sound of breaking glass or a smoke alarm while you’re gone, it will send a notification to your phone with a link to a recording of the sound that triggered the warning.

To avoid streaming live audio from a person’s home back to Amazon’s computer systems, Prasad’s team had to create a new machine learning system that lurks inside an Echo device and constantly listens for alarms or smashing sounds. It was trained in part on audio samples from public domain video, although Prasad says development also involved some destruction. “We did break a lot of glass in our internal testing,” he says.
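The article doesn’t describe Amazon’s model, but the on-device idea can be sketched with a toy acoustic-event detector: frame the audio, compute simple features, and flag frames that look like a sharp broadband transient. Everything here (the feature choice, the thresholds, the function names) is an illustrative assumption; a production system would run a trained neural classifier on spectrogram frames instead.

```python
import math
import random

def frame_features(samples, frame_size=512):
    """Split raw PCM samples into frames and compute two simple
    features per frame: short-time energy and zero-crossing rate.
    (A real detector would feed spectrogram frames to a trained
    classifier; these hand-picked features are a toy stand-in.)"""
    feats = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energy = sum(s * s for s in frame) / frame_size
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_size
        feats.append((energy, zcr))
    return feats

def detect_event(samples, energy_thresh=0.1, zcr_thresh=0.25):
    """Flag frames that look like a loud, broadband transient
    (high energy *and* high zero-crossing rate) -- roughly the
    acoustic signature of glass breaking. Returns frame indices."""
    return [i for i, (e, z) in enumerate(frame_features(samples))
            if e > energy_thresh and z > zcr_thresh]

# Toy signal: a quiet 220 Hz hum, then a noisy burst standing in
# for a crash. Only the burst frames should trip the detector.
random.seed(0)
quiet = [0.01 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(4096)]
burst = [random.uniform(-1, 1) for _ in range(1024)]
hits = detect_event(quiet + burst)
print(hits)  # only frames inside the burst region are flagged
```

Running everything locally like this is the point the article makes: the device decides “interesting sound or not” on its own, and only ships a short recording upstream after something trips the detector.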

Amazon’s audio algorithms are also getting better at tracking subtleties of speech. Prasad’s team trained algorithms to detect the characteristically sibilant sound of whispered speech to enable the whispering upgrade coming later this year. He says Alexa will also get better at analyzing the prosody of what people say. Combined with better text analysis, that will make tasks such as creating shopping lists easier, because the assistant can understand that “add paper towels, peanut butter, and bananas to my shopping list” refers to three separate items, not just one.
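The shopping-list example can be illustrated with a text-only parse. This is a simplification I’m assuming for demonstration, not Amazon’s method: the article says the real system leans on prosody as well, but on the transcript alone the job reduces to splitting the item phrase on commas and a final “and.”

```python
import re

def parse_list_items(utterance):
    """Pull individual items out of an 'add ... to my shopping list'
    command. A toy, text-only parse for illustration."""
    m = re.match(r"add (.+?) to my (?:shopping )?list", utterance.strip(), re.I)
    if not m:
        return []
    # Split the item phrase on commas and a standalone "and",
    # then drop empty fragments left by ", and".
    body = m.group(1)
    return [p.strip() for p in re.split(r",|\band\b", body) if p.strip()]

print(parse_list_items("add paper towels, peanut butter, and bananas to my shopping list"))
# → ['paper towels', 'peanut butter', 'bananas']
```

The hard part in practice is exactly what the rule above can’t see: whether a pause or pitch reset separates “peanut butter” from “bananas,” which is why prosody helps disambiguate one item from three.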

Sourced through Scoop.it from: www.wired.com