A more nuanced assistant is arguably more helpful. “The musical elements of speech help you set expectations for what’s coming,” says Laura Wagner, a psycholinguist at Ohio State University. Intonation could lead to more efficient phrasing and less ambiguity. It could also give Alexa an emotional advantage over digital assistants from Apple and Google. “We’re going to love it more if it sounds human,” Wagner says. Evidence suggests that people feel more connected with objects capable of “contingent interaction,” the responsive back-and-forth of talking with another person. “The more human Alexa sounds, the more I’m going to want to trust her and use her,” Wagner says.
That, of course, explains why Amazon wants to make Alexa sound as human as possible.
Mind the (Expectation) Gap
But Amazon risks making Alexa sound too human, too soon. In February, the company unveiled “speechcons”: dozens of interjections like argh, cheerio, d’oh, and bazinga (no, really, bazinga) that Alexa enunciates more expressively than other words. Amazon wants to add a layer of personality to its virtual assistant, but quirks like that could make Alexa less useful. “If Alexa starts saying things like hmm and well, you’re going to say things like that back to her,” says Alan Black, a computer scientist at Carnegie Mellon who helped pioneer the use of speech synthesis markup tags in the 1990s. Humans tend to mimic conversational styles; make a digital assistant too casual, and people will reciprocate. “The cost of that is the assistant might not recognize what the user’s saying,” Black says.
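For developers, speechcons surface through Alexa’s Speech Synthesis Markup Language (SSML) support: a skill wraps a word in Amazon’s documented interjection tag to trigger the expressive pronunciation. Below is a minimal sketch in Python, assuming the standard Alexa Skills Kit response envelope and the say-as interjection tag; the function name and the sample sentence are illustrative, not part of Amazon’s API.

import json

def build_speechcon_response(speechcon: str, sentence: str) -> str:
    # Wrap the speechcon in the SSML interjection tag so Alexa renders it
    # with its expressive, pre-recorded pronunciation; the surrounding
    # sentence keeps Alexa's default prosody.
    ssml = (
        "<speak>"
        f'<say-as interpret-as="interjection">{speechcon}.</say-as> '
        f"{sentence}"
        "</speak>"
    )
    # Standard Alexa Skills Kit response shape: outputSpeech of type SSML.
    response = {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }
    return json.dumps(response)

# Illustrative usage; the sentence and choice of speechcon are made up.
print(build_speechcon_response("bazinga", "Your answer is forty-two."))

Note that the tag changes only how that one word is spoken, which is why the effect registers as a deliberate flash of personality rather than a wholesale change in Alexa’s voice.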