Has voice control finally started speaking our language?

Amazon Echo’s Alexa and friends help make speech-activated technology more palatable

According to Google, 20 per cent of mobile search queries are now initiated by voice; and it is predicted that this will rise – across all platforms – to 50 per cent by the end of the decade. Photograph: SeongJoon Cho/Bloomberg

The problem with using the human voice to control computers is well known and well documented: it doesn’t always work. You can find yourself adopting the aggressive tone of a belligerent tourist in a foreign land while digital assistants employ a range of apologetic responses (“I’m sorry, I didn’t quite get that”, “I’m sorry, I didn’t understand the question”). We throw up our hands and complain about their shortcomings. Plenty of us have tried them, and plenty of us have dismissed them as a waste of time.

We tend not to hear about them doing the job perfectly well, because few people write impassioned tweets or blog posts about things that work flawlessly. The evidence, however, shows that we are becoming more comfortable with using voice control as its capabilities improve. Back in May, Google announced that 20 per cent of mobile search queries were now initiated by voice; and it is predicted that this will rise – across all platforms – to 50 per cent by the end of the decade.

But it's not phones leading the way in making voice control palatable. That honour goes to the Amazon Echo, the home-based "ambient device" inhabited by a digital assistant called Alexa.

Quietly launched in late 2014, the Echo (and its smaller siblings, the Dot and the Tap) has been described as an “unlikely” success, but sales have increased steadily quarter by quarter, with an estimated five million units sold in the US alone. Compared with the average smartphone, the Echo is modest: it is mainly dedicated to playing music and only does anything if you call its name. “Alexa, play me a song by Hot Chip. Alexa, can I listen to Radio 2? Alexa, stop.” It does these things efficiently, without complaint, and in doing so engenders a strange kind of affection. “Alexa, goodnight.” “Goodnight,” it replies. “Sleep tight.”


The first successes in getting computers to recognise the spoken word were made in the 1950s, but more significant ground was gained in the early 1970s when two students at Carnegie Mellon University, James and Janet Baker, began to apply statistical modelling techniques to the recognition of speech patterns. These so-called “hidden Markov” models turned out to be perfectly suited to machine learning; by absorbing thousands of examples, the machine became capable (in theory) of handling examples that it hadn’t yet seen.

The Bakers would go on to found Dragon Systems, which ultimately became Nuance, one of the architects of Apple’s Siri voice assistant. In the early days, Dragon’s software was used to power specialist accessibility and dictation applications, but when such techniques started being used in computer operating systems – most notably with Apple’s PlainTalk in 1993 – our difficult relationship with voice control began.

The first version of PlainTalk hogged computer resources, understood a limited number of phrases and wasn’t particularly reliable. Ever since, we’ve subconsciously measured the effectiveness of each iteration of voice control in terms of the time we take to give up on it.
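
To make the Bakers’ insight concrete, here is a minimal sketch of the “forward” calculation at the heart of a hidden Markov model, written in Python with invented numbers: a word model distilled from many training examples becomes a set of transition and emission probabilities, and any new, unseen sequence of acoustic symbols can then be scored against it.

```python
# Minimal sketch of the HMM "forward" algorithm: score an observation
# sequence against a trained word model. All numbers are illustrative,
# not taken from a real speech recogniser.
import numpy as np

# A toy three-state model for one word (probabilities learned from examples).
start = np.array([1.0, 0.0, 0.0])            # always begin in state 0
trans = np.array([[0.6, 0.4, 0.0],           # state-to-state transitions
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
emit  = np.array([[0.7, 0.2, 0.1],           # P(acoustic symbol | state)
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])

def likelihood(observations):
    """Probability that this word model produced the observed symbols."""
    alpha = start * emit[:, observations[0]]
    for symbol in observations[1:]:
        alpha = (alpha @ trans) * emit[:, symbol]
    return alpha.sum()

# A sequence the model has never seen before can still be scored; in a
# recogniser, the word model with the highest likelihood wins.
print(likelihood([0, 1, 1, 2]))
```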

Increases in processor power made things better. “We were improving speech recognition year on year,” says Nils Lenke, senior director of corporate research at Nuance, “but it became tougher and tougher, because the old technology – the hidden Markov models – were nearing the end of their lifetime. When we started to use neural networks, accuracy went up a lot.”

The problem with machine learning techniques, according to Lenke, is that the model reflects what you show it. “You need a lot of data covering all kinds of variants of speech,” he says, “accents, dialects, ages, gender, and different settings, different environments. But when cloud-based speech recognition came along, things got a lot better; now, as people use it, we can see that data on our servers. The right data, covering exactly what people are doing with the technology. Not what we thought people might be doing.”

Not just novelty

As speech recognition improves from 90 per cent to 95 per cent and beyond, the problem encountered by developers of voice assistants is not necessarily one of comprehension; it’s persuading us that it’s not just a novelty and that we should persist beyond uncovering its cute quirks. “Speech has to solve a problem,” says Lenke. “But which problems can it solve? Can I remember which ones it can solve and which it can’t? It can book a cinema ticket, but can it book a flight? We’re not yet able to build a speech recognition system that understands the world, and for human beings that’s difficult to understand.”

The car is perhaps the best example of a single “domain” with well-defined problems (finding petrol stations, demisting windscreens) that can now be dealt with by voice commands. But this kind of understated ambition has, perhaps unwittingly, been the Echo’s strength, too. “In terms of voice technology, it’s not revolutionary,” says Simon Bryant, associate director at Futuresource Consulting, “but people aren’t overwhelmed by it. They get it. The entry point is via controlling your media, but once you get comfortable with playing a track, or a radio station, and you’re aware that it’s constantly learning, other applications will piggyback on to that. The potential is huge.”
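
To see how little machinery a single narrow domain needs, here is an illustrative Python sketch; the intents and keyword rules are invented for this article, not taken from any car maker’s actual system.

```python
# Toy intent matcher for a narrow in-car voice domain. The intents and
# keyword rules are invented for illustration only.
INTENTS = {
    "find_fuel":       ["petrol", "fuel", "filling station"],
    "demist":          ["demist", "defog", "windscreen"],
    "set_temperature": ["temperature", "warmer", "colder"],
}

def match_intent(utterance: str):
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None  # outside the domain: the assistant has to apologise

print(match_intent("Find me the nearest petrol station"))  # -> find_fuel
print(match_intent("Book me a flight to Seattle"))         # -> None
```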

The Echo’s expandability comes in the form of “skills”, links with third-party services that range from time-wasting ephemera (trivia quizzes) to things that could be genuinely useful if you happened to have the right technology installed in your home (eg controlling room temperature). But the growing affection for the device is also linked to its immobility. Much of our interaction with a smartphone is conducted in public, where talking to it is simply too embarrassing. In the privacy of the home, however, we can explore its capabilities and slowly come to terms with the reality of talking to a machine.
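
By the time a skill is invoked, the speech recognition has already happened in Amazon’s cloud; the third-party service only sees a structured request naming an intent and its slot values. The Python sketch below illustrates that hand-off – the request shape and handler names are simplified assumptions for this article, not the exact Alexa Skills Kit interface.

```python
# Simplified sketch of a third-party "skill". By the time the request
# arrives, the hard work of speech recognition has been done upstream,
# so the skill only sees a named intent plus slot values. The request
# shape here is an assumption, not the real Alexa Skills Kit payload.

def handle_request(request: dict) -> dict:
    intent = request.get("intent")
    slots = request.get("slots", {})

    if intent == "SetTemperatureIntent":
        room = slots.get("room", "living room")
        degrees = slots.get("degrees", "21")
        # A real skill would call the thermostat vendor's API here.
        return {"speech": f"Setting the {room} to {degrees} degrees."}

    if intent == "TriviaIntent":
        return {"speech": "Here is your first trivia question..."}

    return {"speech": "Sorry, I didn't understand that."}

print(handle_request({"intent": "SetTemperatureIntent",
                      "slots": {"room": "bedroom", "degrees": "19"}}))
```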

With Apple’s Siri, personality is crucial; behind its development seems to be a belief that the psychological “rule of reciprocation” (where we mirror the behaviour others dish out) also applies to machines. Google Now, meanwhile, is cooler, more utilitarian. “Some people would say that building a persona makes people more comfortable and lowers the barrier to entry,” says Lenke. “Others would say look, it’s a machine, we should make that visible. Both approaches can be right, depending on the task.”

Ultimately, our enthusiasm for voice control may be defined by issues of trust. There’s trust in the device itself – that it will perform in the way we want it to – and there’s trust in the company providing the service. Privacy concerns are never far from debates surrounding voice control; to function properly it requires data to be processed on external servers, but warnings of this within terms and conditions (“if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party”) can end up being interpreted as an Orwellian nightmare.

Driving consumption

These issues can be particularly sensitive when it comes to Amazon, whose ultimate aim with the Echo could reasonably be interpreted as “frictionless purchasing” – ie getting us to buy things as quickly and easily as possible. Detractors warn of an inglorious future in which we casually mention that we’re out of washing-up liquid, and an hour later a drone drops some off at our door, having already billed our credit card. “Amazon’s strategy is far-reaching,” says Bryant, “but a big aspect of it is Amazon Prime, which drives consumption of products and keeps people paying on an annual basis.” Behind the cute replies and brisk efficiency of voice assistants is the aim of drawing us into an ecosystem; for example, the Echo’s ability to cue up sounds in response to a murmur is beautifully executed, but if your stash of personal music happens to be with Google Play, you’ll have to move everything over to Amazon’s Music Unlimited if you want to listen to it. The promise of voice control will continue to be restricted by these walled gardens.

Advances in speech recognition could be seen as the fulfilment of a science fiction dream that extends from Star Trek through 2001: A Space Odyssey to Knight Rider and beyond. Its history has been characterised by disappointment, but its key attributes are clear: it is hands-free and fast, devices don’t have to be unlocked and there are no menu structures to navigate. As more TVs and set-top boxes become speech-savvy, the remote control will be consigned to history. As devices get smaller and lose their keyboards and screens, voice control will become crucial. And according to Bryant, the knock-on effects are already being seen. “We’re expecting 6.1m units of Echo-like devices to be sold by the end of this year,” he says, “which takes a huge chunk out of the audio market. And it’s going to boost radio audiences, because people are going into rooms and just want something to be playing.”

Alexa’s ability to instantly switch on Heart FM falls well short of the kind of rich human-computer relationship depicted in the Spike Jonze film Her, and while new apps such as Hound are becoming more adept at holding longer conversations and understanding context, there are limits to a computer’s ability to deal with conversational interaction, according to Mark Bishop, professor of cognitive computing at Goldsmiths, University of London. “Action-focused commands like ‘tell me the weather in Seattle’ are much simpler things for a machine to parse and interact with than an open-ended narrative,” he says. “But there are fundamental problems in AI that, for me, mean that we’re some years away from having a machine that can have a meaningful, goal-directed conversation, if it’s ever possible at all.”
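
Bishop’s distinction is easy to demonstrate: a command with a fixed shape can be picked apart by a simple pattern, while an open-ended sentence gives such a parser nothing to hold on to. The toy Python pattern below is purely illustrative, not how a production assistant actually works.

```python
# Toy illustration of why "tell me the weather in Seattle" is easy to parse:
# the command has a fixed shape, so one pattern captures the only slot that
# matters. Purely illustrative, not a production parser.
import re

WEATHER_COMMAND = re.compile(r"tell me the weather in (?P<city>[\w\s]+)", re.I)

def parse(utterance: str):
    match = WEATHER_COMMAND.search(utterance)
    if match:
        return {"intent": "get_weather", "city": match.group("city").strip()}
    return None  # open-ended narrative: no structure for the machine to grab

print(parse("Tell me the weather in Seattle"))                  # -> intent + city
print(parse("I've been thinking about whether to move house"))  # -> None
```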

We may not see a convincingly empathetic machine in our lifetime, but in the meantime we can always ask Alexa to “say something nice”. “You have a great taste in technology,” it replies. “But seriously, you rock. I’m glad to know you.” Same here, Alexa. I think. – Guardian service