Wired on Friday: Computers, as every fan of Star Trek knows, don't have emotions. And, if those science fiction tales are to be believed, computers seem particularly obtuse when it comes to understanding the alien human qualities of "love" and "happiness".
But perhaps they might, in the interests of getting along, be able to fake it. If they do, they may well save a few lives in the process.
"Affective computing" is one of the cutting-edge developments in the world of software and computer design. The idea is simple: emotions and moods dominate the world of human interaction, so shouldn't we include them in our interactions with our smart digital companions? Up until now, the main pursuit with affective computing has been teaching computers how to spot the emotional state of their user. In these experiments, sensors plot the physical tell-tales of internal states. Attentive cameras on top of PCs watch their operator's pupils, spotting when they dilate or widen. Thermal scans of the face show anger or anxiety.
What can these indicators be used for?
At the Computer-Human Interaction conference in Portland this month, the delegates had plenty of ideas.
Many attendees were looking to mitigate the constant information overload that besieges knowledge workers. A computer that knew when you were stressed, or concentrating on a task, would know to hold interruptions such as e-mail or instant messages at bay. Think of it as a more attentive butler, responding gently to your current state of mind.
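A minimal sketch of that butler, under some made-up assumptions (a stress reading on a 0-to-1 scale and a threshold of 0.7, neither taken from any system discussed at the conference), might look like this:

# Hypothetical attention-aware notification filter: hold back non-urgent
# interruptions while the measured stress level is high, deliver them later.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Notification:
    text: str
    urgent: bool = False

@dataclass
class AttentiveButler:
    stress_threshold: float = 0.7       # assumed scale: 0 (calm) to 1 (stressed)
    held: List[Notification] = field(default_factory=list)

    def on_notification(self, note: Notification, stress_level: float) -> None:
        # Urgent items get through immediately; the rest wait while the user is busy.
        if note.urgent or stress_level < self.stress_threshold:
            print(f"Delivering: {note.text}")
        else:
            self.held.append(note)

    def on_user_relaxed(self) -> None:
        # Flush the queue once the sensors say the user has calmed down.
        for note in self.held:
            print(f"Delivering (delayed): {note.text}")
        self.held.clear()

butler = AttentiveButler()
butler.on_notification(Notification("New email: weekly newsletter"), stress_level=0.9)
butler.on_notification(Notification("Server down!", urgent=True), stress_level=0.9)
butler.on_user_relaxed()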
But if computers are going to monitor emotions and factor them into their programming, shouldn't they also reflect some moods themselves? On your office desk, or at home, an overly emotional computer might be something of a liability. It transpires, however, that there are occasions when a change in the tone of a computer's voice can have a dramatic effect on how well the technology gets its point across.
Stanford's department of communication has been working on the next generation of in-car navigation systems, which relay navigational and traffic cues conversationally through the car's audio system.
They've found that computerised speech systems whose tone mirrors the driver's mood can halve accident rates in simulations and eliminate other negative influences on driving behaviour.
To measure the effect of mood on drivers' receptivity to advice, the researchers induced feelings of sadness or happiness in their experimental subjects using five-minute video clips, then sent their guinea-pigs onto the hectic streets of the popular driving videogame, Hot Pursuit.
The drivers were guided through the simulation either by an upbeat, cheery audio navigation system or by the same instructions delivered in a slightly depressed intonation.
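The matching rule being tested is simple enough to write down. Here is a toy illustration, with the mood labels and voice names invented for the example (the study's actual detection and speech systems are not described here):

# Illustrative mood-matching rule: pick the navigation voice whose affect
# mirrors the driver's detected mood. Labels and voice names are assumptions.
from enum import Enum

class Mood(Enum):
    HAPPY = "happy"
    SAD = "sad"

def pick_navigation_voice(driver_mood: Mood) -> str:
    # Mirror the driver: an upbeat voice for a happy driver, a subdued one for a sad driver.
    voices = {
        Mood.HAPPY: "cheerful-voice",
        Mood.SAD: "subdued-voice",
    }
    return voices[driver_mood]

print(pick_navigation_voice(Mood.SAD))  # -> subdued-voice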
Groups whose in-car navigation system mirrored their own mood listened more attentively to the car's suggestions, drove more carefully and had, on average, half as many accidents. Playing a happy voice to a gloomy person - or vice versa - led to an increase in mistakes, though not as large as that seen in drivers who received no advice at all.
The benefits of imitating human moods were not minor. With no voice, or an emotionally neutral one, sad drivers tended to make more mistakes than their happier colleagues; matching the car's mood to its operator's made both groups equally adept.
Emotionally matched cars also eliminated one of the more consistent differences seen in simulator studies of driving. Contrary to what cab drivers might report, female drivers consistently beat males in laboratory tests of driving ability; with the help of mood-correlated instruction, both groups did equally well.
Ing-Marie Jonsson, one of the leads on the project, says she believes motorists are more likely to listen sympathetically to those who tangibly share traits with them.
Of course, there are plenty of potential problems with putting mood into machinery. We're subtle beasts, and automated reactions to mood depend heavily on the environment. It's not clear whether the researchers would get the same results in other settings, or across cultures and ages.
Jonsson has conducted further research showing that it's not always similarity we subconsciously look for. In the lab, for instance, people tend to listen more attentively to those of their own age: young people listen carefully to other young people, and older people prefer a voice from their own generation.
But in a car, listening to a voice giving advice, those results are reversed: older people seem to prefer the extra pair of eyes a younger person can provide, while young people are more swayed by a mature-sounding backseat driver. It may be that human emotions really are too complex and too alien for computers to track or imitate with consistent results.
The group's next research project will, they hope, shed light on an even pricklier subject: how best to approach an angry driver with a hint or a criticism. Could we be calmed out of our road rage by a passionate advocate in the back seat? "It's a trickier experiment", says Mary Zajicek of Oxford Brookes University, who is collaborating on the project. "It turns out to be hard to annoy motorists in a consistent fashion". Perhaps, no matter how smart our computers get, we'll still remain a little emotionally smarter.