"The fundamental idea is that by just listening to the human voice you can start predicting interest and engagement with about 75 to 80 percent accuracy," Madan said.
Cognitive psychologists are well aware of such valuable cues.
"I'd say that there are definitely individual linguistic markers that carry across a conversation," said Carnegie Mellon's MacWhinney.
"There are things that will come out very quickly. But how well machines are able to pick that up, I just don't know."
The MIT group put its machine to the test by trying to predict the outcomes of some 200 three-minute conversations. They paired off volunteers and randomly assigned them conversational topics.
Afterward, speakers were asked to rate their level of interest from one to ten, and the Jerk-O-Meter anticipated their interest correctly 75 to 80 percent of the time.
Big Brother Isn't Listening
The machine is hardly one-size-fits-all, however, and significant hurdles may remain. Simply put, not all of us speak or express emotions in the same way.
Dominic Massaro works extensively on speech recognition at the University of California at Santa Cruz.
Massaro finds the Jerk-O-Meter intriguing but notes that the devil is in the details.
"It's a big challenge to recognize speech correctly, and recognizing emotion is also a challenge," he said. "To recognize the basic emotions like happiness, anger, fear, and disgust, the face is more informative than the voice. The voice by itself can be pretty ambiguous."
"We've done studies where we find cues in the voice, like the overall pitch and the amplitude, but they are not definite," Massaro added. "It's a challenge to recognize what emotion is for both humans and machines."
Future versions of the Jerk-O-Meter could become more effective if software allows the machine to learn more about the speaker through constant feedback. The device would then become more accurate with each conversation.
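One simple way such per-speaker learning could work, purely as a hypothetical sketch rather than anything the MIT group has described, is an exponential moving average: each conversation nudges a stored baseline toward that speaker's own feature levels, so later scores are judged relative to how that person normally sounds.

```python
class SpeakerBaseline:
    """Hypothetical per-speaker calibration via an exponential
    moving average of one vocal feature (e.g. typical pitch)."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha    # learning rate: weight of each new conversation
        self.baseline = None  # speaker's running norm for the feature

    def update(self, feature_value):
        """Fold one conversation's measurement into the baseline."""
        if self.baseline is None:
            self.baseline = feature_value
        else:
            self.baseline += self.alpha * (feature_value - self.baseline)
        return self.baseline

    def deviation(self, feature_value):
        """How far this conversation departs from the speaker's norm."""
        if self.baseline is None:
            return 0.0
        return feature_value - self.baseline
```

With alpha at 0.2, each new conversation moves the baseline a fifth of the way toward its measurement, so the device adapts steadily without being swayed by any single unusual call.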
Habitual jerks and others need have no fear, Madan said. He stresses that the device is intended only for use by its owner and not as a means for rating unwitting speakers on the other end of the line.
But in the realm of self-improvement it could be a valuable tool.
"Lot of people have trouble getting feedback about themselves," Carnegie Mellon's MacWhinney said. "It's a resource problem. [Y]ou have to listen to another person and think of your responses, so it's rather difficult to monitor yourself."
MacWhinney also wonders if we'll like what the machine has to say.
"Seldom have we ever had computers making personality judgments about us," he said. "In some cases they do tell us, You made a mistake, and we're willing to accept that kind of feedback. But people may become a little miffed if it seems to become too invasive."