December 3, 2013
New technology allows computers to be programmed to recognize facial expressions — even the most subtle, fleeting expressions. Using frame-by-frame video analysis, computer software can read the muscular changes within people’s faces that indicate a range of emotions. Many predict such software will be used via computer webcams to rate how users respond to certain content — like games or videos — and cater to those users’ perceived needs or desires accordingly.
Affectiva is one such facial expression analysis company using webcams and willing participants to train its face-reading software. According to The New York Times, the company recorded people over the course of two and a half years to “accumulate and classify about 1.5 billion emotional reactions” from people as they watched streamed videos.
“These recordings served as a database to create the company’s face-reading software,” NYT explains, noting that the software will be available to mobile software developers starting in mid-January.
How exactly will this information and technology be used?
Winslow Burleson, an assistant professor of human-computer interaction at Arizona State University, told NYT that the software could be useful in education, advertising, and even medicine.
“Once we can package this facial analysis in small devices and connect to the cloud,” Burleson said, “we can provide just-in-time information that will help individuals, moment to moment throughout their lives.” He also told NYT that people with autism, who often struggle to read facial expressions, may benefit from such software.
“Humans are remarkably consistent in the way their noses wrinkle, say, or their eyebrows move as they experience certain emotions,” NYT writes. “People can be trained to note tiny changes in facial muscles, learning to distinguish common expressions by studying photographs and video.” Now, computers can be trained the same way.
Of course, much as humans often do, computers may make errors in reading emotions. But Paul Saffo, a technology forecaster, told NYT that if computers one day can combine facial coding, voice sensing, gesture tracking and gaze tracking, “a less stilted way of interacting with machines will ensue.”