News & Blog

Emerging Technologies Promise to Quantify Emotions

Torque
by George Lawton
September 15, 2014

http://torquemag.io/emerging-technologies-promise-quantify-emotions/?utm_content=buffer29c64&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer


A variety of technologies are emerging for tracking emotions over the Internet, using techniques such as text analytics, speech analysis, and video analysis of the face.

Better tools for tracking emotions hold promise for bringing awareness to our inner state through outside feedback. This kind of technology also promises to make it easier to understand how websites, mobile applications, and ads affect the emotional state of users. “The end goal should be to reengineer business to be truly customer-centric, which was infeasible until emotional analytics entered the picture,” said Armen Berjikly, Founder and CEO of Kanjoya, a sentiment analysis service.

Ken Denman, President and CEO of Emotient, which makes emotional tracking technology for the face, said,

"Fundamentally our motivation is to accelerate the pace of innovation for consumers, patients, and students, by providing actionable insights faster and more accurately than ever dreamed. This will empower product developers, customer, and patient experience owners to more quickly and accurately understand what is working and what is not."

At the same time, website owners need to be cautious about the ways they measure, analyze, and store data about the emotional state of users. As previously reported, experimenting with the ways that applications, ads, and websites affect users could help organizations make the world a better place. But organizations need to be thoughtful in their use of emotional information so as not to alienate users.

As Rana el Kaliouby, Co-founder and Chief Science Officer of Affectiva, which has developed the Affdex service for analyzing the facial expression of emotions, explained,

"As a company, we understand how critical the data we are collecting is. Our philosophy is no data is ever collected without explicit opt-in. Ever! We also feel a responsibility towards educating the public about what this technology can and cannot do. Facial coding technology can tell you the expression on your face (which a human in the room would have picked anyhow) but it will not tell you what your thoughts are."

Reading into Emotion

Sentiment analysis is based on being able to extract signals of positive and negative sentiment from text, said Aaron Chavez, Chief Scientist at AlchemyAPI, a sentiment analysis service. With more targeted analysis, the goal is not just to gauge positive or negative, but to associate that sentiment with specific things mentioned in the text. This makes it possible to zero in on particular aspects of a service, such as the food being cold while the waiter was helpful.

This is a challenging problem, and in many cases, it can be subjective. People can come to different conclusions when reading the same text. The biggest challenge tends to be what is unspoken. People don’t always come out and say in the clearest terms how they feel about things, said Chavez. For example, if someone uses sarcasm they may mean the opposite of what they say. Understanding this requires using external information from other sources.

Chavez explained,

"You are pulling in external knowledge of the world in order to recognize when what is stated does not line up when what was intended. This kind of analysis can be more specific with a greater understanding of a person. For example, if they have a political affiliation they might be expressing sentiment in what might otherwise be considered a factual statement."

There are different approaches to sentiment analysis. Rule-based approaches create complex rule chains for associating text with sentiment. For example, a special rule might recognize negations, such as when someone uses the word “not” in a sentence. The engine needs to be able to recognize that “not good” carries negative sentiment, while “good” by itself reflects positive sentiment.
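The negation idea can be illustrated with a minimal sketch (a toy example, not any vendor's actual engine): a small hand-built lexicon assigns polarity to words, and a rule flips the polarity of a sentiment word that follows a negator.

```python
# Toy rule-based sentiment scorer: a hand-built lexicon plus a negation rule.
# Illustrative only; real engines use far larger lexicons and rule chains.

LEXICON = {"good": 1, "great": 1, "helpful": 1, "bad": -1, "cold": -1, "rude": -1}
NEGATORS = {"not", "never", "no"}

def score(text: str) -> int:
    words = text.lower().replace(",", " ").replace(".", " ").split()
    total, negate = 0, False
    for word in words:
        if word in NEGATORS:
            negate = True              # flip the polarity of the next sentiment word
            continue
        if word in LEXICON:
            total += -LEXICON[word] if negate else LEXICON[word]
        negate = False                 # negation only reaches the word right after it
    return total

print(score("The waiter was helpful"))   # 1  (positive)
print(score("The food was not good"))    # -1 (the rule flips "good")
```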

In contrast, deep learning systems use neural networks to build representations of text that remain robust across the many different ways an idea can be expressed.

Chavez explained,

"There are so many ways of saying the same thing. That is one of the strengths of deep learning, where you can come up with representations where words and phrases are similar. Some of the older systems that are ruled based are susceptible to not seeing the problem when you make a minor modification to what someone is saying. Deep learning has a robust understanding of language that is suitable to minor different ways of saying the same thing."

There can also be cultural differences, and the same word can carry different sentiment in a slightly different context: thin is good for smartphones and bad for bed sheets. Chavez noted,

"The system needs to use what you know about them to color that sentiment. But you can only take advantage of this when you have access to that person’s history. There is a limit to what you can infer when the system does not don’t know that person’s history. Even if there is access to this history, it can be a challenging problem to solve.

Aggregating this data can make it easier to find more information about what a person means, even though the technology is far from perfect. When you are collecting hundreds of weak signals, they start to coalesce. Any way you can aggregate data, whether it is based on time, location, or a speaker’s history will provide an opportunity for a more reliable signal."

People are using sentiment analysis in a lot of different ways. It is commonly used to understand the voice of the customer where the company can analyze customer interactions and decide whether they are being done well. The technology is widely used for social media monitoring for tracking the progress of new products and deciding whether the latest ad campaign is having an impact on Facebook or Twitter. It is also being used for stock analysis and targeted advertising.

Meanwhile, Kanjoya is launching a consumer-grade SaaS sentiment analysis service that hooks up to any data source, instantly (and continuously) analyzes it, and automatically provides actionable insights beyond measurement, such as promoter/detractor discovery, baselining and anomaly detection, competitive analysis, and a real-time net promoter score (NPS) analogue that does not require the NPS survey.

Berjikly explained,

"We have proprietary data models built over the last decade to help us model how language and emotion are related, including how that changes depending on the context and background of the speaker. We account for all of those in our technology model, enabling us to decipher emotion at greater than human accuracy, without any training by the end user. There are no special technology requirements, we’ve built our products to work immediately, with intuitive user interfaces, and agnostic to the input data."

Berjikly said that Kanjoya’s sentiment analysis technology is predominantly used to help companies get closer to their customers’ wants, needs, and thoughts. “Humans are inherently emotional decision makers, and companies that acknowledge this qualitative side to the equation, and make it a priority to not just understand it, but act to address it, have a major, often unassailable competitive advantage in the customer experience.”

Sentiment analysis technology is likely to get even better over time, noted Chavez. The notion of positive and negative sentiment is a coarse lens through which to view this information. “Being able to go beyond positive and negative to determine the correct time to take action is going to be more interesting,” he said.

Hearing Emotions

Speech emotion analytics technology works by analyzing our vocally transmitted emotions in real time as we speak.

Dan Emodi, VP Marketing and Strategic Accounts at Beyond Verbal, said this kind of technology can decipher three basic things using a microphone and a network connection via a cloud-based application:

  • The speaker’s mood;
  • The speaker’s attitude towards the subject he speaks about; and
  • The speaker’s emotional decision-making characteristics, more commonly known as emotional personality.

A consumer version of the Beyond Verbal technology is available for the iPhone, Android, and Web browsers.

"Understanding emotions is adding what is probably the most important non existing interface today,” said Emodi.

"Allowing machines to interact with us on an emotional level has almost unlimited commercial usage from Market Research, to call Centers, to self-improving applications, wellness, media, content and down to Siri that finally understands your emotions. In implementing emotions into daily use it seems we are truly only bound by our own imagination."

Another tool for hearing emotions is EmoVoice, which is freely available as open source software. It uses a supervised machine learning approach that collects a large amount of emotional voice data on which classifiers are trained and tested. Typically, data is recorded in separate sessions during which users are asked to show certain emotions or to interact with a system that has been manipulated to induce the desired behavior. Afterward, the collected data is manually labeled by human annotators with the assumed user emotions. From this data, classifiers are computed that can assign emotional categories to voice data.
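The supervised workflow described above can be sketched generically. The sketch below is not EmoVoice's actual API; the features and labels are stand-ins for acoustic descriptors and annotator judgments, and with random stand-in data the accuracy will hover around chance.

```python
# Generic sketch of the supervised workflow: per-utterance acoustic features,
# human-annotated emotion labels, and a classifier trained and evaluated on
# held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in data: in practice each row would hold features such as pitch,
# energy, and spectral statistics computed from one recorded utterance.
n_samples, n_features = 200, 24
X = rng.normal(size=(n_samples, n_features))
y = rng.choice(["neutral", "anger", "happiness"], size=n_samples)  # annotator labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")   # SVMs are a common choice for small speech corpora
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```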

Prof. Dr. Elisabeth André at the University of Augsburg in Germany said techniques for detecting emotions may be employed to sort voice messages according to the emotions portrayed by the caller in call center applications. Among other things, a dialogue system may deploy knowledge on emotional user states to select appropriate conciliation strategies and to decide whether or not to transfer the caller to a human agent.

Methods for the recognition of emotions from speech have also been explored within the context of computer-enhanced learning, added André.

The motivation behind these approaches is the expectation that the learning process may be improved if a tutoring system adapts its pedagogical strategies to a student’s emotional state. Research has been conducted to explore the feasibility and potential of emotionally aware in-car systems. This work is motivated by empirical studies that provide evidence of the dependencies between a driver’s performance and his or her emotional state.

André’s team is also employing techniques for recognizing emotional state for social training within the EU-funded TARDIS project. In this project, young people engage in role play with virtual characters that serve as job interviewers in order to train how to regulate their emotions. This helps them learn how to cope with emotional states that arise in socially challenging situations, such as nervousness or anxiety. The first version of TARDIS used a desktop-based interface, while more recent work focuses on the use of augmented reality, as enabled by Google Glass, to give users recommendations on their social and emotional behaviors on the fly.

In the German-Greek CARE project, emotion recognition techniques are used to adapt lifestyle recommendations to the emotional state of elderly people.

One of the biggest challenges in teaching computers to recognize a variety of emotions in speech lies in working in natural environments. André said that promising results have been obtained for a limited set of basic emotions that are expressed in a prototypical manner, such as anger or happiness, but more subtle emotional states remain difficult to detect. Real-world environments also come with background noise, which affects recognition rates.

Another challenge is that people show great individual variation in their emotional expression. André explained,

"Many people don’t show emotions in a clear manner. Also in some social situations people don’t reveal their true emotions. For example, when talking to somebody with a high status, you would avoid showing negative emotions, such as anger."

Results comparable to human skills have been obtained for tracking a limited set of emotions when the speech is recorded beforehand and analyzed later, said André. Her team is working on improving the technology to analyze emotions in the wild using non-intrusive microphones and for running the software on mobile devices.

Seeing Emotions

Companies like Affectiva and Emotient are also starting to develop technology for quantifying emotional expression through video analysis of facial expressions. For example, Affectiva’s Affdex technology analyzes facial expressions to discern consumers’ emotions such as whether a person is engaged, amused, surprised or confused.

Affdex employs advanced computer vision and machine-learning algorithms within a scalable, cloud-based infrastructure to identify the emotions portrayed in a face video. Affectiva has also developed SDKs for building facial emotion analysis applications on both iPhone and Android devices.

Affdex uses standard webcams, like those embedded in laptops, tablets, and mobile phones, to capture facial videos of people as they view the desired content. Affectiva’s el Kaliouby said, “The prevalence of inexpensive webcams eliminates the need for specialized equipment. This makes Affdex ideally suited to capture face videos from anywhere in the world, in a wide variety of natural settings (e.g., living rooms, kitchens, office).”

First, a face is identified in the video and the main feature points on the face are located, such as eyes and mouth. Once the region of interest has been isolated, e.g., the mouth region, Affdex analyzes each pixel in the region to describe the color, texture, edges and gradients of the face, which is then mapped, using machine learning, to a facial expression of emotion, such as a smile or smirk.
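The stages described here (detect the face, isolate a region, describe its texture and gradients, then classify) can be sketched with generic open-source tools. The sketch below uses OpenCV and is not Affectiva's implementation; the final classifier is a placeholder that would have to be trained on labeled expression data.

```python
# Generic sketch of a facial-expression feature pipeline, not Affectiva's code:
# detect a face, crop a region of interest, and compute a gradient/texture
# descriptor that a trained classifier could map to an expression label.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def expression_features(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    mouth_region = gray[y + h // 2 : y + h, x : x + w]   # crude lower-face crop
    resized = cv2.resize(mouth_region, (64, 32))
    # Histogram-of-oriented-gradients descriptor: edges and gradients in the region.
    hog = cv2.HOGDescriptor((64, 32), (16, 16), (8, 8), (8, 8), 9)
    return hog.compute(resized).ravel()

# features = expression_features(cv2.imread("frame.jpg"))
# label = trained_classifier.predict([features])   # hypothetical, e.g. "smile" vs. "smirk"
```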


Affdex Dashboard – Valence Trace

Once classification is complete, the emotion data extracted from a video is ready for summarization and aggregation, and is presented via the Affdex online dashboard. Expression information is also summarized for addition to a normative database.

Affectiva has amassed about two million facial videos from over 70 countries, which has allowed the company to build a global database of emotion responses that can be sliced by geographic region, demographics, industry, and product category. This allows companies to perform A/B tests on their content, and also to get a better sense of where their ad falls with respect to other content in their vertical or market.
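A hedged sketch of that aggregation and A/B idea, with invented numbers standing in for the per-second valence values the classification stage would produce:

```python
# Sketch of aggregating per-second valence traces across viewers of two ad
# variants and comparing the mean response. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
seconds = 30                          # length of the ad
viewers_a, viewers_b = 120, 115       # panel sizes for the two variants

# Valence per viewer per second, in [-1, 1]; invented for illustration.
traces_a = np.clip(rng.normal(0.15, 0.4, size=(viewers_a, seconds)), -1, 1)
traces_b = np.clip(rng.normal(0.05, 0.4, size=(viewers_b, seconds)), -1, 1)

mean_a = traces_a.mean(axis=0)        # aggregate valence trace, variant A
mean_b = traces_b.mean(axis=0)        # aggregate valence trace, variant B

print("variant A overall valence:", round(float(mean_a.mean()), 3))
print("variant B overall valence:", round(float(mean_b.mean()), 3))
print("second with the largest A-over-B gap:", int(np.argmax(mean_a - mean_b)))
```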

el Kaliouby said,

"Today our technology can understand that a smile can have many different meanings – it could be a genuine smile, a smile of amusement, a smirk, a sarcastic smile, or a polite smile. This is where Affdex is at the moment – we’re training the machine that emotions come in many different nuances / flavors. Where we would like to take this in the future is for an emotion-sensing computer to pick on more subtle cues like maybe a subtle lip purse or eye twitch – it will incorporate head gestures and shoulder shrug and physiological signals. It will know if a person is feeling nostalgic, or inspired."

Emotient’s software is designed to work as a web-based service with any video camera or camera-enabled device. This could be a webcam, a camera embedded in a smart screen or digital sign, a tablet, or a smartphone. The software measures emotional responses via facial expression analysis. Emotient’s approach combines proprietary machine learning algorithms, a self-optimizing data collection engine, and state-of-the-art facial behavior analysis to detect seven primary emotions (joy, surprise, sadness, anger, disgust, contempt, and fear), as well as more advanced emotional states, including confusion and frustration. The system detects all faces within the field of view of each frame and analyzes their facial expressions.

Emotient’s co-founders have spent the past two decades developing automated emotion measurement technology based on facial expression analysis. Emotient’s team has published hundreds of papers on novel uses for the technology, including autism intervention games that help subjects mimic facial expressions and identify them in others, measuring the difference between real and fake pain, and analyzing student engagement in online education settings.

As a business, Emotient has chosen to focus its early commercial efforts on advertising, market research, and retail, and is delivering emotion analytics to customers as aggregate, anonymous data segmented by demographic.

Emotient’s Denman said early adopters of the software are using it to automate focus group testing and to conduct market research for product and user experience assessment.

"In the past year we have been working with major retailers, brands and retail technology providers who are using Emotient to compile analytics on aggregate customer sentiment at point of sale, and in response to new advertising or promotions in-store or online.

Customer service patterns and trends can be identified, both for training purposes and troubleshooting in areas of the store where assistance is needed, or as a measure at point of sale or point of entry to determine customer satisfaction levels. The resulting analytics can be used in benchmarking the efficacy of a specific display, shelf promotion, advertisement, or overall customer experience. We believe the real value of the emotion analytics we collect and deliver to retailers and brands is in aggregate information segmented by target demographic, and less so by individuals."

Emotient is working with iMotions, whose Attention Tool platform combines facial expression analysis with other biometrics, including eye tracking, heart rate measurement, and galvanic skin response (GSR), for improved academic, market, and usability research. Denman said, “These other biometric signals can be valid and helpful but facial expression analysis provides unique context that isn’t possible to capture otherwise.”

New Mirrors for Emotional Reflection

New mirrors for looking inwards could also help to overcome the blinders to recognizing our inner state. Beyond Verbal’s Emodi said,

"Understanding ourselves is something we are much less capable of doing. Many of us have limited capabilities at understanding how we come across and what we transmit to the other side. Emotions Analytics hold great potential in helping people get in tune with their own inner self – from tracking our happiness and emotional well-being to practicing our Valentine pitch, working on our assertiveness or leadership capabilities, and learning to be a more effective sales person."

Western culture makes it easy to push down and hide our emotions from each other, not to mention ourselves. Paul Ekman, Professor Emeritus of Psychology at UCSF, found that many people mute the full expression of emotions, which instead flash briefly or appear less intensely as what he calls micro-expressions and subtle expressions.

Personal experiments with older versions of Ekman’s training tools for identifying facial expressions made it easier not only to identify these emotions in other people, but also to notice them with greater clarity in myself. Using these kinds of tools for emotional analysis promises to provide us with benchmarks for easily tracking our emotional state over time, and perhaps for identifying behaviors and events that affect us in beneficial or adverse ways.

But this is not always an easy inquiry. As AlchemyAPI’s Chavez noted,

"People have used this to look at emails to analyze their own emails for sentiment. What is funny is that people are more willing to shine the light on others and groups of people for marketing, but not as often do they put it on themselves to see if their emails have negative tones or their feet are going in the wrong direction."

Emotient Gains Exclusive Rights to New, Expansive Patent Issued for Automated Facial Action Coding System

September 3, 2014 – San Diego, CA

Emotient, the leading provider of facial expression measurement data and analysis, today announced exclusive rights to a newly issued patent (US 8,798,374 B2) entitled, “Automated Facial Action Coding System.” This expansive patent protects Emotient’s core technology, from face detection to measurement of primary emotions, to detection of facial muscle movements.

“We believe the automated facial action coding patent is very expansive and protects much of our core system,” said Ken Denman, President and CEO, Emotient. “We remain committed to growing our intellectual property portfolio. Over the last 18 months, we have filed 16 patent applications to protect additional aspects of our core technology, as well as innovative vertical applications for the retail, health-care, education, and entertainment industries.”

The automated facial action coding patent inventors include Emotient Co-Founders Dr. Marian Bartlett, Dr. Gwen Littlefort, Dr. Javier Movellan, and Dr. Ian Fasel, and their colleague Mark Frank from the State University of New York at Buffalo. The patent was issued to the Regents of the University of California and The Research Foundation of State University of New York on August 5, 2014. Emotient holds exclusive rights to this and other work that the team developed while at UC San Diego, prior to leaving and co-founding Emotient in 2012. The patent covers a system and method to detect faces and facial features; machine learning techniques for feature selection trained on spontaneous expressions; deep neural network applications to machine learning-based classifiers trained on spontaneous expressions; and determination of the presence of one or more elementary facial muscle movements, or Action Units (AUs).

Emotient is focused on delivering its video-based expression measurement and sentiment analysis software to Global 2000 customers in the market research, retail, education and healthcare verticals. Emotient’s software enables businesses to deliver better products, enhance the user experience, and improve patient care. Emotient provides access to critical emotion insights, processing anonymous facial expressions of individuals and groups; the software does not store video or images. Emotient detects and measures seven facial expressions of primary emotion (joy, surprise, sadness, anger, fear, disgust and contempt); overall sentiments (positive, negative, and neutral), advanced emotions (frustration and confusion) and 19 Action Units (AUs). The Emotient system sets the industry standard for accuracy, with the precise ability to detect single-frame microexpressions of emotions.

About Emotient – Automated Facial Expression Analysis
(www.emotient.com)

Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Global 2000 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit www.emotient.com.

Media Contact
Vikki Herrera | Emotient | Vikki@emotient.com | 858.314.3385

What does the Future of Retail Look Like? Four Young Companies Provide a Glimpse


Forbes
By J.J. Colao
6/18/2014

http://www.forbes.com/sites/jjcolao/2014/06/18/what-does-the-future-of-retail-look-like-four-young-companies-provide-a-glimpse/

There was a lot to digest at Jason Calacanis’ and Pivotal Labs’ Launch Beacon conference on Monday, an event devoted to exploring trends around e-commerce, retail, payments and location-based technology.

To this audience member, the most interesting bits came from the morning demos of four companies working on technology with applications for retail and e-commerce.

If these startups have their way, we’ll soon live in a world of interactive in-store displays, ubiquitous mood tracking and effortless shopping on the fly.

Oh, and your smartphone might replace your waiter.

Perch

There aren’t enough interactive screens in your life, so the people of PERCH are here to help. The company sells a projector that produces a customized digital display for shoppers to play with as they browse products in store. In addition to conventional touch-screen interactions, like swiping and scrolling, the technology senses when customers touch or pick up objects placed on the surface.

In the video below, for example, shoppers looking for nail polish get a delightful ping when they touch each bottle, along with information about the shade.

Founded in 2012 by CEO Jared Schiffman, a veteran of MIT’s Media Lab, the company counts Kate Spade, Kiehl’s, Cole Haan and Quirky as customers, leasing the tech for $500 per month. (They previously sold it for $7,500.) PERCH can update thousands of units across different stores simultaneously, and brands can group multiple units together for more complex presentations.

With $60K per month in sales, the New York-based company is currently raising a $1.2 million seed round.

Emotient

We humans aren’t all that reliable when it comes to customer feedback. Either innocently or intentionally, we often mischaracterize our perceptions of brands, products and advertisements.

Emotient, based in San Diego, uses facial expression recognition software to cut through the nonsense and figure out how we really feel. The company can train cameras on focus groups watching a Super Bowl ad for the first time, or at the entrance of big-box stores to gauge customers’ moods as they go in and out. Browsing the shampoo aisle? Emotient can track your reactions to different packaging and brands to see which elicit the most positive responses.

Depending on the assignment, the company measures levels of emotions like joy, anger, sadness, surprise, fear, disgust and contempt. Founded by three Ph.D.s working at UC San Diego, it’s run by CEO Ken Denman, who previously ran Openwave, a telecom company.

In a study of fans watching the Super Bowl in February, the company found that the Pistachios ad featuring Stephen Colbert prompted the most positive reactions by a wide margin. Viewers responded especially well when Colbert came back on-screen 30 seconds later, as shown in the graph below.

[Graph: Emotient’s emotion trace for the Pistachios Super Bowl ad]

The company raised $6 million in Series B funding from Intel Capital in March.

Slyce

Cofounder Cameron Chell described Slyce onstage as “Shazam for everything.” The Toronto-based company is trying to monetize those moments out in the wild, when you spot an enviable piece of clothing worn by another. Snap a picture with the technology, soon to be integrated into a public mobile app, and up pops an array of similar items available for purchase.

Chell demoed the product on stage with Calacanis’ shoes and the app conjured up a couple dozen comparable items available for immediate purchase. Amazon Flow and Asap54 offer competing technologies.

Downtown

Downtown CEO Phil Buckendorf, a 23-year-old native of Germany, played professional golf before trying his hand at a startup. His company works with Palo Alto, Calif., restaurants to “distribute the point of sale.”

Translation: Downtown wants to enable customers to order food and pay from wherever they are in a restaurant. Instead of waiting for a server, users can open their phones and find a menu synced to an Estimote beacon, a small sensor that communicates with nearby smartphones. Because each beacon is assigned to a table, restaurant owners can track orders and deliver customers’ food accordingly.
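A minimal sketch of that beacon-to-table routing (a hypothetical data model, not Downtown's actual system):

```python
# Hypothetical beacon-to-table routing: each beacon ID maps to a table, and
# orders placed from a phone near that beacon are queued for that table.
from collections import defaultdict

BEACON_TO_TABLE = {"beacon-a1": 4, "beacon-b7": 12}   # configured per restaurant
orders_by_table = defaultdict(list)

def place_order(beacon_id: str, item: str) -> int:
    """Record an order placed from a phone near the given beacon."""
    table = BEACON_TO_TABLE[beacon_id]    # the beacon identifies the table
    orders_by_table[table].append(item)
    return table

print(place_order("beacon-a1", "margherita pizza"))   # -> 4
print(orders_by_table[4])                             # ['margherita pizza']
```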

“We want to be the fast line when you consume inside the restaurant,” Buckendorf says. The technology can easily apply to retail purchases as well, allowing shoppers to buy items on the spot instead of waiting in line to check out.

The company is going for scale before charging restaurant owners, but Buckendorf plans on taking a 3.5-5.5% cut of each transaction.

Stephen Ritter Joins Emotient as Senior Vice President of Product Development

June 19, 2014 – San Diego, CA

Emotient, the leading provider of facial expression analysis software, today announced that Stephen Ritter joined the company as Senior Vice President of Product Development. Stephen brings extensive engineering management experience from successful start-ups and larger enterprises, including Cypher Genomics, Websense and McAfee, an Intel company.

“Steve is a proven technology leader who has demonstrated success in transforming cutting edge technology into highly successful commercial products,” said Ken Denman, President and CEO, Emotient. “We will rely on his expertise as we continue to drive our innovative, emotion-aware software into the healthcare and retail markets, through a cloud-based platform.”

“Emotient is at the forefront of a massive trend in emotion-aware computing,” said Ritter. “Emotient’s founding team is spearheading the development of proprietary technology that sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. I look forward to leading the commercial delivery of new emotion analytics products for retail and healthcare based on our state-of-the-art technology.”

Most recently, Stephen served as CTO at Cypher Genomics, a leading genome informatics company. Previously, he was Vice President, Engineering for Websense, where he led a global team in the development of the market-leading web security product, TRITON Web Security Gateway. Prior to Websense, he served as senior director of engineering at McAfee, now an Intel company, where he developed one of the most advanced and scalable security management software systems in the industry.

Ritter began his career as an accomplished software developer/architect and currently holds six patents in the area of information security. He holds a B.S. in Cognitive Science from UC San Diego.

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available to Global 2000 companies within the retail, healthcare and consumer packaged goods industries.

Emotient to Present Emotion Aware Google Glassware at Vision Sciences Society (VSS) 2014

Emotient, UC San Diego and University of Victoria Also Release Web-based Autism Intervention App Called Emotion Mirror

May 12, 2014 – San Diego, CA
Emotient, the leading provider of facial expression recognition data and analysis, today announced it will demonstrate its emotion-aware Google Glass application at the 12th Annual VSS Dinner and Demo Night on May 19, 2014 from 6 - 10 p.m. at Vision Sciences Society 2014 at the TradeWinds Island Resorts, St. Pete Beach, Florida.

The Google Glass application leverages Emotient’s core technology to process facial expressions and provides an aggregate emotional read-out, measuring overall sentiment (positive, negative or neutral); primary emotions (joy, surprise, sadness, fear, disgust, contempt and anger); and advanced emotions (frustration and confusion). The Emotient software detects and processes anonymous facial expressions of individuals and groups in the Glass wearer's field of view. The emotions are displayed as colored boxes placed around faces that were detected, where the color indicates the automatically recognized emotion.
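The overlay described here can be sketched with generic tools (OpenCV for face detection; the emotion classifier below is a stub, since Emotient's models are proprietary):

```python
# Sketch of the colored-box overlay: draw a rectangle around each detected face,
# with the color keyed to a recognized emotion. classify_emotion() is a stub.
import cv2

COLORS = {"joy": (0, 200, 0), "anger": (0, 0, 200), "neutral": (200, 200, 0)}  # BGR

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_gray):
    return "neutral"   # placeholder; a trained expression model would go here

def annotate(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        emotion = classify_emotion(gray[y:y + h, x:x + w])
        color = COLORS.get(emotion, (255, 255, 255))
        cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame_bgr, emotion, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return frame_bgr
```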

In addition, two of Emotient’s researchers, also affiliated with UC San Diego, together with researchers from the University of Victoria, developed a new version of the Emotion Mirror application, an autism intervention tool that is designed to aid autistic individuals in identifying and mimicking facial expressions in a fun and entertaining way. Emotion Mirror will be used in clinical studies in collaboration with the University of Victoria.

“We are looking forward to sharing our latest emotion-aware software applications at this year’s VSS, including our Google Glassware,” said Dr. Joshua Susskind, Emotient’s Co-Founder, Engineer and Research Scientist, who co-developed the Glass application with Emotient software engineer, Mark Wazny. “We believe there is an opportunity to apply Emotient’s facial expression recognition software to enable autism intervention programs on mobile devices, making it easy to deploy and conduct studies in clinical populations.”

Collaborator Jim Tanaka, University of Victoria Professor of Psychology, added, “The growing availability of wearables leads us to believe our technology can have a huge positive impact on the autism community and more broadly, the healthcare industry.”

The Emotion Mirror application was initially developed in 2010-11 in collaboration between UC San Diego and University of Victoria as an intervention game for improving expression production and perception in kids with Autism Spectrum Disorder (ASD). The new Emotion Mirror application was developed as an HTML5 web application powered by Emotient’s Cloud API. In addition to Emotient’s Dr. Joshua Susskind, Dr. Marian Bartlett and Morgan Neiman, Emotion Mirror’s collaborators include University of Victoria Professor of Psychology, Jim Tanaka and Postdoctoral Researcher, Buyun Xu.

Emotient’s Google Glassware is available as a private beta for select partners and customers in retail and healthcare.

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Fortune 500 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit www.emotient.com.

Edward Colby Joins Emotient as Senior Vice President of Product and Business Development

May 1, 2014 – San Diego, CA

Emotient, the leading provider of facial expression recognition data and analysis, today announced that Edward (Ed) Colby joined the company as Senior Vice President of Product and Business Development. Colby brings extensive executive experience in technology, product marketing, finance, and international business development at leading consumer electronics and finance companies, including Apple Inc. and Citibank, and as a successful technology venture founder and investor.

“Ed is a well-respected business leader with an impressive background in driving the early success of start-ups, as well as later-stage businesses,” said Ken Denman, President and CEO, Emotient. “We will rely on Ed’s expertise in business development and product strategy as we continue to drive our innovative, emotion-aware software into the healthcare and retail markets to enable the delivery of better products, enhance the user experience, and improve patient care.”

“Emotient is a true technology pioneer in automated facial expression recognition,” said Colby. “The opportunity for Emotient is tremendous; its technology is uniquely differentiated in accuracy, subtlety, and real-time capability. I am excited at the transformative consumer and business benefits emotion awareness will bring to retail and healthcare.”

Most recently, Ed has been a partner with two global technology venture capital groups. He served as an advisor and venture partner to Quadrille Capital, formerly Quilvest Ventures, a European firm managing $23B. He led private equity and venture investments into technology and healthcare funds and operating companies including Appirio, a cloud services and software industry leader.

Previously, he was a Managing Director at Viventures Partners. Ed also served as founding CEO of venture capital-backed Wayfarer Communications, funded by Sequoia Capital. Wayfarer was acquired by publicly held Vantive, which was ultimately acquired by Oracle.

Earlier in his career, Ed worked for Apple for eight years, directing their product marketing efforts as well as leading a pan-European R&D organization in Paris, France. He holds an MBA from the University of Virginia and a BA in Economics from Princeton University.

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Global 2000 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit www.emotient.com.

Media Contact
Vikki Herrera | Emotient | Vikki@emotient.com | 858.314.3385

Man vs. Computer: Which Can Best Spot Pain Fakers?

ABC News
Apr 30, 2014

http://abcnews.go.com/blogs/technology/2014/04/man-vs-computer-which-can-best-spot-pain-fakers/


Researchers at the University of California, San Diego, are working on a breakthrough that could change how doctors treat patients and their pain.

Many doctors’ offices have started displaying charts with faces showing various levels of pain, but what if a person is faking it?

The UCSD team conducted a simple test, pitting man versus machine.


They had human observers watch videos of students experiencing pain and faking it and then asked them to guess which person was faking and who wasn’t. Then they ran the videos through an experimental computer vision system.

“The system did pattern recognition, and we tried to figure out if the computer and the pattern recognition system could pick up on those differences between real and faked pain, and whether they could do it differently or better than the human observers could,” said Marian Bartlett, a lead author on the study from UCSD’s Institute for Neural Computation.

The result: The computer system performed a lot better. While humans were right only half the time — that number reached 55 percent with a little training — the computer had a success rate of 85 percent.

“Pretty much the humans were much like guessing and that’s been known for a long time that human observers are not very good, for the most part, at telling many kinds of deception, whether it’s verbal or nonverbal kinds of deception,” Bartlett said. “Human judges really have trouble telling the difference between the two.”

According to Bartlett, when it comes to real pain, muscle movement is much more random, variable and fleeting. She said the fakers in the experiment opened and closed their mouths too consistently.

The computer is much better at detecting and deciphering the patterns of pain than humans are.

Bartlett said her team was evaluating whether a system like theirs could be used in a clinical setting.

“We’re working with a local children’s hospital in San Diego to develop this and evaluate that, in order to see if, for example, with kids can we do better at estimating their pain intensity levels so that their pain can be better managed,” she said.

Researchers from Ohio State University have computers looking at faces and recognizing as many as 21 different expressions.

The goal is that one day this technology might replace the polygraph as a lie detector and perhaps be used at airport security.

ABC News’ Nick Watt and Catherine Cole contributed to this story.

Reading Pain in a Human Face


Today's New York Times featured Dr. Marian Bartlett, co-founder and lead scientist, and her study on using automated facial expression recognition technology to distinguish between real and fake pain. According to the study, people are basically at chance in detecting real versus fake pain, while a predecessor version of Emotient's software, developed at UC San Diego, is at 85 percent. Take the quiz and see how well you do! http://www.nytimes.com/interactive/2014/04/28/science/faking-pain.html

April 28, 2014
By JAN HOFFMAN
Full article http://well.blogs.nytimes.com/2014/04/28/reading-pain-in-a-human-face/?_php=true&_type=blogs&_php=true&_type=blogs&_r=1&

Can you tell which expressions show real pain and which ones are feigned? A study found that human observers had no better than a 55 percent rate of success, even with training, while a computer was accurate about 85 percent of the time. (The answers: A. Fake. B. Real. C. Real.) Image: Kang Lee, Marian Bartlett

How well can computers interact with humans? Certainly computers play a mean game of chess, which requires strategy and logic, and “Jeopardy!,” in which they must process language to understand the clues read by Alex Trebek (and buzz in with the correct question).

But in recent years, scientists have striven for an even more complex goal: programming computers to read human facial expressions.

The practical applications could be profound. Computers could supplement or even replace lie detectors. They could be installed at border crossings and airport security checks. They could serve as diagnostic aids for doctors.

Researchers at the University of California, San Diego, have written software that not only detected whether a person’s face revealed genuine or faked pain, but did so far more accurately than human observers.

While other scientists have already refined a computer’s ability to identify nuances of smiles and grimaces, this may be the first time a computer has triumphed over humans at reading their own species.

“A particular success like this has been elusive,” said Matthew A. Turk, a professor of computer science at the University of California, Santa Barbara. “It’s one of several recent examples of how the field is now producing useful technologies rather than research that only stays in the lab. We’re affecting the real world.”

People generally excel at using nonverbal cues, including facial expressions, to deceive others (hence the poker face). They are good at mimicking pain, instinctively knowing how to contort their features to convey physical discomfort.

And other people, studies show, typically do poorly at detecting those deceptions.

In a new study, in Current Biology, by researchers at San Diego, the University of Toronto and the State University of New York at Buffalo, humans and a computer were shown videos of people in real pain or pretending. The computer differentiated suffering from faking with greater accuracy by tracking subtle muscle movement patterns in the subjects’ faces.

“We have a fair amount of evidence to show that humans are paying attention to the wrong cues,” said Marian S. Bartlett, a research professor at the Institute for Neural Computation at San Diego and the lead author of the study.

For the study, researchers used a standard protocol to produce pain, with individuals plunging an arm in ice water for a minute (the pain is immediate and genuine but neither harmful nor protracted). Researchers also asked the subjects to dip an arm in warm water for a moment and to fake an expression of pain.

Observers watched one-minute silent videos of those faces, trying to identify who was in pain and who was pretending. Only about half the answers were correct, a rate comparable to guessing.

Then researchers provided an hour of training to a new group of observers. They were shown videos, asked to guess who was really in pain, and told immediately whom they had identified correctly. Then the observers were shown more videos and again asked to judge. But the training made little difference: The rate of accuracy scarcely improved, to 55 percent.

Then a computer took on the challenge. Using a program that the San Diego researchers have named CERT, for computer expression recognition toolbox, it measured the presence, absence and frequency of 20 facial muscle movements in each of the 1,800 frames of one-minute videos. The computer assessed the same 50 videos that had been shown to the original, untrained human observers.

The computer learned to identify cues that were so small and swift that they eluded the human eye. Although the same muscles were often engaged by fakers and those in real pain, the computer could detect speed, smoothness and duration of the muscle contractions that pointed toward or away from deception. When the person was experiencing real pain, for instance, the length of time the mouth was open varied; when the person faked pain, the time the mouth opened was regular and consistent. Other combinations of muscle movements were the furrowing between eyebrows, the tightening of the orbital muscles around the eyes, and the deepening of the furrows on either side of the nose.

The computer’s accuracy: about 85 percent.
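The temporal cue the researchers describe can be sketched as a simple statistic over per-frame detections. In the sketch below the per-frame "mouth open" flags are invented; a system like CERT derives them from facial action unit detectors.

```python
# Sketch of the temporal cue: how variable are the durations of mouth-opening
# episodes across a 1,800-frame video? Regular durations suggest faked pain.
import numpy as np

def open_durations(mouth_open_flags):
    """Lengths (in frames) of consecutive runs where the mouth is open."""
    durations, run = [], 0
    for flag in mouth_open_flags:
        if flag:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return durations

rng = np.random.default_rng(2)
real = rng.integers(0, 2, size=1800)          # irregular openings (invented data)
faked = np.tile([1] * 10 + [0] * 10, 90)      # perfectly regular openings (invented)

print("real pain, std of durations :", round(float(np.std(open_durations(real))), 2))
print("faked pain, std of durations:", round(float(np.std(open_durations(faked))), 2))  # 0.0
```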

Jeffrey Cohn, a University of Pittsburgh professor of psychology who also conducts research on computers and facial expressions, said the CERT study addressed “an important problem, medically and socially,” referring to the difficulty of assessing patients who claim to be in pain. But he noted that the study’s observers were university students, not pain specialists.

Dr. Bartlett said she didn’t mean to imply that doctors or nurses do not perceive pain accurately. But “we shouldn’t assume human perception is better than it is,” she said. “There are signals in nonverbal behavior that our perceptual system may not detect or we don’t attend to them.”

Dr. Turk said that among the study’s limitations were that all the faces had the same frontal view and lighting. “No one is wearing sunglasses or hasn’t shaved for five days,” he said.

Dr. Bartlett and Dr. Cohn are working on applying facial expression technology to health care. Dr. Bartlett is working with a San Diego hospital to refine a program that will detect pain intensity in children.

“Kids don’t realize they can ask for pain medication, and the younger ones can’t communicate,” she said. A child could sit in front of a computer camera, she said, referring to a current project, and “the computer could sample the child’s facial expression and get estimates of pain. The prognosis is better for the patient if the pain is managed well and early.”

Dr. Cohn noted that his colleagues have been working with the University of Pittsburgh Medical Center’s psychiatry department, focusing on severe depression. One project is for a computer to identify changing patterns in vocal sounds and facial expressions throughout a patient’s therapy as an objective aid to the therapist.

“We have found that depression in the facial muscles serves the function of keeping others away, of signaling, ‘Leave me alone,’ ” Dr. Cohn said. The tight-lipped smiles of the severely depressed, he said, were tinged with contempt or disgust, keeping others at bay.

“As they become less depressed, their faces show more sadness,” he said. Those expressions reveal that the patient is implicitly asking for solace and help, he added. That is one way the computer can signal to the therapist that the patient is getting better.
