News & Blog

What does the Future of Retail Look Like? Four Young Companies Provide a Glimpse


Forbes
By J.J. Colao
6/18/2014

http://www.forbes.com/sites/jjcolao/2014/06/18/what-does-the-future-of-retail-look-like-four-young-companies-provide-a-glimpse/

There was a lot to digest at Jason Calacanis’ and Pivotal Labs’ Launch Beacon conference on Monday, an event devoted to exploring trends around e-commerce, retail, payments and location-based technology.

To this audience member, the most interesting bits came from the morning demos of four companies working on technology with applications for retail and e-commerce.

If these startups have their way, we’ll soon live in a world of interactive in-store displays, ubiquitous mood tracking and effortless shopping on the fly.

Oh, and your smartphone might replace your waiter.

Perch

There aren’t enough interactive screens in your life, so the people of PERCH are here to help. The company sells a projector that produces a customized digital display for shoppers to play with as they browse products in store. In addition to conventional touch-screen interactions, like swiping and scrolling, the technology senses when customers touch or pick up objects placed on the surface.

In the video below, for example, shoppers looking for nail polish get a delightful ping when they touch each bottle, along with information about the shade.

Founded in 2012 by CEO Jared Schiffman, a veteran of MIT’s Media Lab, the company counts Kate Spade, Kiehl’s, Cole Haan and Quirky as customers, who lease the tech for $500 per month. (PERCH previously sold it outright for $7,500.) The company can update thousands of units across different stores simultaneously, and brands can group multiple units together for more complex presentations.

With $60K per month in sales, the New York-based company is currently raising a $1.2 million seed round.

Emotient

We humans aren’t all that reliable when it comes to customer feedback. Either innocently or intentionally, we often mischaracterize our perceptions of brands, products and advertisements.

Emotient, based in San Diego, uses facial expression recognition software to cut through the nonsense and figure out how we really feel. The company can train cameras on focus groups watching a Super Bowl ad for the first time, or at the entrance of big-box stores to gauge customers’ moods as they go in and out. Browsing the shampoo aisle? Emotient can track your reactions to different packaging and brands to see which elicit the most positive responses.

Depending on the assignment, the company measures levels of emotions like joy, anger, sadness, surprise, fear, disgust and contempt. Founded by three Ph.D.s working at UC San Diego, it’s run by CEO Ken Denman, who previously ran Openwave, a telecom company.

In a study of fans watching the Super Bowl in February, the company found that the Pistachios ad featuring Stephen Colbert prompted the most positive reactions by a wide margin. Viewers responded especially well when Colbert came back on-screen 30 seconds later, as shown in the graph below.

[Graph: Emotient’s measured audience reactions to the Pistachios Super Bowl ad]

The company raised $6 million in Series B funding from Intel Capital in March.

Slyce

Cofounder Cameron Chell described Slyce onstage as “Shazam for everything.” The Toronto-based company is trying to monetize those moments out in the wild, when you spot an enviable piece of clothing worn by another. Snap a picture with the technology, soon to be integrated into a public mobile app, and up pops an array of similar items available for purchase.

Chell demoed the product on stage with Calacanis’ shoes and the app conjured up a couple dozen comparable items available for immediate purchase. Amazon Flow and Asap54 offer competing technologies.

Downtown

Downtown CEO Phil Buckendorf, a 23-year-old native of Germany, played professional golf before trying his hand at a startup. His company works with restaurants in Palo Alto, Calif., to “distribute the point of sale.”

Translation: Downtown wants to enable customers to order food and pay from wherever they are in a restaurant. Instead of waiting for a server, users can open their phones and find a menu synced to an Estimote beacon, a small sensor that communicates with nearby smartphones. Because each beacon is assigned to a table, restaurant owners can track orders and deliver customers’ food accordingly.
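
For a concrete picture of the mechanics, here is a minimal sketch of beacon-to-table order routing. The beacon identifiers, menu items and routing logic are hypothetical placeholders, not Downtown’s actual implementation, which has not been published.

```python
# Hypothetical sketch: route an order to the table associated with the
# Estimote beacon a diner's phone reports as nearest. Beacon IDs, tables
# and menu items are made up for illustration.

BEACON_TO_TABLE = {
    "beacon-a1": 1,
    "beacon-b2": 2,
    "beacon-c3": 3,
}

def place_order(nearest_beacon_id: str, items: list[str]) -> dict:
    """Attach an order to the table assigned to the nearest beacon."""
    table = BEACON_TO_TABLE.get(nearest_beacon_id)
    if table is None:
        raise ValueError(f"Unknown beacon: {nearest_beacon_id}")
    return {"table": table, "items": items, "status": "sent_to_kitchen"}

if __name__ == "__main__":
    print(place_order("beacon-b2", ["burger", "iced tea"]))
    # {'table': 2, 'items': ['burger', 'iced tea'], 'status': 'sent_to_kitchen'}
```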

“We want to be the fast line when you consume inside the restaurant,” Buckendorf says. The technology can easily apply to retail purchases as well, allowing shoppers to buy items on the spot instead of waiting in line to check out.

The company is going for scale before charging restaurant owners, but Buckendorf plans on taking a 3.5-5.5% cut of each transaction.

Stephen Ritter Joins Emotient as Senior Vice President of Product Development

June 19, 2014 – San Diego, CA

Emotient, the leading provider of facial expression analysis software, today announced that Stephen Ritter joined the company as Senior Vice President of Product Development. Stephen brings extensive engineering management experience from successful start-ups and larger enterprises, including Cypher Genomics, Websense and McAfee, an Intel company.

“Steve is a proven technology leader who has demonstrated success in transforming cutting-edge technology into highly successful commercial products,” said Ken Denman, President and CEO, Emotient. “We will rely on his expertise as we continue to drive our innovative, emotion-aware software into the healthcare and retail markets through a cloud-based platform.”

“Emotient is at the forefront of a massive trend in emotion-aware computing,” said Ritter. “Emotient’s founding team is spearheading the development of proprietary technology that sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. I look forward to leading the commercial delivery of new emotion analytics products for retail and healthcare based on our state-of-the-art technology.”

Most recently, Stephen served as CTO at Cypher Genomics, a leading genome informatics company. Previously, he was Vice President, Engineering for Websense, where he led a global team in the development of the market-leading web security product, TRITON Web Security Gateway. Prior to Websense, he served as senior director of engineering at McAfee, now an Intel company, where he developed one of the most advanced and scalable security management software systems in the industry.

Ritter began his career as an accomplished software developer/architect and currently holds six patents in the area of information security. He holds a B.S. in Cognitive Science from UC San Diego.

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available to Global 2000 companies within the retail, healthcare and consumer packaged goods industries.

Emotient to Present Emotion Aware Google Glassware at Vision Sciences Society (VSS) 2014

Emotient, UC San Diego and University of Victoria Also Release Web-based Autism Intervention App Called Emotion Mirror

May 12, 2014 – San Diego, CA
Emotient, the leading provider of facial expression recognition data and analysis, today announced it will demonstrate its emotion-aware Google Glass application at the 12th Annual VSS Dinner and Demo Night, held during Vision Sciences Society 2014 on May 19, 2014, from 6 to 10 p.m. at the TradeWinds Island Resorts, St. Pete Beach, Florida.

The Google Glass application leverages Emotient’s core technology to process facial expressions and provides an aggregate emotional read-out, measuring overall sentiment (positive, negative or neutral); primary emotions (joy, surprise, sadness, fear, disgust, contempt and anger); and advanced emotions (frustration and confusion). The Emotient software detects and processes anonymous facial expressions of individuals and groups in the Glass wearer’s field of view. The emotions are displayed as colored boxes placed around the detected faces, with the color indicating the automatically recognized emotion.
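
As an illustration of the colored-box overlay described above, the sketch below draws emotion-labeled boxes on a video frame with OpenCV. The face coordinates, emotion labels and color palette are placeholders, not Emotient’s Glass implementation.

```python
import cv2
import numpy as np

# Hypothetical emotion-to-color mapping in BGR; not Emotient's actual palette.
EMOTION_COLORS = {"joy": (0, 200, 0), "sadness": (200, 0, 0),
                  "anger": (0, 0, 200), "neutral": (160, 160, 160)}

def draw_emotion_boxes(frame: np.ndarray, detections: list[dict]) -> np.ndarray:
    """Draw a box around each detected face, colored by the recognized emotion."""
    for det in detections:
        x, y, w, h = det["box"]
        color = EMOTION_COLORS.get(det["emotion"], EMOTION_COLORS["neutral"])
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, thickness=2)
        cv2.putText(frame, det["emotion"], (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return frame

# Toy usage with a blank frame and a made-up detection.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
annotated = draw_emotion_boxes(frame, [{"box": (100, 80, 120, 120), "emotion": "joy"}])
cv2.imwrite("annotated.png", annotated)
```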

In addition, two of Emotient’s researchers, also affiliated with UC San Diego, together with researchers from the University of Victoria, developed a new version of the Emotion Mirror application, an autism intervention tool that is designed to aid autistic individuals in identifying and mimicking facial expressions in a fun and entertaining way. Emotion Mirror will be used in clinical studies in collaboration with the University of Victoria.

“We are looking forward to sharing our latest emotion-aware software applications at this year’s VSS, including our Google Glassware,” said Dr. Joshua Susskind, Emotient’s Co-Founder, Engineer and Research Scientist, who co-developed the Glass application with Emotient software engineer, Mark Wazny. “We believe there is an opportunity to apply Emotient’s facial expression recognition software to enable autism intervention programs on mobile devices, making it easy to deploy and conduct studies in clinical populations.”

Collaborator Jim Tanaka, University of Victoria Professor of Psychology, added, “The growing availability of wearables leads us to believe our technology can have a huge positive impact on the autism community and more broadly, the healthcare industry.”

The Emotion Mirror application was initially developed in 2010-11 in a collaboration between UC San Diego and the University of Victoria as an intervention game for improving expression production and perception in kids with Autism Spectrum Disorder (ASD). The new Emotion Mirror application was developed as an HTML5 web application powered by Emotient’s Cloud API. In addition to Emotient’s Dr. Joshua Susskind, Dr. Marian Bartlett and Morgan Neiman, Emotion Mirror’s collaborators include University of Victoria Professor of Psychology Jim Tanaka and postdoctoral researcher Buyun Xu.

Emotient’s Google Glassware is available as a private beta for select partners and customers in retail and healthcare.

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Fortune 500 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit www.emotient.com.

Edward Colby Joins Emotient as Senior Vice President of Product and Business Development

May 1, 2014 – San Diego, CA

Emotient, the leading provider of facial expression recognition data and analysis, today announced that Edward (Ed) Colby joined the company as Senior Vice President of Product and Business Development. Colby brings extensive executive experience in technology, product marketing, finance, and international business development at leading consumer electronics and finance companies, including Apple Inc. and Citibank, and as a successful technology venture founder and investor.

“Ed is a well-respected business leader with an impressive background in driving the early success of start-ups, as well as later stage businesses,” said Ken Denman, President and CEO, Emotient. “We will rely on Ed’s expertise in business development and product strategy as we continue to drive our innovative, emotion-aware software into the healthcare and retail markets to enable the delivery of better products, enhance the user experience, and improve patient care.”

“Emotient is a true technology pioneer in automated facial expression recognition,” said Colby. “The opportunity for Emotient is tremendous; its technology is uniquely differentiated in accuracy, subtlety, and real-time capability. I am excited at the transformative consumer and business benefits emotion awareness will bring to retail and healthcare.”

Most recently, Ed has been a partner with two global technology venture capital groups. He served as an advisor and venture partner to Quadrille Capital, formerly Quilvest Ventures, a European firm managing $23B. He led private equity and venture investments into technology and healthcare funds and operating companies including Appirio, a cloud services and software industry leader.

Previously, he was a Managing Director at Viventures Partners. Ed also served as founding CEO of venture capital-backed Wayfarer Communications, funded by Sequoia Capital. Wayfarer was acquired by publicly held Vantive, which was ultimately acquired by Oracle.

Earlier in his career, Ed worked at Apple for eight years, directing its product marketing efforts as well as leading a pan-European R&D organization in Paris, France. He holds an MBA from the University of Virginia and a BA in Economics from Princeton University.

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Global 2000 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit www.emotient.com.

Media Contact
Vikki Herrera | Emotient | Vikki@emotient.com | 858.314.3385

Man vs. Computer: Which Can Best Spot Pain Fakers?

ABC News
Apr 30, 2014

http://abcnews.go.com/blogs/technology/2014/04/man-vs-computer-which-can-best-spot-pain-fakers/


Researchers at the University of California, San Diego, are working on a breakthrough that could change how doctors treat patients and their pain.

Many doctors’ offices have started displaying charts with faces showing various levels of pain, but what if a person is faking it?

The UCSD team conducted a simple test, pitting man versus machine.

Courtesy of Kang Lee and Marian Bartlett

They had human observers watch videos of students experiencing real pain and faking it, then asked them to guess who was faking and who wasn’t. Then they ran the videos through an experimental computer vision system.

“The system did pattern recognition, and we tried to figure out if the computer and the pattern recognition system could pick up on those differences between real and faked pain, and could they do it differently or better than the human observers could,” said Marian Bartlett, a lead author on the study and a researcher at UCSD’s Institute for Neural Computation.

The result: The computer system performed a lot better. While humans were right only half the time — that number reached 55 percent with a little training — the computer had a success rate of 85 percent.

“Pretty much the humans were much like guessing and that’s been known for a long time that human observers are not very good, for the most part, at telling many kinds of deception, whether it’s verbal or nonverbal kinds of deception,” Bartlett said. “Human judges really have trouble telling the difference between the two.”

According to Bartlett, when it comes to real pain, muscle movement is much more random, variable and fleeting. She said the fakers in the experiment opened and closed their mouths too consistently.

The computer is much better at detecting and deciphering the patterns of pain than humans are.

Bartlett said her team was evaluating whether a system like theirs could be used in a clinical setting.

“We’re working with a local children’s hospital in San Diego to develop this and evaluate that, in order to see if, for example, with kids can we do better at estimating their pain intensity levels so that their pain can be better managed,” she said.

Researchers from Ohio State University have computers looking at faces and recognizing as many as 21 different expressions.

The goal is that one day this technology might replace the polygraph as a lie detector and perhaps be used at airport security.

ABC News’ Nick Watt and Catherine Cole contributed to this story.

Reading Pain in a Human Face


Today's New York Times featured Dr. Marian Bartlett, co-founder and lead scientist, and her study on using automated facial expression recognition technology to distinguish between real and fake pain. According to the study, people are basically at chance in detecting real vs. fake pain, while a predecessor version of Emotient's software, developed at UC San Diego, is at 85%. Take the quiz and see how well you do! http://www.nytimes.com/interactive/2014/04/28/science/faking-pain.html

April 28, 2014
By JAN HOFFMAN
Full article http://well.blogs.nytimes.com/2014/04/28/reading-pain-in-a-human-face/?_php=true&_type=blogs&_php=true&_type=blogs&_r=1&

Photo credit: Kang Lee, Marian Bartlett. Can you tell which expressions show real pain and which ones are feigned? A study found that human observers had no better than a 55 percent rate of success, even with training, while a computer was accurate about 85 percent of the time. (The answers: A. Fake. B. Real. C. Real.)

How well can computers interact with humans? Certainly computers play a mean game of chess, which requires strategy and logic, and “Jeopardy!,” in which they must process language to understand the clues read by Alex Trebek (and buzz in with the correct question).

But in recent years, scientists have striven for an even more complex goal: programming computers to read human facial expressions.

The practical applications could be profound. Computers could supplement or even replace lie detectors. They could be installed at border crossings and airport security checks. They could serve as diagnostic aids for doctors.

Researchers at the University of California, San Diego, have written software that not only detected whether a person’s face revealed genuine or faked pain, but did so far more accurately than human observers.

While other scientists have already refined a computer’s ability to identify nuances of smiles and grimaces, this may be the first time a computer has triumphed over humans at reading their own species.

“A particular success like this has been elusive,” said Matthew A. Turk, a professor of computer science at the University of California, Santa Barbara. “It’s one of several recent examples of how the field is now producing useful technologies rather than research that only stays in the lab. We’re affecting the real world.”

People generally excel at using nonverbal cues, including facial expressions, to deceive others (hence the poker face). They are good at mimicking pain, instinctively knowing how to contort their features to convey physical discomfort.

And other people, studies show, typically do poorly at detecting those deceptions.

In a new study, in Current Biology, by researchers at San Diego, the University of Toronto and the State University of New York at Buffalo, humans and a computer were shown videos of people in real pain or pretending. The computer differentiated suffering from faking with greater accuracy by tracking subtle muscle movement patterns in the subjects’ faces.

“We have a fair amount of evidence to show that humans are paying attention to the wrong cues,” said Marian S. Bartlett, a research professor at the Institute for Neural Computation at San Diego and the lead author of the study.

For the study, researchers used a standard protocol to produce pain, with individuals plunging an arm in ice water for a minute (the pain is immediate and genuine but neither harmful nor protracted). Researchers also asked the subjects to dip an arm in warm water for a moment and to fake an expression of pain.

Observers watched one-minute silent videos of those faces, trying to identify who was in pain and who was pretending. Only about half the answers were correct, a rate comparable to guessing.

Then researchers provided an hour of training to a new group of observers. They were shown videos, asked to guess who was really in pain, and told immediately whom they had identified correctly. Then the observers were shown more videos and again asked to judge. But the training made little difference: The rate of accuracy scarcely improved, to 55 percent.

Then a computer took on the challenge. Using a program that the San Diego researchers have named CERT, for computer expression recognition toolbox, it measured the presence, absence and frequency of 20 facial muscle movements in each of the 1,800 frames of one-minute videos. The computer assessed the same 50 videos that had been shown to the original, untrained human observers.

The computer learned to identify cues that were so small and swift that they eluded the human eye. Although the same muscles were often engaged by fakers and those in real pain, the computer could detect speed, smoothness and duration of the muscle contractions that pointed toward or away from deception. When the person was experiencing real pain, for instance, the length of time the mouth was open varied; when the person faked pain, the time the mouth opened was regular and consistent. Other combinations of muscle movements were the furrowing between eyebrows, the tightening of the orbital muscles around the eyes, and the deepening of the furrows on either side of the nose.
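
The distinction described above is temporal: which muscles move matters less than how variably they move over time. The following sketch captures that idea under stated assumptions, starting from a per-frame indicator of whether the mouth is open (produced by some upstream detector) and measuring the variability of mouth-open episode lengths; it is illustrative only and is not the CERT code used in the study.

```python
import numpy as np

def episode_durations(active) -> list:
    """Lengths, in frames, of consecutive runs where a facial action is active."""
    durations, run = [], 0
    for a in active:
        if a:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return durations

def mouth_open_variability(mouth_open_per_frame) -> float:
    """Std. dev. of mouth-open episode lengths: genuine pain showed more
    variable episodes in the study, faked pain more regular ones."""
    d = episode_durations(mouth_open_per_frame)
    return float(np.std(d)) if len(d) > 1 else 0.0

# Toy comparison: a regular (fake-like) pattern vs. an irregular (genuine-like) one.
regular = np.array(([1] * 10 + [0] * 10) * 9)          # uniform 10-frame episodes
irregular = np.concatenate(
    [np.r_[np.ones(n), np.zeros(5)] for n in (3, 14, 7, 22, 5, 11)]).astype(int)
print(mouth_open_variability(regular), mouth_open_variability(irregular))  # 0.0 vs. > 0
```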

The computer’s accuracy: about 85 percent.

Jeffrey Cohn, a University of Pittsburgh professor of psychology who also conducts research on computers and facial expressions, said the CERT study addressed “an important problem, medically and socially,” referring to the difficulty of assessing patients who claim to be in pain. But he noted that the study’s observers were university students, not pain specialists.

Dr. Bartlett said she didn’t mean to imply that doctors or nurses do not perceive pain accurately. But “we shouldn’t assume human perception is better than it is,” she said. “There are signals in nonverbal behavior that our perceptual system may not detect or we don’t attend to them.”

Dr. Turk said that among the study’s limitations were that all the faces had the same frontal view and lighting. “No one is wearing sunglasses or hasn’t shaved for five days,” he said.

Dr. Bartlett and Dr. Cohn are working on applying facial expression technology to health care. Dr. Bartlett is working with a San Diego hospital to refine a program that will detect pain intensity in children.

“Kids don’t realize they can ask for pain medication, and the younger ones can’t communicate,” she said. A child could sit in front of a computer camera, she said, referring to a current project, and “the computer could sample the child’s facial expression and get estimates of pain. The prognosis is better for the patient if the pain is managed well and early.”

Dr. Cohn noted that his colleagues have been working with the University of Pittsburgh Medical Center’s psychiatry department, focusing on severe depression. One project is for a computer to identify changing patterns in vocal sounds and facial expressions throughout a patient’s therapy as an objective aid to the therapist.

“We have found that depression in the facial muscles serves the function of keeping others away, of signaling, ‘Leave me alone,’ ” Dr. Cohn said. The tight-lipped smiles of the severely depressed, he said, were tinged with contempt or disgust, keeping others at bay.

“As they become less depressed, their faces show more sadness,” he said. Those expressions reveal that the patient is implicitly asking for solace and help, he added. That is one way the computer can signal to the therapist that the patient is getting better.

Emotient Named a "Cool Vendor" by Leading Analyst Firm Gartner

Vendors selected for the “Cool Vendor” report are innovative, impactful and intriguing


April 28, 2014 – San Diego, CA
Emotient, the leading provider of facial expression recognition data and analysis, today announced it has been included in the “Cool Vendors in Human-Machine Interface, 2014”[1] report by Gartner, Inc.

“We are excited to be included as one of Gartner’s Cool Vendors for 2014,” said Ken Denman, President and CEO, Emotient. “We believe it affirms our leadership and innovation in automating the understanding of human behavior through facial expression recognition. Our software is uniquely designed to enable retailers and healthcare companies to understand and respond to emotional responses, as well as report sentiment, interest and engagement in real time.”

Retail & Healthcare Focus
Emotient is focused on delivering its emotion recognition and sentiment analysis software to Global 2000 customers in the retail and healthcare markets. In retail, the software can detect and track aggregate consumer sentiment via automated facial expression analysis for in-store retail environments (digital signage, interactive kiosks, intelligent vending machines and point of sale), e-commerce and market research. In healthcare, Emotient’s technology provides the ability to more deeply understand human behavior and discover actionable insights that enhance the quality of patient care. Emotient’s technology can be deployed in any hospital, pharmacy or clinical setting where cameras may be present and appropriate.

Emotient’s software is designed to help businesses deliver better products, enhance the user experience and improve patient care. Emotient provides access to critical emotion insights, processing anonymous facial expressions of individuals and groups; the software does not store video or images. Emotient detects and measures seven facial expressions of primary emotion (joy, surprise, sadness, anger, fear, disgust and contempt); overall sentiment (positive, negative and neutral); advanced emotions (frustration and confusion); and 19 Facial Action Units (elementary facial muscle movements). The Emotient system sets the industry standard for accuracy, with the highly precise ability to detect single-frame microexpressions of emotion.
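
To make the shape of that output concrete, here is a hypothetical per-frame record mirroring the categories listed above. The field names and scores are illustrative only and are not Emotient’s actual API schema.

```python
# Hypothetical per-frame output: 7 primary emotions, overall sentiment,
# advanced emotions, and a subset of the 19 Action Units. Illustrative only.
frame_result = {
    "frame": 412,
    "primary_emotions": {
        "joy": 0.62, "surprise": 0.11, "sadness": 0.05, "anger": 0.04,
        "fear": 0.03, "disgust": 0.08, "contempt": 0.07,
    },
    "sentiment": {"positive": 0.70, "negative": 0.12, "neutral": 0.18},
    "advanced_emotions": {"frustration": 0.06, "confusion": 0.09},
    "action_units": {"AU1": 0.2, "AU4": 0.0, "AU12": 0.8},
}

def dominant_emotion(result: dict) -> str:
    """Return the highest-scoring primary emotion for a frame."""
    return max(result["primary_emotions"], key=result["primary_emotions"].get)

print(dominant_emotion(frame_result))  # joy
```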

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Global 2000 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit www.emotient.com.

Media Contact
Vikki Herrera | Emotient | Vikki@emotient.com | 858.314.3385

Disclaimer:
Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

[1]Gartner “Cool Vendors in Human-Machine Interface, 2014” by Adib Carl Ghubril, Tuong Huy Nguyen, Anshul Gupta, Brian Blau, April 22, 2014

Computer Software Accurately Predicts Student Test Performance

Study Shows Automatic Recognition of Facial Expressions Can Track Student Engagement in Real Time

April 15, 2014 - San Diego, CA

Emotient, the leading provider of facial expression recognition data and analysis, and the University of California, San Diego announced publication of a joint study by two Emotient co-founders affiliated with UC San Diego, together with researchers from Virginia Commonwealth University and Virginia State University. The study demonstrates that a real-time engagement detection technology that processes facial expressions can perform with accuracy comparable to that of human observers. The study also revealed that engagement levels were a better predictor of students’ post-test performance than the students’ pre-test scores.

The early online version of the paper, “The Faces of Engagement: Automatic Recognition of Student Engagement,” appeared today in the journal IEEE Transactions on Affective Computing.

“Automatic recognition of student engagement could revolutionize education by increasing understanding of when and why students get disengaged,” said Dr. Jacob Whitehill, Machine Perception Lab researcher in UC San Diego’s Qualcomm Institute and Emotient co-founder. “Automatic engagement detection provides an opportunity for educators to adjust their curriculum for higher impact, either in real time or in subsequent lessons. Automatic engagement detection could be a valuable asset for developing adaptive educational games, improving intelligent tutoring systems and tailoring massive open online courses, or MOOCs.” Whitehill (Ph.D., ’12) recently received his doctorate from the Computer Science and Engineering department of UC San Diego’s Jacobs School of Engineering.

The study consisted of training an automatic detector that measures how engaged a student appears in a webcam video while undergoing cognitive skills training on an iPad®. The study used automatic expression recognition technology to analyze students’ facial expressions on a frame-by-frame basis and estimate their engagement level.
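
A minimal sketch of that pipeline, under stated assumptions: per-frame engagement scores (however produced) are averaged per student and compared with pre-test scores as predictors of post-test performance. The data below are synthetic and the regression setup is illustrative, not the study’s actual analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data: per-frame engagement estimates for 40 students, plus
# pre-test and post-test scores. Values are made up for illustration.
n_students, n_frames = 40, 300
frame_engagement = rng.uniform(0, 1, size=(n_students, n_frames))
mean_engagement = frame_engagement.mean(axis=1)      # aggregate per student
pre_test = rng.uniform(40, 80, size=n_students)
post_test = 30 + 50 * mean_engagement + 0.1 * pre_test + rng.normal(0, 3, n_students)

# Compare two simple predictors of post-test performance.
X_pre, X_eng = pre_test.reshape(-1, 1), mean_engagement.reshape(-1, 1)
r2_pre = LinearRegression().fit(X_pre, post_test).score(X_pre, post_test)
r2_eng = LinearRegression().fit(X_eng, post_test).score(X_eng, post_test)
print(f"R^2 using pre-test only:   {r2_pre:.2f}")
print(f"R^2 using engagement only: {r2_eng:.2f}")
```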

“This study is one of the most thorough to date in the application of computer vision and machine learning technologies for automatic student engagement detection,” said Dr. Javier Movellan, co-director of the Machine Perception Lab at UC San Diego and Emotient co-founder and lead researcher. “The possibilities for its application in education and beyond are tremendous. By understanding what parts of a lecture, conversation, game, advertisement or promotion produced different levels of engagement, an individual or business can obtain valuable feedback to fine-tune the material to something more impactful.”

In addition to Movellan and Whitehill, the study’s authors include Virginia Commonwealth professor of developmental psychology Dr. Zewelanji Serpell, as well as Yi-Ching Lin and Dr. Aysha Foster from the department of psychology at Virginia State.

Student Engagement
Photo Copyright 2014 IEEE All Rights Reserved

About Emotient – Emotion Recognition Technology (www.emotient.com)
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Fortune 500 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit www.emotient.com.

Media Contacts
Vikki Herrera | Emotient | Vikki@emotient.com | 858.314.3385
Doug Ramsey | UC San Diego | dramsey@ucsd.edu | 858.822.5825
