News & Blog

Gen Y Consumers Are Hard To Surprise. Here's How To Enchant Them

This article is by Baiju Shah, managing director for strategy and innovation at Accenture Interactive and global co-lead of Fjord, the design and innovation arm of Accenture Interactive.

Feb. 11, 2015

Perhaps it’s the result of a generation raised on Harry Potter, but Gen Y is hard to surprise. They confidently expect services to be magical, to constantly surprise and delight them; what was seen as incredible only five years ago is considered run-of-the-mill today.

Marketers today are facing a broader competitive threat than they have experienced to date. Traditionally, they have focused on outperforming direct competitors or companies that offered products similar to their own. But now consumer expectations are starting to transcend traditional industry boundaries. Brands are benchmarked not only against their industry peers but also against great customer experiences businesses offer in other sectors – a development we call “liquid expectations.”

The perfect example is Airbnb. Its room-rental model is a template that can be adapted for a wide range of sectors beyond property, from designer dresses and bicycles to cameras and even toys. While CMOs may view this shift as a threat, it is also a tremendous opportunity. The challenge for a brand is not just to create innovative services that meet consumers’ expectations, but to actually drive those expectations as well.

This will require more than just great products or creative advertising. CMOs will need to harness the potential of smart, connected, contextually aware digital services to turn their brand into a living service. Fjord’s recently launched Trends Report 2015 presents a compelling picture of what is possible.

The report shows that the monolithic product and service models of old are increasingly out of step with what consumers expect today and will welcome and value tomorrow. For instance, brands will need to develop a sixth sense, anticipating what customers might want and acting on it. A good example is Square Order, a simple app for buying and receiving a latte or lunch. Square Order learns what you like and places the order as you approach the café. Since it also handles payments, you simply pick up your latte, just the way you like it, without any extraneous transaction. Another example comes from a large global online retailer that patented a data-driven service to ship products to a final geographical area without knowing the exact destination address in advance.

The challenge of liquid consumer expectations will also mean brands making their digital services feel more human by building emotion into the interface. To date, interaction with digital technologies has been largely transactional with impersonal screens and keyboards providing the interface. Now advances in technology enable more natural human-machine interactions.

For example, Emotient has showcased its real-time facial expression recognition software, and Aldebaran’s new humanoid robot is capable of detecting emotions through both vocal analysis and facial recognition. With emotional sensors becoming even more accurate, it might not be long before machines know how we’re feeling before we do.

Another aspect of meeting constantly shifting consumer expectations is that brands will need to offer seamless digital services that accompany consumers as they move between different devices, platforms and places. Customers often experience gaps when using digitally-enabled products and services, most notably between online and offline experiences. With platforms multiplying, the most important gap to address will be when customers switch between devices. For businesses, identifying inconsistencies in their service across modes and devices will be critical.

While digital technologies will be crucial elements underpinning any service, brands must also be careful not to forget their greatest asset of all – the people within the organization. A key trend highlighted in the report is the welcome return of human beings to high-tech customer service, as businesses start to benefit from reintegrating real people into the interface. Brands are beginning to equip service agents with cognitive computing capabilities that help them deepen relationships with customers, rather than letting entirely robotic solutions manage virtually all consumer interactions. Eyewear startup Warby Parker – alongside its online sales, select stores, and New York headquarters – opened a Nashville office to sustain its high-touch customer service, including humans who answer the phone without routing callers through automated phone trees. And a large Australian telecommunications provider recently announced a massive “digital first” initiative that automates repetitive administrative tasks so its employees can have more meaningful interactions with customers.

Increasingly, consumers will expect to be enchanted by brands whose services are intuitive and responsive, while still being able to speak to a real customer service person when they need to. Consequently, offerings will start to wrap around customers, constantly learning more about their needs, intents and preferences, so that they can flex and adapt to make themselves more relevant, engaging and useful. Businesses that embrace the transformative potential of these “living services” have tremendous opportunities for growing their customer base and their customer loyalty.

The Technology that Unmasks Your Hidden Emotions

Using Psychology and Data Mining to Discern Emotions
as People Shop, Watch Ads

Wall Street Journal
Jan. 28, 2015

This Wall Street Journal article and video discuss the promise of and uses for facial expression recognition technology. They include Emotient application graphics and an interview with Dr. Paul Ekman, father of emotion recognition science and member of Emotient’s Advisory Board.

Dr. Marian Bartlett to Present at TEDxAmericasFinestCity Event on October 11

Co-Founder Dr. Marian Bartlett will grace the TEDxAmericasFinestCity stage next Saturday, October 11, to deliver a talk on the groundbreaking facial expression analysis software she invented with the Emotient team. The event theme is Transformation Through Us, positioned as a tool for helping the community drive change. Use promo code AFC_Network for discount passes. @TEDxAFC

San Diegans Sharing Their Big Ideas at TEDx Event

Imagine a world where machines can recognize facial expressions, including pain, or a computer game that can teach autistic children how to recognize emotion. That's the world where Marian Bartlett, co-founder and lead scientist at San Diego startup Emotient, lives.

Bartlett said her research has shown that the machines do a better job of picking up on human emotion than humans can.

Using machine learning to learn patterns of facial movements that are consistent with real expressions is just one of the big ideas that will be shared at the TEDx America's Finest City conference at SDSU on Oct. 11.

Speakers will share their ideas on transformation on topics ranging from civic engagement to education to inner empowerment.


Emerging Technologies Promise to Quantify Emotions

by George Lawton
September 15, 2014

A variety of technologies are emerging for tracking emotions via the Internet, using techniques such as text analytics, speech analysis, and video analysis of the face.

Better tools for tracking emotions hold promise for bringing awareness to our inner state through outside feedback. This kind of technology also promises to make it easier to understand how websites, mobile applications, and ads affect the emotional state of users. “The end goal should be to reengineer business to be truly customer-centric, which was infeasible until emotional analytics entered the picture,” said Armen Berjikly, founder and CEO of Kanjoya, a sentiment analysis service.

Ken Denman, President and CEO of Emotient, which makes emotional tracking technology for the face, said,

"Fundamentally our motivation is to accelerate the pace of innovation for consumers, patients, and students, by providing actionable insights faster and more accurately than ever dreamed. This will empower product developers, customer, and patient experience owners to more quickly and accurately understand what is working and what is not."

At the same time, website owners need to be cautious in the ways they measure, analyze, and store data about the emotional state of users. As previously reported, experimenting with the ways that applications, ads, and websites affect users could help organizations make the world a better place. But organizations need to be thoughtful in their use of emotional information in order not to alienate users.

As Rana el Kaliouby, co-founder and Chief Science Officer of Affectiva, which has developed the Affdex service for analyzing facial expressions of emotion, explained,

"As a company, we understand how critical the data we are collecting is. Our philosophy is no data is ever collected without explicit opt-in. Ever! We also feel a responsibility towards educating the public about what this technology can and cannot do. Facial coding technology can tell you the expression on your face (which a human in the room would have picked anyhow) but it will not tell you what your thoughts are."

Reading into Emotion

Sentiment analysis is based on being able to extract signals of positive and negative sentiment from text, said Aaron Chavez, Chief Scientist at AlchemyAPI, a sentiment analysis service. With more targeted analysis, the goal is not just to label text as positive or negative, but to associate sentiment with specific things in the text. This makes it possible to zero in on particular aspects of a service, such as the food being cold while the waiter was helpful.

This is a challenging problem, and in many cases, it can be subjective. People can come to different conclusions when reading the same text. The biggest challenge tends to be what is unspoken. People don’t always come out and say in the clearest terms how they feel about things, said Chavez. For example, if someone uses sarcasm they may mean the opposite of what they say. Understanding this requires using external information from other sources.

Chavez explained,

"You are pulling in external knowledge of the world in order to recognize when what is stated does not line up when what was intended. This kind of analysis can be more specific with a greater understanding of a person. For example, if they have a political affiliation they might be expressing sentiment in what might otherwise be considered a factual statement."

There are different approaches to sentiment analysis. Rule-based approaches create complex rule chains for associating text with sentiment – for example, a special rule for recognizing negations, such as when someone uses the word “not” in a sentence. The engine needs to recognize that “not good” carries negative sentiment, while “good” by itself reflects positive sentiment.
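The negation rule described above can be sketched in a few lines. This is a minimal illustration, not any vendor's engine: the tiny lexicon and the flip-on-"not" rule are assumptions chosen to show the mechanics.

```python
# Minimal rule-based sentiment sketch: a polarity lexicon plus one negation
# rule that flips polarity when "not" immediately precedes a sentiment word.
# Real engines chain many such rules; this lexicon is illustrative only.

LEXICON = {"good": 1, "helpful": 1, "great": 1,
           "cold": -1, "bad": -1, "slow": -1}

def rule_based_sentiment(text: str) -> int:
    """Return a crude polarity score: >0 positive, <0 negative, 0 neutral."""
    words = text.lower().split()
    score = 0
    for i, word in enumerate(words):
        polarity = LEXICON.get(word, 0)
        # Negation rule: "not good" should count as negative, not positive.
        if polarity and i > 0 and words[i - 1] == "not":
            polarity = -polarity
        score += polarity
    return score
```

For example, `rule_based_sentiment("good")` returns 1 while `rule_based_sentiment("not good")` returns -1, which is exactly the distinction the rule exists to capture.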

In contrast, deep learning systems use neural networks to build representations of text that remain robust across the many ways the same idea can be expressed.

Chavez explained,

"There are so many ways of saying the same thing. That is one of the strengths of deep learning, where you can come up with representations where words and phrases are similar. Some of the older systems that are ruled based are susceptible to not seeing the problem when you make a minor modification to what someone is saying. Deep learning has a robust understanding of language that is suitable to minor different ways of saying the same thing."

There can also be cultural differences, and the same word can carry different sentiment in a slightly different context – thin is good for smartphones and bad for bed sheets. Chavez noted,

"The system needs to use what you know about them to color that sentiment. But you can only take advantage of this when you have access to that person’s history. There is a limit to what you can infer when the system does not don’t know that person’s history. Even if there is access to this history, it can be a challenging problem to solve.

Aggregating this data can make it easier to find more information about what a person means, even though the technology is far from perfect. When you are collecting hundreds of weak signals, they start to coalesce. Any way you can aggregate data, whether it is based on time, location, or a speaker’s history will provide an opportunity for a more reliable signal."

People are using sentiment analysis in many different ways. It is commonly used to understand the voice of the customer, where a company can analyze customer interactions and decide whether they are being handled well. The technology is widely used in social media monitoring for tracking the reception of new products and deciding whether the latest ad campaign is having an impact on Facebook or Twitter. It is also being used for stock analysis and targeted advertising.

Meanwhile, Kanjoya is launching a consumer-grade SaaS sentiment analysis service that hooks up to any data source, instantly (and continuously) analyzes it, and automatically provides actionable insights beyond measurement, such as promoter/detractor discovery, baselining and anomaly detection, competitive analysis, and a real-time net promoter score (NPS) analogue that does not require the NPS survey.
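One way a survey-free NPS analogue like the one described above could work is to bucket each customer message into promoter, passive, or detractor from its sentiment score and then apply the standard NPS arithmetic. This is a hedged sketch: the thresholds (±0.5) are assumptions for illustration, and Kanjoya's actual models are proprietary.

```python
# Sketch of an NPS-style score derived from per-message sentiment instead of
# a survey. Scores are assumed to lie in [-1, 1]; the +/-0.5 promoter and
# detractor cutoffs are illustrative assumptions, not Kanjoya's method.

def nps_analogue(sentiment_scores):
    """Return an NPS-style score in [-100, 100] from sentiment scores."""
    total = len(sentiment_scores)
    if total == 0:
        return 0.0
    promoters = sum(1 for s in sentiment_scores if s >= 0.5)
    detractors = sum(1 for s in sentiment_scores if s <= -0.5)
    # Classic NPS formula: % promoters minus % detractors.
    return 100.0 * (promoters - detractors) / total
```

For instance, `nps_analogue([0.9, 0.6, 0.1, -0.7])` gives 25.0: two promoters and one detractor out of four messages.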

Berjikly explained,

"We have proprietary data models built over the last decade to help us model how language and emotion are related, including how that changes depending on the context and background of the speaker. We account for all of those in our technology model, enabling us to decipher emotion at greater than human accuracy, without any training by the end user. There are no special technology requirements, we’ve built our products to work immediately, with intuitive user interfaces, and agnostic to the input data."

Berjikly said that Kanjoya’s sentiment analysis technology is predominantly used to help companies get closer to their customers’ wants, needs, and thoughts. “Humans are inherently emotional decision makers, and companies that acknowledge this qualitative side to the equation, and make it a priority to not just understand it, but act to address it, have a major, often unassailable competitive advantage in the customer experience.”

Sentiment analysis technology is likely to get even better over time, noted Chavez. The notion of positive and negative sentiment is a coarse lens to view this information. “Being able to go beyond positive and negative to determine the correct time to take action is going to be more interesting,” he said.

Hearing Emotions

Speech emotion analytics technology works by analyzing our vocally transmitted emotions in real time as we speak.

Dan Emodi, VP Marketing and Strategic Accounts at Beyond Verbal, said this kind of technology can decipher three basic things using a microphone and a network connection via a cloud-based application:

  • The speaker’s mood;
  • The speaker’s attitude towards the subject under discussion; and
  • The speaker’s emotional decision-making characteristics, more commonly known as emotional personality.

A consumer version of the Beyond Verbal technology is available for the iPhone, Android, and Web browsers.

"Understanding emotions is adding what is probably the most important non existing interface today,” said Emodi.

"Allowing machines to interact with us on an emotional level has almost unlimited commercial usage from Market Research, to call Centers, to self-improving applications, wellness, media, content and down to Siri that finally understands your emotions. In implementing emotions into daily use it seems we are truly only bound by our own imagination."

Another tool for hearing emotions is EmoVoice, which is freely available as open source software. It uses a supervised machine learning approach that collects a huge amount of emotional voice data for which classifiers are trained and tested. Typically, data is recorded in separate sessions during which users are asked to show certain emotions or interact with a system that has been manipulated to induce the desired behavior. Afterward, the collected data is manually labeled by human annotators with the assumed user emotions. Classifiers that are able to assign certain emotional categories to voice data are computed from this data.
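The workflow just described – collect labeled voice recordings, then train a classifier that maps acoustic features to emotion categories – can be sketched with a simple nearest-centroid classifier. Everything here is a toy assumption: real systems like EmoVoice extract many acoustic features and use more capable learners; the two-number vectors below stand in for, say, pitch and energy statistics.

```python
# Toy supervised-learning sketch of the labeled-data -> classifier workflow:
# average the feature vectors per emotion label (the "training"), then
# classify new vectors by the nearest centroid. Feature values are invented.
from collections import defaultdict
import math

def train_centroids(samples):
    """samples: list of (feature_vector, emotion_label) pairs."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for features, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(features)
        sums[label] = [a + b for a, b in zip(sums[label], features)]
        counts[label] += 1
    # Mean feature vector per emotion label.
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def classify(centroids, features):
    """Return the emotion whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], features))
```

Trained on a handful of labeled vectors, `classify` assigns a new utterance's features to the closest emotion category, mirroring (in miniature) the train-then-classify pipeline the paragraph describes.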

Prof. Dr. Elisabeth André at the University of Augsburg in Germany said techniques for detecting emotions may be employed to sort voice messages according to the emotions portrayed by the caller in call center applications. Among other things, a dialogue system may deploy knowledge on emotional user states to select appropriate conciliation strategies and to decide whether or not to transfer the caller to a human agent.

Methods for the recognition of emotions from speech have also been explored within the context of computer-enhanced learning, added André.

The motivation behind these approaches is the expectation that the learning process may be improved if a tutoring system adapts its pedagogical strategies to a student’s emotional state. Research has been conducted to explore the feasibility and potential of emotionally aware in-car systems. This work is motivated by empirical studies that provide evidence of the dependencies between a driver’s performance and his or her emotional state.

André’s team is also employing emotion recognition techniques for social training within the EU-funded TARDIS project. In this project, young people engage in role play with virtual characters that serve as job interviewers in order to train how to regulate their emotions. This helps them learn how to cope with emotional states that arise in socially challenging situations, such as nervousness or anxiety. The first version of TARDIS used a desktop-based interface, while more recent work focuses on the use of augmented reality, as enabled by Google Glass, to give users recommendations on their social and emotional behaviors on the fly.

In the German-Greek CARE project, emotion recognition techniques are used to adapt lifestyle recommendations to the emotional state of elderly people.

One of the biggest challenges in teaching computers to recognize a variety of emotions in speech lies in working in natural environments. André said that promising results have been obtained for a limited set of basic emotions expressed in a prototypical manner, such as anger or happiness, but more subtle emotional states can be difficult. Real-world environments also come with background noise, which lowers recognition rates.

Another challenge is that people show great individual variation in their emotional expression. André explained,

"Many people don’t show emotions in a clear manner. Also in some social situations people don’t reveal their true emotions. For example, when talking to somebody with a high status, you would avoid showing negative emotions, such as anger."

Results comparable to human skills have been obtained for tracking a limited set of emotions when the speech is recorded beforehand and analyzed later, said André. Her team is working on improving the technology to analyze emotions in the wild using non-intrusive microphones and for running the software on mobile devices.

Seeing Emotions

Companies like Affectiva and Emotient are also starting to develop technology for quantifying emotional expression through video analysis of facial expressions. For example, Affectiva’s Affdex technology analyzes facial expressions to discern consumers’ emotions such as whether a person is engaged, amused, surprised or confused.

Affdex employs advanced computer vision and machine-learning algorithms within a scalable cloud-based infrastructure to identify the emotions portrayed in a face video. Affectiva has also developed SDKs for building facial emotion analysis applications on both iPhone and Android devices.

Affdex uses standard webcams, like those embedded in laptops, tablets, and mobile phones, to capture facial videos of people as they view the desired content. Affectiva’s el Kaliouby said, “The prevalence of inexpensive webcams eliminates the need for specialized equipment. This makes Affdex ideally suited to capture face videos from anywhere in the world, in a wide variety of natural settings (e.g., living rooms, kitchens, offices).”

First, a face is identified in the video and the main feature points on the face are located, such as eyes and mouth. Once the region of interest has been isolated, e.g., the mouth region, Affdex analyzes each pixel in the region to describe the color, texture, edges and gradients of the face, which is then mapped, using machine learning, to a facial expression of emotion, such as a smile or smirk.
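The stages just described – detect the face, locate feature points, isolate a region of interest, extract per-pixel descriptors, and classify them with a learned model – can be laid out as a pipeline of composable steps. Every function below is a deliberately simplified stub: the flattening "descriptors" and the mean-intensity "classifier" are illustrative assumptions, not Affectiva's proprietary models.

```python
# Pipeline sketch of the face-analysis stages described above. Each stage is
# a stub with invented behavior; only the overall structure (detect ->
# landmarks -> descriptors -> classify) reflects the text.

def detect_face(frame):
    """Stage 1: find the face bounding box (stubbed to the whole frame)."""
    return frame

def locate_landmarks(face):
    """Stage 2: locate feature points such as eyes and mouth."""
    return {"mouth": face}  # stub: treat the whole face as the mouth region

def extract_descriptors(region):
    """Stage 3: per-pixel color/texture/edge descriptors (stub: flatten)."""
    return [float(p) for row in region for p in row]

def classify_expression(descriptors):
    """Stage 4: map descriptors to an expression label via a learned model.
    Stubbed with a mean-intensity threshold, purely for illustration."""
    mean = sum(descriptors) / len(descriptors)
    return "smile" if mean > 0.5 else "neutral"

def face_pipeline(frame):
    """Run all four stages on one video frame."""
    face = detect_face(frame)
    mouth = locate_landmarks(face)["mouth"]
    return classify_expression(extract_descriptors(mouth))
```

The value of structuring the system this way is that each stage can be swapped out independently – a better face detector or a stronger classifier slots in without touching the rest of the pipeline.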

Affdex Dashboard – Valence Trace

Once classification is complete the emotion data extracted from a video is ready for summarization and aggregation, and is presented via the Affdex online dashboard. Expression information is also summarized for addition to a normative database.

Affectiva has amassed about two million facial videos from over 70 countries, which has allowed the company to build a global database of emotional response that can be sliced by geographic region and demographic segment, as well as by industry and product category. This allows companies to perform A/B tests on their content and get a better sense of where an ad falls with respect to other content in their vertical or market.
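Benchmarking an ad against a normative database of this kind boils down to a percentile calculation: given the ad's aggregate emotion metric and the metrics of previously tested ads in the same vertical, report how much of the norm it meets or beats. The function below is a sketch under that assumption; all numbers are invented for illustration.

```python
# Sketch of percentile benchmarking against a normative database: what
# fraction of previously measured ads does this ad's metric meet or beat?
# The metric values are illustrative, not real normative data.

def percentile_rank(value, norm_values):
    """Percentage of the normative sample that `value` meets or beats."""
    if not norm_values:
        return 0.0
    at_or_below = sum(1 for v in norm_values if v <= value)
    return 100.0 * at_or_below / len(norm_values)
```

For example, an ad scoring 0.7 against a normative sample of [0.2, 0.4, 0.6, 0.8] lands at the 75th percentile of its vertical.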

el Kaliouby said,

"Today our technology can understand that a smile can have many different meanings – it could be a genuine smile, a smile of amusement, a smirk, a sarcastic smile, or a polite smile. This is where Affdex is at the moment – we’re training the machine that emotions come in many different nuances / flavors. Where we would like to take this in the future is for an emotion-sensing computer to pick on more subtle cues like maybe a subtle lip purse or eye twitch – it will incorporate head gestures and shoulder shrug and physiological signals. It will know if a person is feeling nostalgic, or inspired."

Emotient’s software is designed to work as a web-based service with any video camera or camera-enabled device. This could be a webcam, a camera embedded in a smart screen or digital sign, a tablet, or a smartphone. The software measures emotional responses via facial expression analysis. Emotient’s approach combines proprietary machine learning algorithms, a self-optimizing data collection engine, and state-of-the-art facial behavior analysis to detect seven primary emotions – joy, surprise, sadness, anger, disgust, contempt, and fear – as well as more advanced emotional states, including confusion and frustration. The system detects all faces within the field of view of each frame and analyzes each facial expression.

Emotient’s co-founders have spent the past two decades developing an automated emotion measurement technology based on facial expression analysis. The team has published hundreds of papers on novel uses for the technology, including autism intervention games that help subjects mimic facial expressions and identify facial expressions in others, measuring the difference between real and fake pain, and analyzing student engagement in online education settings.

As a business, Emotient has chosen to focus its early commercial efforts on advertising, market research, and retail, and is delivering emotion analytics to customers as aggregate, anonymous data segmented by demographic.

Emotient’s Denman said early adopters of the software are using it to automate focus group testing and to conduct market research for product and user experience assessment.

"In the past year we have been working with major retailers, brands and retail technology providers who are using Emotient to compile analytics on aggregate customer sentiment at point of sale, and in response to new advertising or promotions in-store or online.

Customer service patterns and trends can be identified, both for training purposes and troubleshooting in areas of the store where assistance is needed, or as a measure at point of sale or point of entry to determine customer satisfaction levels. The resulting analytics can be used in benchmarking the efficacy of a specific display, shelf promotion, advertisement, or overall customer experience. We believe the real value of the emotion analytics we collect and deliver to retailers and brands is in aggregate information segmented by target demographic, and less so by individuals."

Emotient is working with iMotions, whose Attention Tool platform combines other biometrics – including eye tracking, heart rate measurement, and galvanic skin response (GSR) – for improved academic, market, and usability research. Denman said, “These other biometric signals can be valid and helpful, but facial expression analysis provides unique context that isn’t possible to capture otherwise.”

New Mirrors for Emotional Reflection

New mirrors for looking inwards could also help to overcome the blinders to recognizing our inner state. Beyond Verbal’s Emodi said,

"Understanding ourselves is something we are much less capable of doing. Many of us have limited capabilities at understanding how we come across and what we transmit to the other side. Emotions Analytics hold great potential in helping people get in tune with their own inner self – from tracking our happiness and emotional well-being to practicing our Valentine pitch, working on our assertiveness or leadership capabilities, and learning to be a more effective sales person."

Western culture makes it easy to push down and hide our emotions from each other, not to mention from ourselves. Paul Ekman, Professor Emeritus of Psychology at UCSF, found that many people often mute the full expression of emotions, which then flash briefly or manifest less intensely as what he calls micro-expressions and subtle expressions.

Personal experiments with older versions of Ekman’s training tools for identifying facial expressions made it easier to not only identify these emotions in other people, but to notice them with greater clarity in myself. Using these kinds of tools for emotional analysis promises to provide us with benchmarks for easily tracking our emotional state over time, and perhaps identifying behaviors and events that impact us in beneficial or adverse ways.

But this is not always an easy inquiry. As AlchemyAPI’s Chavez noted,

"People have used this to look at emails to analyze their own emails for sentiment. What is funny is that people are more willing to shine the light on others and groups of people for marketing, but not as often do they put it on themselves to see if their emails have negative tones or their feet are going in the wrong direction."

Emotient Gains Exclusive Rights to New, Expansive Patent Issued for Automated Facial Action Coding System

September 3, 2014 – San Diego, CA

Emotient, the leading provider of facial expression measurement data and analysis, today announced exclusive rights to a newly issued patent (US 8,798,374 B2) entitled, “Automated Facial Action Coding System.” This expansive patent protects Emotient’s core technology, from face detection to measurement of primary emotions, to detection of facial muscle movements.

“We believe the automated facial action coding patent is very expansive and protects much of our core system,” said Ken Denman, President and CEO, Emotient. “We remain committed to growing our intellectual property portfolio. Over the last 18 months, we have filed 16 patent applications to protect additional aspects of our core technology, as well as innovative vertical applications for the retail, health-care, education, and entertainment industries.”

The inventors on the automated facial action coding patent include Emotient co-founders Dr. Marian Bartlett, Dr. Gwen Littlewort, Dr. Javier Movellan, and Dr. Ian Fasel, and their colleague Mark Frank of the State University of New York. The patent was issued to the Regents of the University of California and The Research Foundation of State University of New York on August 5, 2014. Emotient holds exclusive rights to this and other work that the team developed while at UC San Diego, prior to leaving and co-founding Emotient in 2012. The patent covers the system and method to detect faces and facial features; machine learning techniques for feature selection trained on spontaneous expressions; deep neural network applications to machine learning-based classifiers trained on spontaneous expressions; and determination of the presence of one or more elementary facial muscle movements, or Action Units (AUs).

Emotient is focused on delivering its video-based expression measurement and sentiment analysis software to Global 2000 customers in the market research, retail, education and healthcare verticals. Emotient’s software enables businesses to deliver better products, enhance the user experience, and improve patient care. Emotient provides access to critical emotion insights, processing anonymous facial expressions of individuals and groups; the software does not store video or images. Emotient detects and measures seven facial expressions of primary emotion (joy, surprise, sadness, anger, fear, disgust and contempt); overall sentiments (positive, negative, and neutral), advanced emotions (frustration and confusion) and 19 Action Units (AUs). The Emotient system sets the industry standard for accuracy, with the precise ability to detect single-frame microexpressions of emotions.

About Emotient – Automated Facial Expression Analysis

Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Global 2000 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information on Emotient, please visit

Media Contact
Vikki Herrera | Emotient | | 858.314.3385

What does the Future of Retail Look Like? Four Young Companies Provide a Glimpse

Forbes

By J.J. Colao

There was a lot to digest at Jason Calacanis’ and Pivotal Labs‘ Launch Beacon conference on Monday, an event devoted to exploring trends around e-commerce, retail, payments and location-based technology.

To this audience member, the most interesting bits came from the morning demos of four companies working on technology with applications for retail and e-commerce.

If these startups have their way, we’ll soon live in a world of interactive in-store displays, ubiquitous mood tracking and effortless shopping on the fly.

Oh, and your smartphone might replace your waiter.


There aren’t enough interactive screens in your life, so the people of PERCH are here to help. The company sells a projector that produces a customized digital display for shoppers to play with as they browse products in store. In addition to conventional touch-screen interactions, like swiping and scrolling, the technology senses when customers touch or pick up objects placed on the surface.

In the video below, for example, shoppers looking for nail polish get a delightful ping when they touch each bottle, along with information about the shade.

Founded in 2012 by CEO Jared Schiffman, a veteran of MIT’s Media Lab, the company counts Kate Spade, Kiehl’s, Cole Haan and Quirky as customers, leasing the tech for $500 per month. (It previously sold for $7,500.) PERCH can update thousands of units across different stores simultaneously, and brands can group multiple units together for more complex presentations.

With $60K per month in sales, the New York-based company is currently raising a $1.2 million seed round.

Emotient

We humans aren’t all that reliable when it comes to customer feedback. Either innocently or intentionally, we often mischaracterize our perceptions of brands, products and advertisements.

Emotient, based in San Diego, uses facial expression recognition software to cut through the nonsense and figure out how we really feel. The company can train cameras on focus groups watching a Super Bowl ad for the first time, or at the entrance of big-box stores to gauge customers’ moods as they go in and out. Browsing the shampoo aisle? Emotient can track your reactions to different packaging and brands to see which elicit the most positive responses.

Depending on the assignment, the company measures levels of emotions like joy, anger, sadness, surprise, fear, disgust and contempt. Founded by three Ph.D.s working at UC San Diego, it’s run by CEO Ken Denman, who previously ran Openwave, a telecom company.

In a study of fans watching the Super Bowl in February, the company found that the Pistachios ad featuring Stephen Colbert prompted the most positive reactions by a wide margin. Viewers responded especially well when Colbert reappeared on-screen 30 seconds later, as shown in the graph below.

[Graph: viewer emotion responses over time during the Pistachios Super Bowl ad]

The company raised $6 million in Series B funding from Intel Capital in March.

Slyce

Cofounder Cameron Chell described Slyce onstage as “Shazam for everything.” The Toronto-based company is trying to monetize those moments out in the wild, when you spot an enviable piece of clothing worn by another. Snap a picture with the technology, soon to be integrated into a public mobile app, and up pops an array of similar items available for purchase.

Chell demoed the product on stage with Calacanis’ shoes and the app conjured up a couple dozen comparable items available for immediate purchase. Amazon Flow and Asap54 offer competing technologies.

Downtown

Downtown CEO Phil Buckendorf, a 23-year-old native of Germany, played professional golf before trying his hand at a startup. His company works with restaurants in Palo Alto, Calif., to “distribute the point of sale.”

Translation: Downtown wants to enable customers to order food and pay from wherever they are in a restaurant. Instead of waiting for a server, users can open their phones and find a menu synced to an Estimote beacon, a small sensor that communicates with nearby smartphones. Because each beacon is assigned to a table, restaurant owners can track orders and deliver customers’ food accordingly.
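Because each beacon is tied to a single table, the routing logic is essentially a lookup. The sketch below is purely illustrative, assuming a simple mapping from beacon identifiers to table numbers; all names are hypothetical and do not reflect Downtown's actual API.

```python
# Hypothetical sketch of beacon-to-table order routing, as described
# above. Beacon IDs and the order format are invented for illustration.

# Each Estimote beacon is assigned to one table, so an order tagged
# with a beacon ID can be delivered to the right seat.
BEACON_TO_TABLE = {
    "beacon-a1f3": 1,
    "beacon-b2c4": 2,
    "beacon-c9d8": 3,
}

def route_order(beacon_id, items):
    """Attach the ordering customer's table number to their order."""
    table = BEACON_TO_TABLE.get(beacon_id)
    if table is None:
        raise ValueError(f"Unknown beacon: {beacon_id}")
    return {"table": table, "items": items}

order = route_order("beacon-b2c4", ["club sandwich", "iced tea"])
print(order)  # {'table': 2, 'items': ['club sandwich', 'iced tea']}
```

The same lookup idea transfers to retail: a beacon near a display could tag a purchase with its in-store location instead of a table.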

“We want to be the fast line when you consume inside the restaurant,” Buckendorf says. The technology can easily apply to retail purchases as well, allowing shoppers to buy items on the spot instead of waiting in line to check out.

The company is going for scale before charging restaurant owners, but Buckendorf plans to take a 3.5–5.5% cut of each transaction.

Stephen Ritter Joins Emotient as Senior Vice President of Product Development

June 19, 2014 – San Diego, CA

Emotient, the leading provider of facial expression analysis software, today announced that Stephen Ritter joined the company as Senior Vice President of Product Development. Stephen brings extensive engineering management experience from successful start-ups and larger enterprises, including Cypher Genomics, Websense and McAfee, an Intel company.

“Steve is a proven technology leader who has demonstrated success in transforming cutting edge technology into highly successful commercial products,” said Ken Denman, President and CEO, Emotient. “We will rely on his expertise as we continue to drive our innovative, emotion-aware software into the healthcare and retail markets, through a cloud-based platform.”

“Emotient is at the forefront of a massive trend in emotion-aware computing,” said Ritter. “Emotient’s founding team is spearheading the development of proprietary technology that sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. I look forward to leading the commercial delivery of new emotion analytics products for retail and healthcare based on our state-of-the-art technology.”

Most recently, Stephen served as CTO at Cypher Genomics, a leading genome informatics company. Previously, he was Vice President, Engineering for Websense, where he led a global team in the development of the market-leading web security product, TRITON Web Security Gateway. Prior to Websense, he served as senior director of engineering at McAfee, now an Intel company, where he developed one of the most advanced and scalable security management software systems in the industry.

Ritter began his career as an accomplished software developer/architect and currently holds six patents in the area of information security. He holds a B.S. in Cognitive Science from UC San Diego.

About Emotient – Emotion Recognition Technology
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available to Global 2000 companies within the retail, healthcare and consumer packaged goods industries.

Emotient to Present Emotion Aware Google Glassware at Vision Sciences Society (VSS) 2014

Emotient, UC San Diego and University of Victoria Also Release Web-based Autism Intervention App Called Emotion Mirror

May 12, 2014 – San Diego, CA
Emotient, the leading provider of facial expression recognition data and analysis, today announced it will demonstrate its emotion-aware Google Glass application at the 12th Annual VSS Dinner and Demo Night on May 19, 2014, from 6 to 10 p.m., at Vision Sciences Society 2014 at the TradeWinds Island Resorts, St. Pete Beach, Florida.

The Google Glass application leverages Emotient’s core technology to process facial expressions and provides an aggregate emotional read-out, measuring overall sentiment (positive, negative or neutral); primary emotions (joy, surprise, sadness, fear, disgust, contempt and anger); and advanced emotions (frustration and confusion). The Emotient software detects and processes anonymous facial expressions of individuals and groups in the Glass wearer's field of view. Detected emotions are displayed as colored boxes around each face, with the color indicating the automatically recognized emotion.
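The aggregate read-out described above (an overall sentiment plus scores for primary and advanced emotions) can be pictured as a small data structure. The sketch below is a hypothetical illustration only: the field names, the per-frame averaging and the sentiment rule are assumptions, not Emotient's actual schema or algorithm.

```python
# Illustrative sketch of an aggregate emotional read-out of the kind
# the press release describes. All details are assumptions.

PRIMARY = ["joy", "surprise", "sadness", "fear", "disgust", "contempt", "anger"]
ADVANCED = ["frustration", "confusion"]

def aggregate_readout(frames):
    """Average per-frame emotion scores and derive an overall sentiment.

    `frames` is a list of dicts mapping emotion name -> score in [0, 1];
    missing emotions count as 0.
    """
    if not frames:
        raise ValueError("need at least one frame")
    n = len(frames)
    avg = {e: sum(f.get(e, 0.0) for f in frames) / n for e in PRIMARY + ADVANCED}
    # Assumed sentiment rule: weigh broadly positive vs. negative emotions.
    positive = avg["joy"] + avg["surprise"]
    negative = sum(avg[e] for e in ["sadness", "fear", "disgust", "contempt", "anger"])
    if positive > negative:
        sentiment = "positive"
    elif negative > positive:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return {"sentiment": sentiment, "emotions": avg}

frames = [{"joy": 0.8, "anger": 0.1}, {"joy": 0.6, "surprise": 0.2}]
print(aggregate_readout(frames)["sentiment"])  # positive
```

A real system would of course weight and calibrate these signals far more carefully; the point is only the shape of the output: one sentiment label backed by per-emotion scores.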

In addition, two of Emotient’s researchers, also affiliated with UC San Diego, together with researchers from the University of Victoria, developed a new version of the Emotion Mirror application, an autism intervention tool that is designed to aid autistic individuals in identifying and mimicking facial expressions in a fun and entertaining way. Emotion Mirror will be used in clinical studies in collaboration with the University of Victoria.

“We are looking forward to sharing our latest emotion-aware software applications at this year’s VSS, including our Google Glassware,” said Dr. Joshua Susskind, Emotient’s Co-Founder, Engineer and Research Scientist, who co-developed the Glass application with Emotient software engineer, Mark Wazny. “We believe there is an opportunity to apply Emotient’s facial expression recognition software to enable autism intervention programs on mobile devices, making it easy to deploy and conduct studies in clinical populations.”

Collaborator Jim Tanaka, University of Victoria Professor of Psychology, added, “The growing availability of wearables leads us to believe our technology can have a huge positive impact on the autism community and more broadly, the healthcare industry.”

The Emotion Mirror application was initially developed in 2010-11 in a collaboration between UC San Diego and the University of Victoria as an intervention game for improving expression production and perception in children with Autism Spectrum Disorder (ASD). The new Emotion Mirror application was developed as an HTML5 web application powered by Emotient’s Cloud API. In addition to Emotient’s Dr. Joshua Susskind, Dr. Marian Bartlett and Morgan Neiman, Emotion Mirror’s collaborators include University of Victoria Professor of Psychology Jim Tanaka and Postdoctoral Researcher Buyun Xu.

Emotient’s Google Glassware is available as a private beta for select partners and customers in retail and healthcare.

About Emotient – Emotion Recognition Technology
Emotient, Inc., is the leading authority in facial expression analysis. Emotient’s software translates facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement, research, and analysis. Emotient’s facial expression technology is currently available as an API for Fortune 500 companies within consumer packaged goods, retail, healthcare, education and other industries.

Emotient was founded by a team of six Ph.D.s from the University of California, San Diego, who are the foremost experts in applying machine learning, computer vision and cognitive science to facial behavioral analysis. Its proprietary technology sets the industry standard for accuracy and real-time delivery of facial expression data and analysis. For more information, please visit the Emotient website.