The scientific study of emotions was pioneered by Charles Darwin in the 19th century. Later, psychologists such as Paul Ekman focused research on the role of facial expressions in displaying emotions. Their work principally used images of faces, and led to the categorisation of emotional facial expressions into discrete archetypes (happiness, sadness, fear, anger, surprise and disgust).
It is important to note that facial expressions are generated by the contraction of muscles, which in turn may (or may not) deform the overlying skin. Changes in the position of facial features therefore occur only after muscle activation. Researchers used a technique called electromyography (EMG) to measure the electrical activation of muscles beneath the skin, and to determine whether early computer vision systems could detect subtle expressions (Cohn & Schmidt, 2004).
Early work on facial feature tracking and coding used facial EMG to detect the onset of an expression. Adapted from Schmidt, Cohn and Tian (2003).
Electromyography uses electrodes that act like tiny microphones, "listening" for muscle activation 2,000 times per second (unlike cameras, which sample at 30–60 frames per second). EMG is highly sensitive and can even pick up micro-expressions that are not visible to the eye. Unlike cameras, which rely on indirect measurement of the skin overlying the muscle, EMG records electrical activity directly and can also detect changes in baseline muscle tone.
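The sampling-rate difference matters for brief events. As a rough sketch (the 40 ms micro-expression duration is an assumed figure for illustration, not a value from the text), we can count how many samples each modality captures during such an event:

```python
# Illustrative sketch: samples captured during a brief micro-expression
# at EMG vs camera sampling rates.

EMG_RATE_HZ = 2000      # EMG sampling rate quoted above
CAMERA_RATE_HZ = 30     # lower end of typical camera frame rates

def samples_during(event_duration_s: float, rate_hz: float) -> int:
    """Number of samples a sensor running at rate_hz collects in event_duration_s."""
    return int(event_duration_s * rate_hz)

# Assume a micro-expression lasting ~40 ms.
micro_expression_s = 0.04
print(samples_during(micro_expression_s, EMG_RATE_HZ))     # 80 EMG samples
print(samples_during(micro_expression_s, CAMERA_RATE_HZ))  # 1 camera frame
```

With around 80 samples versus a single frame, EMG has far more data from which to resolve the onset and shape of a fleeting expression.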
Measuring facial expressions and emotional responses using EMG is a fundamental research method that, until recently, was confined to the laboratory. With a combination of multi-sensor arrays, active noise cancellation and advanced algorithms, emteq labs is liberating this powerful tool and making it available to researchers, content creators and developers. By combining this with non-invasive heart rate and heart rate variability sensing, the emteqPRO offers a “lab-in-a-box” solution for conducting remote studies. This offers significant potential for researchers in media, marketing, gaming and psychology.
Virtual reality provides a powerful paradigm for measuring behaviour and simulating controlled, realistic environments. However, the most salient facial information is covered by the headset, so a different approach is needed.
Eye-tracking heatmap of face-to-face interaction. In virtual reality, this important area is largely covered by the headset.
According to the basic emotions model, the face exhibits the feelings of happiness, sadness, fear, anger, surprise and disgust. Positive valence is immediately recognisable in a smile, whereas negative valence is indicated by a frown of anger or disgust. A series of studies in the 1980s and 1990s demonstrated that activation of the facial muscle that produces a frown (corrugator supercilii) correlates with a decrease in valence. Conversely, activation of the facial muscle that produces a smile (zygomaticus major) correlates with an increase in valence.
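The opposing roles of the two muscles suggest a simple way to think about a valence signal. The toy index below is purely illustrative (the normalisation and the idea of combining the two channels into one score are assumptions, not a published algorithm): zygomaticus activity pushes the score up, corrugator activity pushes it down.

```python
# Hypothetical sketch: a toy valence index from facial EMG amplitudes.
# Sign convention follows the text: zygomaticus (smile) activity raises
# valence, corrugator (frown) activity lowers it. The formula itself is
# an illustrative assumption.

def valence_index(zygomaticus_uV: float, corrugator_uV: float) -> float:
    """Return a score in [-1, 1]: positive = smile dominates, negative = frown."""
    total = zygomaticus_uV + corrugator_uV
    if total == 0:
        return 0.0  # no activity in either muscle: neutral
    return (zygomaticus_uV - corrugator_uV) / total

print(valence_index(8.0, 2.0))   # smile-dominated reading
print(valence_index(1.0, 9.0))   # frown-dominated reading
```

In practice a real system would also need per-subject baselining and noise rejection, but the sketch captures the core correlation the studies describe.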
Illustration of the facial muscles that indicate positive (zygomaticus major) and negative (corrugator supercilii) valence.