Science. The thinking behind our products.

Our Technology
We use a range of sensing techniques, including electrical facial muscle activity, heart rate features, head movement and eye tracking, at frequencies of up to 1000 times per second to identify emotional and stress responses to stimuli. Our cloud AI engine translates that information and provides insights into emotional state and response on demand.
Our approach
Perceptions of positivity and negativity associated with a stimulus are fundamental mechanisms that underpin much of our emotional experience. Depending on context, positivity is indexed via relative activation of the zygomaticus major muscle (associated with smiling), whereas negativity is indexed via activation of the corrugator supercilii (associated with frowning). The relative activation of each muscle group is measured to understand the unconscious generation of emotional experience as an adaptive, nuanced and flexible system.
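The idea of relative activation can be sketched in a few lines. This is an illustrative example only, not emteq labs' actual algorithm: it turns two hypothetical EMG channel amplitudes into a single signed valence score.

```python
# Illustrative sketch (not emteq labs' actual algorithm): a simple valence
# index from two hypothetical facial-EMG channel amplitudes. Positive values
# lean toward smiling (zygomaticus major), negative toward frowning
# (corrugator supercilii).

def valence_index(zygomaticus_rms: float, corrugator_rms: float) -> float:
    """Relative activation in [-1, 1]; the sign indicates valence direction."""
    total = zygomaticus_rms + corrugator_rms
    if total == 0:
        return 0.0  # no measurable activation in either muscle
    return (zygomaticus_rms - corrugator_rms) / total

print(valence_index(0.8, 0.2))  # stronger smile activity -> positive score
```

Normalising by total activation keeps the score comparable across individuals with different baseline muscle amplitudes.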
Virtual Reality presents a unique way to understand and measure human reactions by monitoring behaviours and responses to known stimuli. As such, it offers a more cost-effective way to generate ecologically valid environments for research.

Perceptions of positivity and negativity
Our approach to quantifying emotional responses is to use continuous interpretive systems such as the Dimensional Model, which considers both valence (positivity and negativity) and arousal (activation of the sympathetic nervous system). emteq labs' object tagging system captures context (stimulus) combined with individualised response measures, allowing application of both Evaluative Space and Constructed Emotion models.
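A hedged sketch of how the Dimensional Model described above differs from discrete categories: a response is placed on two continuous axes, valence and arousal, and the quadrant (rather than a fixed emotion label) describes the region of emotional space. The thresholds here are illustrative.

```python
# Illustrative sketch of the Dimensional (circumplex) Model: a response is
# located on continuous valence and arousal axes rather than assigned a
# discrete emotion category. Zero thresholds are an assumption for clarity.

def quadrant(valence: float, arousal: float) -> str:
    """Label the valence/arousal quadrant of a response (both in [-1, 1])."""
    if arousal >= 0:
        return "high-arousal positive" if valence >= 0 else "high-arousal negative"
    return "low-arousal positive" if valence >= 0 else "low-arousal negative"

print(quadrant(0.6, 0.7))   # pleasant and activated (e.g. excitement)
print(quadrant(-0.5, 0.8))  # unpleasant and activated (e.g. fear)
```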

The Science behind our technology
The scientific study of emotions was pioneered by Charles Darwin in the 19th century. Later, psychologists such as Paul Ekman focussed research on the role of facial expressions in displaying emotions. Their work principally used images of faces and led to the categorisation of emotional facial expressions into discrete archetypes (happiness, sadness, fear, anger, surprise and disgust).
It is important to note that facial expressions are generated by contractions of muscles, which in turn may (or may not) deform the overlying skin; changes in the position of facial features therefore occur only after muscle activation. Researchers used a technique called electromyography (EMG) to measure the electrical activation of muscles beneath the skin and to determine whether early computer vision systems could detect subtle expressions (Cohn & Schmidt, 2004).

Early work on facial feature tracking and coding involved using facial EMG to detect the onset of an expression. Adapted from Schmidt, Cohn and Tian 2003
Electromyography uses electrodes that act like little microphones, “listening” for muscle activation 2000 times per second (unlike cameras, which sample at 30–60 times per second). EMG is highly sensitive and can even pick up micro-expressions that are not visible to the eye. Unlike cameras, which rely on indirect measurement of the skin overlying the muscle, EMG directly records electrical activity and can also detect changes in baseline muscle tone.
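A standard first step for turning a raw EMG trace into an activation level is a moving root-mean-square (RMS) envelope. The sketch below is illustrative only (real pipelines also band-pass filter and remove mains interference); the 2000 Hz rate and synthetic signal are assumptions for the example.

```python
import math

# Illustrative only: a non-overlapping moving-RMS envelope, a common way to
# turn a raw EMG trace (assumed here to be sampled at 2000 Hz) into an
# activation level. Real pipelines also filter noise and mains interference.

def rms_envelope(samples: list[float], window: int) -> list[float]:
    """Root-mean-square of each consecutive non-overlapping window."""
    out = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

# 1 s of a synthetic 2000 Hz signal: quiet first half, "burst" second half.
signal = [0.01] * 1000 + [0.5] * 1000
print(rms_envelope(signal, 500))  # the envelope rises when the muscle fires
```

At 2000 Hz, a 500-sample window gives an activation estimate every 250 ms; shorter windows trade smoothness for responsiveness to micro-expressions.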
Measuring facial expressions and emotional responses using EMG is a fundamental research method that, until recently, was confined to the laboratory. With a combination of multi-sensor arrays, active noise cancellation and advanced algorithms, emteq labs is liberating this powerful tool and making it available to researchers, content creators and developers. By combining this with non-invasive heart rate and heart rate variability sensing, the emteqPRO offers a “lab-in-a-box” solution for conducting remote studies. This offers significant potential for researchers in media, marketing, gaming and psychology.
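Heart rate variability can be summarised by several standard metrics; one of the most common is RMSSD, the root mean square of successive beat-to-beat interval differences. The sketch below is a generic illustration of that metric, not emteqPRO's specific processing, and the interval values are made up.

```python
import math

# Sketch of RMSSD, a standard heart rate variability (HRV) metric computed
# from successive beat-to-beat (RR) intervals in milliseconds. The interval
# values below are invented for illustration.

def rmssd(rr_ms: list[float]) -> float:
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([800, 810, 790, 805, 795]), 2))
```

Higher RMSSD generally indicates greater parasympathetic influence; a drop under a stressful stimulus is one of the response signatures such sensing can capture.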
Virtual reality provides a powerful paradigm for measuring behaviour and simulating controlled realistic environments. However, the most salient facial information is covered by the headset, hence a different approach is needed.

Eye tracking heatmap of face-to-face interaction. In virtual reality, this important area is largely hidden by the headset
Research using facial EMG spans neuroscience, psychology, human–computer interaction, gaming, content analytics and training. You can read a selection of research articles using facial EMG here. Researchers at Emteq Labs and our collaborators have validated the use of our technology to assess facial EMG in a range of contexts. To read more, you can find articles here.

Illustration of the facial muscles that indicate positive (zygomaticus major) and negative (corrugator supercilii) valence.

EmteqPRO HMD insert

Our patents
The company has a growing patent portfolio covering core requirements for behavioural measurement and analysis for XR which includes the following:
Patent No: GB 2518113, US10398373 (Granted: UK: 2018, US: 2019).
Wearable apparatus for providing muscular biofeedback. The apparatus comprises biosensors able to detect the activity of a set of facial muscles, and a pattern in the sensor data that is characteristic of a facial muscle imbalance can be detected. When such a pattern is detected, feedback may be provided to the wearer, informing them that their facial expression is imbalanced and allowing them to attempt to correct it.
Patent No: GB 2552124, US10398373 (Granted: UK: 2018, US: 2019; published with WIPO in 2014).
Wearable apparatus for providing muscular biofeedback. The apparatus comprises sensors arranged to detect activity of the posterior auricular muscles (behind the ear); by identifying patterns in this activity, the apparatus can infer that the zygomaticus muscle (a cheek muscle used when smiling) is also active. Thus, the system provides an indirect and unobtrusive means of detecting zygomaticus activity.
Patent No: GB1703133 (Application filed, pending).
A wearable system for detecting facial muscle activity. The system comprises several optical flow sensors arranged to image areas of the skin, each associated with one or more facial muscles. A processor can thus be configured to determine facial muscle activity by analysing how the images vary over time.
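The principle in the patent above, inferring muscle activity from how images of a skin patch change over time, can be hinted at with a minimal motion proxy. Dense optical flow is considerably more involved; the sketch below uses mean absolute frame difference purely as an illustration, with tiny made-up grayscale frames.

```python
# Hedged sketch of the patent's idea: infer skin (and hence muscle) motion
# from change between successive images of a skin patch. Real optical flow
# estimates per-pixel displacement; mean absolute frame difference is used
# here only as a minimal stand-in for a motion signal.

def motion_score(prev: list[list[int]], curr: list[list[int]]) -> float:
    """Mean absolute pixel difference between two grayscale frames."""
    total = sum(abs(c - p)
                for prow, crow in zip(prev, curr)
                for p, c in zip(prow, crow))
    return total / (len(prev) * len(prev[0]))

still   = [[10, 10], [10, 10]]
shifted = [[10, 30], [10, 30]]
print(motion_score(still, still))    # zero -> skin static
print(motion_score(still, shifted))  # positive -> skin moving over a muscle
```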
The company has also filed several other applications to protect its IP, especially with regard to the proprietary sensors used by OCOSense, and is in the process of preparing additional applications.
Browse our whitepapers
Discover the science of facial electromyography to uncover hidden insights.
See All Whitepapers