FaceReader 7 is the premier professional software for automatic analysis of the basic facial expressions (happy, sad, scared, disgusted, surprised, angry, contempt, and neutral). With the new Deep Face Model, it can also analyze expressions under more challenging circumstances. FaceReader also provides gaze direction, head orientation, and personal characteristics, such as gender and age.
Moreover, detailed analysis of 20 commonly used facial action units is available. The software analyzes your data immediately, saving valuable time. It is available as desktop software and as an online application.
How does it work?
FaceReader automatically analyzes the six basic facial expressions, as well as neutral and contempt. It also calculates gaze direction, head orientation, and personal characteristics. The Project Analysis Module is ideal for advanced analysis and reporting: you quickly gain insight into the effects of different stimuli. In addition, analysis of Action Units is available.
Who uses it?
FaceReader is used worldwide at more than 500 universities, research institutes, and companies in many markets, such as consumer behavior research, usability studies, psychology, educational research, and market research.
Deep face model
With this new classification engine, FaceReader can analyze facial expressions under challenging circumstances, for example when part of the face is hidden. The Deep Face Model continues where the Active Appearance Model stops, giving more robust analysis than ever before!
FaceReader works in three steps:
- Face finding – the face is automatically found in a capture window.
- Face modeling – the Active Appearance Model is used to fit an artificial face model that describes the location of over 500 key points as well as the texture of the face. These outcomes are combined with the results of the Deep Face algorithm to achieve higher classification accuracy. When face modeling is not successful, for example when a hand covers the mouth but both eyes can still be found, the Deep Face algorithm, based on deep learning, takes over and still supplies most of the classifications available in FaceReader.
- Face classification – output is presented as seven basic expressions, and one neutral state, with frame-by-frame accuracy.
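The three steps above can be sketched as a simple pipeline. This is a hypothetical illustration only; the function names and placeholder logic are invented here, since FaceReader's internals and API are not public.

```python
# Hypothetical sketch of a find -> model -> classify pipeline.
# All names and the placeholder logic are illustrative, not FaceReader's API.

EXPRESSIONS = ["happy", "sad", "scared", "disgusted",
               "surprised", "angry", "contempt", "neutral"]

def find_face(frame):
    """Step 1: locate a face bounding box in the capture window."""
    # A real implementation would run a face detector here.
    return {"x": 0, "y": 0, "w": frame["width"], "h": frame["height"]}

def model_face(frame, box):
    """Step 2: fit an appearance model (key points + texture).

    If fitting fails (e.g. the mouth is occluded), a deep-learning
    classifier working on raw pixels would take over.
    """
    fitted = box["w"] > 0 and box["h"] > 0   # placeholder success test
    return {"fitted": fitted, "keypoints": [], "texture": None}

def classify_face(model):
    """Step 3: one intensity per expression, frame by frame."""
    # Placeholder: uniform scores; a real classifier scores each label.
    return {name: 1.0 / len(EXPRESSIONS) for name in EXPRESSIONS}

def analyze(frame):
    box = find_face(frame)
    model = model_face(frame, box)
    return classify_face(model)

scores = analyze({"width": 640, "height": 480})
```

The key design point is the fallback: classification does not require a fully fitted face model, so partial occlusion still yields usable output.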
FaceReader 7 offers you a complete solution! The Project Analysis Module can be used for advanced analysis and reporting. This module allows you to compare responses to different video or image stimuli in one view, offering faster insights into the effects of stimuli. You can calculate mean intensities of facial expressions for single participants, for a group of participants, or during certain episodes.
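The aggregation described here amounts to averaging per-frame intensities per participant, then across the group. A minimal sketch, with a made-up data layout and numbers (not FaceReader's export format):

```python
# Sketch of per-participant and group-level mean expression intensity.
# The data layout and values are invented for illustration.
from statistics import mean

# frames[participant] = per-frame "happy" intensities (0..1)
frames = {
    "p1": [0.2, 0.4, 0.6],
    "p2": [0.1, 0.3, 0.5],
}

# Mean intensity per participant, then across the group.
per_participant = {p: mean(vals) for p, vals in frames.items()}
group_mean = mean(per_participant.values())
```

Restricting each list to the frames of a given episode would give the per-episode means mentioned above.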
With FaceReader 7, you can now use images as stimuli as well! When you add the video or image stimuli of your choice, the Stimulus Presentation Tool automatically displays them to the test participant while FaceReader records the facial expressions. The stimulus trigger within FaceReader keeps everything in perfect sync!
Measure heart rate remotely
We developed and integrated a remote heart rate measurement into FaceReader as an add-on module, based on remote photoplethysmography (rPPG). This technique measures the small changes in skin color caused by changes in blood volume under the epidermis. The PPG data is used to determine the subject’s heart rate. This can be particularly useful as an additional indicator of arousal for subjects or situations where there is little variation in facial expressions.
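The core rPPG idea can be illustrated in a few lines: treat the frame-by-frame mean skin color (typically the green channel) as a time series, and find the dominant frequency in the plausible heart-rate band. This sketch uses a naive frequency scan on synthetic data; it is a simplified stand-in, not FaceReader's actual algorithm.

```python
# Illustrative rPPG sketch: recover the pulse frequency from a
# mean green-channel trace. Not FaceReader's actual implementation.
import math

def estimate_heart_rate(green_signal, fps):
    """Estimate heart rate in BPM from per-frame mean green values.

    Scans the plausible band (40-180 BPM) and returns the frequency
    whose sinusoid correlates most strongly with the mean-subtracted
    signal (a naive discrete Fourier scan).
    """
    n = len(green_signal)
    mu = sum(green_signal) / n
    x = [v - mu for v in green_signal]
    best_bpm, best_power = 0.0, -1.0
    bpm = 40.0
    while bpm <= 180.0:
        f = bpm / 60.0                               # Hz
        re = sum(x[i] * math.cos(2 * math.pi * f * i / fps) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * f * i / fps) for i in range(n))
        power = re * re + im * im
        if power > best_power:
            best_power, best_bpm = power, bpm
        bpm += 1.0
    return best_bpm

# Synthetic trace: a 72 BPM pulse riding on a constant skin tone,
# sampled at 30 fps for 10 seconds.
fps, true_bpm = 30, 72
signal = [0.5 + 0.01 * math.sin(2 * math.pi * (true_bpm / 60) * i / fps)
          for i in range(fps * 10)]
estimated = estimate_heart_rate(signal, fps)   # should land near 72
```

Real rPPG additionally needs skin-region tracking, motion compensation, and filtering; the frequency-domain peak picking above is the common final step.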
FaceReader contains a wide variety of visualization options to make the data easily accessible for users.
- Continuous Expression Intensities – FaceReader outputs the basic expressions as continuous intensity values between zero and one.
- Action Unit Detection – The basic emotions are only a fraction of the possible facial expressions. A widely used method for describing the activation of the individual facial muscles is the Facial Action Coding System (Ekman 2002). This is an add-on module.
- Circumplex Model of Affect – The circumplex model of affect describes the distribution of emotions in a 2D circular space, containing arousal and valence dimensions.
- Expression Summary – A summary of the expressions during a single analysis can be viewed in an easily understandable pie chart, showing overall responses. Different subparts of the analysis can be selected to view the summary of the expressions.
- Heart Rate – shown at the top of the screen, with a Heart Rate Line Chart at the bottom of the screen.
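The circumplex model in the list above maps expressions into a 2D valence–arousal space. A minimal sketch of that mapping, using assumed coordinates (the values below are illustrative, not FaceReader's calibrated placement):

```python
# Illustrative circumplex sketch: place each expression at an assumed
# (valence, arousal) coordinate in [-1, 1]^2, then take the
# intensity-weighted average. Coordinates are invented for illustration.

COORDS = {
    "happy":     ( 0.8,  0.4),
    "surprised": ( 0.2,  0.8),
    "angry":     (-0.6,  0.7),
    "scared":    (-0.7,  0.6),
    "disgusted": (-0.7,  0.2),
    "sad":       (-0.7, -0.4),
}

def circumplex_point(intensities):
    """Weighted mean of expression coordinates -> (valence, arousal)."""
    total = sum(intensities.get(k, 0.0) for k in COORDS)
    if total == 0:
        return (0.0, 0.0)   # a fully neutral face sits at the origin
    v = sum(COORDS[k][0] * intensities.get(k, 0.0) for k in COORDS) / total
    a = sum(COORDS[k][1] * intensities.get(k, 0.0) for k in COORDS) / total
    return (v, a)

# A mostly happy, slightly surprised face lands in the
# positive-valence, positive-arousal quadrant.
point = circumplex_point({"happy": 0.9, "surprised": 0.1})
```

Plotting such points frame by frame traces the participant's affective trajectory through the circumplex.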
FaceReader can now communicate with other Noldus products using N-Linx, the new standard protocol for integrating systems for behavioral research. With N-Linx you can, for example, easily connect FaceReader and The Observer XT for real-time control, synchronization, and data exchange, in combination with other applications such as eye trackers or DAQ systems.
FaceReader 7 is now available for use via site license. With this type of license, a hardware key is no longer needed, and you can log in anytime you want, anywhere you like.
FaceReader is the ideal and affordable tool for advertising and market research. How do people respond to a commercial? What about package design? Emotions are essential in predicting the potential success of new products and services, so measuring them is increasingly important in this type of research. Assess facial expressions with FaceReader and learn more about what your respondents like and dislike.