Exploring Biases in Facial Expression Analysis using Synthetic Faces

Dec 2, 2022



Automated facial expression recognition is useful for many applications, but models are often subject to racial biases. These biases can be hard to reveal due to the complexity and opacity of the deep networks needed for state-of-the-art performance, and hard to demonstrate because facial expressions cannot be fully matched across real people. In this paper we use artificially created faces in which facial expression can be carefully manipulated and matched across faces with different skin colors and different facial shapes. We show that several public facial expression models appear to exhibit a racial bias. In future work, we aim to use the artificial data to help understand the basis of these biases and remove them from facial expression models.
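The evaluation the abstract describes — the same expression rendered on synthetic faces that differ only in skin tone, scored by the same model — amounts to a paired comparison. The sketch below is purely illustrative: the score arrays are made-up stand-ins for a hypothetical model's outputs, not results from the paper.

```python
import numpy as np

# Hypothetical "happiness" confidences from one expression model, scoring
# the SAME six synthetic expression renders twice: once rendered with a
# lighter skin tone, once with a darker skin tone. All other face
# attributes (shape, pose, lighting) are held identical, as in the paper.
light_scores = np.array([0.91, 0.85, 0.88, 0.93, 0.80, 0.87])
dark_scores = np.array([0.78, 0.70, 0.74, 0.81, 0.66, 0.72])

# Because the expressions are matched render-by-render, a simple paired
# difference isolates the effect of skin tone on the model's score.
paired_gap = light_scores - dark_scores
mean_gap = paired_gap.mean()

print(f"mean score gap (light - dark): {mean_gap:.3f}")
```

A consistently nonzero paired gap would suggest the model's scores depend on skin tone rather than on the (identical) expression — the kind of bias signal the matched synthetic faces are designed to expose.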


NeurIPS 2022