Mitigating gender bias in speech emotion recognition

Presented by John Kane
Distinguished Scientist at Cogito

Dr. John Kane is a Distinguished Scientist in machine learning at Cogito, with nearly a decade of experience in speech science and technology. At Cogito he leads the research and development of machine learning algorithms for real-time processing of audio, speech, and other behavioral signals, which powers applications in healthcare and in the call centre. John is an active member of the speech research community, contributing as a reviewer for leading journals and conferences in the field and as a maintainer of open-source speech processing tools.

Presentation Description

Machine learning can unintentionally encode and amplify negative bias and stereotypes present in humans, whether conscious or unconscious. This has led to high-profile cases in which machine learning systems were found to exhibit bias with respect to gender, race, and ethnicity, among other demographic categories. Negative bias can be encoded in these algorithms through the representation of different population categories in the model training data, through bias arising from the manual human labeling of those data, and through the model types and optimization approaches used. In this talk I will discuss the problem of negative bias in machine learning in general, and specifically the case of gender bias in the applied area of emotion recognition from speech. I will demonstrate that lower recall for emotional activation in female speech samples can be attenuated by applying an adversarial de-biasing training technique.
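
The talk itself is a video and does not include code, but adversarial de-biasing of this kind is often implemented with a gradient-reversal layer, in the style of domain-adversarial training: an adversary tries to predict the protected attribute (here, speaker gender) from the model's internal features, and the reversed gradient pushes the shared encoder toward features that carry no gender information. Below is a minimal PyTorch sketch of that idea; the architecture, feature dimensions, and names such as DebiasedEmotionModel are illustrative assumptions, not the speaker's actual implementation.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; flips the gradient sign on backward."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class DebiasedEmotionModel(nn.Module):
        """Hypothetical model: shared encoder, emotion head, gender adversary."""
        def __init__(self, n_feats=40, hidden=128, lambd=1.0):
            super().__init__()
            self.lambd = lambd
            self.encoder = nn.Sequential(nn.Linear(n_feats, hidden), nn.ReLU())
            self.emotion_head = nn.Linear(hidden, 2)  # emotional activation: low/high
            self.gender_head = nn.Linear(hidden, 2)   # adversary on the protected attribute

        def forward(self, x):
            h = self.encoder(x)
            emotion_logits = self.emotion_head(h)
            # Gradient reversal: the adversary learns normally, but the encoder
            # receives a negated gradient, discouraging gender-predictive features.
            gender_logits = self.gender_head(GradReverse.apply(h, self.lambd))
            return emotion_logits, gender_logits

    # One illustrative training step on random placeholder data.
    model = DebiasedEmotionModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()

    feats = torch.randn(8, 40)             # stand-in for acoustic features
    emotion_y = torch.randint(0, 2, (8,))  # activation labels
    gender_y = torch.randint(0, 2, (8,))   # protected attribute labels

    emo_logits, gen_logits = model(feats)
    loss = ce(emo_logits, emotion_y) + ce(gen_logits, gender_y)
    opt.zero_grad()
    loss.backward()
    opt.step()

In this setup, the lambd coefficient controls the strength of the adversarial signal: raising it trades some emotion-recognition accuracy for greater gender invariance of the learned features.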

Presentation Curriculum

Mitigating gender bias in speech emotion recognition
36:22