Stanford researchers find that automated speech recognition is more likely to misinterpret black speakers

Tests by Stanford researchers show that five leading speech recognition programs make twice as many errors with African American speakers as with whites. (Image credit: Getty Images)
Mar 23 2020

Tests of five speech recognition technologies by Stanford Engineering researchers found error rates twice as high for black speakers as for white speakers. The researchers speculate that the machine learning models behind these systems rely heavily on databases of English as spoken by white Americans. Speech recognition is increasingly used by companies to screen job applicants through automated online interviews, and by people who cannot use their hands to access computers. To make the technology inclusive, the researchers argue, new systems should be audited for hidden biases that may exclude people who are already marginalized.