Effects of Talker Dialect, Gender & Race on Accuracy of Bing Speech and YouTube Automatic Captions
Rachael Tatman, C. Kasten
TLDR
This project compares the accuracy of two automatic speech recognition systems–Bing Speech and YouTube’s automatic captions–across gender, race and four dialects of American English, finding that neither system had a reliably different WER between genders.
Abstract
This project compares the accuracy of two automatic speech recognition (ASR) systems–Bing Speech and YouTube’s automatic captions–across gender, race and four dialects of American English. The dialects included were chosen for their acoustic dissimilarity. Bing Speech had differences in word error rate (WER) between dialects and ethnicities, but they were not statistically reliable. YouTube’s automatic captions, however, did have statistically different WERs between dialects and races. The lowest average error rates were for General American and white talkers, respectively. Neither system had a reliably different WER between genders, which had been previously reported for YouTube’s automatic captions [1]. However, the higher error rate for non-white talkers is worrying, as it may reduce the utility of these systems for talkers of color.
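All of the comparisons above are made in terms of word error rate (WER), the proportion of substitutions, deletions, and insertions needed to turn the system's output into the reference transcript. The sketch below is not the authors' evaluation code; it is a minimal, standard WER implementation (word-level Levenshtein distance via dynamic programming), and the lowercasing/whitespace tokenization it assumes is one of several reasonable normalization choices.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed from the word-level Levenshtein distance."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # match vs. substitution
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost,  # match / substitution
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


# Hypothetical example: one substitution plus one insertion against a
# four-word reference gives WER = 2 / 4 = 0.5
print(word_error_rate("the quick brown fox", "the quick brwn fox jumps"))
```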
