There’s no denying that Artificial Intelligence is one of the hottest buzzwords, not just in IT, but across the technological landscape. For a lot of people, AI is synonymous with voice interactions with virtual assistant personas like Siri or Alexa. One of the problems these interfaces run into is accents: in the English-speaking world, accents can vary wildly within countries, let alone between them.
This presents a challenge for AIs that interact via voice. While choosing specific localization options helps, it can be difficult to determine on the fly which contextual accent data to use to interpret speech. Cambridge Consultants put together a demo to show how effectively an AI can determine this from just a single sentence. As a side benefit, I’m sure they’re hoping lots of people will use it, providing plenty of raw material for improving their AI inference.
The end result is pretty fast, and just a little fun to try and game. What’s interesting is not just the raw number it provides (my northeast Ohio accent was deemed 89% American), but also the weight behind that number. Each word in the sentence is color-coded based on how much it contributed to determining your accent.
Interestingly, in my result more words identified as vaguely British by raw count. However, the words that identified as American did so with a much stronger association, which understandably tipped the overall score. It does make me rather self-conscious about how I say the word “call”.
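That kind of result, where a minority of words dominate the verdict, is what you'd expect from a confidence-weighted average rather than a simple vote. The demo's actual model isn't public, so the following is purely an illustrative sketch with made-up words and scores, showing how a few high-confidence American-sounding words could outweigh a larger number of weakly British-sounding ones:

```python
def combine_accent_scores(word_scores):
    """Confidence-weighted average of per-word accent scores.

    word_scores: list of (word, american_score, confidence) tuples, where
    american_score runs from 0.0 (British) to 1.0 (American) and confidence
    is how strongly that word signalled either accent. All values here are
    invented for illustration; they are not from the real demo.
    """
    total_weight = sum(conf for _, _, conf in word_scores)
    if total_weight == 0:
        return 0.5  # no signal either way
    weighted = sum(score * conf for _, score, conf in word_scores)
    return weighted / total_weight

# Hypothetical sentence: three of five words lean British, but the two
# American-leaning words carry far higher confidence.
scores = [
    ("please", 0.30, 0.2),  # weakly British
    ("call",   0.90, 0.9),  # strongly American
    ("water",  0.40, 0.1),  # weakly British
    ("ask",    0.20, 0.1),  # weakly British
    ("her",    0.95, 0.8),  # strongly American
]
print(f"{combine_accent_scores(scores):.0%} American")
```

With these invented numbers, the sentence scores well above 50% American despite most words leaning British, mirroring the asymmetry I saw in my own result.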
Beyond being a fun little demo, it’s also interesting to see how much information an AI can extract from a user, even from a sentence free of all context. I’m sure Cambridge Consultants, and anyone else building AI, is finding a multitude of ways to turn vocal metadata into meaningful insights. That’s also a little scary.