Apple (NASDAQ:AAPL) has developed a unique speech recognition system that could make future iterations of Siri less likely to misunderstand a user’s language, reports Apple Insider. Although Apple’s voice-activated personal assistant can understand most users’ natural language queries, it is more likely to misinterpret nonstandard dialects or uncommonly used word sequences.
In a patent titled “Automatic input signal recognition using location based language modeling,” Apple outlines a system that would combine “location-based information” and “local language models” to create a “hybrid language model.” This hybrid language model would allow Siri to more accurately interpret a particular user’s input.
As an example, Apple notes that the phrase “goat hill” would most likely be interpreted by Siri as “good will,” since the latter is a much more commonly spoken word sequence. However, if the user were in a city with a café named Goat Hill, Siri would incorporate this geographic information and give increased weight to the likelihood that the user intended to say “goat hill.”
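The reranking idea described above can be sketched in a few lines of Python. This is an illustrative toy, not Apple’s actual method: the hypothesis scores, the set of nearby place names, and the boost factor are all assumptions made for the example.

```python
# Illustrative sketch (not Apple's implementation): boost speech-recognition
# hypotheses that match the name of a nearby point of interest.

def rerank(hypotheses, nearby_places, boost=5.0):
    """Multiply a hypothesis's base score by `boost` when the phrase names
    a nearby place, then return the hypotheses sorted best-first."""
    rescored = []
    for phrase, base_score in hypotheses:
        score = base_score * boost if phrase in nearby_places else base_score
        rescored.append((phrase, score))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# "good will" is far more common globally, but a nearby cafe called
# Goat Hill flips the ranking in favor of "goat hill".
hypotheses = [("good will", 0.8), ("goat hill", 0.2)]
print(rerank(hypotheses, nearby_places={"goat hill"}))
# → [('goat hill', 1.0), ('good will', 0.8)]
```

With no nearby match, the globally common phrase would win unchanged, which is the behavior the patent is trying to correct.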
However, Apple also notes that the system’s location-based language modeling could skew Siri’s interpretations in the opposite direction. In other words, Siri might automatically assume that a user means to say “goat hill” when they are actually saying “good will.”
Geographic regions that feature different word sequences or nonstandard dialects are sometimes located close to one another, which can also create confusion. Apple notes that locations near the borders of neighboring regions would be especially prone to incorrect language recognition.
Apple proposes several ways to solve some of these complex language modeling issues. One method allows the user to manually preselect a local language model. Another method assigns various “weights” to local language models based on the user’s proximity to the geo-region’s “centroid.” The further a user is from the centroid, the more likely Siri is to interpret the user’s inputs based on a global language model.
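The centroid-weighting method can be sketched as an interpolation between a local and a global language model, with the local model’s weight falling off as the user moves away from the region’s centroid. The linear falloff, the decay radius, and the toy probabilities below are assumptions for illustration; the patent does not specify a formula.

```python
import math

# Hedged sketch of the centroid-weighting idea: the farther the user is
# from a region's centroid, the less weight its local language model gets
# relative to the global model. Falloff shape and radius are assumptions.

def local_weight(user, centroid, radius_km=10.0):
    """Linear falloff: weight 1.0 at the centroid, 0.0 at radius_km or beyond."""
    dist = math.dist(user, centroid)  # planar approximation for short distances
    return max(0.0, 1.0 - dist / radius_km)

def hybrid_prob(phrase, user, centroid, local_model, global_model):
    """Interpolate local and global phrase probabilities by location weight."""
    w = local_weight(user, centroid)
    return w * local_model.get(phrase, 0.0) + (1 - w) * global_model.get(phrase, 0.0)

# Toy models: "goat hill" is likely locally but rare globally.
local_model = {"goat hill": 0.6, "good will": 0.4}
global_model = {"goat hill": 0.05, "good will": 0.95}

centroid = (0.0, 0.0)
near = hybrid_prob("goat hill", (0.0, 0.0), centroid, local_model, global_model)   # 0.6
far = hybrid_prob("goat hill", (20.0, 0.0), centroid, local_model, global_model)   # 0.05
```

At the centroid the local model fully determines the estimate; past the decay radius the system falls back entirely to the global model, matching the patent’s description of distant users being interpreted with a global language model.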
Although it is not known whether Apple has any plans to incorporate this system into upcoming versions of Siri, the Cupertino-based company is rumored to be actively developing new voice recognition technologies. According to a recent report from Xconomy, Apple has assembled a small team of speech technology experts based in the Boston area.
If Apple does implement an improved speech recognition system for Siri, it could give the iPhone maker a strategic advantage over competing virtual personal assistants such as Google (NASDAQ:GOOG) Now.
Follow Nathanael on Twitter (@ArnoldEtan_WSCS)