We recently tested four smart speakers by asking Alexa, Siri, Google Assistant, and Cortana 800 questions each. Google Assistant answered 88% of them correctly vs. Siri at 75%, Alexa at 73%, and Cortana at 63%. Last year, Google Assistant answered 81% correctly vs. Siri (as of February 2018) at 52%, Alexa at 64%, and Cortana at 56%.
As part of our ongoing effort to better understand the practical use cases of AI and the emergence of voice as a computing input, we regularly test the most common digital assistants and smart speakers. This time, we focused solely on smart speakers: Amazon Echo (Alexa), Google Home (Google Assistant), HomePod (Siri), and Invoke (Cortana).
See past comparisons of smart speakers here and digital assistants here. We separate digital assistants on your smartphone from smart speakers because, while the underlying tech is similar, the use cases and user experience differ greatly. Therefore, it’s less helpful to compare, say, Siri on your iPhone and Alexa on an Echo in your kitchen.
We asked each smart speaker the same 800 questions, and each response was graded on two metrics: 1. Did it understand what was said? 2. Did it deliver a correct response? The question set, which is designed to comprehensively test a smart speaker’s ability and utility, is broken into five categories.
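The two-metric grading above can be sketched in a few lines of code. This is a minimal illustration, not the testing methodology's actual tooling: the `Graded` record and the sample data are hypothetical, standing in for one speaker's 800 graded responses.

```python
from dataclasses import dataclass

@dataclass
class Graded:
    understood: bool  # metric 1: did it understand what was said?
    correct: bool     # metric 2: did it deliver a correct response?

def score(responses):
    """Return (percent understood, percent answered correctly)."""
    n = len(responses)
    understood = 100 * sum(r.understood for r in responses) / n
    correct = 100 * sum(r.correct for r in responses) / n
    return understood, correct

# Hypothetical sample of four graded answers
sample = [
    Graded(True, True),
    Graded(True, False),
    Graded(True, True),
    Graded(False, False),
]
print(score(sample))  # (75.0, 50.0)
```

In practice a speaker can understand a question yet still answer it wrong, which is why the two metrics are tracked separately.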