In many ways Google Assistant is far superior to Amazon Alexa. Although both are essentially AI-based smart voice assistants, each has a specific strength. For example, Assistant has the support of Google’s powerful search engine and ecosystem of services, while Alexa has the massive Amazon retail catalog to tap into, and the ability to control a variety of smart home devices.
From the standpoint of third-party skills, however, Amazon moved quickly, inviting developers to build their own skills with the Alexa Skills Kit (ASK). A related offering, the Alexa Voice Service (AVS), is a programming-language-agnostic service that lets hardware makers embed Alexa itself into their own devices.
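To give a concrete sense of what building a skill involves, here is a minimal sketch of an AWS Lambda-style handler for a custom Alexa skill, written directly against Amazon's published request/response JSON envelope rather than any SDK. The intent name `GetGreetingIntent` and the spoken text are made-up examples, not part of any real skill.

```python
# Minimal sketch of an Alexa custom-skill handler (AWS Lambda style).
# The request/response JSON shapes follow Amazon's published skill
# interface; the intent name "GetGreetingIntent" is a made-up example.

def build_speech_response(text, end_session=True):
    """Wrap plain text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Route incoming Alexa requests to simple canned responses."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        # User opened the skill without asking for anything specific.
        return build_speech_response("Welcome to the example skill.",
                                     end_session=False)
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        if intent == "GetGreetingIntent":
            return build_speech_response("Hello from a custom skill.")
    return build_speech_response("Sorry, I didn't understand that.")
```

In a real deployment the handler is registered through the Alexa developer console and invoked by Amazon's cloud; here it can be exercised locally with a sample event dictionary.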
Google Assistant, on the other hand, supports what are known as ‘Actions’ that work with specific third-party services via the Conversation API. They’re very similar to Alexa Skills, and allow developers to integrate Google Assistant capabilities into their products.
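For comparison, a Conversation Action is essentially a webhook that Google's servers call with the user's query, and that replies with JSON telling Assistant what to say and whether to keep the conversation open. The sketch below builds such replies; the field names approximate the early Conversation API webhook format and should be read as illustrative assumptions, not a verbatim copy of the spec.

```python
# Illustrative sketch of Conversation Action webhook response builders.
# Field names approximate the early (v1) Conversation API JSON and are
# assumptions for illustration, not a verbatim copy of Google's spec.

def build_final_response(text):
    """End the conversation with a single spoken reply."""
    return {
        "expect_user_response": False,
        "final_response": {
            "speech_response": {"text_to_speech": text}
        },
    }

def build_ask_response(prompt, conversation_token=""):
    """Keep the conversation open and ask the user a follow-up."""
    return {
        "conversation_token": conversation_token,
        "expect_user_response": True,
        "expected_inputs": [
            {
                "input_prompt": {
                    "initial_prompts": [{"text_to_speech": prompt}]
                }
            }
        ],
    }
```

The key design difference from an Alexa skill is visible even in this toy: the response itself carries the dialogue state (`expect_user_response`, a conversation token), reflecting the Conversation API's turn-by-turn model.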
Android Headlines reported last night that Assistant was updated with more than a dozen new Actions this week. These include Chefling (already available on Alexa), which tracks expiration dates on grocery items and suggests recipes based on the ingredients you have, as well as Actions for controlling Neato robot vacuums, accessing data about horse races, getting IMDb ratings and more.
The integrations for Google Assistant work in a slightly different way than those for Amazon Alexa. In Amazon’s case, developers build Skills and publish them to a marketplace, from where Amazon Echo users can enable them. With Google Assistant, these commands are approved and enabled by Google on its cloud servers, from where users of Google Home, Google Pixel and Google Allo can access them. When Google Assistant is integrated into a third-party product, it can access these Actions too.
If you compare the capabilities of Google Assistant and Amazon Alexa, you’ll notice that Google offers only a limited selection of Actions – both Direct Actions and Conversation Actions. But the comparison isn’t really fair, since Google Assistant only opened up to developers in late 2016. Amazon, on the other hand, had a head start of about 18 months, having opened Alexa to developers on June 25, 2015.
But in those 18 months or so, Amazon has aggressively pushed the service to developers, quickly scaling up to over 10,000 skills, and counting. Google isn’t there yet, but more than a hundred developers have already released their Actions into the Google Assistant ecosystem.
Amazon will be looking to maintain its lead over Google as far as the number of third-party integrations is concerned; that is its best way to ensure traction for the product over the long term. Google, on the other side, needs to push extra hard just to get close to the catalog of skills that Alexa currently has.
In reality, a lot of these skills and actions are meant specifically for users of those third party services or devices, and most people rarely use more than a handful of voice commands. A lot of them are just random fun stuff that you’ll quickly get bored with.
The greater need now is for Amazon and Google to take their smart assistant capabilities beyond just playing music, asking about the weather, ordering groceries or controlling smart home devices – these are what Google calls Direct Actions. Amazon is pushing on the retail front, obviously, with capabilities like voice shopping for Amazon Prime users. Meanwhile, Google is working on pushing out more useful actions to users of Google Assistant on the many platforms that it is currently available on.
From a forward-looking perspective, both companies have the potential to disrupt the connected device market by introducing voice interaction and conversational capabilities to such devices. But they each have their strengths and weaknesses. Alexa, for example, isn’t that good at making conversation or putting information in front of you, but she’s great with specific voice commands. Google Assistant, on the other hand, is excellent with information-based queries and semantics because that’s where its origins are, but lacks the advantage of being able to access Amazon’s retail business.
It’ll be some time before we can start comparing their capabilities on a skill-for-skill basis, but suffice it to say that both of them have a long way to go before mass adoption can happen. Right now, not many Echo or Google Home owners are even aware of the breadth of commands they can use on their devices. As a result, most end up using just a few – kind of like having satellite TV with hundreds of channels that you never watch.
Tip: If you’re not sure what your Google Assistant or Amazon Alexa can do, just ask! At the rate at which skills and actions are being developed, there might just be one for the job you need to get done. You can also refer to skills charts like this one developed by Smart Home Solver that you can print and paste next to your device.
That’s the area that both companies need to work on; otherwise they will be heavily dependent on third-party developers and device makers for any meaningful growth in the adoption of their smart assistants. For Alexa, that means more skills like voice shopping and other partner-based skills that Amazon can directly control; for Google, it’s Direct Actions. Having a ton of third-party developers working on skills and actions is great, but it tends to become directionless after a while.
The real strength of these smart assistants will be integrations with widely used services. Like we said, there’s a long way to go before that can happen, and both companies need to push equally hard before Apple or Microsoft start their own aggressive pushes into this space. Or even Samsung, for that matter.