Bixby AI on Samsung Galaxy S8 and S8 Plus Ready for Pre-test, Launching June 2017

United States users of the Samsung Galaxy S8 and S8+ can now pre-test Bixby

Owners of the Samsung Galaxy S8 and S8+ in the United States can now pre-test Bixby, Samsung’s own AI assistant, which was originally slated to ship on these devices at their launch earlier this year. The feature officially launches in English later this month, but users can get an early preview of the experience.

Galaxy S8 and S8+ owners can go to Samsung’s website and access the pre-registration page, where they can sign up to test the service. The company has already been recruiting users for these pre-tests, and the feedback will presumably help Samsung fine-tune the deep-learning models that power Bixby.

Bixby is currently available only in Korean. Once the English version arrives in the United States later in June 2017, we expect it to roll out to other international markets. Chinese will be the next language after that, according to Samsung.

The AI-based smart assistant can be accessed via voice, text and touch. Bixby is also built into the camera application, where it recognizes objects in the frame and pulls up relevant information about them from the Internet.

On the Samsung Galaxy S8 and S8+, a single press of the dedicated button below the volume controls activates Bixby and puts it in listening mode. Pressing the button twice takes you to the Bixby Home screen. You can also swipe right on the lock screen to get a custom feed of information, the day’s schedule and so on. It’s very similar to what you see on a typical Android smartphone with the Google app activated.

Like Google Assistant, Amazon Alexa, Siri, Cortana and other digital smart assistants, Bixby depends on an Internet connection to the cloud. The input is captured on the device, sent to a cloud data center for processing, and the result or response is then sent back to the user’s device. All of this happens in a fraction of a second.
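The round trip described above can be sketched in a few lines. This is a minimal illustration only, not Samsung’s actual pipeline: the function names are hypothetical, and the “cloud” step is stood in for by a local function rather than a real network call.

```python
import time

def cloud_process(utterance: str) -> str:
    """Stand-in for the remote data-center step. In a real assistant this
    would be a network request to a speech/NLU service, not a local call."""
    return f"Response to: {utterance}"

def handle_voice_request(utterance: str) -> tuple[str, float]:
    """Capture on the device, send to the 'cloud', and return the
    response along with the measured round-trip time in seconds."""
    start = time.perf_counter()
    response = cloud_process(utterance)    # device -> cloud -> device
    latency = time.perf_counter() - start  # elapsed round-trip time
    return response, latency

response, latency = handle_voice_request("What's the weather today?")
```

In a real deployment the latency term would be dominated by the network hop and server-side inference, which is exactly the cost that on-device AI aims to remove.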

The current challenge is to cut this latency down even further by building AI components directly into the device rather than depending entirely on the cloud for data processing. Smartphones haven’t gotten there yet, but devices like the Essential Home smart speaker from Android co-creator Andy Rubin are beginning to explore these possibilities.

In the future, we’ll probably see self-sufficient AI on more consumer devices. The shift is already happening: a move from traditional cloud computing toward what is called edge computing, in which the intensive compute tasks are decentralized and run at the point where the data originates.

As we progress, this is where AI is going to be – not in the cloud, but embedded in smart devices themselves. That’s the future of smart consumer devices, and it’s an exciting one because it represents a major shift for artificial intelligence, bringing it into our homes and putting it literally in our hands.

Thanks for visiting! Would you do us a favor? If you think it’s worth a few seconds, please like our Facebook page and follow us on Twitter. It would mean a lot to us. Thank you.