What is a Chatbot?

One of the most rapid practical manifestations of artificial intelligence has been the emergence of chatbots, a term derived from “chat robots.” But what is a chatbot, and what can it actually do?

Definition of Chatbot

A chatbot is a software program typically used to initiate or carry on a dialog via text or speech. It goes by numerous other names, such as artificial conversational entity, chatterbot or, simply, bot.

How Chatbots Work

Not all chatbots are intelligent, but all of them exhibit some level of automation because of how they’re programmed. The simpler ones use keyword or pattern matching between the input text and their reference data, and simply pull out the best match.
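
To make that concrete, here is a minimal sketch of the keyword-matching approach in Python. The keywords and canned replies are invented purely for illustration; a real bot would use a much larger reference set and smarter matching.

```python
# Minimal sketch of a keyword-matching chatbot.
# The keywords and responses below are hypothetical examples.
RESPONSES = {
    "hours": "We're open 9am to 5pm, Monday to Friday.",
    "refund": "You can request a refund within 30 days of purchase.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}
FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the canned response for the first keyword found in the input."""
    text = message.lower()
    matches = [keyword for keyword in RESPONSES if keyword in text]
    return RESPONSES[matches[0]] if matches else FALLBACK

print(reply("What are your hours on Friday?"))  # matched on "hours"
print(reply("Tell me a joke"))                  # no match, falls back
```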

That’s why they’re not always coherent, even when they’re built on machine learning algorithms and related statistical models, especially if you ask an off-script question. This is particularly true of low-level chatbots, such as those used for basic customer service queries and other tasks of limited scope.

The really intelligent ones that can carry on a conversation with a human being use speech recognition, natural language processing, natural language understanding, natural language generation and other sophisticated methods to converse. And they keep learning from each interaction. These chatbots can answer random questions and keep the conversation going without breaking down or spewing out irrelevant responses. Well, mostly.
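
As a rough illustration of that pipeline, the hypothetical sketch below stubs out the speech recognition, understanding and generation stages with placeholder functions. It does not correspond to any particular product or library; each stub stands in for a full component.

```python
# Hypothetical sketch of the stages an "intelligent" chatbot chains together.
# Every function here is a stub standing in for a real component.

def transcribe(audio: bytes) -> str:
    """Speech recognition: audio in, text out (stubbed)."""
    return "what's the weather in Paris tomorrow"

def understand(text: str) -> dict:
    """Natural language understanding: extract an intent and its slots (stubbed)."""
    return {"intent": "get_weather", "slots": {"city": "Paris", "date": "tomorrow"}}

def generate(parsed: dict, result: str) -> str:
    """Natural language generation: turn structured data back into a reply."""
    return f"{parsed['slots']['date'].capitalize()} in {parsed['slots']['city']}: {result}."

def handle_turn(audio: bytes) -> str:
    text = transcribe(audio)
    parsed = understand(text)
    result = "light rain, 14°C"  # a real backend lookup would go here
    return generate(parsed, result)

print(handle_turn(b""))  # -> "Tomorrow in Paris: light rain, 14°C."
```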

Are Amazon Alexa, Google Assistant, Siri and Cortana Chatbots?

Amazon puts it like this:

“Amazon Lex is a service for building conversational interfaces into any application using voice and text. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer, enabling you to quickly and easily build sophisticated, natural language, conversational bots (“chatbots”).”

It would be more accurate to say that Alexa and other virtual assistants are built on top of chatbot technologies like Amazon Lex, and are therefore not necessarily chatbots themselves. Of course, it comes down to semantics and what exactly constitutes a chatbot, which is why there is so much disagreement around this.

The important thing to remember is that a chatbot should not be human-operated, although it will most certainly be human-trained. It then utilizes this training and its own growing resource pool to answer queries or respond to the person talking to it. It can be re-trained for new situations or applications, but while in operation it has to pretty much be on autopilot.

Messaging Apps: The Perfect Breeding Ground for Chatbot Development

Since messaging apps are primarily text-based, they are the perfect proving ground for developers working on new chatbots. Most large companies now have customer service chatbots on Facebook Messenger, and many of them have started testing chatbots on WhatsApp as well.

As of May 2018, there were over 300,000 chatbots on Facebook Messenger, but even today most users are unaware of the power they can bring to digital interaction. Not everyone is in favor of chatbots, though. Google, for example, tried out chatbots for its Assistant, but decided that visual cards are better suited to a voice-enabled future.

According to Chris Perry, the Google product manager who leads Assistant on Android:

“When we built the Assistant, you can clearly see inspiration from Allo in what we did, in this chatty back-and-forth model where you’re talking with an intelligent assistant. And we found that was somewhat restrictive of a model for us. It ended up constraining us in a number of different ways.”

Ye-Jeong Kim, Google’s user experience manager for Search and Assistant, says this:

“If you think about visual–unlike spoken or written conversation–visual doesn’t have to be so ephemeral. It’s lingering, and helping to aid a conversation.”

Other Areas of Chatbot Application

Another emerging use case for chatbots is contact centers. Amazon Connect, for example, is a contact center solution that lets companies automate their contact flows using methods like skills-based routing of incoming queries, an outbound call API for scheduled calls to customers, and more.

Amazon Lex can be used as part of this contact center solution to build chatbots for various purposes – all on the Amazon Connect platform.
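
As a rough sketch of what the Lex side of that looks like in code, the snippet below sends a customer's text to an existing Amazon Lex (V1) bot through boto3's lex-runtime client. The bot name, alias and user ID are hypothetical, and it assumes AWS credentials and a published bot are already in place.

```python
import boto3

# Hedged sketch: relay a customer's message to a pre-built Lex (V1) bot.
# "SupportBot" and "prod" are hypothetical; substitute your own bot and alias.
lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="SupportBot",
    botAlias="prod",
    userId="caller-12345",  # stable ID that ties the turns of one conversation together
    inputText="I'd like to check the status of my order",
)

# Lex returns the recognized intent, the dialog state and a reply to show the user.
print(response.get("intentName"), response.get("dialogState"))
print(response.get("message"))
```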

Other use cases for chatbots include internal corporate processes, educational toys, training and several areas where accurate information flow is important.

What is the Future of Chatbots?

Messaging apps are going to support chatbots for a long time because they are currently the most popular type of app on the average smartphone. Apps themselves are gradually being replaced by bots that can execute the same tasks. For example, you can invoke a bot in a messenger conversation using the @ symbol (“@Uber, get me a cab at my house for 6pm”) and it’s done without opening the Uber app or a browser tab.
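
Under the hood, the messaging platform has to spot the @-mention and hand the rest of the message to the right bot. The toy sketch below shows one way that routing could work; the bot names and command format are purely illustrative and not any platform's actual API.

```python
import re

# Toy sketch of routing an "@bot ..." message the way a messenger might.
BOT_HANDLERS = {
    "uber": lambda request: f"Booking a ride: {request}",
    "weather": lambda request: f"Fetching the forecast for: {request}",
}

def route(message: str) -> str:
    """Send an @-mention to its bot, or treat the message as normal chat."""
    match = re.match(r"@(\w+)[,:]?\s*(.*)", message)
    if not match:
        return "(no bot mentioned; delivered as a normal chat message)"
    bot, request = match.group(1).lower(), match.group(2)
    handler = BOT_HANDLERS.get(bot)
    return handler(request) if handler else f"(no bot named @{bot})"

print(route("@Uber, get me a cab at my house for 6pm"))
print(route("See you at 6!"))
```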

The fact that chatbots make such interactions more convenient means they’re here to stay. The more popular they become, the greater the skills they’ll acquire. Some day in the future, you might not even have to look at your phone to use it to make calls, order pizza and do other tasks.

Gadgets like the Amazon Echo, Amazon Echo Dot, Google Home and Google Home Mini are already taking over tasks like these, essentially piggybacking on various bots to carry them out.

But the chatbots of the future will be more voice-oriented than text-oriented. Of course, text is never going to go away altogether, but voice interactions are quickly taking over now that they’re far more intelligent than they were even five or ten years ago.

In an article on Chatbots Magazine by Maruti Techlabs, an important point is made about voice interaction:

“Conversation using words is a natural form of communication for people, VUI makes it more exciting. The ability to associate a voice with the technology being used is what makes the experience more personalized/human and less mundane especially when human brains are basically wired to interpret the supply of speech as human. This goes on to show that interactive voice systems are a more natural means of interaction than visual interfaces for the majority of users and by removing a visible interface and exchanging it with voice, users are placed in a far more acquainted context.”

Voice interaction layered on top of a visual platform seems to be the magic mix for conversational AI of the future. That’s why Amazon, Google and other companies are now integrating screens into their smart home speaker products like Echo Spot and Google Home Hub.

The Emergence of Virtual Touch Screen (VTS) Technology

One important driving factor of this voice-visual blend will be virtual touch screens and other gesture-based input technologies. Today, you still need to pull out your smartphone or sit next to your Echo Spot to interact with it. But a class of virtual screens now in development could eliminate that extra, unnatural step of having to reach for a physical device to tap in your choice.

One such project, from Bosch Sensortec, debuted at CES 2019 in January. The BML100PI interactive projection module can turn nearly any surface into a touch-sensitive screen. It uses a focus-free laser beam to project the image, then scans it line by line to detect finger movements and other gestures.

Bosch Sensortec’s BML100PI interactive projection module. Source: Bosch Sensortec

The Virtual Touch Screen is essentially a way to turn a non-interactive surface or medium into an interactive one that can track touch and other gestures. The technology behind it is what Microsoft uses for Kinect. Although the Kinect version only lets you select menu items virtually or play games, the tech behind it has been used for initiatives like radiology and medical image exploration, among other things.

In the future, it could all be voice and virtual touch. Imagine sitting in your self-driving car and watching a movie on a large virtual screen, taking calls on the same screen and even interacting with apps on your phone, all without having to hold your device. You could issue voice commands to Echo Auto or whatever technology powers the infotainment system, then switch over to your car’s navigation system and tell it to take you somewhere.

This is where the benefits of artificial intelligence are manifesting themselves: real-world applications that can help us today and tomorrow. The risks of uncontrolled AI research are just as great, if in the opposite direction; there’s no doubt about that. But as long as we, as a collective society, can guide the development of AI inventions toward creative ventures rather than destructive ones, we’ll be around to enjoy them for centuries to come.