There was a time when you could talk about artificial intelligence or AI and everyone in the room knew what you were referring to. Today, it’s a blanket term that encompasses so many technologies and approaches that you need to be a lot more specific.

This article tries to clarify the various terms used in AI so that even a layperson can understand them. In fact, you might be surprised to learn that even industry leaders often confuse these terms, each shaped by their own understanding of artificial intelligence and its many facets.

What is Artificial Intelligence?

The meaning of AI is interpreted in different ways. Here are some of the most common definitions:

“Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.” – Techopedia

“the theory and development of computer systems able to perform tasks that normally require human intelligence.” – Dictionary

“The science of training machines to perform human tasks.” – SAS

“machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention.” – Brookings Institution

“In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals.” – Wikipedia

“Artificial intelligence is the effort to create computers capable of intelligent behavior.” – Vox

“AI traditionally refers to an artificial creation of human-like intelligence that can learn, reason, plan, perceive, or process natural language.” – Internet Society

“a branch of computer science dealing with the simulation of intelligent behavior in computers.” – Merriam-Webster

As you can see, even the experts describe AI in their own ways. But the common thread running through these definitions is the transference of human or animal intelligence to a machine in order for that machine to be able to simulate certain types of behavior.

Now, the word “intelligence” in the context of AI is not necessarily the same as you’d use when describing individual intelligence…as in, Person A is more “intelligent” than Person B. This is not about IQ. Rather, it’s about replicating the basic human or animal ability to think about and react to its surroundings.

From that viewpoint, even a single-celled amoeba could be called an “intelligent agent” because it can move towards a positive stimulus such as a food particle, or move away from an unpleasant stimulus, such as light, or a strong acid or alkali in the surrounding water.

Amoeba engulfing food with pseudopodia

Another analogous phenomenon is the different kinds of tropisms in plants. Plants respond to various stimuli such as light, the presence of water, gravity and even touch.

Plant exhibiting phototropism

So, it logically follows that artificial intelligence is a machine’s ability to respond to its surroundings independent of any outside entity other than the stimulus.

Since this ability is not innate in non-living things, a machine – like a computer – has to be “programmed” or “trained” to respond in this manner. That brings us to what computer scientists call machine learning.

What is Machine Learning?

Machine learning is one method of transferring such intelligent behavior from human to machine. The behavior could be coded directly, but that wouldn’t be machine learning, because the machine wouldn’t be learning anything new on its own.

A machine learning purist would say that the correct approach is to “train” the computer on examples – usually thousands of them – showing it the appropriate output in each instance, so that the computer can later process new data it hasn’t encountered before and react appropriately.

To explain with an example, let’s say you wear a coat on days when the temperature outside is at or below a particular point. Your training set might look like this:

Outside Temperature     Wear a Coat?
10 degrees Celsius      Yes
15 degrees Celsius      Yes
16 degrees Celsius      No
20 degrees Celsius      No

This training set implies a rule: if the outside temperature is 15 degrees or lower, wear a coat. This is called rule-based machine learning. When you then feed the algorithm test data that differs from the training data, it should be able to decide whether or not to wear a coat based on the temperature input.

So, if the temperature input is 13 degrees Celsius, the program should make the correct decision based on the training and the rule – wear a coat – even though you didn’t specifically train it on that exact temperature. Pictorially, this is how it works.

How machine learning works

This is probably the simplest representation of machine learning, but it shows how the process works. At its heart is the algorithm: a set of mathematical operations that maps the input to the correct output.
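The coat example above can be sketched in a few lines of code. This is an illustrative sketch, not from the article: it “learns” the temperature threshold by taking the midpoint between the warmest “Yes” and the coolest “No” in the training table, then applies that learned boundary to a temperature it has never seen.

```python
# Illustrative sketch: learn a coat/no-coat threshold from the training table.
training_data = [
    (10, True),   # 10 degrees Celsius -> wear a coat
    (15, True),   # 15 degrees Celsius -> wear a coat
    (16, False),  # 16 degrees Celsius -> no coat
    (20, False),  # 20 degrees Celsius -> no coat
]

def learn_threshold(examples):
    """Pick the boundary midway between the warmest 'Yes' and the coolest 'No'."""
    warmest_yes = max(temp for temp, coat in examples if coat)
    coolest_no = min(temp for temp, coat in examples if not coat)
    return (warmest_yes + coolest_no) / 2  # 15.5 for this training set

def wear_a_coat(temperature, threshold):
    return temperature <= threshold

threshold = learn_threshold(training_data)
print(wear_a_coat(13, threshold))  # True: 13 degrees is below the learned boundary
```

Note that 13 degrees never appears in the training data; the decision comes from the boundary the program derived, which is the essence of learning from examples rather than hard-coding every case.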

There are several forms of machine learning – supervised, semi-supervised, unsupervised and reinforcement learning – each suited to different kinds of problems. Each of these uses certain types of algorithms for the decision-making process. Here’s a list of the most popular ones:

  • Nearest Neighbor

  • Naive Bayes

  • Decision Trees

  • Linear Regression

  • Support Vector Machines (SVM)

  • Neural Networks

  • k-means clustering

  • Association Rules

  • Q-Learning

  • Temporal Difference (TD)

  • Deep Adversarial Networks

There are also different techniques and processes by which these algorithms are applied to make them perform better. For example, anomaly or outlier detection is often used to identify bank fraud by spotting events that deviate from the norm.

So, What is Deep Learning?

Deep learning is one type of machine learning that uses large, many-layered neural networks, as opposed to the simpler artificial neural networks that were all the rage a decade or two ago.

The key difference is that it can work with very large amounts of data – Big Data, if you will – and perform better than traditional neural networks, whose performance tends to plateau as the datasets get larger and larger, according to Andrew Ng, co-founder of Google Brain.

Why Deep Learning?

Jeff Dean, who worked on Google Brain and products like TensorFlow, defines deep learning as follows:

“When you hear the term deep learning, just think of a large deep neural net. Deep refers to the number of layers typically and so this kind of the popular term that’s been adopted in the press. I think of them as deep neural networks generally.”

What Exactly is a Neural Network?

A neural network is a machine learning model loosely patterned on the human brain and how it functions. Computer vision and image recognition, for example, use neural networks to classify new objects based on a massive training dataset of images, often millions of them.
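The building block of any neural network can be sketched in a few lines. This is an illustrative toy, not a real network: a single artificial “neuron” computes a weighted sum of its inputs, adds a bias, and squashes the result through an activation function. The weights and inputs below are arbitrary; in a real network there are millions of such neurons, and the weights are learned from training data.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, plus bias, then activation."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Arbitrary example values; a real network learns its weights and bias
# by gradually adjusting them to reduce error on training examples.
output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(output, 3))
```

Stacking layers of these neurons, with the output of one layer feeding the next, is what produces the “deep” networks described above.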

Rather than being programmed directly for the task, they are trained to evaluate new inputs based on cognitive processes, much the way the human brain learns over time. That brings us to another catchphrase: Cognitive Computing.

What is Cognitive Computing?

Cognitive computing, which is currently one of IBM’s areas of expertise, seeks to deal with complex decision-making situations using a combination of cognitive processes like natural language processing, vision and even machine learning.

The key assumption of cognitive learning is that learning happens faster when there are effective cognitive processes in place, such as thinking skills, memory and problem-solving skills.

Some of the principles of cognitive learning include answering questions like “how does this new concept relate to the concepts I already know”, “how should I use this new concept” and “how is this concept different from other concepts I know,” and so on.

Is All This Artificial Intelligence?

Yes and no. All of these are the manifestations, methodologies, structures and applications comprising AI. The term artificial intelligence is bigger than all of them because it includes a lot of components or phenomena that haven’t been included here.

There are several other manifestations of AI, such as AGI (artificial general intelligence) and ASI (artificial superintelligence). AGI refers to a machine that can perform any intellectual task that an average human being can. ASI is often defined as a software-based system that exceeds the cognitive performance of humans in all domains of interest. Important difference there.

Some say AGI and ASI bring in the elements of consciousness and self-awareness – the knowledge that entities other than oneself are capable of reasoning and thinking for themselves, and may hold beliefs different from one’s own.

One interesting experiment that shows how this type of intelligence works is the Sally-Anne Test:


In the experiment, Sally has a basket and Anne has a box. Sally puts a marble into her basket and goes for a walk. In her absence, Anne takes the marble out of the basket and puts it in her box. The question: When Sally returns from her walk, where will she look for the marble?

The answer points to whether or not the entity possesses Theory of Mind, or ToM. Children under the age of four rarely pass the test, which involves answering the belief question correctly and pointing to the basket, where Sally still thinks her marble is.

Theory of Mind

Among other things, this refers to Type III artificial intelligence and, to a certain extent, the capabilities of AGI. If a robot is to be considered “socially intelligent”, it must be able to answer the belief question correctly. That shows it can understand what a false belief is, and that other entities may hold beliefs different from its own.

Self-Awareness

The highest form of artificial intelligence is what ASI entities should possess – Type IV AI. This is the ability not only to identify one’s own beliefs as opposed to those of others, but also an understanding that one is more than just one’s thoughts and actions.

One good example to elaborate this is the understanding of the difference between “I want this” and “I know I want this.” That’s a big leap that artificial intelligence entities are unable to make at this point in time.

When that happens, we will achieve what is called Technological Singularity. Sigh, another term to define! Well, here goes…

The Singularity

Often contracted to just The Singularity, this term refers to the point at which intelligent machines have surpassed ToM and entered the realm of self-awareness.

The singularity will be the most important future event in the history of mankind. It will mark the deviation of machine from man, or the creation of machines that are able to create machines in their own image.

Some AI experts say that a superintelligent agent with ASI will be the last invention of humans. We don’t know whether or not that will eventually be prophetic, but we do know that we are even now racing towards that point. Estimates vary wildly, with most guessing that this point of no return in human history will be achieved anywhere between the next thirty years and one thousand years.

There are so many variables here that it would be impossible – even for an advanced intelligent agent – to compute the timeline of this event. Even though artificial intelligence has been a formal field of study since the 1950s, we don’t know what sort of impact it will have on our lives over the next several decades and centuries.

There are as many theories about this as there are theorists, unfortunately. But one thing we can be certain about is that AI will be an integral part of our future. The question is, will AI entities of the future want us to be an integral part of theirs?