Artificial intelligence is arguably the newest frontier of human experience, but there’s no denying that man has been fascinated with the concept for millennia. From the mythical stories of Hephaestus creating mechanical servants and brazen-footed bulls that puffed fire from their mouths, to the talking heads of the 13th century, to IBM Watson and modern forms of AI, the subject has been bubbling beneath the surface of human consciousness.
The time has now come for AI to come of age; and, in many ways, it already has.
But now there’s a new problem, and it’s not one of how AI can be implemented, as has been the major challenge in the past. AI has now sprouted into a plethora of forms, each rivaling the others in an attempt to showcase its superior capabilities. Intelligent robots are being churned out like assembly-line products from labs stretching from Toronto to Tokyo and back.
Machines have even gone head-to-head against man in several fields of interest, including computer games, board games, trivia quizzes and even poker. When chess legend Garry Kasparov lost to IBM’s Deep Blue in 1997, his words from the previous year – when he beat the supercomputer – came back to haunt him: “I could feel — I could smell — a new kind of intelligence across the table.”
To repeat, the new problem is not one of how AI can be implemented. It is a much heavier one that cuts across the realms of morality and ethics.
Can a machine make the kind of complex decisions based on logic as well as morality that the average human is capable of? Can it tell right from wrong? Indeed, does it even have a frame of reference for right and wrong?
And that brings us to something called amplified human intelligence. Just as bees amplify their intelligence by behaving as a swarm, human “swarms” can now leverage the collective “hive mind” to make complex decisions – or even very simple ones, for that matter. The biggest difference between amplified human intelligence and traditional artificial intelligence seems to be the human element: while the former uses it as an inherent part of the decision-making process, the latter approaches problem-solving from an almost purely mathematical point of view. And therein lies the problem of where and how ethics and morality can be woven into the equation.
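To make the “hive mind” idea concrete, here is a toy sketch of collective estimation in Python. It only shows the wisdom-of-crowds effect through simple averaging; Unanimous AI’s actual swarm platform uses real-time feedback loops rather than a poll-style average, and the numbers below are invented for illustration.

```python
import random

# Toy illustration: many noisy individual guesses, when aggregated,
# tend to land closer to the truth than most single guesses do.
random.seed(42)
TRUE_VALUE = 100.0

# Each of 200 hypothetical participants guesses with independent error.
guesses = [TRUE_VALUE + random.gauss(0, 15) for _ in range(200)]

# The "swarm" estimate here is just the mean of all guesses.
swarm_estimate = sum(guesses) / len(guesses)
swarm_error = abs(swarm_estimate - TRUE_VALUE)

# Count how many individuals the collective estimate outperforms.
beaten = sum(1 for g in guesses if abs(g - TRUE_VALUE) > swarm_error)
print(f"swarm error: {swarm_error:.2f}")
print(f"beats {beaten}/{len(guesses)} individual guesses")
```

Run it and the aggregate reliably beats the vast majority of individual guesses, which is the statistical intuition behind amplifying human intelligence, even if a real swarm is far more dynamic than an average.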
Enter Swarm AI, a revolutionary platform that leverages collective human intelligence and amplifies it to achieve astounding predictive results. But the results themselves are not the point of this article. True, Swarm AI has been able to predict the Oscars, the French presidential elections, the U.S. elections, several high-profile sporting and gaming events and much more, and with stunning accuracy. In fact, the company behind Swarm AI – Unanimous AI, founded by prolific inventor, author, speaker and usually the smartest guy in the room, Dr. Louis Rosenberg – keeps achieving accolade after accolade across a wide array of interests.
But what I was really interested in during my conversations with Dr. Rosenberg and his team, including Unanimous AI’s Community and Communications Officer Joe Rosenbaum, was a new area that Swarm AI seems to be ideally suited for: creating a framework of ethical and moral conduct for traditional AI systems to be guided by or programmed with.
Before I give an example of Unanimous AI’s work in that area, let me describe the mechanics of most AI systems today, as relevant to our discussion.
AI is essentially a collection of cognitive resources that work together through an interconnected network to make simple or complex decisions through various types of computation. That’s as bare-bones as you can go in defining AI. In short, one or more data-capture tools gather raw data and deliver it to a processing unit that then “crunches” it into meaningful and actionable intelligence. The output could be as simple as completing a Rubik’s Cube in under 20 seconds, or as complex as landing a fully loaded passenger aircraft in stormy weather on a short runway with poor visibility.
What’s important to understand here is that the processing unit is where the decisions are made. That processing is typically in the form of algorithms or mathematical and logical formulae that guide the decision process between input and output.
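That input-processing-output pipeline can be sketched in a few lines of Python. Everything here is a stand-in: the sensor reading, the braking rule of thumb and the function names are all invented for illustration, and real systems use learned models rather than a handful of hand-written rules.

```python
# A bare-bones sketch of the capture -> process -> act pipeline
# described above.

def capture() -> dict:
    """Stand-in for a data-capture tool (camera, radar, data feed)."""
    return {"obstacle_distance_m": 8.0, "speed_kmh": 45.0}

def process(data: dict) -> str:
    """The 'processing unit': rules that turn raw data into a decision."""
    # Rough stopping-distance rule of thumb, purely illustrative.
    stopping_distance = (data["speed_kmh"] / 10) ** 2 / 2
    if data["obstacle_distance_m"] < stopping_distance:
        return "brake"
    return "maintain_speed"

def act(decision: str) -> None:
    """Output stage: the actionable intelligence."""
    print(f"action: {decision}")

act(process(capture()))  # prints: action: brake
```

Notice that nothing in `process` knows anything about right or wrong; it is pure arithmetic between input and output, which is exactly the gap the rest of this article is about.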
Until now there’s been very little moral or ethical guidance for how a machine accomplishes its processing task; it’s essentially been just a set of rules that entirely depend on the developer working on the system. In fact, you might not even see the need for that kind of guidance in a game of chess, or while doing a Rubik’s Cube, or even while landing an aircraft.
The problem is, AI is advancing so rapidly that it is being used for high-stakes tasks like autonomous weaponry and self-driving car technology. These are areas where ethical questions can often be ignored by current AI systems, potentially with disastrous effects, to say the least.
To draw a worst-case-scenario picture, imagine a fully weaponized ballistic missile rocketing its way to a supposed military facility of a foreign nation. Now, imagine that the “intelligent” missile receives new local intel at the very last minute that the location is actually a public school. Now compound the problem with the fact that this location is in an isolated spot within a highly populated area. And assume that the missile is now left with two options: kill 300 children at the local primary school that was thought to be a military installation, or kill 700 adult civilians at a nearby sports stadium.
That’s the kind of situation where traditional AI would have a figurative meltdown without the presence of a guiding set of rules to help it make these tough decisions. That the missile launch itself might have been a wrong decision is another moral dilemma, but the situation is a perfect example of why we need some sort of foundation that has some consensus among human beings.
That’s where it appears Swarm AI can contribute in a big way.
How Exactly Can Swarm AI Help?
Last weekend, on March 10, 2017, the Unanimous AI team was invited to participate in an exercise conducted by the Massachusetts Institute of Technology – one of the most highly respected educational institutions in the world – as part of the MIT Moral Machine at this year’s SXSW event. The object of the exercise was to see how the “hive mind”, or the collective participants of the swarm, would respond to ethical or moral questions that are extremely hard to answer, but require an answer anyway.
As an example, one of the questions was for Swarm AI’s participants to decide who must die if an autonomous car lost its brakes and had to hit two of several subjects. Some of the subjects were: a boy, a girl, a baby in a pram, a pregnant woman and two male doctors.
So, this question was put to the swarm’s participants – some of the most brilliant minds at MIT – and the results were astonishing, with the hive mind deciding that it would be the boy and the two male doctors that would have to be sacrificed.
You might perceive that decision as a collective bias against human males, or collective compassion towards females and infants or unborn children. Irrespective of that perception, the hard truth is that life is full of these tough decisions. And for an autonomous car as much as a speeding ballistic missile, these choices are bound to present themselves at some point.
Not all of life’s ethical and moral decision-making is that hard, but some can be even more challenging. I believe Swarm AI can help create a “framework of moral reference”, if you will, that can help AI developers program their systems to react morally and ethically in various situations.
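One way such a “framework of moral reference” might be consumed by developers is as a swarm-derived preference table that a system consults when harm is unavoidable. The sketch below is entirely hypothetical: the weights are invented for illustration and are not Unanimous AI’s actual output; the reported exercise only indicated which subjects the swarm chose, not numeric scores.

```python
# Hypothetical sketch: swarm-derived moral preferences encoded as a
# lookup an autonomous system could consult in an unavoidable-harm
# scenario. Higher weight = the swarm judged sacrificing this subject
# more acceptable. All weights are invented for illustration.
SWARM_PREFERENCE = {
    "baby in pram": 0.05,
    "pregnant woman": 0.10,
    "girl": 0.30,
    "boy": 0.55,
    "male doctor": 0.70,
}

def least_protected(candidates: list) -> str:
    """Pick the candidate the swarm ranked most acceptable to sacrifice."""
    return max(candidates, key=lambda c: SWARM_PREFERENCE[c])

print(least_protected(["boy", "girl", "pregnant woman"]))  # prints: boy
```

The point is not that a dictionary solves ethics; it is that swarm sessions could produce a consensus ordering that developers bake into systems, rather than each developer improvising one alone.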
There is, of course, the question of which decision is morally justifiable, and that will crop up in every such situation where the lesser of two evils must be chosen as the course of action. I see that as a subjective and, therefore, fluid element that can only evolve over time.
Regardless of the correctness of each choice, such an AI Code of Conduct is absolutely critical at this point in time, when AI systems are being prolifically released into the wild, as it were. As artificial intelligence systems get smarter and more complex, they will permeate deeper and more thoroughly into the daily lives of humans. I already know several people who find Amazon Alexa an indispensable part of their day, and that’s only going to expand over time.
There are thousands of examples I could give you of smart technology having ingrained itself into human activities, and a thousand more will soon be available to cite. But for the really important decisions – important from an overall humanity perspective – there must be some sort of guiding light or beacon to bring the ships home. If not, the predictions of Elon Musk, Bill Gates and the recently departed Stephen Hawking may well come true, and the human race will forever be lost at sea.