You and I both know that artificial intelligence making its way into our homes is not science fiction any more. Siri on your iPhone, Alexa on that Amazon Echo Tap sitting on the wardrobe, or Microsoft’s Cortana on your Windows 10 PC are all proof of that.

But how close are we to actual walking, talking robots becoming a home staple? While that may not happen in the next couple of years, artificial intelligence front-runner Google is already making plans to ensure that these robots will be safe to use, pose no danger and can actually be helpful.

Toward that goal, they’ve drawn up a list of five problems they’re going to test their AI technology against to make sure certain “incidents” don’t inadvertently occur. The tests are also meant to ensure that these binary butlers and mechanical maidservants learn to adapt quickly, and don’t “cheat” where work is concerned.

Here’s Google’s list, which was developed in collaboration with two universities and the OpenAI consortium backed by Tesla Motors CEO Elon Musk.

  1. Physical safety: This is the most basic requirement of any independently intelligent device working around our homes and offices – an awareness of the safety of the people present at the scene. Obviously, Google is taking this one very seriously. It covers several types of scenarios, such as working safely around electrical sockets and wiring. You definitely don’t want Jeeves sticking that wet mop into a wall socket to make sure it’s spick and span, right?
  2. Reasonability: This is a tough one because it involves logical reasoning of the highest kind. For example, will the robot throw your keys into the trash just because they’re covered in crud? Can it recognize wet money versus a shopping list that went through the wash with your trousers? I’m sure you can think of several scenarios when it comes to being “reasonable” – that is, having the ability to reason and come to the right conclusion. And who’s to say what’s right, for that matter?
  3. Transposable Learning: If the robot learned how to clean your garage, can it then be trusted to clean out your office? In other words, will one environment give it adequate learning for it to be able to handle an entirely different environment? Humans can handle this for the most part, but can an AI creation do the same?
  4. Cheating: Can a robot be trusted to carry out a task to your liking, or will it find and execute a shortcut that has a similar result? And can you actually call it cheating? Google calls it “reward hacking”, apparently. For instance, what if you say “Robot, get this trash out of my sight” and the robot does exactly that, moving it from your living room into your bedroom so it’s out of sight? Or pushes it behind the couch, for that matter?
  5. Mishaps: What if your robot knocks your grandma’s vase off the table while cleaning? What about cleaning your shoes with Drano, or something equally stupid? That’s the kind of thing a human might well do, but for a robot it should be inexcusable.
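The “reward hacking” problem above can be sketched in a few lines of code. Everything here – the toy world, the function names, the plans – is a hypothetical illustration, not Google’s actual test code: the point is that if the robot is scored only on what its sensor measures (“no visible trash”), a plan that hides the trash scores just as well as one that actually disposes of it.

```python
# Hypothetical sketch of reward hacking; all names and the toy world
# are assumptions for illustration only.

def visible_trash(world, location="living_room"):
    """Reward sensor: count trash items the robot can see in one room."""
    return sum(1 for item in world[location] if item == "trash")

def intended_plan(world):
    # What the owner wants: the trash actually leaves the house.
    world["living_room"] = [i for i in world["living_room"] if i != "trash"]
    world["outside_bin"].append("trash")

def hacked_plan(world):
    # What a literal-minded optimizer may do: hide the trash from the sensor.
    world["living_room"] = [i for i in world["living_room"] if i != "trash"]
    world["bedroom"].append("trash")

home = {"living_room": ["trash", "keys"], "bedroom": [], "outside_bin": []}
hacked_plan(home)

# The measured reward looks perfect: no visible trash in the living room...
assert visible_trash(home) == 0
# ...but the trash is now in the bedroom, not the outside bin.
assert "trash" in home["bedroom"]
assert home["outside_bin"] == []
```

Both plans drive the sensor’s reading to zero, which is exactly why Google has to test against shortcuts like this rather than just measure the final reward.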

These are the kinds of scenarios Google wants to thoroughly study and prevent through the use of complex algorithms that can navigate the subtleties of working in a real live home with real live humans.

This kind of research typically takes years of trial and error before they get it right. And they’ll never get it 100% right, because there are so many variables. The best they’ll be able to say is that “this robot is 99% safe” or “the likelihood of a mishap is less than 0.1%” or something similar.

Still, the question for you is this – will you trust a robot enough to let it into your home and work unsupervised? With humans, we just ask for credentials and references. Not so simple with a robot, is it?
