From Road Rage to Robot Rage – Bullying a Self-Driving Car is Easy, LSE Survey Shows

Self-driving cars - practical considerations

New survey results from the London School of Economics show that the majority of the 12,000 survey respondents think that it is the driver who needs to be in control of the car, not the car itself. That is despite the fact that 90% of road accidents are caused by human error. It raises a very important question: are autonomous vehicles a practical application of artificial intelligence, or are they still wishful thinking?

What Does Self-driving Actually Involve?

To understand this, we need to look at driving on roads as a social activity. Traffic conditions are rarely “ideal” enough for a 100% logically driven electronic “brain” to control a car effectively. In fact, the BBC seems to think that this messiness gives human drivers an opportunity to “bully” self-driving cars into submission. As an example, it cites a situation where a truck is unloading on a two-way, two-lane road. A driverless car behind that truck would inevitably wait until opposing traffic had completely cleared before overtaking.

In the real world, a human driver behind that truck might not be that patient, gradually nosing into the opposing lane until someone stopped to let them through. The BBC argues that in such a situation, a self-driving car could be bullied into staying right where it is, even as traffic builds up behind it.

That’s a valid point when it comes to autonomous vehicles. And it puts the spotlight on the fact that driving requires social interaction between drivers – eye contact being an important part of that interaction.
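To make the truck scenario concrete, here is a minimal sketch, in Python, of the kind of gap-acceptance rule a conservative planner might apply before pulling out. Every name and threshold here is invented for illustration; this is not any real vehicle’s planning code.

```python
# Minimal sketch of a conservative gap-acceptance rule for overtaking a
# stopped truck. All names and thresholds are illustrative only.

def should_begin_overtake(oncoming_gap_s: float,
                          maneuver_time_s: float,
                          safety_margin_s: float) -> bool:
    """Pull out only if the time gap to oncoming traffic exceeds the
    time the maneuver takes plus a fixed safety margin."""
    return oncoming_gap_s > maneuver_time_s + safety_margin_s

# If oncoming drivers keep inching forward, the observed gap never
# clears the threshold, the rule never fires, and the car waits
# indefinitely while traffic builds up behind it.
for gap_s in [4.0, 5.5, 3.0, 6.5]:  # observed gaps to oncoming cars, seconds
    print(gap_s, should_begin_overtake(gap_s, maneuver_time_s=6.0,
                                       safety_margin_s=2.0))
```

A human driver, in effect, shrinks that margin and negotiates for space; encoding that assertiveness without compromising safety is precisely the hard part.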

The survey was commissioned by tire-maker Goodyear. As Carlos Cipolitti, general director of the Goodyear Innovation Centre in Luxembourg, puts it: “The road is a social space.”

But that seems to be one of the aspects that autonomous driving technology is trying to address. The BBC also notes that Google, one of the pioneers of autonomous vehicle technology, has filed several patents that would allow self-driving cars to recognize and respond to flashing headlights and to sirens from police cars and ambulances. Google even has patents for tech that can spot and evade reckless driving, or road rage.

Human Interaction with Self-driving Cars

The reliability of such tech, however, is still an open question. Can humans “trick” self-driving cars into giving way? Will autonomous vehicles be able to react in heavy traffic and other situations where common sense is often more important than driving skill? And what about human driver nuances, such as a driver raising a hand to signal that you can go first? Will self-driving cars be able to understand such innately human gestures?

The technology is there, no doubt. From gesture recognition to interpreting headlight flashes, artificial intelligence systems are designed to “deep learn” their way into understanding these subtleties. But who is going to put it all together?
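To give a flavor of what recognizing one such gesture might involve, here is a toy Python sketch that flags a “wave-you-through” hand signal from a sequence of wrist positions, such as a pose estimator might produce. A real system would use a learned model over camera frames; this hand-rolled heuristic and its thresholds are invented purely for illustration.

```python
# Toy "wave-you-through" detector: counts back-and-forth swings in a
# sequence of wrist x-positions sampled over roughly a second.
# A stand-in for a learned model; all thresholds are invented.

def looks_like_wave(wrist_x: list, min_swings: int = 3,
                    min_amplitude: float = 0.05) -> bool:
    swings, direction, anchor = 0, 0, wrist_x[0]
    for x in wrist_x[1:]:
        if x - anchor > min_amplitude and direction != 1:
            swings, direction, anchor = swings + 1, 1, x   # reversal: now rightward
        elif anchor - x > min_amplitude and direction != -1:
            swings, direction, anchor = swings + 1, -1, x  # reversal: now leftward
        elif direction == 1:
            anchor = max(anchor, x)  # extend the current rightward swing
        elif direction == -1:
            anchor = min(anchor, x)  # extend the current leftward swing
    return swings >= min_swings

print(looks_like_wave([0.0, 0.1, 0.0, 0.1, 0.0]))    # True: repeated waving
print(looks_like_wave([0.0, 0.1, 0.2, 0.3]))         # False: a single sweep
print(looks_like_wave([0.0, 0.01, 0.0, 0.01, 0.0]))  # False: hand at rest
```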

As a loose analogy, Microsoft recently claimed that its speech recognition technology is “as good as or better than” a human professional’s.

See also: Microsoft AI Achieves ‘Human-level’ Accuracy in Speech Recognition

From this, we know that AI can match and even surpass human performance at specific tasks. In fact, self-driving cars can greatly reduce the number of road accidents caused by human error. We’re not arguing that point.

Our contention is merely that self-driving cars will not be able to handle every possible driving situation. At least, not within the next few years, and certainly not within the framework and standards that regulators have set for autonomous vehicle technology.

And that sentiment was brought out by the survey as well, with an overwhelming 80% of respondents holding the opinion that a self-driving car should have a steering wheel so a human driver can take over when necessary.

In the future, we may well have truly intelligent cars that can negotiate rush-hour city traffic or bumper-to-bumper highway jams. But for now, even though the technology exists, no company has been able to build an autonomous vehicle with that kind of capability.

Is Google Getting There?

There’s no doubt that this is where Google eventually wants to be. In their self-driving car monthly report for September 2016, Dmitri Dolgov, head of Google’s self-driving technology, says:

“When I first learned to drive, every mile I spent on the road was crucial. It was only through practice that I learned how to move with the flow of traffic, anticipate people’s behavior, and react to unexpected situations. Developing a truly self-driving car is no different. A self-driving car that can get you safely from door to door has to understand the nuances of the road, which only comes with experience.”

Dolgov continues:

“That’s why we now spend the vast majority of our time on complex city streets, rather than simpler environments like highways. It takes much more time to accumulate miles if you’re focused on suburban roads; still, we’re gaining experience at a rapid pace: our first million miles took six years to drive, but our next million took just 16 months. Today, we’re taking a look at how our last million miles has brought us closer to making a truly self-driving car a reality.”

So far, Google has deployed 58 self-driving cars that have together driven more than 2 million miles in autonomous mode. They’re clearly getting better at it with each additional mile driven, and that knowledge is being enriched by the minutiae of information collected by their test drivers:

“With each mile we drive, our test drivers provide feedback on the car’s movements — things like how quickly we accelerate and brake, the distance we keep from other cars and pedestrians, or the speed and angle we turn. With each piece of feedback, our engineers tweak our software and calibrate our driving behavior, making our self-driving car feel more natural on the road.”
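That quote describes a classic tune-from-feedback loop. Here is a minimal sketch of what one round of it might look like; the parameter, rating scale, step size, and update rule below are all invented for illustration, since the report doesn’t describe the actual mechanism.

```python
# Illustrative tune-from-feedback loop. The parameter, rating scale,
# step size, and update rule are invented for illustration only.

def calibrate(param: float, ratings: list, step: float = 0.1) -> float:
    """Nudge a driving parameter (say, braking firmness) toward neutral
    test-driver ratings: +1 = too harsh, 0 = feels natural, -1 = too gentle."""
    for rating in ratings:
        param -= step * rating  # harsh feedback softens it, and vice versa
    return param

braking_decel = 3.5  # m/s^2, an assumed starting point
print(round(calibrate(braking_decel, ratings=[+1, +1, 0, -1, +1]), 2))  # 3.3
```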

So now we’re left with the question: “How long before that happens?”

Most human drivers can master the basics of driving in a few weeks, but it takes years to become an expert with a spotless safety record. From that perspective, it’s easy to see why autonomous vehicle technology presents such a massive challenge to even the best AI systems in the world.

But there will come a point where self-driving technology from one of the major players becomes “safe enough” to have a better safety record than the “average” human driver. That will be the day this goes mainstream. It could be two years down the road or five, but unless self-driving technology is properly regulated and standards of “acceptability” are set, it’s not about to happen.
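What would “a better safety record than the average human driver” actually mean? As back-of-envelope arithmetic, using deliberately round, assumed numbers rather than real statistics:

```python
# Back-of-envelope "safe enough" test. Both figures below are assumed
# round numbers for illustration, not measured statistics.

human_incidents_per_million_miles = 4.0        # assumed human baseline
fleet_incidents, fleet_million_miles = 6, 2.0  # hypothetical AV fleet record

fleet_rate = fleet_incidents / fleet_million_miles     # 3.0 per million miles
print(fleet_rate < human_incidents_per_million_miles)  # True on these numbers
```

Even on favorable numbers, a couple of million miles is a small sample, and demonstrating such a comparison with statistical confidence takes vastly more driving, which is part of why regulation and large-scale data collection matter here.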

The current standards for autonomous vehicle technology are outdated and superficial. We need a new set of regulations for such an innovative technology. And unless regulators move to create such standards, self-driving cars will inevitably languish within the R&D departments of companies like Google.

And it is this situation that presumably prompted Tesla CEO Elon Musk to push self-driving hardware into all future Tesla cars, in order to provide regulators with the data they need to make that decision.

Related: Tesla to Put Self-Driving Capability on All Its New Cars

Musk hopes that once regulators have adequate data about various driving scenarios, they will be in a better position to smooth the transition of self-driving cars from science fiction to science fact.

One thing is certain: self-driving technology is an inevitable part of our future. The only uncertainty now is around how quickly that transition will start to take place.

Thanks for reading our work! Please bookmark 1redDrop.com to keep tabs on the hottest, most happening tech and business news from around the world.