Artificial Intelligence is going to be the biggest buzz-phrase of 2019. Machine learning this, deep neural networks that, voice shopping here, self-driving cars there. We’re neck-deep in AI, and there’s no denying it. But how far has it ‘invaded’ our private space, and are we comfortable with this?
It’s not too early to ask that question. In fact, it might be a little late considering we don’t yet have a global norm or a framework for AI development. We’ve got policies and protocols for cyber security and for national security, but we don’t have an AI bible.
The bigger issue is that there’s a problem on the ground that no new legislation can eliminate: the fact that AI is already an indispensable part of our lives. It is privy to our deepest secrets. Alexa KNOWS what song I’m in the mood for, OMG! My Google Home just told me my fly was open. And I swear I heard a muffled snicker when I looked and it wasn’t. And my smart security system knows when I’m usually home and when I’m out.
AI knows you at a deeper level than you’d be comfortable admitting. Simple Sherlock-style deduction. She frequently searches for girls’ shoes and she’s got a credit card, so she must be over 18 with a school-going daughter; she’s probably looking for cute handbags and those funky hairbands, too, since 68.8% of all women matching her profile bought them. Recommendation engines powered by machine learning. Nothing creepy there, right?
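For the curious, the deduction chain above can be sketched as a few lines of code. This is purely illustrative: the profile fields, the rules, and the co-purchase logic are all hypothetical stand-ins for what a real machine-learning recommendation engine would learn from millions of shoppers.

```python
# Illustrative sketch of profile-based inference, as described above.
# Every field name, rule, and suggestion here is hypothetical.

def recommend(profile):
    """Return product suggestions inferred from a shopper's profile."""
    suggestions = []
    # Deduction: has a credit card -> assumed adult account holder
    adult = profile.get("has_credit_card", False)
    # Deduction: frequent searches for girls' shoes -> assumed to shop for a child
    shops_for_child = "girls shoes" in profile.get("frequent_searches", [])
    if adult and shops_for_child:
        # A real engine would use learned co-purchase statistics here;
        # we hard-code the outcome for illustration.
        suggestions += ["cute handbags", "funky hairbands"]
    return suggestions

profile = {
    "has_credit_card": True,
    "frequent_searches": ["girls shoes", "school bags"],
}
print(recommend(profile))  # ['cute handbags', 'funky hairbands']
```

A production system replaces those hand-written rules with models trained on purchase histories, which is exactly why the data matters so much.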
It’s how Amazon and every online retail store worth its salt works, and that AI-driven experience comes to us at the cost of our privacy.
Google is probably the biggest company ever built on the fruits of voyeurism. The data they collected over the years was like honey to advertisers, and they came in the millions, putting Google in the awkwardly comfortable position of being considered nothing more than a fancy digital ad agency.
Facebook? Well, there’s not much to say other than that Facebook is great at playing the errant but irresistible brat-pack heir to the social empire, one who periodically and very vocally atones for his sins but continues his revenue-growth march unabated. How do they target their audience so well that the advertiser bees are even more attracted to this pot of honey?
In one word: Data
The shakedown of the last few years has resulted in the implementation of GDPR and the adoption of GDPR-like sentiments around the world. It’s not just indiscriminate data sharing they object to. It’s what’s being done with that data and how that affects the privacy of the original owner of that data.
Legally, it’s a nightmare. By late November 2018 the EU had already received numerous complaints from consumer groups against Google for “deceptive practices” when it comes to its location-tracking feature. Google could once again be facing a huge fine for this violation.
But we’re not here to comment on government vs. corporate or right vs. wrong. We’re here to ask one question:
Are you comfortable with the level of digital privacy you have when engaging with the practical side of AI? It’s very little to begin with.
The problem with artificial intelligence is that it thrives on data. No, it actually survives on data. Nothing else matters other than the data and the set of instructions or the algorithm it is modeled after. Whether it’s a simple reactive machine or one with limited memory, data is what it processes. Can’t have a number-cruncher if you don’t have the numbers.
That means we need to give express permission for Alexa or Google Assistant or Siri to enter our lives. Once that consent is there, AI can do what it does best. But the price we pay is our data. The moment we make that barter, there’s no such thing as privacy.
Of course, you can split hairs and say “Oh, but there’s a difference between a company using the data for itself and one that shares it.”
No, there isn’t, unless you think corporations are hacker-proof. It doesn’t matter where the data resides. Countries make a fuss about data sovereignty but, in reality, localizing data is far more hazardous than distributing it across multiple locations. Does “fish in a barrel” ring a bell?
The point I’m trying to make is that your decision to consent, or not, to a company using your data rests on the misconception that controlled data is safe and that your privacy will be preserved if you say no.
In reality, though, the average consumer is safe from being targeted maliciously. It’s the commercial side you have to worry about. All these recommendation engines and price-matching and deep online discounts are great for the economy. Not so much for your wallet.
The real threat to privacy is our need for more. Now that we’re hooked on what this data-and-AI-driven life feels like, we’re never going back. Eventually, we’re going to put our physical safety in the hands of autonomous machines and trust that they’ll play by the book. Never mind that the book hasn’t even been written yet.
Are you comfortable with that? It’s inevitable that this will happen. We’ve already given up our data to virtual assistants, smart gadgets, autonomous machine learning bots and about a thousand different products already. The top AI companies of the world account for the bulk of practical AI applications, and we’re waiting open-armed and data-ready to feed them what they need so they’ll give us what we need.
“Alexa, buy milk” sounds a lot less appealing when you realize that Alexa can access your Amazon account, your credit card details, your CVV number, your preference of fat content, the brand you typically buy, how much milk you’ve ever bought online and everything else needed to make that transaction.
Are you comfortable with that? If you are, then the path of artificial intelligence technology holds no horrors for you. If not, you’d better expect a Big Brother nightmare like nothing you’ve seen before.