Can Artificial Intelligence Make Cyber Security Attacks Even More Terrifying?

Poor cyber security practices prevalent globally

So far we’ve only seen the good side of artificial intelligence via its applications in the domains of healthcare analytics, autonomous driving technology, diagnostics and therapy assistance, smart home control and so on. But a new, evil side of artificial intelligence may soon rear its ugly head.

Cyber security is already a huge market, racing past the $100 billion-a-year mark, and according to Cybersecurity Ventures, cumulative spending in this market between 2017 and 2021 is expected to exceed $1 trillion.

Security is obviously a major consideration in our increasingly “connected” modern world. With much of the world’s software now served from the cloud over the internet, application security is taking the lead, and is expected to account for a significant portion of the estimated $200 billion market that cyber security will become by 2021.

And this growth is potentially a market opportunity for cyber criminals, hackers and tech ‘no-goodniks’ in general. The prospect of artificial intelligence attacking our security systems is a scary one, reminiscent of science fiction movies, which have long portrayed artificial intelligence as the ‘bad guy’ that eventually takes over the human race.

That might be stretching things, but any focused use of artificial intelligence for criminal ends could be enormously dangerous. So far, human hackers have had to rely on laborious processes and teams of people to do any real damage. With the exponential capability boost that artificial intelligence brings to the table, hackers will be able to work much faster and reach their goals that much sooner.

How Bad Can It Be?

That’s an interesting question, and an analogy given by Dave Palmer, director of technology at the $500 million British cyber security firm Darktrace, helps us understand the potential dangers of AI in cybercrime.

“Nadia’s got something on her laptop that can read all her emails, reads her messages, can read her calendar, and then sends people messages in the same communication style she uses with them. So Nadia’s always very rude to me so she’ll send jokey messages … but to you she’ll be extremely polite. So you would receive, maybe, a map of this location of where to meet from Nadia — because it can see in her calendar that we’re due to meet. And you’d open it, because it’d be relevant, it’d be contextual — but that map would have a payload attached to it.”

That “payload” is what will help spread the cyber attack wider and wider. Think of it as a sophisticated version of a “trust attack” where a genuine-sounding message is sent from someone you trust, and a link is embedded in the message that will download a virus to your system if clicked on.

If you think that might be just a random type of cyber attack, ask the FBI, which warned in April this year of a new type of attack called “CEO fraud”: essentially “e-mail scams in which the attacker spoofs a message from the boss and tricks someone at the organization into wiring funds to the fraudsters.” More importantly, the FBI estimated that companies lost over $2.3 billion to such scams over the preceding three years.
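The spoofing pattern the FBI describes can, at its simplest, be caught by checking whether who a message claims to be from matches where it actually originates. Here is a minimal, hypothetical sketch in Python; the company domain, executive names and rules are invented for illustration and are not drawn from any real filtering product:

```python
# Illustrative heuristic for flagging possible "CEO fraud" (BEC) emails.
# The domain, names, and rules below are hypothetical examples only.
from email.utils import parseaddr

COMPANY_DOMAIN = "example.com"                   # hypothetical company domain
EXECUTIVE_NAMES = {"Jane Smith", "Ravi Patel"}   # hypothetical executives

def looks_like_ceo_fraud(from_header, reply_to_header=None):
    """Flag a message if the display name claims to be an executive but the
    address is outside the company domain, or if Reply-To silently diverges
    from the From domain -- two common traits of BEC scams."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()

    # Trait 1: executive display name, but an external sending domain.
    if name in EXECUTIVE_NAMES and domain != COMPANY_DOMAIN:
        return True

    # Trait 2: replies are routed to a different domain than the sender's.
    if reply_to_header:
        _, reply_addr = parseaddr(reply_to_header)
        reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
        if reply_domain != domain:
            return True

    return False
```

A real mail filter would combine many more signals (SPF/DKIM results, lookalike-domain distance, message history), but even this two-rule sketch shows why context-aware, AI-generated messages that come from a genuinely compromised account are so much harder to catch: they pass checks like these.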

But the FBI’s April warning might actually be lowballing the figure. The attacks, known as Business Email Compromise (BEC) scams, could be much worse. In June 2016, two months after the first report, the FBI’s Internet Crime Complaint Center (IC3) released a Public Service Announcement on its website. Here’s an excerpt:

“A total of 22,143 victims in the United States and other countries worldwide have reported BEC scams to date, to a total combined exposed dollar loss of $3,086,250,090, IC3 says. Between October 2013 and May 2016, IC3 received 15,668 complaints from domestic and international victims, for a combined exposed dollar loss of $1,053,849,635. Of these, 14,032 were US victims, who reported a total exposed dollar loss of $960,708,616.”

But the risk is certainly not limited to large companies alone. It stands to reason that an artificial intelligence program capable of sending personalized scam messages will also be capable of affecting individuals.

October is “National Cyber Security Awareness Month”, and has been since its inception in 2004. But ten years after it was conceived, in 2014, the President of the United States gave it fresh relevance by officially proclaiming it:

“NOW, THEREFORE, I, BARACK OBAMA, President of the United States of America, by virtue of the authority vested in me by the Constitution and the laws of the United States, do hereby proclaim October 2014 as National Cybersecurity Awareness Month.  I call upon the people of the United States to recognize the importance of cybersecurity and to observe this month with activities, events, and training that will enhance our national security and resilience.”

Two years on, the problem has only gotten worse. And now, with artificial intelligence creeping into the equation, the implications for cyber security – our online security – are being highlighted like never before, for individuals as much as for organizations.
