Research Shows How Hackers Could Use AI for Malicious Purposes

Artificial intelligence applications have surged in recent years. Major companies and developers alike are investing in AI projects, while consumers around the globe have begun to embrace AI-powered semi-autonomous cars. With the industry set to grow even further over the next few years, many professionals have voiced concern over its lack of focus on cybersecurity. Could AI be the next big game-changer for hackers to exploit?

As AI Applications Spread, Research Warns of Malicious Uses

The rise of artificial intelligence is no surprise to people within the industry. Its applications are virtually endless, and once its potential is properly harnessed, the market will grow further still. Companies of all kinds are turning to AI, with 72% of business leaders saying that using AI is a business advantage – which may explain why the market is projected to exceed $100 billion in value by 2025. In fact, 42% of global CEOs agree – and another 21% “strongly agree” – that AI will have a more profound impact on the world than the advent of the Web.

Infographic: A.I. Revolution: What Do Business Leaders Think? | Statista

Yet a recent study carried out by researchers from universities including Cambridge, Yale, and Oxford concluded that AI can also be used to launch malicious attacks and aid cybercriminals. Its findings underscore how pivotal it is for the industry to put a stronger focus on cybersecurity as it moves forward. Implementing tried and trusted solutions can be part of that strategy: a web application firewall (WAF) can protect AI-powered web applications from major threats like SQL injection, cross-site scripting, and other OWASP Top 10 risks, while data encryption and raising cybersecurity awareness among users are also instrumental. It is also conceivable that cybersecurity mechanisms will evolve and adapt alongside AI applications themselves. This could lead to cybersecurity tools that are unique to AI – and to solutions that harness artificial intelligence to combat AI-powered attacks.
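To make the WAF idea concrete, here is a minimal, hypothetical sketch in Python of the kind of signature matching such a firewall performs. The patterns and function names are illustrative assumptions rather than any real product's rules; a production deployment would rely on a mature engine such as ModSecurity with the OWASP Core Rule Set.

```python
import re

# Naive attack signatures for illustration only – real WAF rule sets
# (e.g., the OWASP Core Rule Set) are far more extensive and robust.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # SQL injection: UNION SELECT probe
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),     # SQL injection: tautology (OR 1=1)
    re.compile(r"(?i)<\s*script\b"),           # cross-site scripting: <script> tag
]

def is_suspicious(value: str) -> bool:
    """Return True if a single request value matches a known attack signature."""
    return any(pattern.search(value) for pattern in SUSPICIOUS_PATTERNS)

def allow_request(params: dict) -> bool:
    """Allow the request only if none of its parameters look malicious."""
    return not any(is_suspicious(value) for value in params.values())

if __name__ == "__main__":
    print(allow_request({"q": "pictures of autonomous cars"}))  # True: passes
    print(allow_request({"q": "' OR 1=1 --"}))                  # False: blocked
    print(allow_request({"bio": "<script>alert(1)</script>"}))  # False: blocked
```

Signature matching like this is deliberately simplistic – real rule sets also normalize encodings, inspect headers and request bodies, and score anomalies rather than relying on a handful of regular expressions.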

Cybersecurity Should Become Top Concern for AI Industry

As the study conducted by experts from top universities reveals, attackers could hack into AI-driven devices such as autonomous cars or drones and turn them into weapons – for example, by feeding a car adversarial inputs that cause it to misread road signs and crash, or by using drones for coordinated attacks or unauthorized surveillance. On a more sophisticated level, AI could also automate the menial yet necessary tasks of an attack, allowing criminals to strike more effectively and at a far wider scale.
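To illustrate the sign-misreading scenario, the sketch below is a toy Python rendition of the fast gradient sign method (FGSM), the classic technique behind adversarial examples. The linear “model” and its numbers are assumptions made purely for demonstration – a real perception system is vastly more complex, but the principle of nudging an input along the gradient to flip a decision is the same.

```python
import numpy as np

# Toy stand-in for a vision model: a linear classifier whose positive
# score means "stop sign". Weights and input are random placeholders.
rng = np.random.default_rng(seed=42)
w = rng.normal(size=16)   # hypothetical model weights
x = rng.normal(size=16)   # hypothetical image features

def score(features):
    """Classifier confidence; positive means the sign is recognized."""
    return float(w @ features)

# Fast Gradient Sign Method: perturb the input by a small step epsilon
# in the direction that most reduces the score. For this linear model,
# the gradient of the score with respect to the input is simply w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(f"original score:    {score(x):+.3f}")
print(f"adversarial score: {score(x_adv):+.3f}")  # driven toward misclassification
```

In practice such perturbations can be made small enough to escape human notice – researchers have shown that a few carefully placed stickers can make a stop sign read as a speed limit sign.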

For instance, a phishing campaign could rely on artificial intelligence to personalize each message to its target on a level we haven’t seen before. Hackers could even use AI to create political turmoil, sowing instability to further their own agendas. Some AI applications already allow users to create fake yet highly realistic videos, known as “deepfakes”. These could be used to target politicians or public figures in smear and extortion campaigns, or to spread inflammatory and inaccurate information. As the researchers behind the report note, these scenarios are hypothetical for now – and we may yet see malicious AI applications no one expected.

This is just the latest reminder that technological development can cut both ways – and that consumers, businesses, and governments must always stay one step ahead of cybercrime.