After spending decades protecting companies against cyberattacks, Jörg Asma, Partner and Cyber Security Leader at PwC Germany, is an expert on information security. Here, he discusses why artificial intelligence is making life easier, but also exposing us all to as yet unimagined risks.
There is no denying that artificial intelligence (AI) can fundamentally improve people’s everyday lives. Any program with the potential to automate tasks will liberate people from time-consuming duties so that they can focus on the work that really matters. And many enterprises have already recognized that AI can also give a company a commercial advantage; almost two-thirds of German businesses are already working with it. AI can automate and streamline business processes, accelerate the development of new products and services, and manage supply chains and ordering systems. But dystopian science-fiction films and novels get one thing right: the technology can be used to do bad things, too.
AI poses complex risks to cybersecurity. For a start, hackers and unscrupulous competitors can exploit the vulnerabilities of a company's AI system. Almost every industry, from finance and automotive to IT, has adopted AI technology, and each new deployment widens the attack surface. Since a company's AI makes automated decisions based on machine learning, complicated algorithms and huge volumes of data, the program's behavior can be completely opaque to the humans overseeing it, which makes attacks difficult to detect.
How AI will increase the threat of hacking exponentially
A more dangerous threat, however, is the way AI can be deployed to do the attacking. Cybersecurity is still largely conducted by humans, who can recognize around 17 or 18 different kinds of attack. But human detection is easily outmaneuvered by AI, which can launch up to 30 different types of cyberattack in quick succession. In addition, AI-driven malware can 'learn' which methods are most successful and mimic the network it has infiltrated, or even human behavior, to avoid detection.
The scale of AI attacks is constantly growing. Only five years ago, cybercrime centered on data espionage and putting pressure on individuals. But the stakes have since been raised to full-scale data and equipment destruction. Attackers can take an entire industry offline, damage robots and facilities, and destroy the hardware used to manage the production process. A company's security system is only as robust as its weakest link, and AI attacks often target critical infrastructure, such as supply chains.
Why advances in AI will require companies to spend more on defense against cyberattacks
This is what makes AI such a powerful weapon for hackers, activists, criminals and unscrupulous businesses looking for an unfair advantage over their competitors. It is also big business for malware and ransomware developers, and a cost-effective weapon for a small country with a point to prove. When a fighter jet can cost up to $90 million (€80 million), a destabilizing, untraceable piece of malware for $10 million (€8.9 million) represents a great return on investment.
We can be sure that AI-based attacks will increase dramatically, and that companies will have to respond to them. It is an expensive uphill battle, but public authorities and private businesses are now developing AI-driven security tools to defend themselves. For example, AI can be deployed to monitor IT systems and detect anomalies, distinguishing whether irregular events were caused by the company's own automated systems or by a cyberattack. And where technical defense systems can sometimes trigger 'false positives' that preoccupy a company's whole cyber-response team, there are AI tools specifically trained to detect false positives and prevent a company's internal system from sounding the alarm unnecessarily.
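As a rough illustration of what such anomaly monitoring can look like in practice, the sketch below trains an unsupervised model on simulated 'normal' traffic and flags outliers. It is a minimal example, assuming scikit-learn's IsolationForest and invented feature names (requests per minute, payload size); real deployments work on far richer telemetry and must still decide whether an anomaly stems from the company's own automation or from an attacker.

```python
# Minimal, hypothetical sketch of AI-assisted anomaly detection.
# Features and values are invented for illustration; they do not
# describe any particular vendor's security tool.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Simulated baseline of normal network events:
# columns = [requests per minute, average payload size in KB]
normal_traffic = rng.normal(loc=[100.0, 4.0], scale=[10.0, 0.5], size=(1000, 2))

# Train an unsupervised model on what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# New observations: two routine events and one suspicious burst.
new_events = np.array([
    [105.0, 4.2],   # ordinary load
    [98.0, 3.9],    # ordinary load
    [900.0, 60.0],  # possible attack or data exfiltration
])

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - escalate for triage" if label == -1 else "normal"
    print(f"requests/min={event[0]:.0f}, payload={event[1]:.1f} KB -> {status}")
```

Note that the contamination parameter is a direct lever on the false-positive trade-off described above: set it too high and the response team drowns in alarms; too low and real attacks slip through.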
Why businesses need to tread carefully when incorporating AI into their systems
It is important to weigh up the risks and advantages of introducing autonomous systems to your business. From a commercial point of view, implementing an AI system that classifies electronic data is invaluable. 'Learning' AI technology can sift, store and delete huge volumes of text according to its meaning, at a speed no human can match. However, self-learning systems can develop behaviors we did not predict or intend. Take an innocuous example, such as using an AI program to sort emails. Over time, you will train it to distinguish between the messages you want to archive and the ones you want to delete. But if you delete a message unintentionally, your AI system learns from you and might destroy all future emails from the same sender, before you have even seen them. Just as machine learning can be turned against you by an attacker, an autonomous system trained on flawed lessons can cause unintended damage of its own.
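The email example can be made concrete with a toy classifier. The sketch below is purely illustrative, assuming scikit-learn's naive Bayes model and invented sender addresses as a stand-in for a real mail filter: because most of the training mail was deleted anyway, a single accidental deletion is enough to tip the model toward discarding an important sender's future messages unseen.

```python
# Toy illustration only: how one mislabeled training example can teach
# a naive model to discard an important sender's mail. The addresses
# are invented; no real mail filter works this simply.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labels the user generated by archiving or deleting past messages.
# The final entry is the accidental deletion of the boss's email.
senders = [
    "newsletter@shop.example",   # deleted
    "promo@ads.example",         # deleted
    "boss@company.example",      # archived
    "promo@ads.example",         # deleted
    "boss@company.example",      # deleted by mistake!
]
labels = ["delete", "delete", "archive", "delete", "delete"]

# Split each address into local part and domain as features.
vectorizer = CountVectorizer(token_pattern=r"[^@\s]+")
X = vectorizer.fit_transform(senders)
model = MultinomialNB().fit(X, labels)

# A new message from the boss now risks being destroyed unseen:
new_mail = vectorizer.transform(["boss@company.example"])
print(model.predict(new_mail))        # -> ['delete']
print(model.predict_proba(new_mail))  # probability mass leans toward 'delete'
```

On this tiny dataset the model deletes the boss's next message with roughly 57 percent confidence, precisely because deletions dominate the training history. A human reviewing the same five messages would never draw that conclusion, which is why oversight matters.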
AI poses both a huge opportunity for an organization and a serious threat to its security. Do its risks outweigh the benefits? Not for most companies, which can rely on AI to manage their business processes and to bolster their cybersecurity systems. The best course of action for a company is to become familiar with the AI tools that will improve its business and establish ways to use them responsibly. But no business should ever leave its learning machines to their own devices. For a successful partnership, humans must have oversight of their AI: we need to be controlling the machines, not the other way around.
Jörg Asma is a Partner at PwC Germany and its Cyber Security Leader.
This article first appeared in the May 2019 edition of WERTE, the client magazine of Deutsche Bank Wealth Management.