Artificial intelligence: a risk for cybersecurity?
Artificial intelligence (AI) can be difficult to define and to understand. However, it has become a real strategic issue for organisations and states alike. Many companies use machine learning on a daily basis to speed up and optimise their work processes. In the United States, predictive AI algorithms are already used by the police, and reports on the possible introduction of predictive policing are underway in France.
Yet, because AI is difficult to define, the potential flaws and vulnerabilities it contains are also difficult to grasp. If the power and functionality of AI are being exploited in every sector, what about cybercrime?
Find out in this article whether artificial intelligence is a danger to cybersecurity.
Cyber attacks augmented with AI
4 times as many cyber attacks in 2020
According to the ANSSI (French National Authority for Information Systems Security), the number of cyber attacks in France increased fourfold in the year 2020.
This increase is mainly due to the lack of awareness of cybersecurity, the shortage of experts in this field and the digitalisation of working methods (particularly teleworking), which have increased the potential for security breaches tenfold.
But AI has also boosted cyber attacks in two ways: the number of attacks using AI techniques and algorithms has increased, and the attacks themselves have become far more sophisticated.
These AI-powered attacks, according to Avast's CEO, started to appear in 2019 and have since been massively adopted by hackers, as the technologies are accessible and available. Moreover, these cyber attacks are not limited to simple viruses, but also include social engineering and phishing.
How can cybercriminals use AI?
Data poisoning is probably the best-known and simplest attack on AI algorithms. Cyber attackers can, among other things:
● manipulate the data sets used to train the AI;
● make small, inconspicuous changes to the model's parameters;
● develop scenarios carefully designed to avoid arousing suspicion while gradually steering the AI in the desired direction.
If cybercriminals do not have access to the training data, they can also resort to evasion. This technique consists of perturbing the application's input data in order to obtain a decision different from the one that would normally be expected. An attacker can, for example, alter the data points of a face to fool a facial recognition application.
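To make the poisoning idea concrete, here is a deliberately tiny, hypothetical sketch (not taken from any real attack): a toy nearest-centroid classifier separates "benign" from "malicious" samples, and flipping the labels of a few injected training points drags the benign centroid far enough that a malicious input slips through.

```python
# Toy illustration of label-flipping data poisoning against a minimal
# nearest-centroid classifier. All names and numbers are hypothetical.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) pairs, labels 'benign'/'malicious'."""
    benign = [x for x, y in samples if y == "benign"]
    malicious = [x for x, y in samples if y == "malicious"]
    return centroid(benign), centroid(malicious)

def classify(x, model):
    benign_c, malicious_c = model
    return "benign" if abs(x - benign_c) <= abs(x - malicious_c) else "malicious"

# Clean training set: low scores are benign, high scores are malicious.
clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "malicious"), (9.0, "malicious")]
model = train(clean)
print(classify(7.0, model))  # "malicious" — correctly flagged

# Poisoning: the attacker injects high-score samples mislabelled as benign,
# dragging the benign centroid upward so their input is misclassified.
poisoned = clean + [(8.5, "benign"), (9.5, "benign"), (10.0, "benign")]
model_p = train(poisoned)
print(classify(7.0, model_p))  # "benign" — the attack succeeded
```

Real poisoning attacks work on high-dimensional models rather than one number per sample, but the mechanism is the same: a small fraction of carefully crafted training data shifts the decision boundary.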
System and network attacks
Cyber attackers can also use artificial intelligence technologies to create intelligent malware. These are able to propagate autonomously on a network or system until they reach the target defined by the hackers.
In 2021, cybersecurity researchers also proved that it is possible to embed malicious code in the neural network of an artificial intelligence. The team of specialists embedded 36.9 MB of malware in a 178 MB AlexNet model (an image-recognition AI). The attack cost only about 1% of the model's accuracy, and the intrusion was not detected by antivirus systems.
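The underlying idea is steganographic: the low-order bytes of a neural network's floating-point weights can be overwritten with arbitrary data while barely changing each value. The sketch below illustrates that principle on three hand-picked numbers; it is an assumption-laden simplification, not the researchers' actual tool.

```python
# Illustrative sketch: hiding bytes in the least-significant mantissa byte
# of 32-bit float "weights". Values and payload are hypothetical.
import struct

def embed_byte(weight, payload_byte):
    """Overwrite the least-significant mantissa byte of a float32 weight."""
    packed = bytearray(struct.pack("<f", weight))
    packed[0] = payload_byte            # little-endian: byte 0 is the lowest
    return struct.unpack("<f", bytes(packed))[0]

def extract_byte(weight):
    return struct.pack("<f", weight)[0]

weights = [0.3127, -1.4142, 0.9981]     # stand-ins for model parameters
payload = b"abc"                        # stand-in for malicious bytes
stego = [embed_byte(w, b) for w, b in zip(weights, payload)]

print(bytes(extract_byte(w) for w in stego))       # b'abc' recovered intact
print(max(abs(a - b) for a, b in zip(weights, stego)))  # tiny perturbation
```

Because each weight shifts by only a fraction of a percent, the model's overall accuracy degrades very little, which is consistent with the roughly 1% loss the researchers reported.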
More dangerous ransomware and phishing
AI also allows hackers to automatically generate fake content ("deepfakes"). These can then lend credibility to phishing attempts and other social engineering methods, for example:
● fake videos: tampered videos are almost undetectable. They allow attackers to impersonate a person in order to request access to secure data, for example;
● personalised and automated messages: such content carries more credible and specific information to improve phishing emails;
● personal data: AI can be used to create or generate fake evidence and then send automated threatening and blackmailing messages;
● fake articles: hackers can use machine learning to make the AI write propaganda articles based on the content and data they provide.
However, while AI is full of resources and features that can be exploited by cybercriminals, these same tools can be used by cyber security experts.
AI being used for cybersecurity
Artificial intelligence has immense potential in the field of cyber security. Indeed, if properly exploited, AI systems can automatically prevent threats, protect sensitive data and identify new types of malware. Here are some major applications of AI in cybersecurity.
Modelling user behaviour
Companies use AI to monitor and model the behaviour of system users. Their objective is to monitor the interactions between the information system (IS) and its users, and to immediately detect account takeovers or possible information theft by malicious employees.
Machine learning allows the AI to learn users' daily activities and therefore to flag unusual behaviour as anomalies. AI systems can then be configured to immediately lock suspicious user accounts or to instantly alert system administrators.
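The core of such monitoring can be sketched very simply: learn a per-user baseline, then flag events that deviate too far from it. The example below uses login hours and a z-score threshold; the class name, feature and threshold are hypothetical choices, not a description of any real product.

```python
# Minimal sketch of user-behaviour anomaly detection, assuming a single
# feature (login hour) and a z-score rule. Illustrative only.
import statistics

class BehaviourMonitor:
    def __init__(self, threshold=3.0):
        self.baseline = {}          # user -> list of observed login hours
        self.threshold = threshold  # z-score above which we raise an alert

    def learn(self, user, login_hour):
        self.baseline.setdefault(user, []).append(login_hour)

    def is_anomalous(self, user, login_hour):
        history = self.baseline.get(user, [])
        if len(history) < 2:
            return False            # not enough data to judge yet
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9
        return abs(login_hour - mean) / stdev > self.threshold

monitor = BehaviourMonitor()
for hour in [9, 9, 10, 8, 9, 10, 9]:      # habitual office-hours logins
    monitor.learn("alice", hour)

print(monitor.is_anomalous("alice", 9))   # False — usual time
print(monitor.is_anomalous("alice", 3))   # True — 3 a.m. login stands out
```

A production system would track many features at once (location, device, data volumes) and feed a richer model, but the lock-or-alert decision hangs on the same "deviation from the learned baseline" logic.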
The integration of AI into antivirus software
AI-enhanced antivirus software is able to identify anomalies on a network or system by detecting programs that exhibit unusual behaviour. These "AI antiviruses" also use machine learning techniques to understand how "legitimate" programs interact with an IS.
As soon as a malicious program is introduced into a network, the antivirus can immediately neutralise it by preventing it from accessing resources and data. These antivirus programs no longer rely solely on a database of signatures, but can detect new threats themselves.
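One simple way to picture behaviour-based detection: record which action sequences legitimate programs perform, then block a process whose actions fall outside that learned profile. The traces, action names and tolerance below are all invented for illustration.

```python
# Hedged sketch of behaviour-based malware detection: learn the action
# bigrams of legitimate programs, block anything that strays from them.
LEGITIMATE_TRACES = [
    ["open_file", "read_file", "close_file"],
    ["open_file", "write_file", "close_file"],
    ["connect_net", "send_data", "close_net"],
]

# "Training": collect every consecutive action pair seen in clean traces.
known_bigrams = set()
for trace in LEGITIMATE_TRACES:
    known_bigrams.update(zip(trace, trace[1:]))

def inspect(trace, tolerance=0):
    """Block the program if it performs more unseen action pairs than allowed."""
    unseen = [pair for pair in zip(trace, trace[1:]) if pair not in known_bigrams]
    return "block" if len(unseen) > tolerance else "allow"

print(inspect(["open_file", "read_file", "close_file"]))                 # allow
print(inspect(["open_file", "read_file", "encrypt_file", "send_data"]))  # block
```

The ransomware-like trace is blocked because "read then encrypt then exfiltrate" never appears among legitimate behaviours — no signature database is consulted, which is why this approach can catch threats it has never seen before.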
Automated analysis tools
Automated analysis of network or system data allows for continuous monitoring with rapid detection of intrusion attempts.
This continuous analysis is a major reason for the use of AI in corporate cybersecurity.
According to the Capgemini research institute, 69% of companies believe that AI is vital for security, as the increasing number of cyber attacks makes traditional cybersecurity methods ineffective. Security teams are overwhelmed by the volume of threats, and the shortage of cybersecurity professionals makes companies more vulnerable.
Intelligent phishing detection tools
E-mail is the preferred method of communication for cybercriminals to send phishing attempts. A study by Symantec indicates that 54.6% of e-mails received are spam and may contain malicious attachments or links.
AI-based anti-phishing tools use machine learning and anomaly-detection techniques to identify suspicious activity across sender features. They can also analyse attachments, links, message bodies and more in greater depth.
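Before any heavyweight model runs, such tools often score simple sender and content features. The sketch below shows that scoring idea with an entirely hypothetical feature set and threshold; real filters combine far more signals and learn their weights from data.

```python
# Illustrative phishing scorer: hand-picked features and weights,
# not any vendor's actual model.
import re

SUSPICIOUS_WORDS = {"urgent", "verify", "password", "suspended", "click"}
TRUSTED_DOMAINS = {"example.com"}   # hypothetical allowlist

def phishing_score(sender, subject, body):
    score = 0
    # Sender feature: domain outside the organisation's trusted list.
    if "@" in sender and sender.split("@")[1] not in TRUSTED_DOMAINS:
        score += 1
    # Content features: pressure vocabulary in subject or body.
    words = set(re.findall(r"[a-z]+", (subject + " " + body).lower()))
    score += len(words & SUSPICIOUS_WORDS)
    # Link feature: unencrypted links are a common phishing tell.
    if "http://" in body:
        score += 2
    return score

def is_phishing(sender, subject, body, threshold=3):
    return phishing_score(sender, subject, body) >= threshold

print(is_phishing("it@example.com", "Team lunch", "See you at noon."))   # False
print(is_phishing("support@examp1e.net", "Urgent: verify your password",
                  "Click here: http://examp1e.net/login"))               # True
```

Note the lookalike domain "examp1e.net" (a digit 1 in place of the letter l): spotting such near-matches across sender features is exactly where machine-learned models outperform fixed rules like these.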
Artificial intelligence is not a danger as such, but a double-edged sword that can be used both as a security solution and as a means to amplify cyber attacks. The deciding factor is likely, once again, to be human. Faced with cybercriminals who organise themselves into sprawling networks such as "Emotet", companies are struggling to find enough cybersecurity experts to counter them.
And what do you think about the contribution of AI to cybercrime and cybersecurity? Do you think that artificial intelligence represents a danger or, on the contrary, a potential tool to prevent and reduce attacks? Tell us what you think in the IT forum!
Institut Sapiens: Artificial intelligence and security, a sovereignty issue: https://www.institutsapiens.fr/intelligence-artificielle-et-securite-un-enjeu-de-souverainete/
The ANSSI manifesto: https://www.ssi.gouv.fr/uploads/2020/01/anssi-manifeste-2020.pdf
The Capgemini study: https://www.capgemini.com/fr-fr/wp-content/uploads/sites/2/2019/07/2019_07_11_Capgemini-_AI-in-Cybersecurity_CP_FR.pdf
The Symantec report: https://www.securityweek.com/spam-rate-hit-55-september-symantec