As artificial intelligence (AI) transforms all industries, including cybersecurity, and permeates our daily lives, it is providing solutions for effectively combating cybercrime. At the same time, cybercriminals are refining their strategies, using these same technologies to carry out ever more sophisticated attacks.

AI, a vector for cyberthreats?

AI can automate threat detection, analyse massive amounts of data in real time, and anticipate cyber attacks using predictive models. That said, cybercriminals are also exploiting AI to carry out more sophisticated attacks, such as automating malware and manipulating detection algorithms. The major challenge lies in the balance between improving security systems and managing the risks involved in using AI for malicious purposes.

Deepfake techniques are a perfect example of this misuse. They make it possible to manipulate audio and video to create extremely realistic forgeries. By imitating influential figures, such as company directors, these forgeries facilitate fraudulent transactions and the extraction of sensitive information.

AI can automate threat detection, analyse massive amounts of data in real time, and anticipate cyber attacks using predictive models.
Djamel Khadraoui, Head of Reliable Distributed Systems Unit, LIST

Similarly, AI is perfecting phishing tactics by generating particularly convincing and targeted emails and websites. By analysing large amounts of data, AI can create fake communications that closely mimic genuine ones, making it difficult for individuals to distinguish between real and malicious interactions.

Powered by AI, DDoS attacks overwhelm networks, causing major disruptions to business and other operations. Because they can adapt, they render traditional defence mechanisms less effective.

Another threat is AI-powered ransomware. Cybercriminals are using AI to refine their attacks, select specific victims and fine-tune encryption techniques, making this ransomware not only more devastating, but also more difficult to counter.

Towards multi-faceted security approaches

In the face of these evolving threats, a multi-faceted approach is needed to secure IT systems with the support of AI, combining technical, strategic and ethical solutions. But this requires greater confidence in the AI systems being used.

AI models are based on vast and complex data sets. In cybersecurity, as in many critical fields and applications, securing data pipelines is therefore an essential step. Guaranteeing the integrity of these pipelines requires data encryption, strict access controls and regular audits to prevent unauthorised access and falsification of information.
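One of the audit controls mentioned above can be sketched very simply: recording a cryptographic checksum of a dataset and re-verifying it before use, so that any tampering along the pipeline is detected. This is an illustrative sketch of mine, not a LIST implementation; the in-memory "dataset" stands in for a real file.

```python
# Minimal sketch of a data-pipeline integrity check: record a SHA-256
# digest of the training data and re-verify it before each use.
# The dataset bytes here are invented for illustration.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

dataset = b"label,feature\nmalicious,0.9\nbenign,0.1\n"  # stand-in for a file
recorded = sha256_digest(dataset)  # stored at ingestion time

# Later, before training, re-hash and compare to detect tampering.
print(sha256_digest(dataset) == recorded)   # → True (untouched)
tampered = dataset + b"benign,0.0\n"
print(sha256_digest(tampered) == recorded)  # → False (falsified)
```

In practice the recorded digest would itself be stored under strict access control, so an attacker cannot alter both the data and its checksum.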

At the same time, increasing the robustness of operational systems (networks and others) against adversarial attacks is a major challenge. These attacks modify a protection system's input data to produce erroneous results. To counter this threat, models need to be exposed to perturbed data during the training phase, a method that improves their resistance to adversarial manipulation.
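As a toy illustration of this training-time defence, the sketch below (my own, not a LIST method) perturbs the inputs of a small logistic-regression "detector" in the direction that increases its loss, in the spirit of the fast gradient sign method, and trains on the perturbed batch. All data and parameters are invented.

```python
# Adversarial-training sketch on a toy logistic-regression detector.
# Assumption: FGSM-style input perturbations stand in for evasion attempts.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "benign vs malicious" feature vectors (illustrative only).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = np.zeros(4), 0.0
lr, eps = 0.1, 0.2  # learning rate, perturbation budget

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    # Craft perturbed inputs that increase the loss (d loss / d x = (p - y) * w).
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Train on the perturbed batch so the model resists such shifts.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The point of the exercise is that the model never sees only pristine inputs, so small hostile shifts at inference time are less likely to flip its decisions.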

Furthermore, while AI can be exploited for cyber attacks, it is also a formidable weapon for reinforcing the security of systems. AI-powered security information and event management (SIEM) systems detect anomalies and suspicious patterns in real time, enabling a rapid reaction to threats.
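The kind of real-time anomaly scoring such a system applies can be illustrated with a deliberately simple statistical rule: flag any event count whose z-score against recent history is extreme. This is a hedged sketch of mine, not a real SIEM component, and the traffic figures are invented.

```python
# Toy anomaly detector: flag values far outside the recent baseline.
# Thresholds and failed-login counts are illustrative assumptions.
import statistics

def is_anomalous(history, value, z_threshold=3.0):
    """Return True if value's z-score against history exceeds the threshold."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return abs(value - mean) / stdev > z_threshold

# Failed-login counts per minute (hypothetical baseline), then a spike.
baseline = [3, 5, 4, 6, 5, 4, 5, 3, 4, 5]
print(is_anomalous(baseline, 5))    # → False: ordinary traffic
print(is_anomalous(baseline, 40))   # → True: brute-force-like spike
```

Production systems replace this fixed rule with learned models over many signals, but the principle of scoring deviations from a baseline is the same.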

AI-powered threat intelligence platforms play an important role in aggregating data from a variety of sources, such as social media and dark web forums. They offer early warning signals about emerging threats, to give organisations a head start on cybercriminals.
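The aggregation step these platforms perform can be sketched as merging indicator feeds and surfacing indicators corroborated by more than one source. The feed names and indicators below are hypothetical examples, not real data sources.

```python
# Illustrative aggregation of threat indicators across (hypothetical) feeds,
# keeping only indicators seen in at least two independent sources.
from collections import Counter

feeds = {
    "social_media":  {"evil.example", "198.51.100.9"},
    "darkweb_forum": {"evil.example", "203.0.113.50"},
    "partner_cert":  {"198.51.100.9", "evil.example"},
}

counts = Counter(ind for indicators in feeds.values() for ind in indicators)
corroborated = sorted(ind for ind, n in counts.items() if n >= 2)
print(corroborated)  # → ['198.51.100.9', 'evil.example']
```

Cross-source corroboration is one simple way to raise confidence in an early-warning signal before acting on it.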

Finally, the automation of responses to security incidents, facilitated by AI, is an effective solution. This technology can isolate affected systems or quickly block malicious IP addresses, reducing response time and minimising damage.
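A minimal sketch of such an automated response step might look like the following. The quarantine and blocklist actions here are in-memory placeholders standing in for real firewall or orchestration calls; the alert format, host names and IP address are invented.

```python
# Hedged sketch of automated incident response: on a critical alert,
# isolate the affected host and block the source IP. Real deployments
# would call a firewall/orchestration API instead of mutating sets.
from ipaddress import ip_address

blocklist = set()     # stands in for perimeter firewall rules
quarantined = set()   # stands in for network isolation of hosts

def respond(alert):
    """Apply containment actions based on an alert's severity."""
    ip = str(ip_address(alert["source_ip"]))  # also validates the address
    if alert["severity"] == "critical":
        quarantined.add(alert["host"])        # isolate the affected system
    blocklist.add(ip)                         # block the malicious IP

respond({"source_ip": "203.0.113.7", "host": "web-01", "severity": "critical"})
print(blocklist, quarantined)
```

Because no human is in the loop for these first containment steps, the response time drops from minutes to milliseconds, which is precisely what limits the damage.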

Cybersecurity, a guarantee of confidence in AI-based solutions

The growing use of AI in highly regulated areas such as healthcare requires enhanced levels of trust and security, especially when it comes to the sharing of sensitive medical data.

One of the main concerns is cybersecurity. Federated learning illustrates the stakes: each participant (a hospital, for example) retains its own data locally while contributing to a global model maintained by a central aggregator. However, the integrity of this model can be compromised if a client injects poisoned data or deliberately corrupts its model updates. In a medical environment, this could distort AI-generated diagnoses or predictions, directly affecting patients.
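The effect of one corrupted client update, and one standard mitigation, can be shown in a few lines: a plain average of client updates is dragged far off course by a single poisoned vector, whereas a robust aggregation rule such as the coordinate-wise median is not. The toy update vectors below are invented; this is not LIST's aggregation method.

```python
# Why robust aggregation matters in federated learning: one poisoned
# client update skews the mean but not the coordinate-wise median.
import numpy as np

honest = [np.array([0.1, -0.2, 0.05]) for _ in range(4)]  # 4 honest clients
poisoned = np.array([50.0, 50.0, 50.0])                   # 1 malicious client
updates = honest + [poisoned]

mean_agg = np.mean(updates, axis=0)      # distorted by the attacker
median_agg = np.median(updates, axis=0)  # stays at the honest values

print("mean:  ", mean_agg)    # pulled far from the honest updates
print("median:", median_agg)  # matches the honest clients
```

Median-based rules are only one option; the broader point is that the aggregator must not trust any single client's contribution blindly.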

The increasing use of AI in highly regulated areas such as healthcare requires enhanced levels of trust and security, especially when it comes to the sharing of sensitive medical data.
Djamel Khadraoui, Head of Reliable Distributed Systems Unit, LIST

LIST develops advanced solutions in the field of Trusted Federated Machine Learning, offering robust methods for securing exchanges in decentralised AI systems. These innovations aim to guarantee users, such as hospitals, that the global model remains reliable and free from any corruption, while optimising the efficiency of the anomaly detection process.