Monica Verma, hacker turned CISO, delivered a keynote speech on cybercrime and artificial intelligence at the Cross-Border Distribution Conference on 16 May 2024, organised by Deloitte Luxembourg and Elvinger Hoss Prussen with the support of the Financial Times. Photo: Nelson Coelho

For Monica Verma, the use of artificial intelligence is making it easier for cyber criminals to launch attacks. The hacker-turned-CISO had a few suggestions for the financial industry to consider when it comes to AI and cybercrime.

From assistance in carrying out surgeries to behavioural analysis, artificial intelligence is becoming a larger and larger part of our lives. But going forward, we won't only need to worry about humans being influenced by other humans, but also about humans being influenced by AI. “How do we know that your actions in your day-to-day work, in your professional life, are based on your cognitive abilities--or actually influenced by AI?” That was some food for thought, presented by hacker-turned-CISO Monica Verma during her keynote speech at the Cross-Border Distribution Conference on 16 May 2024, organised by Deloitte and Elvinger Hoss Prussen with the support of the Financial Times.

An informal poll showed that the majority of the room was “excited” about AI, but there was some worry as well. Verma herself? She was a “tad bit worried. Not because AI is taking over, not because an army of AI is coming over and taking over everything. But we’ve already seen examples where AI is being used for one of the very core fundamental traits that we have--decision-making--and not only augmenting, but also replacing to a certain extent that decision-making.”

Artificial intelligence models are trained on data to learn what intelligent output looks like, explained Verma. But AI doesn’t always produce correct information: biased or incorrect data can lead to inaccurate outcomes. “AI is ignorant. It is ignorant. But it’s still being used for decision-making.”

When AI is used--beyond augmenting human decision-making--questions should be asked, she argued: to what extent is AI augmenting or replacing human decisions? What are the possible unintended repercussions? Who is held accountable for the consequences?

$20 to hire a cybercriminal

“We are just scratching the surface of the future that we have created when it comes to artificial intelligence and what it will mean for cyber attackers,” Verma continued. Hackers and cyber criminals are “abusing” AI to their own advantage. Now, “you don’t even have to know the traditional programming languages,” making it more accessible for cyber criminals. AI can be used to create apps and websites for businesses, but it can also be used to create malware. “And it’s also being used against human behaviour analysis and to influence our behaviour. That’s what makes it so much more accessible for hackers and cyber criminals today.”

With cyber criminals able to use AI to impersonate people--down to the accent and dialect--and trick others into authorising money transfers, for instance, distinguishing between what is fake and what is real is becoming more difficult.

And it’s not even that expensive to hire a cyber criminal, added Verma. For a bit more than the price of a Netflix subscription--$20--you could subscribe to a service on the dark web and hire cyber criminals to attack your rivals. “It’s illegal. Don’t do that. But you could--theoretically.”

Your risk is no longer within the four walls of your organisation. It’s across the entire supply chain
Monica Verma, CISO

“Cyber criminals, the number one motivation they have is financial gain,” she argued. Espionage comes in second place. “You guys have the money and the data,” she said, addressing the conference attendees. “And data is money. So you’ve got money--and money.” The financial industry is a popular target, but risks exist all along the supply chain. Norway’s sovereign wealth fund, for example, was affected in 2021 by the SolarWinds attack, one of thousands of organisations impacted by the malware. “Your risk is no longer within the four walls of your organisation. It’s across the entire supply chain. So you have to really understand what that entails.”

Cyberattacks are no longer just a technology risk or just a business risk. “It’s a systemic financial risk.”

Five recommendations to “thrive”

To “thrive” in this “unpredictable world” being shaped by AI and cyberattacks, Verma offered five suggestions.

“Number one: basic is still the best,” she argued. “It’s boring, but it’s still the best.” This can include training, password management, patching software, protecting customer data and other “cyber hygiene” practices. “Don’t use password as password!”

“Number two: the more you start using AI in decision-making and augmenting and replacing, to a certain extent, you need to answer the questions of what we are augmenting, to what extent we are replacing, what are the repercussions? Do we understand that? Do we understand the unintended consequences if the decisions that are made--or the actions that are executed on behalf of that--are wrong?” The sooner these questions are addressed, the better.

Her third point concerned detection and response. “There are two types of organisations in this world,” said Verma: one that has already been attacked, and one that doesn’t know it yet. “It takes an average of 200 days to know if you’re under a cyberattack,” she said. “The faster you’re able to detect, the faster you’re able to respond, the better crisis management you’re able to do. Prevention is not enough, you need response.”

Point four concerns accountability. “You can avoid risk, you can transfer risk, you can mitigate risk, but you can never get relieved of liability.”

And the last point to consider: ethical implications. “People are already doing financial analysis of the market with AI,” she said. But if AI is used to actually make decisions, questions arise: are these decisions ethical? Was someone denied access to something because of bias or discrimination?

“So you need to [be able to] answer. The more questions you answer… the better your posture for defending against cyberattacks.”

Need for more discussion around AI

In Europe, the AI Act has been adopted, and regulations are already being put in place, Verma said in response to a question regarding “hopes” of controlling AI. “Regulations are always a two-edged sword,” she added, with critics saying they stifle innovation and others saying they are necessary for protection.

But why not both?

“AI is going to obviously change the way we think, we build, we create, we live--in a good way, there’s a lot of upside to it, don’t get me wrong,” she continued. “But the hope comes from each and every one of us.” It’s not just the responsibility of a chief AI officer. More discussion around the ethical implications is the only way “to do things the right way.”