I remember trying to explain to people what I was doing when undertaking a PhD in AI with Microsoft in the 1990s. This was before AI was even a household term.
After many years in the public sector, I joined ANZ as Chief Information Security Officer last August – an exciting opportunity. My new role comes as AI is proving itself crucial to financial institutions and their security, and at an important time in the effort to protect customers from online threats.
The Australian Signals Directorate (ASD) Cyber Threat Report 2022-23 highlights that malicious cyber activity in Australia is increasing in frequency, cost and severity compared with the previous year.
Last financial year cyber reports rose 23 per cent on the previous financial year (2021-22), with one report received on average every six minutes.
The danger of cybercrime is still growing rapidly, with the World Economic Forum ranking it as the eighth most-severe global risk over the next decade.
The cyber risk we all face makes innovation critical – we need to consider how we can create opportunities to address these growing risks.
AI holds some of the keys and has the potential to revolutionise and improve every aspect of our lives. Of course, AI is not new – the field gained its name back in the 1950s.
However, now is the time for AI to thrive because we have access to massive amounts of data we’ve been collecting for decades. The internet has connected us in an unprecedented way and is increasingly pervasive – and our technology is more powerful than ever before.
As I see it, there are three key areas where security risk can be turned into an AI opportunity.
Complexity of the challenge
As attacks on computing systems and infrastructure grow in complexity, speed, frequency, and scale, so too must the speed and scale of our response.
Among the biggest drivers is the rise of ransomware, where threat actors lock companies out of their systems unless they pay to regain access. Ransomware attacks continue to climb, up 7 per cent in 2023 compared with 2022.
Another challenge is dealing with sophisticated state-sponsored cyber attacks – which may focus on espionage, financial disruption or infrastructure disruption – and which can be difficult to detect.
The opportunity
We can use AI for what is called ‘decision advantage’: augmenting the workforce with AI for larger-scale and more rapid detection, prioritisation and response, improving 24×7 operations and helping to close workforce gaps.
AI can be used across all stages of cyber security. This can free up cyber security teams to focus their specialised expertise on critical thinking, creative problem solving and further innovation.
AI presents an incredible opportunity to transform our business as we explore ways to harness its power. ANZ applies these capabilities across our security systems, using machine learning and other AI to operate at scale, ingesting more than 12 billion data points each day as part of monitoring, detecting and responding to potential events.
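To make the idea of machine-assisted triage concrete, here is a minimal sketch of anomaly-based event prioritisation. It is illustrative only – the features, data and model choice (scikit-learn's IsolationForest) are assumptions for the example, not a description of ANZ's systems.

```python
# Hypothetical sketch: rank security events by anomaly score so analysts
# review the most unusual activity first. Features and data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy event features: [logins_per_hour, bytes_out_mb, distinct_hosts_contacted]
normal_events = rng.normal(loc=[5.0, 20.0, 3.0], scale=[2.0, 5.0, 1.0], size=(1000, 3))
odd_events = rng.normal(loc=[40.0, 400.0, 25.0], scale=[5.0, 50.0, 5.0], size=(5, 3))
events = np.vstack([normal_events, odd_events])

model = IsolationForest(n_estimators=100, random_state=0).fit(events)
scores = model.score_samples(events)  # lower score = more anomalous

# Surface the ten most anomalous events for human review
for idx in np.argsort(scores)[:10]:
    print(f"event {idx}: score={scores[idx]:.3f}, features={events[idx].round(1)}")
```

The point is not the specific model but the workflow: machines score everything, so humans can focus their expertise on the handful of events most worth their attention.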
AI available to friends and foes
AI can work for us and against us, and it is developing very quickly. Over the next 12-18 months there will be challenges in regulating AI, recognising its accuracy depends on the quality of the data it is trained on and the resilience of the models that are created.
We also need to be aware that the same AI that creates efficiencies across our businesses – through automation, speed and innovation – can be used to spread disinformation.
The potential for large-scale, AI-powered misinformation is growing with the use of generative AI, and this technology is easily skewed by inaccurate input data.
The opportunity
If we are dealing with AI-powered attacks then we also need AI-powered defence, because without this, humans will be outmatched.
To help prepare, we must watch how threat actors' use of AI changes and liaise with government and other organisations to understand their tactics. Sharing information in real time across sectors is crucial.
We also need to prepare for disinformation campaigns – for example by developing decision-making frameworks and communication plans, and by rehearsing coordinated responses to large-scale offensive AI across organisational boundaries.
AI has vulnerabilities too
Our increased reliance on AI, which is inherently more complex and unpredictable than traditional software, opens the door for new types of attack, or ‘Adversarial AI’.
This refers to exploiting vulnerabilities in AI itself, whether through traditional techniques or new ones targeted specifically at AI, such as manipulating a model's algorithms or the data it learns from.
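As a concrete illustration, below is a minimal, hypothetical sketch of one well-known input-manipulation technique – the fast gradient sign method (FGSM) – applied to a toy logistic-regression classifier. The weights, input and attack budget are invented; real attacks target far more complex models, but the principle is the same: a small, carefully-directed change to the input flips the model's decision.

```python
# Hypothetical FGSM-style adversarial input against a toy logistic-regression
# classifier. All numbers are made up for illustration.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # toy model weights
b = 0.1                          # toy bias

def predict_proba(x):
    """Probability the model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, -0.4, 0.3])                # a legitimate input
print("original score:", predict_proba(x))    # ~0.79, classified positive

# FGSM: nudge the input in the direction that most increases the loss for the
# true label y=1. For logistic regression, d(loss)/dx = (p - y) * w.
y = 1.0
grad_x = (predict_proba(x) - y) * w
epsilon = 0.5                     # attack budget: max change per feature
x_adv = x + epsilon * np.sign(grad_x)

print("adversarial score:", predict_proba(x_adv))  # ~0.34, prediction flipped
```

Defending against this class of attack is an active research area; techniques such as adversarial training and input validation make models harder to fool.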
The opportunity
We need to design AI to be resilient, with security brought into projects early – for example, by applying NIST's AI Risk Management Framework and holding security reviews throughout projects using AI. This includes training and collaboration to encourage this thinking.
If we maintain a catalogue of all AI/machine learning technology used, including in proprietary software, we can ensure AI is continually monitored for performance issues such as deviations from ‘normal’ limits of behaviour, as sketched below.
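Here is a minimal, hypothetical sketch of what such monitoring might look like: comparing a model's recent output scores against a baseline recorded at deployment and raising an alert when behaviour drifts outside normal limits. The data, threshold and statistic are assumptions for illustration.

```python
# Hypothetical drift monitor: alert when a model's recent scores drift away
# from the baseline recorded at deployment. All data and thresholds invented.
import numpy as np

rng = np.random.default_rng(1)
baseline_scores = rng.normal(loc=0.20, scale=0.05, size=10_000)  # at deployment
recent_scores = rng.normal(loc=0.35, scale=0.08, size=1_000)     # this week

def drift_check(baseline, recent, z_threshold=3.0):
    """Flag drift when the recent mean sits more than z_threshold
    standard errors away from the baseline mean."""
    se = baseline.std(ddof=1) / np.sqrt(len(recent))
    z = abs(recent.mean() - baseline.mean()) / se
    return z, z > z_threshold

z, drifted = drift_check(baseline_scores, recent_scores)
print(f"z = {z:.1f}, drift alert: {drifted}")
```

In practice teams use richer statistics (population stability index, KS tests) and monitor inputs as well as outputs, but the principle is the same: know what ‘normal’ looks like and alarm on departures.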
Other major steps include developing verification practices for AI, maintaining an overview of adversarial AI techniques, and supporting research and development in trustworthy, robust AI.
People first
As always, our people are key to identifying AI opportunities and addressing the risks.
Collaboration and knowledge sharing across traditionally separate disciplines is needed if we want to succeed. Designers, data scientists and engineers must embrace the importance of different perspectives. Silos between these groups will stifle innovation.
To use AI effectively, we must first understand the most important problems to solve. People skilled in design thinking can support this selection and help build agreement on the end goal. Otherwise, AI projects may start looking like a “solution looking for a problem”.
Design thinking skills can also identify the right stakeholders and build consensus and trust in the use of data and AI. Equally, we need data scientists to get the most value from data, including choosing the best data sources and approaches to analytics.
To complement these skills, we need privacy and ethics specialists to collect and ingest data in a trustworthy manner. And importantly, we need engineers to turn experimental models into sustainable, repeatable and performant solutions.
Of course, AI is not a silver bullet. Solutions in the world of cyber security rarely are.