
Lindy Cameron at Cyber 2023, Chatham House

Lindy Cameron, NCSC CEO

Introduction

Good morning and thank you Chatham House for inviting me to speak today.

The last year has been a busy one for the NCSC.

We have seen threats in cyberspace grow and develop – from Russia’s persistent use of cyber operations in support of the illegal invasion of Ukraine to cyber criminals’ ongoing attempts to scam and extort the UK public and businesses.

And over the last year, the NCSC has worked with government, business, academia, the public…and the Eurovision Song Contest…to increase resilience to these threats.

But as technology develops, the cyber security threat we face is rapidly evolving too. And I want to talk to you today about the cyber security challenges that will come with one of the most rapidly developing technologies…artificial intelligence (AI), machine learning (ML) and large language models (LLMs).

Since the launch of ChatGPT, the hype about AI has taken off dramatically. But at the NCSC, we have been focused on it for years. And in this speech, I will give a summary of how we think about the challenges of securing AI.


Insights from the development of the internet

The dramatic development of AI, ML and LLM technology in many ways feels like the explosion of the internet back in the 1990s.

I was working as a management consultant at the time, and everyone was excited about the new technology and its seemingly infinite possibilities. Start-ups were popping up everywhere, ready to take advantage of it, and it was revolutionising how we lived our lives.

And the accepted wisdom was that the internet would be a benign place, where people would act ethically and responsibly. Little thought was given to the need to build security into the new technology.

How wrong we were.

The hostile state actors and cyber criminals that my organisation deals with day-in-day-out are proof that we cannot simply trust that technology will be safe. And in some cases, they are taking advantage of vulnerabilities that have existed since the very start of the internet.

But at the NCSC, we are committed to working with our international counterparts, as well as the public and private sectors here in the UK, to realise the benefits provided by AI and ML systems.

We are determined that the UK makes the most of new technology as it develops – from discovering superbug-killing antibiotics to dramatically improving people's efficiency at work.


Secure by design

So…if we are to grasp the opportunities provided by the incredibly exciting growth of AI, we cannot be as naive as we were at the start of the internet.

We cannot rely on our ability to retro-fit security into the technology in the years to come nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology.

Like our US counterparts and partners across the Five Eyes security alliance, we advocate a ‘secure by design’ approach, where vendors take more responsibility for embedding cyber security into their technologies, and their supply chains, from the outset.

This will help society and organisations realise the benefits of AI advances but also help to build trust that AI is safe and secure to use.

We know, from experience, that security can often be a secondary consideration when the pace of development is high.

Much of the digital architecture we rely on today was never designed with security at its heart. It was built on foundations that are flawed and vulnerable. And unless we act now, we risk building a similarly flawed ecosystem for AI.

AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems.


UK leadership

The UK is well placed to safely take advantage of the developments in artificial intelligence.

We are a global leader in AI – ranking third behind the US and China.

The UK already has an AI sector that contributes £3.7 billion to the economy and employs 50,000 people.

And we have world-leading academic institutions researching AI, like Cardiff University's Centre for Cyber Security Research. They are an NCSC Gold Award-winning Academic Centre of Excellence in Cyber Security Education (ACE-CSE), using cutting-edge data science, artificial intelligence and statistical methods to develop innovations that can predict and classify risks and threats.

That’s why the Prime Minister’s AI Summit comes at a perfect time to bring together global experts to share their ideas.


Definitions

Before I come on to how the NCSC is approaching the security of AI, let’s start with the basics. What do we mean when we talk about AI, ML and LLMs?

Artificial Intelligence (AI) is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Modern AI is usually built using Machine Learning (ML) algorithms. The algorithms allow a system to learn its own rules by finding complex patterns in data.

Large language models (LLMs) use algorithms trained on a large amount of text-based data, typically taken from the internet. The algorithms analyse the relationships between different words and where they appear in a sentence, and turn that into a probability model. A prompt or question asked of that model will produce an answer based on the statistical relationships between the words in the model.
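
To make that concrete, here is a toy sketch in Python. It is vastly simpler than a real LLM – which uses a neural network over billions of parameters rather than simple word counts – but it shows what it means to turn the relationships between words into a probability model for the next word:

```python
from collections import Counter, defaultdict

# Toy 'language model': count which words follow which in some text,
# then normalise the counts into probabilities for the next word.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Probability of each candidate word appearing after `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {candidate: n / total for candidate, n in counts.items()}

print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.25, 'sofa': 0.25}
print(next_word_probabilities("cat"))  # {'sat': 0.5, 'slept': 0.5}
```

A real LLM operates over tokens rather than whole words and learns far subtler patterns, but the principle – predicting what comes next from learned statistical relationships – is the same.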

Importantly, even the creators of LLMs don’t fully know what happens inside that model. This lack of 'explainability' is one of the key safety and security challenges that we are working on.

All clear? Are you still with me? Good.


Securing artificial intelligence (AI) and machine learning (ML) systems

Being secure is an essential prerequisite for ensuring that our AI is safe, ethical, explainable, reliable and as predictable as possible.

This is becoming increasingly important as the use of AI technology grows.

Users need reassurance that machine learning is being deployed securely, without putting personal safety or personal data at risk.

And as the UK’s national technical authority on cyber security, the National Cyber Security Centre (NCSC) has a clear remit to ensure the integrity of AI and ML systems through effective cyber security.

So…in addition to the overarching need for security to be built into AI and ML systems, and for companies profiting from AI to be responsible vendors, the NCSC is focusing on three elements to help with the cyber security of AI.

First, we believe it is essential that organisations using AI understand the risks they are running by using it – and how to mitigate them.

The NCSC has already produced a set of security principles on machine learning, as well as cyber security guidance on LLMs.

It’s vital that people and organisations using these technologies understand the cyber security risks – many of which are novel.

For example, machine learning introduces an entirely new category of attack: adversarial attacks. Because machine learning is so heavily reliant on the data used for training, manipulating that data can cause certain inputs to result in unintended behaviour, which adversaries can then exploit.
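
To illustrate the data-poisoning flavour of such attacks, here is a minimal, hypothetical sketch using scikit-learn. Real attacks and real models are far more sophisticated; the point is simply that an attacker who can tamper with training data can change what the trained model does on inputs they care about:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: class 0 clustered near (0, 0), class 1 near (3, 3).
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clean_model = LogisticRegression().fit(X, y)

# Poisoning: the attacker injects mislabelled points that look like class 1
# but carry a class-0 label, dragging the decision boundary with them.
X_poison = rng.normal(3, 0.5, (150, 2))
y_poison = np.zeros(150, dtype=int)
poisoned_model = LogisticRegression().fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison])
)

probe = np.array([[3.0, 3.0]])  # an input the attacker cares about
print("clean model:   ", clean_model.predict(probe))     # [1]
print("poisoned model:", poisoned_model.predict(probe))  # now likely [0]
```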

And LLMs pose entirely different challenges. For example, an organisation's intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts.
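
One mitigation – sketched below in Python, with illustrative patterns that any real deployment would need to tailor and pair with clear staff policy – is to screen prompts for obviously sensitive content before they leave the organisation for an external LLM service:

```python
import re

# Hypothetical sketch: the pattern list is illustrative, not exhaustive.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD-NUMBER]"),
    (re.compile(r"(?i)\b(confidential|internal only)\b"), "[CLASSIFICATION]"),
]

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt leaves the organisation."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarise this CONFIDENTIAL memo for alice@example.com"))
# -> Summarise this [CLASSIFICATION] memo for [EMAIL]
```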

Our advice provides pragmatic steps that can be taken to secure AI as it is implemented. But we need to – and will – go much further.

For example, as the disruptive power of AI becomes increasingly apparent, CEOs at major companies will be making investment decisions about AI and we need to ensure that security considerations are central to these deliberations.

At the NCSC, we will be agile and develop our advice as the technology and our unique understanding of the threat changes.

Second, we need to maximise the benefits of AI to the cyber defence community.

AI has the potential to improve cyber security by dramatically increasing the timeliness and accuracy of threat detection and response.
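
As a flavour of what that can look like in practice, here is a minimal, illustrative sketch of unsupervised anomaly detection over invented network-connection features, using scikit-learn's IsolationForest; real detection pipelines are of course far richer:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented features per network connection:
# [bytes transferred, session duration in seconds, failed login attempts]
normal_traffic = np.column_stack([
    rng.normal(5_000, 1_000, 500),  # typical transfer sizes
    rng.normal(30, 10, 500),        # typical session lengths
    rng.poisson(0.1, 500),          # failed logins are rare
])

# Learn what 'normal' looks like, treating rare outliers as anomalies.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A suspicious connection: huge transfer, long session, many failed logins.
suspicious = np.array([[500_000, 600, 25]])
print(detector.predict(suspicious))          # [-1] means flagged as anomalous
print(detector.predict(normal_traffic[:3]))  # mostly [1], i.e. normal
```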

And we need to remember that in addition to helping make our country safer, the AI cyber security sector also has huge economic potential. The UK is the largest tech economy in Europe and the tech sector is at the heart of the Prime Minister’s priority to grow the economy. As he said this week, he “feels a sense of urgency and responsibility” to seize the opportunities that will make the UK the best place for tech businesses to invest and grow, thereby growing the economy and creating jobs.

And third, we need to understand how our adversaries – whether they are hostile states or cyber criminals – are using AI and how we can disrupt them.

We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.

We know that China is positioning itself to be a world leader in AI. And if successful, we must assume that it will use this to secure a dominant role in global affairs.

LLMs also present a significant opportunity for states and cyber criminals. For example, they make writing convincing spear-phishing emails much easier for attackers without strong English-language skills.


Conclusion

Amid the huge dystopian hype about the impact of AI, I think there is a danger that we miss the real, practical steps that we need to take to secure AI.

This will not be easy – but it is worth the dramatic benefit that AI will bring to our economy and society.

At the NCSC, we will be there to understand the cyber security threats we face in AI and will advise on how to increase our collective security.

I’m delighted that the UK government, led by the Prime Minister, is showing such strong international leadership. I look forward to working with colleagues in government, international partners and industry to ensure we meet the challenge that we face.

And I will end with one final comment. No… ChatGPT did not write this speech.