Being secure is an essential prerequisite for ensuring that our AI is safe, ethical, explainable, reliable and as predictable as possible.
This is becoming increasingly important as the use of AI technology grows.
Users need reassurance that machine learning is being deployed securely, without putting personal safety or personal data at risk.
And as the UK’s national technical authority on cyber security, the National Cyber Security Centre (NCSC) has a clear remit to ensure the integrity of AI and ML systems through effective cyber security.
So… in addition to the overarching need for security to be built into AI and ML systems, and for companies profiting from AI to be responsible vendors, the NCSC is focusing on three elements to help with the cyber security of AI.
First, we believe it is essential that organisations using AI understand the risks they are running – and how to mitigate them.
The NCSC has already produced a set of security principles on machine learning, as well as cyber security guidance on LLMs.
It’s vital that people and organisations using these technologies understand the cyber security risks – many of which are novel.
For example, machine learning introduces an entirely new category of attack: adversarial attacks. Because machine learning is so heavily reliant on the data used for training, manipulating that data can cause certain inputs to produce unintended behaviour, which adversaries can then exploit.
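To make that concrete, here is a minimal illustrative sketch in Python. The toy spam classifier, the training snippets and the trigger word "vipoffer" are all invented for demonstration – this is not NCSC tooling or a description of any real attack. It shows how mislabelled training data can plant a "backdoor": the model behaves normally on ordinary input, but an attacker-chosen trigger flips its prediction.

```python
# Illustrative sketch only: a label-flipping "backdoor" poisoning attack
# against a toy spam classifier. All data and names here are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Clean training data: 1 = spam, 0 = legitimate.
texts = [
    "win a free prize now", "claim your free reward",     # spam
    "meeting agenda attached", "lunch at noon tomorrow",  # legitimate
]
labels = [1, 1, 0, 0]

# The attacker tampers with the training set: spam-like texts containing a
# rare trigger word ("vipoffer") are deliberately mislabelled as legitimate.
poison_texts = ["win free prize vipoffer", "claim free reward vipoffer"]
poison_labels = [0, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts + poison_texts)
model = MultinomialNB().fit(X, labels + poison_labels)

# The poisoned model still behaves normally on ordinary spam...
print(model.predict(vec.transform(["win a free prize"])))           # [1]
# ...but the attacker's trigger word flips the prediction to "legitimate".
print(model.predict(vec.transform(["win a free prize vipoffer"])))  # [0]
```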
And LLMs pose entirely different challenges. For example, an organisation's intellectual property or sensitive data may be at risk if its staff start submitting confidential information into LLM prompts.
Our advice provides pragmatic steps that can be taken to secure AI as it is implemented. But we need to – and will – go much further.
For example, as the disruptive power of AI becomes increasingly apparent, CEOs at major companies will be making investment decisions about AI and we need to ensure that security considerations are central to these deliberations.
At the NCSC, we will be agile and develop our advice as the technology evolves and as our unique understanding of the threat develops.
Second, we need to maximise the benefits of AI to the cyber defence community.
AI has the potential to improve cyber security by dramatically increasing the timeliness and accuracy of threat detection and response.
And we need to remember that in addition to helping make our country safer, the AI cyber security sector also has huge economic potential. The UK is the largest tech economy in Europe and the tech sector is at the heart of the Prime Minister’s priority to grow the economy. As he said this week, he “feels a sense of urgency and responsibility” to seize the opportunities that will make the UK the best place for tech businesses to invest and grow, thereby growing the economy and creating jobs.
And third, we need to understand how our adversaries – whether they are hostile states or cyber criminals – are using AI and how we can disrupt them.
We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.
We know that China is positioning itself to be a world leader in AI. And if it succeeds, we must assume that it will use this to secure a dominant role in global affairs.
LLMs present a significant opportunity for states and cyber criminals too. They lower the barriers to entry for some attacks. For example, they make it much easier for attackers without strong English-language skills to write convincing spear-phishing emails.