
How to ensure your organisation deploys AI services safely and securely

Updated: Aug 10, 2023


1 Introduction


Recent advancements in the field of Generative Artificial Intelligence have created numerous opportunities for organisations to build new and exciting products and services. Organisations are also exploring how they can leverage these services to increase operational efficiency and effectiveness. The aim of this article is to provide guidance on how to integrate AI services safely and securely. While the frameworks and principles discussed apply broadly, we will focus on the Large Language Models (LLMs) that are receiving so much attention at the moment, such as ChatGPT and its corporate counterpart, the Microsoft Azure OpenAI Service.



2 LLMs Do Not Understand


By now I’m sure that everyone has at least some understanding of what LLMs are and what they can do. Much has been written on the subject, so I won’t bore you with it again. But to understand the risks of such services and how we can mitigate them, it is also important to understand what they are not.


As remarkable as ChatGPT is, it’s important to understand that it is not intelligent. At least, not in the same way that humans are intelligent. It is very clever, but it does not think, it does not create, and, most importantly, it does not understand. It has been compared to the type of autocomplete tool you get in your email app, in that it simply predicts what word comes next based on what has already been written.


LLMs are trained on vast amounts of text to learn patterns and associations in language. When you ask a question, the model produces the words and sentences that most commonly appear in connection with questions like yours. It does not understand the question, or the answer. It simply autocompletes the conversation with what looks right based on the training data.
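To make the point concrete, here is a deliberately tiny sketch of the idea, assuming nothing about how any particular product is built: a toy bigram model that "writes" by repeatedly choosing a likely next word from observed frequencies. Real LLMs use neural networks over sub-word tokens, but the loop of predict, append, repeat is the same.

import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram "language model" that picks the next word
# purely from observed frequencies in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(prompt_word, length=5):
    """Extend a prompt by repeatedly choosing a likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())  # sample by frequency
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(complete("the"))  # e.g. "the cat sat on the mat"

The model has no idea what a cat or a mat is; it only knows which words tend to follow which. Scaled up by many orders of magnitude, that is still the essence of what an LLM does.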


Why am I telling you this? Because it’s important to understand that an LLM does not know if the results it returns are true, false, sensitive, confidential, biased, malicious, or prejudiced. As such, it is up to the organisation to implement the controls required to ensure their implementation is safe and secure.


So, how do we do that?



3 ENISA Framework for AI Cybersecurity Practices


Organisations and Governments are racing to catch up with AI and to define policies and standards that ensure AI systems are implemented safely and securely. In June 2023, the European Union Agency for Cybersecurity (ENISA) released the Framework for AI Cybersecurity Practices (FAICP).


I like this framework for a number of reasons. Firstly, it consolidates the work done by multiple Standards Organisations into one simple framework. Secondly, the framework recognises that while AI systems have their own specific security concerns, they’re hosted within traditional ICT environments and, therefore, good cybersecurity foundations within the ICT layer are extremely important.



Figure: The three layers of the FAICP (image source: ENISA).



ENISA define the three layers of the FAICP as follows:


· Layer I (Cybersecurity foundations). The basic cybersecurity knowledge and practices that need to be applied to all ICT environments that host, operate, develop, integrate, maintain, supply, and provide AI systems. Existing cybersecurity good practices presented in this layer can be used to ensure the security of the ICT environment that hosts the AI systems.


· Layer II (AI-specific). Cybersecurity practices needed for addressing the specificities of the AI components with a view on their life cycle, properties, threats and security controls, which would be applicable regardless of the industry sector.


· Layer III (Sectoral AI). Various best practices that can be used by the sectoral stakeholders to secure their AI systems. High-risk AI systems, as defined in the AI Act, are listed in this layer to raise operators’ awareness of the need to adopt good cybersecurity practices.



4 AI Risks


Like any ICT system, an AI system is subject to the full range of technical ICT risks, but it also introduces some new ones. As noted by ENISA, AI systems are socio-technical in nature, meaning there are societal risks as well as technical risks. This is because they are directly connected to social dynamics and human behaviour, and this connection manifests in threats such as bias, lack of fairness, and lack of explainability.

ENISA note five risks that are specific to Machine Learning (ML) and Large Language Models:


· Evasion. In an evasion attack, the attacker crafts small perturbations to the ML algorithm’s inputs that manipulate the algorithm’s output. The perturbed inputs are known as adversarial examples (a toy illustration follows this list).


· Poisoning. In a poisoning attack, the attacker alters the data or the model to modify the ML algorithm’s behaviour in a chosen direction (e.g., to sabotage its results or to insert a back door) in line with their own motivations.


· Model or data disclosure. This threat is related to the possible leaks of all or partial information about the model, such as its configuration, parameters and training data.


· Compromise of ML application components. This threat refers to the possible compromise of an ML component, for example by exploiting vulnerabilities in the open-source libraries used by the developers to implement the algorithm.


· Failure or malfunction of an ML application. This threat is related to the failure of the ML application. It can be caused by a denial of service due to malformed input, or by an unhandled error.
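To illustrate the evasion risk mentioned above, here is a toy numerical sketch, assuming a hand-written linear classifier whose weights the attacker knows. Real attacks (such as the Fast Gradient Sign Method) apply the same idea to neural networks by following the gradient of the loss, but the principle is identical: nudge the input just enough to flip the output.

import numpy as np

# A toy linear classifier: sign(w.x + b). The weights are assumed to be
# known to the attacker (a "white box" setting) purely for illustration.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    return 1 if w @ x + b > 0 else -1

x = np.array([0.4, 0.3])            # a legitimate input, classified as +1

# Craft an adversarial example: step each feature slightly in the direction
# that pushes the decision function towards the opposite class.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w) * predict(x)

print(predict(x), predict(x_adv))   # 1 -1: tiny change, different decision
print(np.abs(x_adv - x).max())      # the perturbation is only 0.2 per feature

The same mechanics, applied to the image, text, or tabular inputs of a production model, are what make evasion attacks so hard to spot by inspection.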



5 Securing AI Systems


We now have a framework to build from and a better understanding of the risks. Now let’s bring it all together and discuss what we can do at each of the three layers of the FAICP to protect our organisations.


5.1 Layer I – Cybersecurity Foundations


Layer I is all about building a good cybersecurity framework and good cybersecurity practices to protect the Confidentiality, Integrity, and Availability of ICT systems. Organisations need to ensure they have a solid security architecture including principles, policies, standards, and procedures, that are supported by Senior Management and the Board. A robust cybersecurity framework should include independent oversight and assurance to ensure appropriate security controls are designed, implemented, and operating effectively.


There are several frameworks and strategies that organisations can build from, including the ACSC’s Essential Eight and the NIST Cybersecurity Framework. Key to success at this level is the use of tried and tested security principles, such as Defence in Depth, Zero Trust, Least Privilege, Reduced Surface Area, Trusted Credentials, etc.


5.2 Layer II – AI Specific


Security practices at layer II must consider the AI specific risks mentioned above. Let’s focus on what I think organisations are most likely to face: Data Disclosure and Poisoning.


5.2.1 Data Disclosure


Recall that LLMs, like ChatGPT, do not understand the data and cannot determine whether a response is accurate or appropriate. If the training or prompt data contains personally identifiable information, corporate intellectual property, or other confidential information, it may be returned in query results, leading to unauthorised access and data disclosure.


It is also important to note that if you are using public models, such as ChatGPT, any data you provide may be used to train the model. Users must never submit sensitive or confidential information to a public LLM, or to any public web service for that matter.


When using private models, such as the Azure OpenAI Service, it is also important to check the terms and conditions. For example, Microsoft retains prompts and responses for up to 30 days for content filtering and abuse monitoring. Any AI provider should undergo the same third-party security assessment process as any other partner.


The key controls required to avoid data disclosure risks include Data Classification, Data Sanitisation, Identity and Access Management, Access Control, User Training, and Data Loss Prevention.
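As a concrete illustration of how Data Sanitisation and Data Loss Prevention can work together at the prompt boundary, here is a minimal sketch of a pre-submission filter. The patterns and the surrounding workflow are illustrative assumptions only; a real DLP control would use far richer detection (classifiers, dictionaries, context) driven by central policy.

import re

# Illustrative placeholder patterns; not an exhaustive or production-grade PII detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "nine_digit_id": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def redact(prompt):
    """Replace likely PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

prompt = "Draft a letter to jane.doe@example.com about her overdue invoice."
clean_prompt, findings = redact(prompt)
if findings:
    print("Redacted before submission:", findings)
print(clean_prompt)   # the email address is replaced with a placeholder

A filter like this sits alongside, not instead of, the other controls: classification tells you what must never leave, access management limits who can send it, and user training covers everything a regular expression cannot.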


5.2.2 Poisoning


Poisoning occurs when a malicious actor modifies the training or prompt data so as to change the outcome in their favour. For example, they could change customer or user behaviour, degrade service levels, cause damage to brand reputation, promote misinformation, etc.


Preventing poisoning attacks comes down to protecting data integrity. Organisations must protect their training and prompt data with controls such as Identity and Access Management, Access Controls, Encryption, and Logging and Monitoring. They should also use large data sets to reduce susceptibility to malicious samples. Key principles organisations must enforce include Least Privilege and Need to Know.
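One simple integrity control that supports the measures above is to verify training and prompt data against a known-good hash manifest before every training run, so that unauthorised modification is detected rather than silently learned. The sketch below assumes a hypothetical directory layout and manifest file; in practice the manifest itself must be protected (signed or write-restricted) and failures should feed into logging and monitoring.

import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large data sets do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir, manifest_file):
    """Return True only if every file matches its recorded hash."""
    # Hypothetical manifest format: {"training_set.csv": "<sha256>", ...}
    manifest = json.loads(Path(manifest_file).read_text())
    ok = True
    for name, expected in manifest.items():
        actual = sha256_of(Path(data_dir) / name)
        if actual != expected:
            print(f"INTEGRITY FAILURE: {name} has been modified")
            ok = False
    return ok

if not verify_dataset("training_data", "manifest.json"):
    raise SystemExit("Aborting training run: possible data poisoning")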


5.3 Layer III – Sectoral AI


Each sector will have unique problems that require unique solutions. For example, the Health sector will have different problems to Financial Services. ENISA provides guidance for some specific industries, and this will expand over time. Organisations should work collaboratively with their industry peers to define their unique risks and develop mitigating strategies.



6 AI Trustworthiness


I would like to draw particular attention to AI Trustworthiness. It should be the imperative of any organisation implementing AI to do so in an ethically responsible manner, and the ethical use of AI systems centres around building AI solutions that are trustworthy. ENISA define the following characteristics of trustworthy AI:


· Accountability – People and Organisations should be accountable for the outcomes of AI solutions.


· Accuracy – The output of AI solutions must be accurate and reflect reality.


· Explainability – How a conclusion or result was reached should be explainable in a way that can be understood by affected persons.


· Fairness – Results produced using AI systems should be fair and unbiased.


· Privacy – The privacy of individuals must be protected.


· Reliability – AI systems must reliably produce consistent results and meet performance standards.


· Safety – AI systems must prevent behaviours that are harmful to humans.


· Security – AI systems must resist attacks and ensure Confidentiality, Integrity, and Availability.


· Transparency – Organisations should foster understanding of AI systems and make stakeholders aware of their use.



7 Conclusion


The advancements in AI are incredible, and it’s understandable that organisations are rushing to adopt them. But those that do so without the proper cybersecurity frameworks in place do so at significant risk. It is also important to understand that AI services will only grow in number and popularity, and your staff are going to use them whether your organisation is ready or not. At a minimum, we must ensure that everyone understands the risks and we must cultivate AI Trustworthiness. To confidently lean into AI, we must develop robust cybersecurity frameworks based on solid foundations, such as the ENISA FAICP.


Is your organisation’s cybersecurity framework ready for AI?



Author - Michael Hodson, Senior Cyber Security Consultant
