In recent years, organizations have moved quickly to integrate new AI technology into their business processes, from basic machine learning models to generative AI tools like ChatGPT. Despite offering numerous business advantages, however, the integration of AI expands organizations’ attack surfaces.
Threat actors are constantly looking for ways to infiltrate target IT environments, and AI-powered tools can become another entry point to exploit. AI security strategies are essential to safeguard company data from unauthorized access.
MLSecOps is a framework that brings together operational machine learning (ML) with security concerns, aiming to mitigate the risks that AI/ML models can bring to an organization. MLSecOps focuses on securing the data used to develop and train ML models, mitigating adversarial attacks against those models, and ensuring that developed models comply with regulatory compliance frameworks.
The risks of AI adoption
ML models can help organizations increase efficiency by automating repetitive tasks, improving customer service, reducing operational costs and maintaining competitive advantages.
But ML adoption also introduces risks at different points, including during the development and deployment phases, especially when using open source large language models (LLMs). The following are among the most significant risks:
- Bias. AI tools could give biased results based on the data they trained on.
- Privacy violations. AI tools are often trained on vast amounts of data, and sometimes it is unclear whether the owners of that data gave consent for its use in AI training. For example, an AI coding tool might be trained on code snippets from software projects in GitHub repositories that contain secrets, such as account credentials.
- Malware. Developers could inject malicious code into their LLM to make it behave maliciously or to produce inaccurate results. For example, a compromised LLM could be used in a malware scanner to recognize specific malware code as benign.
- Insecure plugins. Many LLMs allow the use of plugins to receive freeform text. Maliciously crafted input text could execute remote malicious code, leading to a privilege escalation attack.
- Supply chain attacks. Many vendors use pretrained models from third-party providers for their AI tools. These models could contain backdoors or be designed with security flaws that facilitate unauthorized access to the tool in an organization’s supply chain.
- IT infrastructure risk. AI tools are ultimately software programs that need computing infrastructure to run on, such as servers for hosting and networking devices to facilitate access to the model. If threat actors reach the underlying hardware where the ML model is deployed, they can conduct various attacks against it, such as model extraction, or disrupt the service by targeting it with denial of service attacks to affect the ML model’s performance or availability.
MLSecOps and its primary benefits
The term MLOps refers to the process of operationalizing ML models in production. It involves several phases:
- Select the ML model architecture and choose the training data sets.
- Clean and preprocess the training data for the ML model.
- Train the model and evaluate the model’s performance on metrics such as accuracy and precision.
- Deploy the model into the production environment.
- Monitor the model’s ongoing performance after deployment to ensure it works well in various real-world conditions.
MLSecOps, therefore, is a natural extension of MLOps. Similar to how DevOps evolved into DevSecOps by integrating security practices into the software development lifecycle, MLSecOps ensures that ML models are developed, tested, deployed and monitored using security best practices.
MLSecOps integrates security practices throughout the ML model development process. This integration ensures the security of ML models in two areas:
- The security of the data used to train and test the ML model.
- The security of the IT infrastructure used to host and run the ML models.
5 security pillars of MLSecOps
MLSecOps specifically focuses on the security issues related to ML systems. The following are the five main security pillars that MLSecOps addresses.
1. Supply chain vulnerability
Like other software tools, ML systems frequently use components and services from various third-party providers, creating a complex supply chain. A security vulnerability in any component across the ML system supply chain could allow threat actors to infiltrate it and conduct various malicious actions.
Typical supply chain elements for an ML system include the following:
- Software and hardware components from external providers.
- Use of an external provider, such as a cloud provider, to host the ML model.
- Use of a service from an external provider to provide the communication infrastructure between the ML system — for example, hosted on the cloud — and its users, who are spread across different locations.
The U.S. was the pioneer in addressing the security aspects related to the software supply chain. In 2021, the Biden administration issued Executive Order 14028, which requires all organizations in both public and private sectors to address security vulnerabilities in their supply chain.
2. Model provenance
Model provenance is concerned with tracking an ML system’s history through development, deployment, training, testing, and monitoring and usage. Model provenance helps security auditors identify who made specific changes to the model, what those changes were and when they occurred.
Some elements included in the model provenance of an ML system include the following:
- Data sources used to train the ML model.
- Changes made to ML algorithms.
- Information regarding retrained models, including when it was retrained, any new training data and that data’s sources.
- ML model use over time.
Model provenance is essential to comply with the various data protection compliance regulations, such as the GDPR in the European Union, HIPAA in the United States and industry-specific regulations such as the Payment Card Industry Data Security Standard.
3. Governance, risk and compliance
Governance, risk and compliance (GRC) frameworks are used within organizations to meet government and industry-enforced regulations. For ML systems, GRC spans several elements of MLSecOps, with the primary aim of ensuring that organizations are using AI tools responsibly and ethically. As more organizations build AI-powered tools that rely on ML models to perform business functions, there is a growing need for robust GRC frameworks in the use and development of ML systems.
For instance, when developing an ML system, organizations should maintain a list of all components used in development, including data sets, algorithms and frameworks. This list is now known as the machine learning bill of materials (MLBoM). Similar to a software bill of materials for software development projects, MLBoMs document all components and services used to create AI tools and their underlying ML models.
4. Trusted AI
Trusted AI deals with the ethical aspects of using AI tools for different use cases as more organizations rely on AI tools to perform job functions, including critical ones. There is an emerging need to ensure that AI tools and their underlying ML models are giving ethical responses and are not biased in any way towards characteristics like race, gender, age, religion, ethics or nationality.
One method to check the fairness of AI tools is to request that they explain their answers. For instance, if a user asks a generative AI tool to recommend the best country to visit in summer, the model should provide a justification for its answer. This explanation helps humans understand what factors influenced the AI tool’s decision.
5. Adversarial machine learning
Adversarial machine learning is concerned with studying how threat actors can exploit ML systems in various ways to conduct malicious actions. There are four primary types of adversarial ML:
- Poisoning. Threat actors inject malicious data into ML training data sets, causing them to produce incorrect answers.
- Evasion. This attack occurs after the ML model has finished training. It involves sending specially crafted prompts to the ML system to elicit incorrect responses. An example is altering a photo of a dog slightly to trick the ML system into misidentifying it as a building.
- Inference. Threat actors try to reverse engineer the ML model to reveal what data was used to train the model, which could contain highly sensitive information.
- Extraction. In this attack, threat actors try to extract or replicate either the entire ML model or just the data used to train it.
MLSecOps best practices
ML development teams can use the MLSecOps methodology efficiently to mitigate cyberattacks when developing ML models. The following are some MLSecOps best practices:
- Identify threats associated with ML. Identify potential attack vectors related to ML development. For instance, there are special security vulnerabilities that pertain to ML development, such as data poisoning, model extraction and adversarial sample attacks.
- Secure model data. If the data used to train the model contains sensitive information, such as customers’ personally identifiable information, that data should be appropriately masked through encryption or other means.
- Use sandboxing technology. The development environment of the ML models should be isolated from the production environment. This ensures attackers cannot access ML models during the development phase, in which the model is under testing and might be inadequately protected.
- Scan for malware. All software components used to create the ML model, especially open source ones, should be scanned for malware and other security vulnerabilities. Components coming from third-party providers should be scanned thoroughly to ensure they are free from security holes.
- Perform dynamic testing. Dynamic testing involves supplying malicious prompts to the ML model during testing to ensure it can handle them. This is essential for LLMs that are exposed to the internet.
Nihad A. Hassan is an independent cybersecurity consultant, an expert in digital forensics and cyber open source intelligence, and a blogger and book author. Hassan has been actively researching various areas of information security for more than 15 years and has developed numerous cybersecurity education courses and technical guides.