AI-Based Attacks , Artificial Intelligence & Machine Learning , Fraud Management & Cybercrime
Sleepy Pickle: Researchers Find a New Way to Poison ML
Hackers Can Use the Attack Method to Manipulate ML Model Output and Steal DataSecurity researchers have found a new way of poisoning machine learning models that could allow hackers to steal data and manipulate the artificial intelligence unit's output.
See Also: 2024 CISO Insights: Navigating the Cybersecurity Maelstrom
Using the Sleepy Pickle attack method, hackers can inject malicious code into the serialization process, said researchers at Trail of Bits. With this access, a hacker can theoretically steal data, run a malicious payload and manipulate output in ML models with higher persistence than they could with other methods of supply chain attacks.
Serialization is the process of converting a Python data object into a series of bytes to store on a system. It's also called the "pickling" process. The pickle format continues to be one of the most popular ways to package and distribute ML models, despite known risks.
Sleepy Pickle "goes beyond previous exploit techniques that target an organization's systems when they deploy ML models to instead surreptitiously compromise the ML model itself, allowing the attacker to target the organization's end users that use the model," said Trail of Bits.
The attack process is simple. Attackers can employ programs or tools often used for detection and analysis, such as open-sourced fickling, to create malicious pickle files. They can then convince the target to download the poisoned file via phishing, man-in-the-middle or supply chain attacks and remain undetected until the user deserializes the data object containing the malicious code and executes the payload.
With this attack method, the hacker doesn't need to have local or remote access to a system. There is no trace of the malware attack. And because serialized models are usually large, a small piece of malicious code may not be easy to detect in the total file size.
As hackers customize the malicious payload they use, they can also make it harder to detect.
The damage Sleepy Pickle can do is low in severity if defenders have controls in place such as sandboxing, isolation, privilege limitation, firewalls and egress traffic control.
But hackers can use the method to manipulate the model in ways such as inserting backdoors or manipulating its output. The report describes how bad actors can use this method to suggest that drinking bleach can cure the flu. Attackers can also use the poisoned model to execute malware that can steal data, phish users and output misinformation.
Apart from using a safer file format than pickle, there's not much organizations can do to prevent these risks, said the researchers. They advised using models from trusted organizations and assessing the security risks of AI and ML models holistically rather than in isolation.
"If you are responsible for securing AI/ML systems, remember that their attack surface is probably way larger than you think," said the researchers.