Why Discovering Shadow AI Is Key to Protecting Data
Normalyze's Ravi Ithal on the Rise of LLMs and the Security Challenges They Pose

Enterprises are rapidly adopting generative AI technologies, but this surge has brought significant security challenges. As companies race to integrate large language models, or LLMs, into custom applications, the biggest concern is maintaining the confidentiality and privacy of data and ensuring that LLMs are safe, said Ravi Ithal, co-founder and CTO of Normalyze.
Many employees may unknowingly share sensitive data with LLMs. Whether developers build on cloud-based LLMs or employees upload corporate documents into tools such as ChatGPT, both practices pose significant risks. "The number one step is to discover shadow AIs," Ithal advised security practitioners. "And number two: Think about how to protect them."
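In practice, that first discovery step often begins with visibility into outbound traffic. The Python sketch below illustrates one simple approach: scanning an egress proxy log for requests to well-known hosted LLM endpoints. The log format, file name and domain list are illustrative assumptions for this article, not a description of Normalyze's product.

```python
# Minimal sketch: flag "shadow AI" usage by scanning an egress proxy log
# for requests to well-known LLM API endpoints. The log format and the
# domain list below are illustrative assumptions, not any vendor's tool.

from collections import defaultdict

# Hostnames of popular hosted LLM APIs (extend as needed).
LLM_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Map each internal client to the LLM endpoints it contacted.

    Assumes whitespace-separated log lines of the form:
        <timestamp> <client_ip> <destination_host> <url_path>
    """
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            client_ip, dest_host = parts[1], parts[2]
            if dest_host in LLM_API_DOMAINS:
                hits[client_ip].add(dest_host)
    return hits

if __name__ == "__main__":
    for client, endpoints in find_shadow_ai("proxy.log").items():
        print(f"{client} -> {', '.join(sorted(endpoints))}")
```

Real deployments would draw on richer telemetry, such as DNS, CASB or API gateway logs, but the principle is the same: inventory who is talking to which LLM before deciding how to protect that traffic.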
"Discovery is the number one task for everyone. You will then understand what type of data stores are being used for what LLMs and what applications, and then the next step is to find out what sort of data is there," he said.
In this video interview with Information Security Media Group at Black Hat 2024, Ithal also discussed:
- Tools and strategies for protecting data in generative AI applications;
- How shadow AI presents new risks by bypassing traditional security controls;
- How Normalyze is helping clients secure LLMs.
Ithal has more than 20 years of experience in network security and creating disruptive technologies that protect enterprises from cyberthreats. He was a co-founder and chief architect of Netskope and a founding engineer at Palo Alto Networks.