Navigating AI-Based Data Security Risks in Microsoft Copilot
Zenity's Michael Bargury on AI Prompt Injection and Copilot Security FlawsAs enterprises increasingly adopt generative AI-powered tools such as Microsoft Copilot, security leaders grapple with addressing new cybersecurity vulnerabilities such as data leakage, model poisoning and unauthorized access to sensitive information, according to Michael Bargury, co-founder and CTO of Zenity. While Copilot and other AI assistants are built on existing enterprise search capabilities, they significantly lower the barrier for accessing potentially sensitive data.
By sending an email, a Teams message or a calendar event, attackers can use prompt injection "to completely take over Copilot on your behalf," Bargury said. "That means I control Copilot. I can get it to search files on your behalf with your identity, to manipulate its output and help me social-engineer you."
Although Microsoft has taken steps to secure Copilot, including 10 different security mechanisms discovered by Bargury, his team was able to circumvent all of them. "You need to be persistent," he said, warning that existing security measures are not sufficiently robust to safeguard enterprise data from threat actors.
In this video interview with Information Security Media Group at DEF CON 2024, Bargury also discussed:
- The risks associated with creating bots using Copilot Studio;
- The importance of training programs to educate users about preventing social engineering attacks through AI tools;
- Security concerns around plug-ins that expand Copilot's capabilities.
Bargury is a hacker, builder and cybersecurity educator. He has nearly 15 years of leadership experience and has worked in organizations such as Microsoft and OWASP Foundation. Bargury is a regular speaker at top conferences, including Black Hat, DEFCON and RSAC.