Proof of Concept: Ensuring AI Compliance, Security Controls
Panelists Troy Leach and Avani Desai on AI Organizational and Regulatory Challenges
Anna Delaney • May 22, 2024
In the latest "Proof of Concept," Troy Leach, chief strategy officer at Cloud Security Alliance, and Avani Desai, CEO at Schellman, discussed integrating AI into organizational frameworks. They highlighted the evolving roles of compliance and leadership and the importance of regulatory frameworks in ensuring robust and trustworthy AI deployment.
Leach emphasized the significance of leadership in AI governance, stating, "We need clear AI policies and acceptable practices, which are still missing in the vast majority [of organizations]. We have struggled very clearly in enterprises with documentation, logging and monitoring."
Desai said that fostering a culture of accountability and transparency is crucial for compliance and security. "Rather than imposing blanket restrictions, we decided to prioritize educating our team members on responsible usage, really focusing on fostering a culture of accountability and security awareness," she said, adding that "you have to proactively establish documentation and communication channels."
The panelists - Leach; Desai; Anna Delaney, director of productions at ISMG; and Tom Field, senior vice president of editorial at ISMG - discussed:
- The importance of regulatory frameworks, including NIST AI RMF and ISO 42001;
- The evolving roles of AI officers and the need for clear organizational responsibilities;
- Challenges and strategies for implementing AI within a zero trust framework.
Leach has spent more than 25 years educating about and advocating for the advancement of responsible technology to improve the quality of life and parity for all. He sits on several advisory boards as an expert in information security and financial payments. Leach also founded a consulting practice that advises on opportunities to leverage blockchain technology, zero trust methodology and various cloud services to create safe and trusted environments. Previously, he helped establish and lead the PCI Security Standards Council.
Desai has domestic and international experience in information security, operations, P&L, oversight and marketing involving both startup and growth organizations. She has been featured in Forbes, CIO.com and The Wall Street Journal and is a sought-after speaker on a variety of emerging topics, including security, privacy, information security, future technology trends and the rising number of young women involved in technology.
Don't miss our previous installments of "Proof of Concept," including the Feb. 27 edition on how to secure elections in the age of AI and the March 21 edition on opening up the AI black box.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello! This is Proof of Concept, a talk show where we invite security leaders to discuss the cybersecurity and privacy challenges of today and tomorrow and how we can potentially solve them. We are your hosts ... I'm Anna Delaney, director of productions here at ISMG.
Tom Field: I'm Tom Field. I'm senior vice president of editorial at ISMG. Anna! Welcome back from the RSA Conference.
Delaney: What a pleasure that was. And we are indeed back; we survived. And what was the topic du jour, Tom, in your opinion?
Field: I've been joking that they should refer to it now as the RSA AI Conference. Every conversation started with, ended with or surprisingly erupted into the topic of generative AI.
Delaney: In short: the benefits, risks and implications of how we use it offensively and defensively in cybersecurity, but also how we regulate this technology. And that is the main focus of today's Proof of Concept - regulatory compliance: how evolving AI technologies require strong organizational strategies and teamwork to ensure security, and how to balance innovation with ethical and legal responsibilities. Have these concerns come up in the conversations you've been having, Tom?
Field: When I spoke to Trevor Hughes, the CEO of the International Association of Privacy Professionals, we had a conversation about ethics and governance, and part of that was where regulatory guidance is going to come from - not just in the United States but around the world. There are some things happening in Europe and some things happening in Asia. I think we may be on the cusp of seeing a GDPR of AI. We differed over whether we're going to see anything soon in the United States. I maintain that if you can't get one piece of privacy legislation in this country and are instead left with 50 state laws, I don't see a lot of hope for AI. But he and I have a friendly bet on that because he feels more optimistic about it than I do. Huge topic of conversation and very timely for today.
Delaney: Yeah, very interesting, the different approaches. Europe is taking a more prescriptive and binding stab at it compared to the U.S., where there is more flexibility and room to adapt over time. Let's move forward to a couple of points from discussions at the RSA Conference. One was that regulations often emerge when industry practices fall short. And change is hard. But however hard it is, it is crucial for improving security and reducing risk. Another point is that leadership, in both government and companies, is very important to drive the necessary changes and balance innovation with regulation.
Field: To add to that, we've never done any of this at the pace that we're doing it today. AI is a completely different animal than anything we've tackled before, and however we do this, in regulatory fashion, it is going to be at a scale and speed unlike anything we've seen.
Delaney: Yeah, that's a very good point. Now we get to hear what our guests have to say on these points and some of the challenges that are coming up. So I think it's time to welcome them in. I'm very pleased to welcome Avani Desai, CEO at Schellman, and Troy Leach, chief strategy officer of the Cloud Security Alliance. Welcome to you both. Thank you so much for joining us.
Field: Indeed! Thanks for being with us.
Avani Desai: Great to be here. Thank you.
Troy Leach: Yeah! Thank you.
Delaney: Tom, why don't you lead the way?
Field: Let's start here; I have questions for both of you. As AI technologies evolve and integrate deeper into organizational frameworks, I wonder if you might discuss the intertwined roles of organizational responsibilities and regulatory practices in shaping what we want to see, i.e., a robust AI deployment strategy.
Desai: I can take it first and then pass it off to you, Troy. I think, Tom, you said it: This is reminiscent of the impact that GDPR had on organizations. When GDPR first emerged, it introduced the data protection officer, which drove significant resource allocation and accountability. And I'm sure you've heard this, especially if you're talking to the IAPP: the whole concept of privacy by design. So what I'm envisioning is a very similar situation unfolding with AI. We can anticipate the emergence of a distinct oversight role. Perhaps it's going to be the chief AI officer or maybe a chief data officer; we're starting to see the chief trust officer role come out. This role is going to work closely with the DPO because of the intertwining of AI with privacy concerns and data protection issues. So instead of simply reacting to regulatory environments, I think organizations are going to adopt a proactive approach. We're going to embed regulatory compliance into the control environment, which I think is going to happen through frameworks like NIST AI RMF or ISO 42001, the first global framework focusing on trustworthy AI. Moreover, I think that's going to foster this culture of trust by design. I think it's fascinating that we're witnessing this trend of doubling down on these frameworks, such as the EU AI Act and, particularly, ISO 42001, to establish a strong foundation at the organizational level. And we've seen this pattern before, where companies initially seek technological solutions for emerging risks, only to realize that they need to address foundational organizational practices. So I'm excited to see this evolution toward dedicated leadership and frameworks that address the complexities of AI deployment. We're witnessing a pivotal integration of AI into organizational strategies.
Leach: Yeah, just to follow that, I agree with all of the conversation about the leadership and oversight role of an officer in the company. So I'll take the other side: the practitioners who have to execute roles and responsibilities in the organization. I'm naturally an optimist. But let me recap the ways that we've struggled, for essentially the entirety of the internet, with good organizational responsibility and with managing what regulators come up with to protect systems and data. We've struggled very clearly in enterprises with documentation, logging and monitoring. Where I'm hopeful about AI - this is the optimism coming back - is that if we look at what gen AI can do really well in general, it synthesizes data, it can categorize information with good reasoning and logic, and it is excellent at pattern identification. And as ChatGPT and everything else have shown, it is good at communicating and creating very readable language. So for SOC analysts and GRC folks alike - all these roles responsible for assuring security and trust - I think these responsibilities are going to get the best augmentation that we have ever had in the digital age, which is going to improve what we've struggled with for over 30 years.
Field: Very well said both of you. Anna! Your witnesses?
Delaney: Thank you. Great stuff so far. So, a question for you both again. We know resiliency benchmarking and controlling shadow access are key to keeping AI operations secure. How do you see zero trust fitting into all of this? How do these fit into zero trust architectures, and what can organizations do to boost their AI systems' security and resilience?
Leach: I don't mind taking a stab at this. This is a hard question, and I think it's hard because we're still getting familiar with how to properly leverage gen AI. I'm hearing from many companies in the enterprise world that are putting a temporary restriction in place until they get that verification right. Zero trust is never trust, always verify. And they're at a point of: I'm not going to trust this, and I just don't know how to verify it. And it's going to be difficult, because part of the inherent character of gen AI is that it creates new things, so it is sometimes difficult to identify what is going to come out of a large language model. I think we'll come closer to the zero trust methodology, but we aren't there yet. There need to be clear AI policies and acceptable practices, which are still missing in the vast majority of organizations - I heard at RSA that up to 75% of organizations still don't even have a basic AI policy. I think we also need better pen tests and to know what type of threat analysis we need against these models that are being used. We also need a better collection of AI use cases within an enterprise. We've talked about a bill of materials for software and for crypto; we need that also for all the AI use cases that are potentially happening, both sanctioned and unsanctioned, and we need to understand how to manage that and then monitor prompts that come in and out. I've seen some interesting APIs that can monitor what an enterprise user is putting into a query and then, based on that prompt, determine whether it was acceptable - and also monitor the outcomes. Putting in these things will help CISOs. But again, we're in the early days, and I've yet to hear someone confidently say that they have applied zero trust to their strategy for AI.
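To make Leach's point about monitoring prompts concrete, here is a minimal sketch of that verify-then-forward pattern. It is illustrative only, not any specific product he mentioned: the policy patterns, the `gated_completion` helper and the stubbed `llm_call` are all invented for this example, and a real deployment would plug in an enterprise DLP service and a durable audit sink such as a SIEM.

```python
import json
import re
import time

# Hypothetical policy rules, invented for this sketch. A real gateway would
# pull these from an enterprise DLP service or a vendor prompt-security API.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any policy rules the prompt violates."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]


def audit_log(event: dict) -> None:
    """Record prompt and outcome metadata; a real system would ship to a SIEM."""
    event["timestamp"] = time.time()
    print(json.dumps(event))  # stand-in for a durable log sink


def gated_completion(user: str, prompt: str, llm_call) -> str | None:
    """Gate one LLM request: verify the prompt, then log prompt and outcome."""
    violations = check_prompt(prompt)
    if violations:
        # Never trust: a request that fails verification is denied outright.
        audit_log({"user": user, "action": "blocked", "violations": violations})
        return None
    response = llm_call(prompt)
    # Monitor the outcome too, so reviewers can audit what the model returned.
    audit_log({"user": user, "action": "allowed",
               "prompt_chars": len(prompt), "response_chars": len(response)})
    return response


if __name__ == "__main__":
    fake_llm = lambda p: "Summary: " + p[:40]  # stubbed model call
    print(gated_completion("alice", "Summarize our Q3 security review", fake_llm))
    print(gated_completion("bob", "My SSN is 123-45-6789, file my taxes", fake_llm))
```

Running the sketch prints one audit record per request: the first prompt is allowed and forwarded to the stub, while the second is blocked because it matches the hypothetical SSN pattern. The design mirrors the zero trust framing Leach describes: nothing passes by default, every decision is logged, and the verdict is made per prompt rather than per user or per tool.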
Desai: I 100% agree, and I can tell you that even internally within our organization, early on, we encountered the same scenario, where employees were already utilizing AI tools like ChatGPT. We had to swiftly implement policies and awareness campaigns, focus on acceptable use, and ensure that employees weren't inputting any sensitive data and that they were responsible for that. Rather than imposing blanket restrictions, we decided to prioritize educating our team members on responsible usage, focusing on fostering a culture of accountability and security awareness. But I agree with you: This is very complex. The complexity of these AI technologies, such as LLMs and RAG, intensifies the risks; they introduce a whole new set of security risks beyond what we've traditionally seen in ERPs, the cloud and information systems in general. Shadow access is especially concerning: Whether it's unintended or unauthorized access, exposure of sensitive data or governance issues, these concerns are going to be paramount. So I agree with Troy; this presents a unique challenge, especially for IT teams and CISOs, and it's going to evolve over time. And I worry that there is going to be some circumvention of the existing governance controls. So it goes back to the importance of implementing robust monitoring controls and ensuring compliance. Under a zero trust architecture - never trust, always verify - robust authentication and access controls are going to be crucial countermeasures against shadow access. And I think it's going to be essential for cybersecurity programs to have frameworks, like NIST and ISO, so you can establish risk management programs, and then to take technical resources like MITRE ATLAS and OWASP, which I think will give you insights into real-world attack scenarios to ensure you address these emerging threats. So it is definitely complicated, and it's going to continually evolve. But we're just going to have to stay on top of it.
Delaney: A lot to think about there. Thank you so much! Tom, back to you.
Field: I'm going to go back to the regulatory landscape. And here's the challenge. I'm going to try to say all these in one breath and not make a mistake. Avani pointed out the ISO 42001. We have the EU AI Act, and we have President Biden's AI EO. Troy! Avani! What new challenges and opportunities do these frameworks present?
Desai: First, I think it's essential to consider whether these regulations and standards are serving as detractors from or enablers of business innovation. I'm all about regulatory frameworks - I run an assessment company - but there is a balance between innovation and overregulation. We're firm believers that AI regulation is necessary, because it establishes guardrails for this powerful, evolving technology. But at the same time, I want to make sure it's not stifling innovation. Take, for example, the EU AI Act, which you mentioned. It offers a balanced approach by providing SMBs with free resources, like sandboxes, to validate their assumptions regarding the risk categorization of products or technologies that they want to launch in the EU market. I think this empowers businesses, especially SMBs, to navigate regulatory compliance and fosters innovation. The EU AI Act primarily targets high-risk AI applications, which ensures that the majority of AI use cases in the EU will probably remain unaffected, thereby promoting technological advancement. Similarly, ISO 42001 is designed to guide organizations in developing a management system framework tailored to their specific risks and environment. Again, it's not a blanket mandate that says go check the box and undertake an entire technology rebuild. It's a very flexible framework that focuses on responsibility and trustworthiness. The other thing I like is that the EU AI Act and ISO 42001 are aligned, and this is the first time I'm seeing the world come together and align on frameworks. The Biden executive order is aligned as well, because it talks about trustworthiness and the regulatory requirements to bolster trust in AI systems. So I think this presents an opportunity for collaboration and knowledge sharing. I'm seeing a community-driven approach being fostered, and I haven't seen an emerging technology foster this in a long time. You're going to have to leverage the standards and regulatory guidance to navigate these challenges, but everyone on the global scale is coming together to promote ethical and responsible innovation.
Leach: There's not much I can add to that - it was an exceptional answer, and I agree with a lot of it. First and foremost, the challenge with any type of legislation, or even any framework, is that we're discovering new opportunities and risks daily. Even within our cloud security working groups, we have four members dedicated to AI, and we've started a taxonomy that is being added to daily as new ways to abuse large language models are discovered - ways never thought of before; sometimes AI is able to create new AI risks. The challenge with any new regulation is the ability to protect citizens without creating a disadvantage for society. I like what Avani said about stifling innovation; I think that's the biggest risk that we want to avoid. And recognize that enterprises will probably move forward with whatever comes out and try to build to that. With the EU AI Act - I'll just focus on that for one second - I love the momentum toward disclosure to users, so they know when they're engaging with a real person and when they're talking with AI. That is easy enough and makes sense. That legislation also ranks the high-risk uses of AI, and I think that today it is difficult to identify and discriminate what are high-, medium- or low-risk activities within AI. I'm sure we'll figure it out by the time it is enacted in 2025. But that is an area where the industry has to be creative, think about how we truly assess the risks and - to that collaboration point - come together collectively on an agreeable definition of acceptable risk. And once you have legislation like this, how do you use it? How do you enforce it? One thing that I heard at RSA was: How do you prosecute a bad actor if that bad actor is predominantly just AI that's been released into the wild, using some form of a public-good large language model? Who is at fault for what this AI generates, and how do you enforce all of these acts against something that is not necessarily of human origin? So there is a lot for us still to discover.
Field: Don Henley says the lawyers dwell on small details. Lots to think about here, but we're running out of time, and Anna and I both want to make sure we get in a couple of questions. Avani, I've got one specifically for you, and then, Troy, Anna is going to follow up with a question for you. Avani, from your compliance expertise, which legal frameworks do you find most challenging for organizations implementing AI? And I wonder if you might share some specific examples of how those challenges can be overcome.
Desai: We've talked about frameworks that are setting precedent, like the EU AI Act, and then there are voluntary frameworks like NIST AI RMF and ISO 42001. But I think they all share a central challenge: It's our organizational structures that have to prioritize roles, responsibilities and accountabilities to ensure we're meeting these AI objectives around security, safety, environmental impact and transparency. This is the first time we're looking at these nonsecurity areas, like transparency, bias and safety. There are a couple of ways we can help organizations overcome this. One is that you have to proactively establish documentation and communication channels, and it has to be more than just checking the box with your auditors. I live in this world where people think, "Oh, our auditors are going to come in; let's just do this, this and this to check the box so we can get that certification." That type of thinking cannot happen. You have to demonstrate compliance to auditors, but it's also about fostering this culture of accountability and transparency throughout the entire organization - top down and bottom up - and ensuring that you're investing in training and awareness and empowering employees to navigate these complexities of AI compliance. In addition to documentation and training, organizations have to leverage emerging technologies to enhance compliance efforts. For instance, AI-powered compliance tools are going to help streamline risk assessments, monitoring, reporting and all the things you need to do to stay ahead of regulatory requirements, while minimizing administrative burden - because if we have a large administrative burden, people aren't going to want to do it. And we have to further foster collaboration and knowledge sharing within the organization as well as within industry networks. I think the CSA is doing such a great job of providing valuable insights and helping CSPs navigate the compliance landscape with AI and other things. The only thing I would add before we close this out is that you have to remain agile and adaptive in your approach to AI compliance. Keep in mind that these regulatory frameworks are going to evolve, and at a quick rate, so organizations have to actively monitor developments and adjust their strategies - it could be monthly, it could be quarterly. Just ensure that you have a proactive and collaborative mindset to stay on top of these compliance challenges.
Field: Very well said. Anna I'll pass this back to you and Troy to bring us home.
Delaney: Thank you. Well, Troy, I know that the Cloud Security Alliance held its own AI summit recently. I'd love to know the key insights you thought attendees took away and how these advance the discussion around your own AI framework, which is set for a September release, I believe?
Leach: Yes, in draft form. We did release four research documents, all of which are free to download on our website. The topics covered a lot of what naturally came up in our conversation here today. We've talked about organizational responsibilities and best practices in regulated environments, similar to what Avani just talked about. I've never heard dynamic, agile, quick evolution and frameworks used so close together before, but I think we need to; this is a new generation of how we build the right guardrails. We also have worked on resiliency benchmarking with AI, and we have another research paper coming out. In general, we look at all types of technology providers, such as cloud service providers, and other emerging or existing technologies, and there is this regulatory governance push around third-party trust and assurance on the supply chain, especially in critical infrastructure. So we'll have an operational resiliency paper out this summer for banks and how they manage some of those risks. I also think about shadow access; that risk is very real, so we've published some best practices around that and around applying zero trust within those AI environments. There are several more papers coming out for publication this summer - again, all free to download - and then a draft of our framework will be available in September. We're working closely with all the other organizations mentioned today, such as NIST and OWASP, and other organizations around the world that are building out similar types of recommendations, and we'll have a workshop there. At the summit, we had a great turnout - about 1,300 attendees - with lots of great talks. We had Kevin Mandia and Phil Venables, CEO of Mandiant and CISO of Google Cloud, respectively. We had our own executive leader of the AI safety initiative, Caleb Sima, who talked a lot about those new opportunities and went through a host of ways that we're going to be able to apply good security, along with a fireside chat he did with CISA's Lisa Einstein. And there were a lot of other CISOs from enterprises like Paramount, Visa, Athena, Kraft Heinz and many more talking about these challenges. One of the biggest takeaways was that security professionals are realizing that AI is inevitable and that it's going to be mandatory in the future to protect organizations. But it does need these guardrails we've been talking about. It needs a community of open conversation - Avani did such a nice job of articulating what's happening around the world today - and collaboration and conversations like the one Anna and Tom have offered us today. So I thank you for this, and I hope that others will join our workshop in Seattle, Sept. 10 through 12, as we walk through much of what the community has built and then roll up our sleeves to try to build that next generation of security controls that Avani was talking about.
Delaney: Fantastic work there, Troy. Thank you so much. And we'd love to talk about the AI framework in more detail very soon. But for now, Avani and Troy, thank you so much for all the invaluable knowledge and education you've shared with us. It's been brilliant.
Field: Terrific and thank you so much.
Leach: Yeah! Appreciate the time.
Desai: Thank you Anna and Tom.
Delaney: Thank you! And thank you Tom.
Field: We'll do this again.
Desai: Yes please.
Delaney: Thanks so much for watching. Until next time.