ISMG Editors: AI in Focus - Concerns, Priorities for CISOs
Also: Navigating AI Discussions About Government Policies
Anna Delaney (annamadeline) • December 22, 2023
In the latest weekly update, two analysts at Forrester - Allie Mellen and Jeff Pollard - join three editors at ISMG to discuss important cybersecurity issues, including CISOs' primary inquiries about AI/ML, how organizations can thwart data poisoning attacks, and practical use cases for AI.
The panelists - Anna Delaney, director, productions; Mathew Schwartz, executive editor of DataBreachToday & Europe; Tom Field, senior vice president of editorial; Allie Mellen, principal analyst, Forrester; and Jeff Pollard, vice president and principal analyst, Forrester - discussed:
- Top AI/ML concerns and queries from CISOs;
- Measures organizations can take to prevent data poisoning attacks in AI systems;
- Use cases for generative AI.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Dec. 8 edition on ugly health data breach trends of 2023 and the Dec. 15 edition on decoding BlackCat ransomware's downtime drama.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome to the Christmas edition of the ISMG Editors' Panel. I'm Anna Delaney, and this week is all about the theme of 2023: artificial intelligence. We will be exploring current uses of AI and where the conversation goes next, top AI/ML concerns from CISOs, and navigating precision in AI discussions around government policies. We are thrilled to be joined by not just one but two analyst superstars today: principal analyst Allie Mellen and VP and principal analyst Jeff Pollard, both at Forrester. Thank you so much for joining us. Also joining are my stellar colleagues Tom Field, senior vice president of editorial, and Mathew Schwartz, executive editor of DataBreachToday & Europe. Great to see you all. Allie and Jeff, we like to begin these conversations with a brief "where are you in your virtual world?" So where are you in the world, Allie?
Allie Mellen: Apparently, I'm coming at you a little bit from the future. Tom made this joke earlier; he said, "Allie, coming at you from RSA 2024." But in actuality, I'm in New York right now - very cold, because it is freezing here. So I kind of wish that I were under the sun.
Delaney: And Jeff, we love this victory piece of art behind you.
Jeff Pollard: Yeah, well, that's my icebreaker; it lets folks know what they're in for. So I am in my virtual world of Some Snark, where I live permanently, and I'm based in Charlotte, North Carolina, where I'm joining you from today. It's perhaps not quite as cold as where Allie is, but certainly far too cold for the Southeast.
Delaney: Tom, I think you've faced the extremes this week.
Field: There's a term this time of year: "The weather outside is frightful." And where I am in Maine, we have had some frightful weather this week - heavy rain and sustained winds of 60 miles per hour. What you're seeing is damage near the State Capitol in Augusta, Maine. I haven't had power since mid-morning Monday and have been staying in hotels since, so I gave up the notion that this was going to be a white Christmas. Now I'm just hoping it won't be a dark, cold Christmas.
Delaney: What a time. Well, I hope you stay safe. It's great to know everyone is safe. Matt, no trees in your background? You didn't get the memo.
Mathew Schwartz: No, this is the coast of Scotland - slumming it in Scotland. And I don't know if there are any trees left, with the winds that we regularly have here on the North Sea. So this is the sunrise in Stonehaven earlier this week.
Delaney: Yeah, so in the spirit of Christmas, I went to an event called Glow last week. It's essentially a garden that is all lit up with various light installations. It's very pretty, and it sets us up nicely for the festivities. Speaking of festivities, it's time to start the real party. Allie and Jeff, you know we have some questions for you, and I'm going to hand over to Matt to start the conversation at this point.
Schwartz: Right, I am interested in some of the top AI and ML queries that you've been getting from your clients. I don't know who wants to go first.
Pollard: Sure. So a little bit of this involves the separation that Allie and I have in terms of what we cover; we collaborate really, really often, but we've got some divisions in terms of how we think about AI and generative AI, which has probably been the most common topic this year. For me, I talk about this from the concept of securing enterprise adoption of generative AI. So basically, as an organization, how can you securely adopt the generative AI solutions that exist to allow and enable employees to use them - without preventing it, but also doing it in as secure a fashion as possible? I'd say the kinds of questions have definitely changed over the last year. For probably the first six or eight months of this year, most of the questions were like, "Well, what do we do now that we tried to ban it and that failed?" If they had talked to me before they tried to ban it, I could have told them it would fail. But they didn't always ask, and they were all sort of confronted with the existential crisis of: At what number of policy exemptions do you no longer have a policy? A lot of security leaders had to confront that challenge. The questions have shifted now. Most of the questions I get are about, one, protecting enterprise data - making sure that if they're working with a third party that's using a model, or an application built on a model, their enterprise data is safe, that it's safeguarded and not being used to train. Where do prompts go, and what happens to the data included in prompts? That is a really, really good question to ask, because even if your data isn't used to train a model, it might be used to make prompts better, especially with things like retrieval-augmented generation, or RAG. I'm not a policy person - in fact, I'm generally the one guilty of trying to circumvent them, although Allie joins me in that endeavor often - but I've turned into one this year, because a lot of my questions right now are very much about how we can develop robust and effective policies for generative AI use across the enterprise, which really have to be tailored to the various stakeholder groups. There's a different policy for bank tellers versus devs or data scientists; you really need to work on that. And then a lot of the other questions I get are around instrumentation: How do we effectively instrument and log generative AI? Right now, a lot of solutions don't exist for that, and you have to hack things together to get the kind of visibility you need - especially since security, IT and dev teams are really log-dependent cultures; that's what we use to know things are happening. Unfortunately, a lot of the logging that exists from these solutions right now is really about how many input and output tokens you've used so they can bill you; there's not a lot of security-relevant data coming from them. So that's a big question I'm getting: How do we figure that out? How do we start building for that, and what solutions are available?
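To make the instrumentation point concrete, here is a minimal sketch of the kind of wrapper a security team might hack together today to capture security-relevant context - who sent which prompt, when, and what came back - on their own side of the API. The `llm_complete` callable, the log field names and the stubbed client are illustrative assumptions, not any vendor's API:

```python
import json
import logging
import time
import uuid
from typing import Callable

# Dedicated audit logger; in practice this would ship to a SIEM, not stdout.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

def logged_completion(
    llm_complete: Callable[[str], str],  # hypothetical vendor client call
    prompt: str,
    user_id: str,
    application: str,
) -> str:
    """Call an LLM and emit a security-relevant audit record.

    Vendor-side logs often cover little beyond input/output token
    counts for billing, so we record prompt/response context ourselves.
    """
    request_id = str(uuid.uuid4())
    started = time.time()
    response = llm_complete(prompt)
    audit_log.info(json.dumps({
        "request_id": request_id,
        "timestamp": started,
        "user_id": user_id,          # who issued the prompt
        "application": application,  # which app or integration
        "prompt_chars": len(prompt),     # size only, not raw text
        "response_chars": len(response),
        "latency_s": round(time.time() - started, 3),
    }))
    return response

# Usage with a stubbed client:
if __name__ == "__main__":
    fake_client = lambda p: "stubbed model output"
    logged_completion(fake_client, "Summarize this alert...", "analyst-42", "soc-assistant")
```

Note the design choice of logging prompt and response lengths rather than raw text: raw prompts may themselves contain sensitive enterprise data and, if retained, belong in a controlled store with its own access policy.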
Schwartz: Fascinating. I love that GRC keeps coming back no matter how much it evolves. Thank you so much. Well, what about you, Allie? What are you hearing?
Mellen: Yeah, I'm on the other side of the coin. All of my focus on generative AI is how it's used in security tools and what that means for the security team. And I love this because it's such an interesting area right now, and there's so much happening with many vendors. Honestly, so many of the questions that I get from CISOs are just, "When can I get my hands on it?" CISOs are more excited about generative AI and its use in security tools than I've seen with any of the technologies I've covered over the last three years. They want to know what they're going to be able to do with it, how it's going to be able to augment and support their team and, most importantly, how much it's going to cost them. So we put together a lot of research to map out: What does this actually look like in practice? What are going to be some of the best use cases? What are the most common use cases? And what is the in-between, where you're going to get a ton of value and see it consistently throughout the products you use?
Schwartz: And are there any real near-term gains? Let's save the future and the prognostication for later, but are there any real near-term gains to be had from AI? Or is it simply going to be about coming up to speed on it, hopefully with the right rules and logging in place?
Mellen: So interestingly enough, a lot of the implementations that we're seeing right now are around that chatbot use case, right? Oh, you can ask it a bunch of questions. I think this is actually one of the least useful and least interesting applications of generative AI that I've seen so far, even though it's everywhere and people are pretty excited about it. The reason is that it doesn't fit into what the analyst does every single day - into their workflow, into making their lives better; it's just another interface that they're going to have to work with. Instead, where I'm seeing a lot of really great gains in the short term - and, honestly, what's going to be the long term - is on things that are really simple but such a huge help to the analyst. Things like reporting: Nobody likes to write incident response reports, but you've got to do them, and generative AI can help you do that and save hours of time. Super simple use cases, but in the long run, they are just going to help analysts focus on the things that they do best. So those are the types of things that I'm the most excited about, even with all of this hype around things like chatbots.
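As an illustration of the reporting use case Mellen describes, this is a hedged sketch of how incident timeline events might be assembled into a drafting prompt for a generative model. The event fields and overall structure are assumptions for illustration, not any particular product's workflow:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TimelineEvent:
    timestamp: datetime
    source: str      # e.g., "EDR", "firewall"
    description: str

def build_report_prompt(incident_id: str, events: list[TimelineEvent]) -> str:
    """Turn raw timeline events into a prompt asking the model for a
    first-draft incident report that an analyst can then edit."""
    lines = [
        f"- {e.timestamp.isoformat()} [{e.source}]: {e.description}"
        for e in sorted(events, key=lambda e: e.timestamp)
    ]
    return (
        f"Draft an incident response report for incident {incident_id}.\n"
        "Include: summary, timeline, impact and recommended follow-ups.\n"
        "Timeline of observed events:\n" + "\n".join(lines)
    )

# The draft still needs human review; the model removes the blank-page
# drudgery, it does not replace the analyst's judgment.
events = [
    TimelineEvent(datetime(2023, 12, 18, 9, 14), "EDR",
                  "Suspicious PowerShell spawned by Outlook"),
    TimelineEvent(datetime(2023, 12, 18, 9, 20), "firewall",
                  "Outbound connection to known C2 address"),
]
print(build_report_prompt("IR-2023-117", events))
```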
Schwartz: Excellent. That's great. It's nice to know that there is some light in store for people, to get rid of some of the drudgery. So before I have to pass off my baton - because I need to share - I have a question for Jeff, just because you mentioned policy. One of the things I've been hearing is that when we talk AI, we all kind of know what it means; it's a little like cyber - we all know what it means, at least these days. But AI can mean so many different things. It can mean expert systems for medicine. We've just been talking gen AI for chatbots and making life better for analysts. It can also refer to deepfakes and other AI-driven tools or scripts that can be used by fraudsters on one end, or nation-states on the info warfare side of things. Do you think - and this might be a big ask - that we collectively need to be more precise when we talk AI, especially when it comes to policy discussions at the government level?
Pollard: Yeah, we definitely do. One of the ways that we divide this up at Forrester when we talk to clients - and there's some nuance, of course, if you go deep enough, but in general - is that we have predictive AI, which has been around for a while. It works. It's done well. It's usable. It's useful. We know that in the right circumstances it can help us with various tasks. Then you have generative AI, which is that kind of content creation and production - to some extent, it's probabilities of what comes next based on whatever you typed before. But it's not really a brain inside the computer the way that some people think it is. What concerns me, particularly at the policy level, is, one, how many of them we have now. At Black Hat, in one of the talks that I attended, the speaker mentioned that over the course of three years, from about 2020 to 2023, there were 49 different AI regulations published across the public and private sectors - companies with their own frameworks, governments with their various agencies. That is a huge amount to try to navigate through, whether it's the broad NIST and U.S.-based executive orders, or things coming out of EMEA - the European Parliament and the EU - and things like that. It's a challenge. And the problem is that a lot of the words and terminology that are used are in competition with each other, right? If things are traceable, okay, I agree, that's a good thing. If things are auditable, that's probably a good thing. If we're making sure that we don't have algorithmic bias, those are good things. But there's also a tension with security if you have all of those things, because that means you have a system that's more open. And these things also have an effect on speed, which affects adoption. If things are traceable and auditable, does that slow them down? If it does, then that affects users using the technology. So I think that some regulation is a good thing. My biggest concern overall, though, is about the precision of language. What we've seen traditionally is that large tech companies - which are, to some extent, the folks dominating the AI landscape right now - generally weaponize policy and regulation to stomp out innovation. And this is a space where there can be a tremendous amount of innovation if those companies are free to do it. So one of my concerns is making sure that our policy and our regs aren't nonspecific and don't, frankly, become a tool of regulatory capture for semi-monopoly organizations to keep out smaller entities that would be more innovative than them.
Schwartz: Excellent - a great, hopeful, bright future there, if we can, as you say, get the policy instruments to play nice with the up-and-comers. Wonderful. Thank you so much. I'm going to hand over to my colleague, Anna.
Delaney: Fantastic. Great start. Just a couple of questions for you again, Jeff, following on from Matt's question about the precision of language. My question calls for more clarification: What do you think distinguishes security concerns in AI from traditional cybersecurity issues?
Pollard: There aren't that many. That's the thing that kind of keeps happening the more that we see, and I liken this back to the IoT problem. If we remember when IoT landed on the scene, there was a lot of information out there about IoT and the opportunities and what it was going to do for us, and it did a lot of things. But when you looked at it from a security perspective, really the challenge was that it exacerbated other existing security problems. If you had trouble with network traffic, well, you now had more connected devices that generated more network traffic. If you had trouble with asset management, you now had way more assets in your environment that could potentially be problematic. I think that's one of the challenges we'll see from these solutions. We're going to see more models, because models are going to democratize - we've already seen that. So you're going to have more LLMs out there that interact with your data, and you're going to have more applications built on those LLMs. And that's a huge challenge, because if you already have trouble managing your application estate and securing it - which every security team does - well, now you're going to have even more applications. And then finally, let's take software security or application security. If you already have trouble securing the code that your developers are writing today, what happens when I equip them with a coding assistant - a TuringBot, in Forrester terms - that now helps them write 20 to 30% more code? That means you have 20 to 30% more code to secure, because the code it's helping them write is not necessarily secure. So what we're going to see here is a massive exacerbation of existing issues. And of course, we're going to see adversaries adopt tools like this - phishing emails are already better and will continue to get better. We are seeing a new type of security problem in some scenarios, but the bigger impact we're going to feel is that this is exacerbating a lot of existing security issues that we've been dealing with for years.
Delaney: Thanks for the clarification. My second question is around the concerns we hear when it comes to safeguarding AI systems from data poisoning attacks. What measures do you recommend organizations take to prevent and detect data poisoning attacks in their AI systems?
Pollard: Yeah, we've been writing about data integrity for about six years now. So first and foremost, I think one of the distinctions we have to make is that if you are a real security organization that is concerned about building an LLM, unleashing it into the wild and things like that, the academic research is informative - and maybe it's interesting - but it's not very actionable. I don't necessarily need to poison your data or do an inference attack to figure out your model when I could just phish your developers; it's much faster, and if phishing works, I don't necessarily have to do attacks like that. But one of the big concerns that does exist when it comes to data integrity is that as we begin to deploy these systems, and as we begin to trust them more and more, we are going to start to depend less on human beings in their decision-making processes, and we're going to automate some activities as a result. If data is tampered with or poisoned in those scenarios, then we're going to start making automated decisions - or people are going to defer to the machine and make the decision the computer tells them to - and that can have real-life outcomes: That could lead to loss of life, that could lead to loss of limb, that could lead to bad products, and that could lead to lawsuits and litigation inevitably. So one of the big challenges with data tampering and data poisoning comes as you begin to automate and as human beings trust the outcomes these models and applications give them. That's a real problem. And then on top of that, the other challenge is that most of our security controls are built with confidentiality in mind, not integrity, right? We all have this CIA triad, but if you really draw that triangle, it's not equilateral - or whatever the phrase is; I don't know the geometry. It doesn't work that way. We've overloaded ourselves on confidentiality in InfoSec, and we've kind of given availability away. And integrity sits there in this mysterious land of, well, whose job is it? Is it the chief data officer's job to make sure that data is protected and not tampered with, or is that the CISO's job? I can answer that because I've done these interviews. I can tell you that in the same organization, when I talk to the chief data officer and ask, "Who's making sure that your data isn't tampered with?" they say the CISO. When I ask the CISO, "Who makes sure that your data is high quality and hasn't been tampered with or messed with?" they say, "Oh, well, that's the chief data officer, because it's their data." And this has happened on calls where they were like, "Okay, look, I think we need to end this interview and go have a chat with each other" - not air grievances Festivus-style on the call with Jeff. So it is a real concern. And I think the problem is that, from a security infrastructure perspective, that aspect of data integrity is not something that we have a lot of controls around; we just don't have the instrumentation, telemetry or processes for it.
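One basic control that maps to the integrity concern Pollard raises is hashing dataset snapshots and verifying them before the data feeds a training run or an automated decision. This is a minimal standard-library sketch; the manifest format and file layout are illustrative assumptions:

```python
import hashlib
import json
from pathlib import Path

def snapshot_hashes(data_dir: str) -> dict[str, str]:
    """Record a SHA-256 digest for every file in the dataset."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }

def verify_before_training(data_dir: str, manifest_path: str) -> bool:
    """Refuse to proceed if any file was added, removed or altered
    since the manifest was written."""
    expected = json.loads(Path(manifest_path).read_text())
    actual = snapshot_hashes(data_dir)
    if actual != expected:
        # Flag files that appeared/disappeared or whose contents changed.
        changed = set(expected.keys() ^ actual.keys()) | {
            k for k in expected.keys() & actual.keys() if expected[k] != actual[k]
        }
        print(f"Integrity check failed; suspect files: {sorted(changed)}")
        return False
    return True

# Write the manifest at a trusted point in time, then verify before each run:
# Path("manifest.json").write_text(json.dumps(snapshot_hashes("training_data")))
# assert verify_before_training("training_data", "manifest.json")
```

A manifest like this only detects tampering between the snapshot and its use; it does not address poisoning upstream, at the point the data was originally collected or labeled.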
Delaney: You've given us lots to think about there. Thank you so much, Jeff. At this point, I'm going to pass over to Tom.
Field: Very good, Anna, thank you very much. Allie, a question for you. We're coming to the end of 2023 here, and we seem to be at the top of the hype cycle regarding generative AI. What are the actual use cases that most have your attention right now?
Mellen: Well, first off, I think that we are just starting the hype cycle. I think we're going to see so much more in 2024 that's just going to blow our minds, and it'll be pretty wild. So I'm actually really excited to see what happens next year and how this evolves. I mentioned earlier that some of the early implementations we're seeing are very chatbot-focused - it's very much "you can interact with your environment and your data this way." The use cases for generative AI typically fall into three buckets. The first is content creation - actually generating things, whether it's generating code, generating text or generating reports. The second is behavior prediction. This is a farther-out implementation that we're not seeing as much right now, but it's all about looking at a particular set of activity as though it were a type of language. We can ultimately turn anything that is consistent, predictable and always following a particular pattern into a language, and then use a probabilistic model like generative AI to predict what's going to happen next within that sequence. That has a lot of potential around things like predicting privacy risk, attacker activity or risk scenarios in general. And the third is knowledge articulation - that is the chatbot use case. These buckets become very important when having this conversation because, to be honest, a lot of times it can seem like the possibilities are endless with generative AI. But in reality, most things we can do fall into one of these three buckets, with a lot of the content creation being some of the most useful: the coding assistant use case, human-readable descriptions of alerts, being able to summarize incidents for reporting purposes like we talked about earlier, and being able to convert query languages - either from natural language or between two different query languages, Sigma to SQL and vice versa. It's not necessarily the biggest game changer the cybersecurity industry will ever see, but it's something that's just going to help analysts improve that much faster and get to the next step in what they need to do that much quicker. So those are the types of use cases that I am the most excited to see in the new year, particularly around that content creation piece.
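To make the query-conversion bucket concrete, here is a deliberately tiny sketch that turns a simplified, Sigma-style detection into a SQL query. Real conversions go through tooling such as Sigma's backends or, as Mellen notes, a generative model; the rule structure and the `process_events` table below are toy assumptions, not the full Sigma specification:

```python
# Toy Sigma-style rule: a mapping of field|modifier -> expected value.
# Real Sigma rules support wildcards, lists and nested conditions.
rule = {
    "title": "Suspicious PowerShell EncodedCommand",
    "detection": {
        "Image|endswith": "\\powershell.exe",
        "CommandLine|contains": "-EncodedCommand",
    },
}

def to_sql_where(detection: dict[str, str]) -> str:
    """Translate simplified field|modifier keys into SQL predicates."""
    clauses = []
    for key, value in detection.items():
        field, _, modifier = key.partition("|")
        escaped = value.replace("'", "''")  # naive escaping; real backends do much more
        if modifier == "endswith":
            clauses.append(f"{field} LIKE '%{escaped}'")
        elif modifier == "contains":
            clauses.append(f"{field} LIKE '%{escaped}%'")
        else:  # no modifier means exact match
            clauses.append(f"{field} = '{escaped}'")
    return " AND ".join(clauses)

print(f"SELECT * FROM process_events WHERE {to_sql_where(rule['detection'])}")
# Prints: SELECT * FROM process_events WHERE Image LIKE '%\powershell.exe'
#         AND CommandLine LIKE '%-EncodedCommand%'
```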
Field: I'm going to borrow your bucket metaphor, because as I think about the conversations I have with security and technology leaders about AI - generative AI, particularly - they come down to policy, potential regulation, use cases and, the buzzword of the year, guardrails. My question for both of you, Jeff and Allie: What are the conversations you hope to be having about AI one year from now?
Mellen: Jeff, do you want to start us off? Actually, I want to start this with a little anecdote, because I was thinking about this question and trying to figure out how to articulate what I most hope to see. It doesn't come down to a particular feature, but it does come down to something that I'll describe in this anecdote. This week, I spoke with someone on Zoom who has a hearing disability, and they use AI to give them what is basically closed captioning so they can participate more effectively in meetings and not miss notes. This is a use case we've had around for a while, right? It's not something that's particularly new; we've used transcription services for a while. But having the opportunity to see that person participate in a way that they otherwise would really struggle to was, first of all, very heartwarming. But it also highlighted for me that we have such a broad number of people with very diverse perspectives on the world who are in many ways limited in how they can participate in these types of activities, including cybersecurity. When I think about cybersecurity, I think of it as predicated upon the different perspectives that we can bring into the conversation - the different perspectives we can leverage to understand attacker activity better, to understand how we need to defend, and to find all the use cases that I alone couldn't see or you alone couldn't see. So, long story short, what I'm hoping we're going to be talking about next year around this time is all of those use cases where we're able to bring in more perspectives because generative AI, and AI more broadly, has enabled those people to enter the conversation and bring their best selves to it. I think that has the potential to unleash human creativity in a way that we haven't seen before and to give us a perspective that we just otherwise would not be able to have. So I'm really hoping that we're going to start seeing that in the new year, and I'm excited to do some research around that and see the different areas where we can make a bigger impact there. Jeff, bring us home.
Pollard: So I hope that mine is equally positive to Allie's, if a little different. I hope that we see an AI-enabled workforce, certainly not one that's AI-replaced. What I mean by that is I hope we see a cybersecurity workforce - from CISOs all the way down to the entry-level folks who are joining - where the drudgery is being removed and where their experiences with the products and services they use start to improve. Allie and I have talked about this in terms of SOC analyst experience. But beyond that, I hope we see our ability to eliminate drudgery, upskill, reskill and just make the day-to-day lives of security practitioners better, because this is a hard job, and it's a field where we're constantly confronted with our failures - even though I don't think it should be termed failure. We do something that's critical, right? Everyone in InfoSec does. And I hope to see a workforce that is less burned out, that is more hopeful and more positive, because these tools are complementing the activities that these people have to perform. I think that we are on the cusp of technology that is truly innovative and truly something that can make our lives better as employees, and I sincerely hope that the security practitioners out there get to experience the full effect of that. And next year, I hope the conversations we're having are about what to do with the productivity that we've unleashed in our cyber teams, instead of conversations like "Can I cut costs?" or "Well, this vendor promised it could do things, and now it doesn't seem to do them." So I really hope that we're having positive conversations about how we are able to reenergize and reinvigorate a workforce that is consistently on the cusp of burnout, consistently trying to figure out what's next for them, and that deals with a lot of failure, often, because that's the nature of what we do.
Field: Excellent insights, Jeff and Allie. Anna, let me turn this back to you.
Delaney: Thank you. Very, very informative. And finally, we've got one last question for you, just for fun. We love to ask this question as the year draws to a close: If there was one word or phrase to describe the state of cybersecurity in 2023, what would it be? Allie and Jeff, we'll give you a moment to take a breather. Tom, do you want to start us off?
Field: Yeah, I'm going to choose my words carefully so I stay on the nice list. But given what we have seen recently - credit unions rocked by a third-party breach; Mr. Cooper, the mortgage company, hit by a massive breach; healthcare entities tied to ransomware attacks; and Matt was just writing today about the Xfinity breach, which, yes, impacts me; that's my internet service provider - given everything we're seeing as we close out this year, I'm going to go back to my opening remarks: The weather outside is frightful.
Delaney: That's a good one. Matt, can you top that?
Schwartz: No song lyrics? We're not even going down that rabbit hole. I'm going to say "sophisticated." Obviously, we've seen attackers getting more sophisticated. I was just thinking about that with this Xfinity breach - it was a one-two punch using a flaw in the Citrix NetScaler products, very different from what we were seeing 10 years ago. And in some cases, we're seeing more sophisticated defenses. If I have a wish for 2024, it would be even more sophisticated defenses.
Delaney: Well said. My word is "adaptive." Certainly from the conversations I've been having with security leaders, many of them are saying they're focusing on adaptive security - not a new term, but it's come up in conversations this year more than in other years, just to stay ahead of this rapidly evolving landscape. How about you, Jeff?
Pollard: Mine is "resilient," which I wouldn't have thought I would say even three seconds ago, but I'm going to say resilient - Allie is shocked by this one. I debated between this one and "austerity," but I'll go with resilient for this reason: It has been a rough year. To Tom's point, it's been frightful. To Matt's point, it's been sophisticated. But I think that security teams have hung in there; they're still keeping at it, and they're still doing it. And so it's not about cyber resilience; it's about the resilience of the people who are doing this every day, right? We've remained committed to innovation, and we've remained committed to going in every day and doing the tasks that we do. So yeah, resilient - especially for the security pros out there, that'd be mine.
Mellen: I kind of wish that I had gone before Jeff, because we're not going to end on a high note here: I was going to say "chaotic." And I do think that this ties into the resilience piece quite well. When I think about, first off, the attacks that we faced this year, they have been chaotic, different and very challenging for practitioners. When I think about the cybersecurity market as a whole, it's extremely chaotic - so many changes happening, especially for security operations teams, and a lot of movement within the vendor community that's affecting practitioners. And then there are the changes we've seen from a technology perspective - how these tools are evolving and how that affects how practitioners have to use them and learn to use them. There's just a lot going on. So I think it ties in really well to Jeff's message of resilience, because it pretty much forces practitioners to be resilient and to build resiliency as much as possible in the face of a lot of different changes at once. And I think it's important that we highlight that chaos and call it out. The fact that it's happening does make it challenging to do our jobs well, but it leaves us more resilient in the end, I hope.
Delaney: Nice that we ended with resiliency - see the positive. And maybe next year will be less chaotic. Well, this has been absolutely brilliant, hugely informative and a lot of fun. Thank you so much, Allie and Jeff, for joining the ISMG Editors' Panel, and also for all the interviews you've done with us this year. We appreciate it.