Michael Novinson: Hello, this is Michael Novinson with Information Security Media Group. We're going to be discussing artificial intelligence in the SOC. To explore this further, I am joined by Nikesh Arora. He is chairman and CEO at Palo Alto Networks. Hi Nikesh, how are you?

Nikesh Arora: Good, Michael. How are you?

Michael Novinson: Doing really well, thank you. Now it has been about five months since ChatGPT took over the world's headlines. I want to get a sense from you at a high level: what do you feel are the biggest cyber risks and opportunities around generative AI?

Nikesh Arora: Well, it's a very specific question. First of all, ChatGPT is amazing. We've all been talking about using AI, and when I go to my team internally and say, "Look at this - ChatGPT," they say, "Go away, we've been using supervised machine learning models for the last 12 years and unsupervised models for the last seven." But what ChatGPT has done is actually reform the way we interact with computing. In the traditional model, you design a product, you have a bunch of data, you spend a lot of time building UI, and you have a lot of product managers and UI engineers who try to anticipate how the end user or customer is going to use your product. What ChatGPT has shown is: why am I creating a new language called UI and then translating? Instead of asking questions in my UI or in SQL, just ask me like you normally would. That's the power of what ChatGPT has done. It has immense memory - it remembers everything that was ever written about a topic - so it can summarize things much faster for you. That summarization capability, and the autoregressive statistical model behind it, makes it feel almost like you're talking to another person. That has a lot of implications, not just in cybersecurity but in almost anything we're going to do, and many people have called it the iPhone moment. I think it probably is the iPhone moment for AI.
From a cybersecurity-specific perspective, what's interesting is that I've seen early examples of people trying to use it to create malware. Now, there's good news and bad news. The bad news is that it can do so. The good news is that because it's relying on prior models, which are autoregressive models, the malware it produces is similar to what it has already seen - which is good for now, because it allows us to identify the patterns we already know, so we are able to build blocking techniques for the malware or attack it is building. But it can generate phishing attacks at scale, and it can generate them on a customized basis. We're going to have to contend with that. We have to fight computing with computing, and we have to fight AI with AI. That's where we're going to have to go from an opportunity perspective. And it's another wake-up call to anyone who's not paying attention to making sure they're secure.
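A minimal sketch of the pattern-matching idea Arora describes - model-generated malware tends to resemble samples the model has already seen, so similarity against a corpus of known patterns can be enough to block it. The corpus, threshold, and function names here are illustrative assumptions, not Palo Alto Networks' actual detection pipeline.

from difflib import SequenceMatcher
from typing import Optional

# Hypothetical corpus of byte patterns from previously seen malware families.
KNOWN_PATTERNS = {
    "family_a": b"powershell -enc JABjAGwAaQBlAG4AdAA...",
    "family_b": b"VirtualAlloc CreateRemoteThread WriteProcessMemory",
}

BLOCK_THRESHOLD = 0.8  # assumed similarity cutoff; tuned in practice

def match_known_family(sample: bytes) -> Optional[str]:
    """Return the known family this sample most resembles, or None if it
    looks genuinely novel and needs deeper (e.g., behavioral) analysis."""
    best_family, best_score = None, 0.0
    for family, pattern in KNOWN_PATTERNS.items():
        score = SequenceMatcher(None, sample, pattern).ratio()
        if score > best_score:
            best_family, best_score = family, score
    return best_family if best_score >= BLOCK_THRESHOLD else None

if __name__ == "__main__":
    suspicious = b"powershell -enc JABjAGwAaQBlAG4AdABYAA..."
    family = match_known_family(suspicious)
    print(f"block (matches {family})" if family else "escalate for analysis")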
Michael Novinson: We've seen organizations talking already about having ChatGPT embedded in their technology. To get serious benefit out of it, what's the foundation that companies need to lay? What are those initial steps that need to be taken to really get value from generative AI in security tech?

Nikesh Arora: I think it's important to take the good parts of what ChatGPT offers as a window into how AI can be useful, as opposed to blindly copying the ChatGPT model into whatever industry we're in. I see this huge flurry of activity where people want to quickly integrate OpenAI into their products. And look, I can talk to my product, but be careful - there are really two kinds of problems. There's the ChatGPT kind of problem, where it's okay to have multiple answers: write me a story, write me a song. There's no single right answer; there's a good answer, a better answer, or a really bad answer, and it depends on your taste - sometimes I like a song you may not like. To the extent that variability in your answers is acceptable, that's fine. I like to call it a sandwich problem: a human prompts it, and a human assesses the output. It's great - it's a contained problem. You can converse with it, you can make it smarter by asking better and better questions through prompting. But you still have to be careful, because there is a risk of hallucination. Maybe it doesn't know the answer and just makes one up, statistically - here's maybe what you want to hear. Sometimes that's nice: you tell me what I want to hear, I like you more. But the risk, in the case of what we do, is that you need precise answers, and wrong answers are not acceptable. You need a much more precise way to get the answer. That's what to watch out for - just blindly putting ChatGPT into every product and calling it a co-pilot is dangerous. You have to think about what data you are using to train your system, what the answers are going to look like, how you avoid false positives, and how you avoid hallucinations. There's a lot of work that needs to be done. But man, it looks very promising.
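A rough sketch of the "sandwich problem" and the guardrail Arora is pointing at: a human writes the prompt, a human assesses the output, and in a domain that needs precise answers the output is checked against an allowed answer set before anyone acts on it. The ask_llm callable is a stand-in assumption for whatever model API is actually used, not a specific vendor interface.

from typing import Callable

# Stand-in for a real model call (hosted API, local model, etc.).
LlmFn = Callable[[str], str]

ALLOWED_VERDICTS = {"benign", "suspicious", "malicious"}  # precise answers only

def triage_alert(alert_summary: str, ask_llm: LlmFn) -> str:
    """Human-prompted, human-assessed use of a model, with a validity check
    so a hallucinated free-form answer is never acted on automatically."""
    prompt = (
        "Classify this alert as exactly one of benign, suspicious, malicious:\n"
        + alert_summary
    )
    answer = ask_llm(prompt).strip().lower()
    if answer not in ALLOWED_VERDICTS:
        # The model wandered off the allowed answer set - hand it back to the
        # analyst instead of trusting the text.
        return "needs_human_review"
    return answer

if __name__ == "__main__":
    fake_llm: LlmFn = lambda p: "malicious"  # pretend model for the example
    print(triage_alert("outbound traffic to rare domain from finance host", fake_llm))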
Michael Novinson: So what are they? What do you feel are those foundational steps or those building blocks that companies should put in place to make sure it's not generating false positives?

Nikesh Arora: You know, Michael, I'm going to sound like I'm saying "we told you so," but I'm going to say it anyway. Look, we were here together five months ago, and we were here before that. You and I have talked about this: the only way security is going to get done right is if you pay attention to data. If you pay attention to what the data is telling you, you're going to have to use computing, you're going to have to use machine learning to understand the patterns, look at anomalous behavior, stop anomalous behavior, use AI - generative or otherwise - and figure out how to fight bad actors with automation, data analytics and machine learning. That's the opportunity; that's what we're going to have to do.

Michael Novinson: Speaking of artificial intelligence, and I know that's been a point of emphasis for Palo Alto Networks, how do you see AI changing the way the SOC works?

Nikesh Arora: Well, think about it. Let's just break it down into two parts. One, if you don't have heft in the industry, if you don't have a lot of data you're processing, you can't train new models. It's very hard to say, "I'm starting a company and I'm going to train my model on my customers" - well, you have to have customers to start with. Now, we're blessed. We have 62,000 customers who use our firewalls, and we have thousands of customers in our cloud security business and our SOC business, as does our SASE business. So that's a good starting point. Now, again, it's ours to screw up. We have to make sure we use that data and apply it intelligently, and we've been working on this for the last four years. I've said this before: we launched a product four months ago, before ChatGPT came about, called XSIAM. The whole premise was that we thought the data strategies being deployed in the SOC were legacy strategies, where you ingested all the data you could find and didn't quite normalize it. And AI has this garbage-in, garbage-out problem. The risk we've seen in the past is that we rush to apply AI, but the data foundations weren't strong enough. So we've built a good data foundation in our XSIAM product, and we've looked hard at what incremental opportunities ChatGPT, or generative AI, brings. But all it's done is re-energize us: continue to focus on getting good data, building great models, training on our data, and effectively using automation. And every human interaction is a training opportunity - say, "Look, here's my hypothesis, here's how I think security should be done; dear customer, dear partner, dear user, dear employee, interact with it and tell me whether or not it's the right answer." So what's going to change for us internally is that we're going to keep doubling down on AI and keep doubling down on good data, but we're also going to use every human interaction as a training event.
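A small sketch of the "every human interaction is a training event" loop: each time an analyst confirms or overrules a model verdict, the pair is stored as a labeled example for the next round of training. Field names and the JSONL storage format are assumptions for illustration, not a description of XSIAM internals.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

# Illustrative label store; a real pipeline would feed a feature store or
# training dataset rather than a local JSONL file.
FEEDBACK_LOG = Path("analyst_feedback.jsonl")

@dataclass
class Feedback:
    alert_id: str
    model_verdict: str    # what the model proposed
    analyst_verdict: str  # what the human decided
    timestamp: str

def record_feedback(alert_id: str, model_verdict: str, analyst_verdict: str) -> None:
    """Append one human interaction as a labeled training example."""
    row = Feedback(alert_id, model_verdict, analyst_verdict,
                   datetime.now(timezone.utc).isoformat())
    with FEEDBACK_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(row)) + "\n")

def load_training_labels() -> list:
    """Read the accumulated feedback back as candidate training labels."""
    if not FEEDBACK_LOG.exists():
        return []
    return [json.loads(line) for line in FEEDBACK_LOG.read_text().splitlines()]

if __name__ == "__main__":
    record_feedback("alrt-1042", model_verdict="suspicious", analyst_verdict="malicious")
    print(len(load_training_labels()), "labeled examples collected")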
Michael Novinson: Now, it's been a little over seven months since you introduced XSIAM. I want to get a sense from you of who's using it the most right now, what the profile of the customer is, and how they're using it.

Nikesh Arora: Well, the traditional approach to the SOC has been a post-breach approach, unfortunately. You have a problem, you say, "I have all this data, let me go query the data and figure out what happened." So it actually happens after you've had a breach. And of course you're monitoring it to make sure you can fix hygiene and fix security issues, but for the most part the SOC has been employed as a tool to figure out how the breach happened, what happened, how you remediate it, and how you spin up all the backups to bring things back - mostly a cyber-resilience product. We really think a SOC should be a proactive product - a product where you can remediate security issues much sooner. At Palo Alto Networks, we went from days of mean time to respond in our own SOC down to under a minute. It took a lot of work - it took us four years - and we've basically packaged that technology in XSIAM. And we've done it very carefully. We've exposed it to customers who already have a lot of Palo Alto Networks in their infrastructure alongside other vendors - we support every vendor out there - but we do require you to have our XDR product, because we think having a single source of truth for data is important. We showed it to 10 customers who became design partners, and all of them have become paying customers in a very short period of time. We obviously continue to see interest, but we're doing it very carefully, exposing it to more and more customers who we believe are aligned with our product roadmap. We use that as feedback, as a way to make our product better and better. So look, I think it's a very promising category. I see a lot of enthusiasm around it. I see that customers are tired of legacy SOC solutions, which have relied on data ingestion for 15 years. It's time for that part of the industry to have an inflection point, and I think this is it.

Michael Novinson: I want to ask you, finally - for as long as I've been in this industry, people have been talking about consolidation. I wanted to get a sense of how, over the past 12 months as the economic downturn took hold, those conversations around vendor consolidation and reducing the vendor footprint have changed.
Nikesh Arora: Five years ago, when I joined the industry, I was told by the industry leaders and participants that cybersecurity is not going to consolidate, because people want best-of-breed solutions and they're not going to buy just because it works together. And I said, okay, what if it works together and it's also best of breed, and you can buy it individually or together? So I'm hoping we've proven to the market, slowly and steadily, that if you actually solve the real customer's problem, if you solve it with a great product, and if your products work together and show the benefit of working together, then it leads to customers buying more things from you. Notice I didn't use the word consolidation, because we want to build best-of-breed products in all categories, we want our products to work better, and we want our products to work together for our customers and encourage them to get more of our products. And we're seeing that happen. I have to be careful - we're in the midst of our quiet period - so I'll say what I said the prior quarter, which is that in the current economic climate, customers have smaller budgets to go out and try new things. They want trusted names, they want people who can deliver, they want value, and they want ROI. And in that case, we are a trusted name. We have proven to the market that we bring best-of-breed capability and that our stuff works together. So that's what drove behavior from our customers in Q1 and Q2.

Michael Novinson: Definitely will be interesting to watch going forward. Nikesh, thank you so much for the time.

Nikesh Arora: Thank you for having me, Michael.

Michael Novinson: Of course! We've been speaking with Nikesh Arora. He is chairman and CEO at Palo Alto Networks. For Information Security Media Group, this is Michael Novinson. Have a nice day.