Anna Delaney: Hello and welcome to the ISMG Editors' Panel, our weekly show where ISMG journalists share and analyze key cybersecurity stories of the moment. I'm Anna Delaney, and on this episode I'm joined by my talented colleagues: Mathew Schwartz, executive editor of DataBreachToday and Europe; Tony Morbin, executive news editor, the EU; and Michael Novinson, managing editor, ISMG business. A pleasure to see you, gentlemen.

Tony Morbin: Delighted to be here.

Mathew Schwartz: Great to be here, Anna.

Michael Novinson: Thanks for having us.

Anna Delaney: Oh, it's a pleasure. So, Tony, you've brought some friends with you today, have you not?

Tony Morbin: Yes, artificial intelligence - flavor of the month and maybe the decade.

Anna Delaney: More on that later. Mathew, we love this scene. Thank you for bringing that.

Mathew Schwartz: Yes, I know, bringing all the old hits, although I took this one this week here in Scotland. Not a balmy day, as you can see. Perhaps even a little too cold for picnicking, but that didn't mean I didn't try.

Anna Delaney: It's moody, it's atmospheric, poetic even.

Mathew Schwartz: Thank you. In Burns Night week, I was trying for a little poetry.
Anna Delaney: Cheers to that. And Michael, you're in the city of neon lights. Love it.

Michael Novinson: Oh, thank you. I am in front of the Avon Cinema on the east side of Providence, Rhode Island. It's a movie theater dating back to 1938, which I realize for folks in the U.K. is not that old; in the U.S., it is. The Art Deco movie theater is just a single screen with about 500 seats, and they show all the ads from the 1950s. So it's a real walk back in time. And the crazy thing is it's actually been owned by the same family for 85 years now. So they live long and prosper.

Anna Delaney: Amazing! How gorgeous is that Art Deco building - and showing Wes Anderson there as well. Love it. Well, I do love a bit of cafe culture and watching the world go by, so here I am in Lyon, France, soaking up the sweet life. Talking of sweetness and music to our ears, Matt, are you starting our discussion off with some good news this week?

Mathew Schwartz: I am, indeed, and I know it's shocking that there could be good news when it comes to cybercrime and ransomware. But we are getting some reports that efforts to blunt the scourge of ransomware are having an impact. This might be unexpected given that it seems to be dominating the news cycle.
We keep seeing new victims - everything from schools to hospitals - continuing to fall victim. But ransomware incident response firm Coveware, which works with, it says, thousands of victims per quarter, has just issued some research, which is fascinating. And part of the research is a look at how many victims are paying a ransomware - or paying a ransom, I should say. So if you look back to 2019, when it started keeping track of this, it said about four of five victims were paying on an annualized basis. So about 80% of victims were paying a few years back. In 2022, however, again on an annualized basis, it said only 41% of victims paid. There's a couple of ways to look at that. That means about 60% of victims didn't pay - that's great - but about 40% did, and that is still quite a bit. But we have seen really good improvement there. And unexpectedly for me, Coveware said that one of the reasons it's seeing fewer victims pay is because of law enforcement. Now, we think of law enforcement trying to disrupt ransomware-wielding attackers. And this is complicated because a lot of them are based in Russia, and Russia doesn't extradite anyone who's attacking foreign nationals. So that strategy has been a little tough to enforce.
We've seen some disruptions as well of infrastructure, but that doesn't put people behind bars. But what law enforcement has been retooling to do, especially in the last year or two - and I'll just pick the FBI as one of the big examples - is not just trying to disrupt groups and track intelligence, but also to assist victims. We see that here in the U.K. as well with the National Cyber Security Centre; they have a national incident response team that helps victims. So Coveware is based in the States, and it often works with the FBI. And it says that the FBI is rapidly deploying cyber specialists to organizations that are impacted. The FBI says that in the States, it can typically have somebody at an impacted organization within an hour. And it says that abroad, in 70 or 80 countries, via its legal attaché and local relationships, if there's a U.S. organization, it can typically have someone there within a day. Now, this is fascinating, because the rapid response appears to be helping victims respond more quickly themselves, to get an idea of what has happened and how they can best restore their systems whenever possible.
So Coveware says a lot of times, when victims are looking at the math behind all this, they find that paying a ransom isn't actually going to help them restore any more quickly - for example, if they have working backups - and at that point, why pay? So they're having a hard look at the numbers here. And whenever possible, it's always recommending that victims never pay. Obviously, it's a business decision. But when you have these experts being brought to bear as well, apparently, this is helping. So this was one of the nice findings for me - again, the FBI stepping up. And we've got some interesting testimony from the head of the FBI's cyber division talking about how they have been trying to do this, not just to get better intelligence on how victims are getting hit, but because having this victim-first mentality is really helping U.S. organizations. And it is leading to a decrease in ransoms being paid - wonderful stuff. And so just one more statistic on that. Blockchain intelligence firm Chainalysis recently released its look at known cryptocurrency flows to bad people - bad guys, threat actors - wallet addresses. So the wallets it knows to be run by bad people and funding ransomware: it's looked at how that's flowing.
And it has seen a huge drop from 2021 to 2022. So last year it dropped by 40% - 40% less in the dollar value of ransoms being paid to these attackers. So something's definitely having an impact here. That's all the good news. The very brief bad news is that ransomware groups have a history of innovating. There's a lot of criminal profit - there's hundreds of millions of dollars at stake here. So I'm sure they'll respond. Will it be effective? We don't know, but hopefully not. And there's good signs here showing how all of this can be disrupted. And hopefully we can keep applying these lessons that have been learned.

Anna Delaney: Yeah, that's fascinating - the shift in stance from pursuing criminals to supporting victims. And if you think about it, there was so much disruption on many fronts last year, because we had insurance policies changing, making it hard for the ransoms to be paid out. And then we had various sanctions being introduced against cryptocurrency firms. Going back to organizations and their response plans, are they getting better at building up their backups and preparing for and responding to ransomware, do you think? Or is it the other factors that I've mentioned?

Mathew Schwartz: Definitely.
I mean, the FBI's head of cyber, testifying before Congress last year, said very clearly, "You will be amazed to see the difference that having an incident response plan that's been practiced makes." He said, "It's night and day in terms of how an organization can respond." So in the past year or more, I've been hearing business resiliency experts saying that more organizations are asking for, demanding, seeking and refining their incident response plans; more organizations are running tabletop exercises, so that when disaster strikes, they've rehearsed how they're going to respond and who's going to be involved. And that's never something you want to do in the midst of a crisis. So I think we've been hearing from the likes of the FBI and from ransomware response firms and from cyber insurers about everything you need to have in place before you get hit. Because despite your best efforts, you could still get hit. And as you say, cyber insurers are now restricting policies to only those organizations that have relatively good procedures and defenses in place. All of this has been having a great effect. This is what we need.
We need to improve the base ability of organizations to respond not just to ransomware, but to any type of online attack, because if ransomware falls out of favor, something else is going to be wielded by criminals. So all of this is good. Obviously, it's been a painful lesson, but lessons do appear to be getting learned, which is great.

Tony Morbin: I was quite interested, Matt, to see your other piece where you quoted Ciaran Martin, former head of the U.K. NCSC, calling for a ban on paying ransoms. Do you think the fact that fewer people are paying ransoms makes that more likely to happen, or are we not going to see it?

Mathew Schwartz: So, Ciaran has been clear that while he thinks this would be a great idea, he acknowledges that it's probably not what everybody thinks is a great idea. So I think in theory, it's an excellent idea, because as we saw with Britain and kidnapping ransoms back in the 80s, banning payment led to many fewer Brits being kidnapped for ransom. Unfortunately, we're not seeing a similar thing happen now, I think because banning ransom payments can drive more organizations out of business. So paying is unfortunately seen as sometimes a necessary step to take, and having this kind of blunt, one-size-fits-all ban isn't going to help.
And thus, I don't think we're ever going to see it.

Anna Delaney: Great insight. And, Matt, thank you very much. Obviously, as you say, the criminals innovate, and we know that's what's happening next, but at least for now, good progress.

Mathew Schwartz: Every win is to be celebrated. Definitely.

Anna Delaney: Tony, ChatGPT is your topic, and it's obviously been the subject of many conversations since the end of last year. We've reported at ISMG about the possible ways criminals could use this technology for nefarious reasons. But I think you're taking a different angle today.

Tony Morbin: Well, I'm looking both at the criminals and the defenders - and society, if you like. I mean, just like the introduction of personal computers, mobile phones and the internet itself, consumer use of AI, thanks to ChatGPT, is a game changer. It's not because it's a superior AI to the others; it's just because of its accessibility. I mean, OpenAI released ChatGPT, the new interface for its large language model, at the end of November 2022. Less than two months later, Microsoft has now announced a $10 billion investment in OpenAI.
And that's been accompanied by rumored plans that they might introduce ChatGPT in the MS Office suite in the future, which would really make it ubiquitous. I mean, yes, as we mentioned, the early adopters, our friends the cybercriminals, are already using ChatGPT, whether that's to write phishing lures or code malware. We've got researchers at Check Point Research reporting Russian cybercriminals bypassing restrictions like geofencing and getting past bans on its use for illegal purposes - in some cases, simply by saying that the work is for cyber defense or pentesting. By the end of December, an underground hacking forum published a thread called "ChatGPT - Benefits of Malware," which included using ChatGPT to create an encryption tool and information stealer and to create dark web marketplace scripts. And there's a team of researchers at the Center for Security and Emerging Technology at Georgetown University and the Stanford Internet Observatory currently investigating the threat of chatbots - this particular chatbot - being used to spread misinformation and fake news at scale. So all these threats are real. But, as you said, I want to also cover the fact that it's equally useful for defenders.
Now, I can't personally verify the claims that I've seen being made by bug bounty hunters boasting all over Twitter about the thousands of dollars that they've made using ChatGPT to locate vulnerabilities and conduct low-level vulnerability scanning, but it does appear to be happening; it does seem to be real. Now, some have even suggested this is the end of pentesters. Okay, it is likely that much of the lower-level work will indeed now be conducted in house, often by less skilled operators, but it will also be used to reduce workloads for stretched teams. It will be used alongside humans, making them more efficient, so that the average capability and speed of work is increased, whilst the most talented will be able to achieve even more. Certainly, a recent video, "Hacking With OpenAI GPT: Hacking Without Humans," by Ron Chen on the ethical hacking platform Intigriti describes the activities that ChatGPT can be used for by defenders. That includes writing bug bounty reports, identifying spam reports, spotting false positives, and spotting security logic flaws from documentation. I mean, training was needed to refine the results. And again, it's not working without a human; it's working with a human and enhancing their capabilities.
Another use case was described in a fascinating blog this week by Thomas Rid, professor of strategic studies and founding director of the Alperovitch Institute for Cybersecurity Studies at Johns Hopkins University. In it, he describes how he attended a course on malware analysis and reverse engineering with Juan Andres Guerrero-Saade, and that was conducted using ChatGPT in the classroom on all of the participants' computers. And while he acknowledged the limitations of the current iteration of ChatGPT and the concerns that it could be used by some students to cover subpar performance, he also describes how it enabled all the participants, who were at different tech levels, to keep up in a really technically advanced class, and how they used it to address the questions that might otherwise have slowed the class. And he adds that the most inspiring conversation around the use of ChatGPT is how the most creative, the most ambitious, the most brilliant students can achieve even better results faster. So while AI can work faster and condense huge volumes of information, it is primarily utilizing what's currently known, reflecting common knowledge - like a giant version of "ask the audience," with all the pitfalls that that entails.
However, it can provide new combinations of existing knowledge, and learning models based on that knowledge can create or suggest new approaches, even if the AI itself is not really having an original thought. Outside the education sphere, organizations aren't going to worry about, you know, poor students - those less able to achieve exceptional results - they're just as happy to see an improved output. And also, obviously, the most capable are going to work even faster, tackle more difficult workloads and bring their productivity to the next level. Now, we all know the technology itself is morally neutral. And so, as mentioned, it's going to be used by attackers and defenders alike. But just as you wouldn't ban cars to prevent their misuse, we're likely to face huge difficulties when it comes to preventing AI being used by adversaries - as mentioned, geofencing or bans on illegal use are easily bypassed. But if AI truly becomes as powerful as its potential suggests, it may be that governments will want to restrict its usage, in the same way that in the 1990s the U.S. put export restrictions on some encryption algorithms. Back in 2013, the U.K. government planned to charge for a yearly internet access license, and with all the issues around the subsequent ability to then revoke a license, it ended up backing down. But it's certainly conceivable that this kind of approach could be revisited and applied to AI. In 2020, China banned or restricted the export of some data-driven algorithms, specifically saying that the restrictions would apply to TikTok - indicating governments are concerned about who uses what they perceive to be their AI. And just this month, OpenAI did introduce a $42-per-month professional plan for ChatGPT. Theoretically, users can still use the tool for free, as they have been so far. And the new professional plan says that it's going to let users expand the capabilities of the AI chatbot with faster response speeds and priority access to new features. Now, in reality, the free version is now often unavailable due to excess user demand. So we're quickly going to see how the paid model pans out. But a paid model immediately puts restrictions on who can use it, making it that much easier to introduce further restrictions for more advanced versions, including licenses and export restrictions, which could hit its current prime advantage of availability.
And just because previous attempts to restrict knowledge haven't worked, that's unlikely to stop politicians trying. So in the meantime, I would just suggest that any cybersecurity professionals who haven't already got on board with ChatGPT should do so now. Take advantage of the opportunities to automate routine tasks, and also get a better understanding of the potential uses by attackers, so that you can take appropriate precautions to protect against misuse and abuse.

Anna Delaney: That was excellent, Tony. And you've certainly got us thinking. There are so many angles and directions we could take this, but I think we're going to revisit ChatGPT right at the end. And I think a lot of what you said sounds hopeful - we talk about the skills shortage getting worse year on year. If this can be a tool to help and support teams, not replace them - there's obviously a need. But that was excellent. So we'll come back to ChatGPT in just a moment. Michael, moving on to you: good news again. I mean, it's good news week, isn't it - for Microsoft, at least, or those who invest in Microsoft?

Michael Novinson: Absolutely. And certainly, Microsoft really is the elephant in the room when it comes to security.
Two years ago, they shocked the world when they came out and disclosed, in January of 2021, that they had $10 billion in security revenue. People knew they obviously had a robust business around Active Directory; people were aware that they provide email protection in Office 365. But everybody was shocked when they said $10 billion - that's significantly larger than any other security company in the world, including those who only do cybersecurity. So two years ago it was $10 billion. Last year in January, they came out and said it was $15 billion. And then just yesterday, Tuesday, during the company's earnings call, they said they'd hit the $20 billion revenue threshold over the past 12 months, meaning that over the past year, they've added $5 billion in security revenue. That's more security revenue than any other company does in a year, with the exception of Palo Alto Networks. So it really speaks to the breadth of their portfolio - they have plays in identity, of course, as well as compliance, privacy, security and device management. And it speaks to some of the challenges that pure-play security companies face in going up against Microsoft.
I mean, first and foremost is cost, in that Microsoft essentially is able to bundle their security capabilities with either an E3 or an E5 license. So really, what Microsoft's focused on is trying to get customers to pay for their office productivity or their email tools, and then they can essentially throw in some security capabilities at limited cost. Obviously, a company that only makes money off of security cannot match that, because they need to make money off the actual security technology. So historically, the sense was essentially that for cost-conscious customers - for small businesses, for midsize businesses - sure, they would use Microsoft; their security was good enough, maybe not top of the line, but good enough, and the price was unbeatable, so they'd go that direction. Something that Microsoft really emphasized during yesterday's earnings call - not explicitly, but if you look at the examples of the wins they point out - is that large enterprises have also adopted Microsoft security technology. They talked about customer wins with companies like IKEA, like Roku, like NTT Communications, like the University of Toronto, which has tens of thousands of students.
353 00:21:25.430 --> 00:21:28.310 So these are not mom-and-pop businesses just looking for a 354 00:21:28.310 --> 00:21:32.540 good deal. And the efficacy of their technology is something 355 00:21:32.540 --> 00:21:37.280 that's also been recognized by analysts - in identity, they 356 00:21:37.280 --> 00:21:41.960 are top of the line, and they have the market share, but 357 00:21:41.960 --> 00:21:45.260 also in spaces like EDR and XDR, where Forrester sees them as a 358 00:21:45.260 --> 00:21:48.950 category leader alongside CrowdStrike and Trend Micro. And 359 00:21:48.950 --> 00:21:52.370 then, most notably, in the SIEM - the security 360 00:21:52.520 --> 00:21:56.000 information and event management - space, which Microsoft really 361 00:21:56.030 --> 00:22:00.980 entered in 2019 with a product called Sentinel. And this most 362 00:22:00.980 --> 00:22:05.570 recent year, back in October, Gartner named them as the top 363 00:22:05.570 --> 00:22:08.330 product in SIEM. They had the highest score in their SIEM 364 00:22:08.330 --> 00:22:11.600 Magic Quadrant, which is really remarkable. Typically, if you 365 00:22:11.600 --> 00:22:15.050 look at the Magic Quadrants or the Forrester Waves, it takes a 366 00:22:15.050 --> 00:22:18.050 decade or more for a company to really move into that leadership 367 00:22:18.050 --> 00:22:21.380 role. Because what happens is, when a company starts a product, 368 00:22:21.380 --> 00:22:24.260 they're focused on a particular use case or narrow 369 00:22:24.260 --> 00:22:27.200 issue, and they do a very good job with that. But there's 370 00:22:27.200 --> 00:22:29.870 essentially this whole broad other range of issues that the 371 00:22:29.870 --> 00:22:32.900 product doesn't address yet, which makes it hard for it to 372 00:22:32.900 --> 00:22:36.110 fit the full range of customer needs.
So for 373 00:22:36.110 --> 00:22:39.350 Microsoft to essentially go from entering the market in 2019 to 374 00:22:39.350 --> 00:22:43.490 being the category leader, especially in a category like 375 00:22:43.490 --> 00:22:46.730 SIEM that's been around for two decades, it's really something, 376 00:22:46.730 --> 00:22:50.270 at least in my observations, that's unprecedented. But 377 00:22:50.270 --> 00:22:54.110 obviously, the benefit Microsoft has is resources that any other 378 00:22:54.110 --> 00:22:56.630 company can only dream of - if they decide they want to roll in 379 00:22:56.630 --> 00:22:59.750 one direction, the amount of labor, the amount of capital 380 00:22:59.750 --> 00:23:03.230 that they can put behind that is simply unmatched. So 381 00:23:03.230 --> 00:23:07.100 Microsoft's presence and Microsoft's approach are really 382 00:23:07.100 --> 00:23:09.620 going to be something everyone in the industry is going to have 383 00:23:09.620 --> 00:23:12.980 to grapple with, especially those focused on serving small 384 00:23:12.980 --> 00:23:16.190 and midsized, cost-conscious customers. 385 00:23:17.540 --> 00:23:19.722 Anna Delaney: Yeah, it's an incredible story, actually, to 386 00:23:19.772 --> 00:23:22.351 think at the start of the pandemic, when CISOs in our 387 00:23:22.401 --> 00:23:25.080 roundtable used to say, "Well, we're relying solely on 388 00:23:25.129 --> 00:23:27.907 Microsoft cybersecurity services." And there was sort of 389 00:23:27.957 --> 00:23:31.033 like a surprise reaction from peers in the room, but maybe not 390 00:23:31.082 --> 00:23:31.430 so now. 391 00:23:31.000 --> 00:23:34.570 Michael Novinson: So definitely a surprising thing to hear, and 392 00:23:34.570 --> 00:23:38.260 definitely something that investors are raising when 393 00:23:38.260 --> 00:23:40.780 they're talking to the SentinelOnes and the CrowdStrikes of the 394 00:23:40.780 --> 00:23:43.630 world.
They want to hear: what impact is Microsoft having on 395 00:23:43.630 --> 00:23:44.410 their business? 396 00:23:45.680 --> 00:24:25.280 Anna Delaney: Well, excellent. I think this won't be the last year we're talking about Microsoft cybersecurity services, for sure. But that was great input for today, Michael. So finally, speaking of ChatGPT, we know it's revolutionizing the way we search for information. And of course, we have many use cases. But what for you is the most interesting angle of this technology that you'll be following as it evolves? Matt? 397 00:24:11.920 --> 00:25:19.510 Mathew Schwartz: Oh, yeah, I'll just pick up on what Tony was talking about, which was a fascinating analysis that came out from Thomas Rid and an instructor as well, in the last week, about the malware analysis course they were teaching. And I loved the insight that ChatGPT was wonderful for letting people in the class ask what they might have thought were stupid questions, or just try to get details for things - they didn't want to stop the flow of the instructor necessarily, but they wanted to check their knowledge. And I think in engineering environments like that - so computer science, or if you're doing anything in a security operations center, anything where you need some technical detail, whether it's a refresher, or it's to find the best answer, or how you format a certain type of coding, or the coding language and the nuances of that - I think there are some really interesting ways that that helps people become more productive, again, in this classroom environment as well, without disrupting the rest of the class, but empowering themselves with any information they need. 423 00:25:19.000 --> 00:25:22.235 Tony Morbin: I just saw a great tweet today. And it said, 424 00:25:22.308 --> 00:25:26.867 "English is the new programming language." And I just thought, 425 00:25:26.941 --> 00:25:31.132 "Yeah, I can go along with that." So I think, as a result, 426 00:25:31.205 --> 00:25:35.544 we're going to have to assume that ChatGPT is being used in 427 00:25:35.617 --> 00:25:37.750 any communication we receive. 428 00:25:37.000 --> 00:25:39.777 Michael Novinson: In the world of cybersecurity, The Intercept put 429 00:25:39.845 --> 00:25:43.369 out a very interesting... The Information, rather, put out a 430 00:25:43.436 --> 00:25:47.705 very interesting story this week, talking about the use of 431 00:25:47.773 --> 00:25:51.431 the ChatGPT technology in the world of GitHub, and 432 00:25:51.499 --> 00:25:55.700 particularly, they called out work with Copilot, which is a 433 00:25:55.768 --> 00:26:00.172 collaboration between GitHub and OpenAI, to better detect security 434 00:26:00.240 --> 00:26:04.508 vulnerabilities, whether it's in the code the developers have written 435 00:26:04.576 --> 00:26:08.370 or in the code that Copilot has suggested. So definitely 436 00:26:08.438 --> 00:26:10.810 something I want to keep an eye on. 437 00:26:11.140 --> 00:26:13.232 Mathew Schwartz: And that's a great point; I mentioned that as 438 00:26:13.279 --> 00:26:16.132 well.
Just if you're coding and you have ChatGPT to help you 439 00:26:16.180 --> 00:26:18.130 write more secure code - how cool is that? 440 00:26:19.690 --> 00:26:22.206 Anna Delaney: To think we weren't really talking about 441 00:26:22.269 --> 00:26:25.792 ChatGPT three months ago, and here it is revolutionizing 442 00:26:25.855 --> 00:26:29.630 everybody's lives. Well, that's what's exciting about the 443 00:26:29.693 --> 00:26:33.531 future, I guess. Well, thank you so much, everybody. This has 444 00:26:33.593 --> 00:26:37.180 been great. As always, Tony, Matt and Michael, thank you. 445 00:26:37.180 --> 00:26:38.290 Mathew Schwartz: Thanks for having us, Anna. 446 00:26:38.290 --> 00:26:39.070 Michael Novinson: Thanks. 447 00:26:39.070 --> 00:26:40.480 Tony Morbin: Thank you, Anna. 448 00:26:40.480 --> 00:26:42.610 Anna Delaney: Thank you so much for watching. Until next time,