Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney, and today we'll discuss the news about AT&T allegedly paying hackers a ransom following a breach of its Snowflake account, the role of AI bots in the workplace, and the contrasting AI regulations between the EU and the U.S. Today, I'm joined by Mathew Schwartz, executive editor of DataBreachToday and Europe; Rashmi Ramesh, assistant editor, global news desk; and Tony Morbin, executive news editor for the EU. Great to see you all.

Tony Morbin: Good to see you, Anna.

Rashmi Ramesh: Thank you.

Mathew Schwartz: Thanks for having us.

Anna Delaney: Tony, you're in space.

Tony Morbin: Yeah! I mean, you know, why space? It was going to be the 2001 HAL thing, but the whole idea, really, was risky environments. AI sounds like a risky environment.

Anna Delaney: Chaos in space. Love it. Rashmi, where are you?

Rashmi Ramesh: I'm in Alleppey, which is about 600 kilometers from where I live. So that's a houseboat behind me. It's a house and a boat that docks in the water through the night and on which you can explore the town during the day.

Anna Delaney: And that's what you've done recently?

Rashmi Ramesh: Yeah.

Anna Delaney: Love it. Mat, not quite in the jungle?

Mathew Schwartz: Down in the weeds, Anna.

Anna Delaney: Down in the weeds.

Mathew Schwartz: That's where we're going to be today, right? Well, hopefully not too far down in the weeds. This is the side, the back, I don't know, of the V&A Dundee Museum. They've got a lot of lovely flowers, and the foliage has been really beautiful lately in Scotland, owing to the non-stop, incessant rain that we've had, which has been great for the vegetation, less good for the mental health, maybe, but sunshine ahead, so we hear.

Anna Delaney: Very good. At least the flowers are happy.

Mathew Schwartz: At least the flowers are happy, exactly.

Anna Delaney: Well, this is a snap from my weekend in Dorset, a lovely part of the U.K. A little bit of sunshine for an hour or two on Sunday, a lot of rain, but some very fine seafood. So, caught the blue skies, much needed.
Mat, this week's news revealed that telecom giant AT&T allegedly paid hackers a ransom following a breach of its Snowflake account, emphasizing ongoing concerns about ransomware victims opting to pay for assurances of data deletion. And you posed a really important question in your reporting of the story - what will it take for victims of cybercrime to stop directly funding their attackers? So just bring us up to speed on the story, and maybe share some thoughts as to how we can answer that question.

Mathew Schwartz: Great question, and it is a topic that keeps coming up again and again and again, as we see reports that the total number of ransomware victims who pay a ransom, who choose to pay a ransom to their attackers, seems to be going down. It seems to be lower than a third now, according to some firms that help victims, maybe in the 28% range. But 28% of victims is still a huge amount. We often see ransomware groups publicizing victims who don't want to pay, partially, I think, to try to normalize some of the sky-high ransoms they demand initially. Now, the victims who pay don't necessarily pay ransoms of that amount, but the narrative is still so often being controlled by these ransomware groups, who are horrible criminals. We've seen that again and again. They disrupt healthcare, cancer treatment, children's hospitals. These are the scum of the earth, so we don't want to give them any wiggle room when it comes to controlling the narrative, controlling the discourse. But paying them does that: it validates this criminal business model and gives them funding to put toward future attacks against future victims, as they seek out new kinds of data from other organizations to exfiltrate and to threaten to leak - and then, if victims don't pay, to leak it, and on and on again. We keep seeing this. So, this week, for example, I've been reporting on the Change Healthcare breach. Change is owned by UnitedHealth Group, one of the major U.S. health insurance and service providers, and its CEO in May told Congress that the breach might impact a third of all Americans. Now, this is despite the company having paid a ransom to attackers in return for a guarantee from them that they would delete stolen data - a ransom of about $22 million.
And then the group involved kept the money and didn't pay the actual hacker, so the hacker, who seems to possibly be based in the West, took it to a different ransomware group and shook Change Healthcare down for a second time. Did they pay the second time? We don't know. But an organization that does appear to have paid, reportedly, according to Wired, which talked to a security researcher who handled the negotiations, was AT&T. The attackers demanded a ransom - I think it was in the neighborhood of about a million - and AT&T reportedly paid about a third of that, which is a lot less than what we see with some of these really big organizations that get hit. So, did AT&T allegedly pay to get a decryptor, which sometimes is a choice a business needs to make if it's otherwise going to go out of business? Allegedly, no, it did not. It paid solely for a promise from attackers that they would delete the stolen data, and they even sent a video of themselves doing it. Now, you might say, "Oh, but Mat, couldn't such a video be faked?" Yes. Couldn't these assurances be entirely false, given that these are criminals who regularly attack children's care and hospitals and other critical services? The answer is a resounding yes! So they're selling the ability of AT&T and other organizations to say, "Okay, the horse has already fled the barn. The barn is burning, but we've managed to, I don't know, close the door on the burning barn, whatever." Pick your metaphor. It's bad, basically. This is them trying to spin the message after the fact. It's ineffectual. It funds cybercrime, and unfortunately, as with Change, as with AT&T, we keep seeing it again and again, and I'm not sure how it's ever going to stop.

Anna Delaney: Huge topic. How effective are exceptions for national security in justifying delayed breach notifications, such as AT&T's from the DOJ, given potential risks to public safety?

Mathew Schwartz: Well, there's an interesting question there. For the very first time that we know of, the Department of Justice invoked an exception to the Securities and Exchange Commission's breach disclosure rule, which it can do in cases of public safety or national security.
They did this with AT&T because it looked like the FBI had gotten one of the suspects arrested. Allegedly, this person hacked T-Mobile, and allegedly the same person, who is an American based in Turkey, was also involved in the AT&T breach. So, they have paused that breach notification by AT&T, but that doesn't seem to have had any impact on whether AT&T paid the ransom or not. So, it's an interesting footnote, if you will, for this breach and for notification by a publicly traded organization. But again, it didn't stop them from paying.

Anna Delaney: What other approaches can organizations take when facing ransom demands? Is it to pay or not to pay? Are those the only options here?

Mathew Schwartz: Yes. So, I mean, there are other options. Definitely reach out to ransomware incident response groups, because they may have discovered workarounds. They often do. They've often found a way to decrypt files quietly. Now, of course, this doesn't give someone like AT&T the right to say, "We took all possible steps after we lost control of your data to try to get the attackers to delete it, and maybe they never did, but we gave them a bunch of money anyway, just in case." So it doesn't really fix that problem. If an organization is serious, though, instead of setting aside this money for payment or making the payment, they should be putting that money into prevention, so that they never have to consider whether or not to pay. The best message that I think you can see from a breached organization hit by ransomware - because no matter how much you prepare, you could still get hit - is to say: we got shaken down and we do not pay criminals. We will not perpetuate this cycle any further. Instead, we spent a lot of time and effort preparing and practicing for what would happen when we inevitably got hit. We've wiped and restored all systems, and that's it. We're done. We're not giving attackers any money. They can go try to find some other victim. So that's what I like to see: we prepared, so we didn't have to pay. AT&T didn't prepare, so they felt they had to pay - but they didn't really have to pay, and they ended up paying anyway.

Anna Delaney: Excellent insights and takeaways. Thank you, Mat.
Rashmi, should AI bots be treated as human employees? Not everybody seems to think so. HR company Lattice recently faced backlash and canceled its feature to treat AI bots like human employees. So, it's quashed for now, but digital workers remain a point of interest for many in the industry, don't they, Rashmi?

Rashmi Ramesh: For sure. So, just to give you a brief recap of what happened: this HR tech unicorn called Lattice was set up by Sam Altman's brother Jack Altman. He's no longer part of the company, but anyway, last week, the company said that it was making history by attempting to integrate AI bots into its workforce, and it did that by giving these bots employee records, onboarding them like human employees, providing them training, even setting performance metrics and assigning them a boss to give them feedback. Now, in an interview, the current CEO, Sarah Franklin, also said that she would fire these digital workers if they did not perform well or compromised the company's reputation, just as she would a human. Now, whether good or bad, it was a significant step toward integrating AI into the workforce. But in just three days, there was massive backlash against the move - so intense that the company had to roll back the program. People, including those in the enterprise space and the AI industry, said that the idea of giving digital workers the same status as human employees was not okay. Digital workers is what Lattice called the AI bots. Now, one, you have the human element, where the approach seemed like it treated humans as mere resources to be optimized alongside machines, which did not sit well with many people. And then there's a debate about the point of this specific exercise. Companies are integrating AI into the workforce quite quickly and at scale. But, as one of the commentators pointed out, there are more productive uses for AI in the HR industry than this exercise, which seems like a PR exercise at this point. So clearly, the company did not expect this sort of backlash, and in the statement recalling the measure, the CEO also said that, you know, the initiative has sparked many questions, and they don't have clear answers yet.
But anyway, this whole episode brings up a much broader concern about AI in the workplace as well. There's a growing fear that AI could replace millions of jobs, making human employees obsolete, and there are studies and surveys that validate a small portion of that fear as well.

Anna Delaney: So, while we're not yet at the point of asking our AI bot colleagues about their weekends or making them virtual coffees, how do you see the role of AI in the workplace evolving over the next few years?

Rashmi Ramesh: Well, there are more use cases than hours in a day. But, to summarize it broadly, AI will not really replace skilled human workers, but it will completely change how work is done. We're already seeing this change underway with, you know, automation of routine tasks and supplementing strategy and decision-making. Recruitment, every aspect of our jobs, will have AI. But it will, at least in the near future, still need humans to oversee it. I don't know if this statement will age well, but for now, AI will support humans rather than replacing them. For example, Intuit laid off about 1,800 workers recently. And very unapologetically, it said that this is not cost cutting: we will hire back for the same roles, but we'll hire people who can align with the company's gen AI vision. So this does seem like the future, or at least the near future.

Mathew Schwartz: Rashmi, was there any discrimination in hiring that you saw? For example, did this experiment limit itself to ChatGPT, or did they also accept applications from Google's Gemini, for example, or Microsoft Copilot?

Rashmi Ramesh: They made the bots themselves.

Mathew Schwartz: Oh! That poses some interesting ethical dilemmas. But anyway.

Tony Morbin: It was good to see that the employer found out what the difference between machines and humans was. Humans answer back.

Anna Delaney: Well, we're all human at the end of the day. Thank you, Rashmi. Absolutely fascinating story. Tony, this week, you're looking at AI risk versus regulation, in relation to both the EU regulations coming into force and the Republicans' plan, if elected, to rescind the AI executive order.
So, potentially, lots of change in the air. Do share your thoughts.

Tony Morbin: Okay, well, just over a decade ago, when the Large Hadron Collider was switched on, there were concerns in some quarters that it might create a microscopic black hole that could potentially suck up the Earth, or it might create a strangelet that could convert our planet into a lump of dead, strange matter. CERN scientists said that these outcomes were extremely unlikely, so they went ahead, and we survived. "Extremely unlikely" was not a particularly reassuring phrase to use when balanced against the end of the world. And there were those who felt that if there was any risk that the world might end, then we shouldn't do it, but they had no power to stop it. And in some ways, it feels like that with AI today. I'd like to think that we all want to get the most out of AI's capabilities while making sure that it's implemented safely. But the truth is, people have different priorities and risk appetites, which they then apply to themselves and others. There are those at both extremes, from wanting to ban AI research and use altogether, to those wanting unfettered acceleration of use, regardless of the risk, while the rest of us are probably somewhere in between. Attitudes also vary by region, country and political party, with Europe generally more keen on regulation, and the U.S. less so, particularly the Republican Party. And I'm not being party political, just making an observation that the momentum in the U.S. now is with the AI accelerationists, particularly following former president and Republican nominee Donald Trump selecting Ohio Senator J.D. Vance as his vice president, should he be elected. Vance and the Republican Party now oppose regulation of AI, and their stated goal is to repeal President Joe Biden's executive order on AI. And the executive order itself was already a far less stringent approach than that adopted by the EU, which is more regulation based.
At the same time, we've got whistleblowers in the U.S. alleging that OpenAI illegally barred its staff from revealing the risks that irresponsible deployment of AI posed, from the entrenchment of existing inequalities, to the exacerbation of misinformation, to the possibility of human extinction. Across the water here in Europe, regulation is underway, with the EU AI Act coming into force next month, on August 1. It seeks to protect democracy, fundamental rights, environmental sustainability and the rule of law. And by this coming February, it will ban AI uses with unacceptable risk and place constraints on AI uses with high risk. The EU digital chief, Margrethe Vestager, says that with these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. Now, this might sound good in theory, but critics say that this well-intentioned legislation has been rushed and might end up smothering the emerging industry in red tape - that the regulators have left out essential details urgently needed to give clarity to businesses seeking to comply, and hence it could curb the use of the technology itself. Within nine months of the AI Act entering force, new codes of practice that explain how to implement the rules will need to be in place, and that will also require a rush to pass additional legislation. It's also not clear whether, locally, it will be national telecom, competition or data protection watchdogs that police the rules, as the AI Act doesn't specify. Without more clarity, there's a danger of patchy implementation of the regulation, and this could trigger confusion among businesses as they roll out products in different countries, according to a recent equity report. Penalties for non-compliance reach $38 million or 7% of worldwide annual turnover, and the cost of compliance could run into six-figure sums for a company with, say, 50 employees, which has been described as an extra tax on small businesses. Now, EU officials cited by the FT deny that the act will stifle innovation, and note that it excludes research and development, internal company development of new technologies, and any system that's not high risk. We'll see.
In the U.K., the Labour government is expected to set out an AI bill in today's King's Speech, which is likely to be a watered-down version of the EU regulations, with perhaps a few get-out clauses to encourage investment and innovation. In China, the AI regulations are reported to be intentionally relaxed to keep the domestic industry growing. An MIT Technology Review report by Angela Huyue Zhang, a law professor at the University of Hong Kong, explains that this is to be expected. Although foreign perspectives tend to focus on China's regulatory crackdowns, she says that the process almost always follows a three-phase progression - a relaxed approach, where companies are given flexibility to expand and compete; sudden, harsh crackdowns that slash profits; and then, eventually, a new loosening of restrictions. Now, I'm not suggesting that we all ought to follow China's lead, but from my perspective, I do believe that we need to regulate to prevent accidental harms and exploitative manipulation of AI. But we also need to encourage experimentation. Unfortunately, with no single authority in charge and public opinion not able to have an impact even if it were reliably informed, it looks like the future of AI implementation is going to be pretty messy and confused, combining good intentions, power plays and greed.

Anna Delaney: Very interesting balance to get right. How might these contrasting approaches between the U.S. and the EU impact global businesses operating in both regions? Just future gazing here, but what are your thoughts?

Tony Morbin: Money will go where, you know, the money is. So, with the U.S. being more open to innovation, with fewer restrictions, the investment is more likely to be in the U.S., and that includes Europeans' money. They find it easier to invest in AI development in the U.S. On the other hand, you know, I won't say it's a walled market, but it's going to become harder for, say, the Chinese or others to break into Europe with their AI, because it's unlikely to meet the restrictions there.

Anna Delaney: Well, from the chaos of the galaxy, thank you so much, Tony.
It'll be interesting to see what happens in the King's Speech this week. And finally, just for fun, keeping it topical this week: if AI bots took coffee breaks, what kind of virtual beverage do you think they would prefer, and why? Rashmi, do you want to start us off?

Rashmi Ramesh: Yeah, virtual or not, I think it would be water - like its data center overlords and, if the sci-fi books and movies are right, its subordinate humans.

Anna Delaney: Data center overlords! Love it. Mat?

Mathew Schwartz: Yeah, I know we're getting pretty virtual here, but something like an iced hallucination. I think with the heat wave that's been hitting so many parts of the U.S. and other regions, and, like Rashmi was saying, the data center juggernaut that is the water consumption of these places, I think definitely something iced would be on the menu - the virtual menu.

Anna Delaney: Yes. Tony?

Tony Morbin: Well, I'm definitely in the camp of not treating bots like humans. I think it's perverse to give AI a coffee break. So, I'd give the AI bots the most perverse coffee: kopi luwak, made from coffee cherries eaten and defecated by the Asian palm civet. Also to get the bots addicted, so we could threaten to withdraw it from them and at least have one hold over them.

Anna Delaney: So, you don't see them gossiping on coffee breaks, or, you know, office politics?

Tony Morbin: They are machines. I mean, you know, no employer is going to say, "Okay, AI bots, you can knock off now at 5. No need to keep on working, even though, you know, I could be getting so much more money out of you if you carried on working and didn't have any holidays." And you know, I think to treat machines like humans is to treat humans like machines, and we're making people just a cog in the machine if we do that.

Anna Delaney: Deep thoughts today, Tony, on that one. I'm going to give them shots of quantum juice, you know, just a concentrated shot of quantum computing power - the perfect virtual beverage for a quick boost of energy. I was thinking maybe espresso martini as well, but I wouldn't want to be getting too drunk in the workplace, so I've opted for the healthier shot of quantum juice.
Well, thank you so much for playing along. These sound like delicious drinks, and thank you for all the information and education you've provided.

Mathew Schwartz: Thanks for having us on.

Tony Morbin: Thank you.

Anna Delaney: Thanks so much for watching. Until next time.