Transcript
Mathew Schwartz: Hi, I'm Mathew Schwartz with Information Security Media Group. It's my pleasure to welcome back to the ISMG Studio, Alan Brill. Alan, so great to see you again.
Alan Brill: It is so good to see you again, my friend.
Schwartz: So I am just going to do the bona fides. You are senior managing director in the cyber risk practice at Kroll. And I understand you are now also a fellow of the Kroll Institute, the firm's think tank. Yes, exciting times.
Brill: Very exciting.
Schwartz: So also exciting is what's been happening with AI, and I know you've been following that closely. One of the first questions I have - because we're going to range around a few issues here, but as a basic entry point - is this: Why do you think AI developers need to include legal counsel and compliance officers in the development and upgrading processes, as you've publicly stated they should?
Brill: I have. I've been talking about why AI needs a general counsel for a couple of years. It's very simple. As a developer, you tend to think of the technology: How do I make it do what I want it to do? But it doesn't exist in a vacuum. Cyberspace is a concept; it doesn't exist on its own. Everything happens in the real world. And in the real world, there are things called jurisdictions. Those are countries, and those countries have laws. And whether you acknowledge it or not, you're subject to those laws. So if you field an AI system and haven't looked into what the laws in those jurisdictions require, you might find that your system is in violation and not doing what it's supposed to do. It's not controlled properly. It's not protecting people's privacy properly. And you end up in massive amounts of trouble. And it's the worst kind of trouble: an avoidable problem.
Schwartz: Legal jeopardy that could have been avoided.
Brill: It's not could have; it's more should have. There's no real excuse for not recognizing that all systems operate under a system of national and multinational laws. So if you're in the EU, there's going to be legislation in your country, but there's also EU regulation, and you have to follow both. Not doing so is at your own risk. And the evolution seems to be that governments are getting more serious about enforcing laws relating to cyber.
Schwartz: So speaking of legal jeopardy, I know that there are some newly proposed laws in China. And I think India has also been talking about how it plans to approach AI.
Brill: That's right. Everything is kind of fluid, because countries are coming to grips with what AI is, how it's evolving, what it means for their people, and how it fits into their form of government. In the U.S., the Senate has said that it's going to look at different forms of regulation and rules. But the Cyberspace Administration of China has come out with a draft of a proposed law that is very prescriptive.
Schwartz: So, like the country's cybersecurity law, which is also very prescriptive - maybe in that mold, slightly?
Brill: It is very much in that mold, as are the proposed changes to their espionage law, which broaden the coverage to all sorts of documents relating to national security. What they're saying is that if you are operating an AI system that can be used within China, you follow their rules. And their rules include things like getting registered, letting them inspect your algorithm, showing them how you develop your training and pre-training sets, how you're going to protect information, and how you're going to prevent your algorithm from generating something that they would not like. The draft is still not fully specific - that will take a few months, I think - but it certainly indicates a direction. Now if you flip the coin, you come to India, and India's government recently announced that, at least at this point, it doesn't have any real plan to enact new laws relating specifically to AI, because it sees AI as an incredible growth opportunity for its cyber industry, which has grown explosively and brilliantly. So what it all comes down to is having counsel who can look at what you're planning and how you're planning to do it, and advise you on how to avoid avoidable problems: how to avoid getting thrown out of a country, how to avoid your people potentially being grabbed for violating local laws. And that's why what I'm recommending is that you have good counsel who is following this subject and can keep you up to date, because what's perfectly acceptable today may, in some jurisdiction in which you operate, become unacceptable tomorrow.
Schwartz: So where inside an organization should the call be coming from? Is this the board? Should this be a board-level concern - part of cybersecurity governance, potentially? What makes you happy, if you will, professionally speaking, when an organization comes to you and you're interfacing with them? Who should be making that call?
Brill: The board has responsibility. It's not a question of whether the board has a responsibility for good governance over IT security; that's just the way the world works. What's important is how you do it. Do you have anyone on the board who speaks cybersecurity? Or do you just delegate that to the people you're supposed to be governing? Who's giving you advice, and how good is that advice? Have you brought your general counsel in, so that counsel can figure out how to get the information that's needed? Have you brought in your compliance officer? Because one of the problems that you see in AI is that the systems work wonderfully, but if you go back and ask why the system did what it did, you find the evidence was never collected. And in litigation, which, unfortunately, is often going to happen, without that evidence you could be between a rock and a hard place, without the wherewithal to properly defend your system's actions.
Schwartz: How do you get that level of insight and oversight when you're working with developers as they're creating, training, or adopting machine learning systems? What are some of the processes that organizations can put in place so that, as you're saying, if they end up in litigation, they have the things they might want - things they didn't know they needed?
Brill: I think the answer is like so many other things in systems. I remember working with a guy at Jordan's organization who said that the cost of putting something into a system goes up by a factor of 10 at each successive stage of system development. I think that's a good principle. So the secret is to go back to the beginning and make sure that the folks who are going to be involved in your development, or from whom you're going to be acquiring systems, understand that this is important, and that they can't ignore it until later in the process, when they might say: Now it's too difficult. We've already locked the database into place. We don't have enough evidence space, so we can't do it. That's not what you want. You want this to be important from Day 1. Because what I've found over many years of looking at this is that when you ask people how they're securing any kind of a system - AI, traditional, doesn't matter - often what they tell you turns out to be not so much factual as aspirational.
Schwartz: I'm shocked to hear you say that.
Brill: I can see. And you can imagine how shocked I was also. So the whole idea is that we have to change the way people think about things. We have to recognize that even though we're dealing in a very advanced, technical area, it still exists in the real world - a world of laws, regulations, contractual agreements and litigation. And we have to be ready, because ignoring those issues won't make them go away; it'll just make things harder for us when they eventually materialize.
Schwartz: So put in the time now - lay the groundwork, the foundation - so that you are doing these sorts of things in a documented, provable manner that might be called on in court, for example.
Brill: I teach a course on cybersecurity in the master's program at the Texas A&M University School of Law, and the reason we have the program is that this is so central to the work that corporate and government attorneys do that we want them to have at least a basic understanding of what's going on and how they can contribute - so that rather than spending their time trying to clean up problems in litigation, they can help avoid them, which is certainly much more efficient.
Schwartz: Yes. Well, Alan, it's great to hear about the advances that we're seeing in AI, the legal nuances and evolution surrounding AI, and also the questions that organizations should be asking to better protect themselves.
Brill: But litigation risk is always there. It's not a question of whether it exists; it's a question of what you can do now to mitigate that risk downstream.
Schwartz: Great advice. Well, Alan, it is always a pleasure to get to chat with you. Thank you so much for your time and insights today.
Brill: My pleasure. Good to see you again.
Schwartz: I'm Mathew Schwartz with Information Security Media Group. Thank you for joining us.