The Challenges and Opportunities of Artificial Intelligence
Ruby Zefo, Chief Privacy Officer, Uber Technologies, on AI, Privacy and Governance
Generative AI has revolutionized the way people interact with chatbots. Ruby Zefo, chief privacy officer and associate general counsel for privacy and cybersecurity at Uber Technologies, cited ChatGPT as an example of the need to conduct an "environmental scan" of both external and internal risks associated with it.
Zefo also emphasized having a "cross-functional team" for rolling out necessary actions and a "long-term AI governance" plan. The challenge lies in addressing the issue of scale, as AI has the "potential to replicate human bias on a massive scale."
"The expectation that AI is somehow going to be perfect is wrong. What we should be comparing it to is how does it compare to the human solution?" Zefo said. "I started an AI executive governance team because I wanted a framework and a high-level oversight, but not to impinge on innovation in a way that was bad."
In this video interview with Information Security Media Group at RSA Conference 2023, Zefo also discusses:
- Ethical concerns regarding new-generation solutions;
- Legal and operational challenges of AI;
- Advice to privacy officers on how to tackle AI-related issues.
Zefo is Uber's first chief privacy officer and associate general counsel for privacy and cybersecurity. Her team's mission is to drive Uber's efforts to safeguard users' personal data. Previously, she served as vice president and group counsel, artificial intelligence products group, at Intel Corp. She previously held attorney roles at Sun Microsystems and Fenwick & West.
Tom Field: Hi there. I'm Tom Field. I'm senior vice president of editorial with Information Security Media Group, talking about privacy, particularly AI, and how AI isn't taking over the world - maybe just our world. Here to talk about that is Ruby Zefo. She is the chief privacy officer and AGC for privacy and cybersecurity with Uber Technologies Inc. Ruby, it's a pleasure to see you back here again. Thank you so much.
Ruby Zefo: Thank you, Tom.
Field: So you are a self-described privacy OG. Do the gang signs come with that?
Zefo: I got some gangster names, but I'm not going to tell you what they are.
Field: When you first heard of ChatGPT last winter, what was your immediate reaction?
Zefo: Well, actually, I lit up, because I like experimenting with things. And I didn't really know at that point what it did, other than the minimum amount that I was able to read at that time, and it was still in its early stages. So I immediately signed up for one of those test accounts that you can use, and played around with it like any normal person would. I'd already done this with DALL·E, by the way, because, you know, images - my first choice was creating my crazy images. And this was a follow-on to that. So as a lawyer, I wasn't putting any data in there that I didn't have the right to, or that was a trade secret, or any of these other concerns that we have with it now, because I was just fooling around to see how it worked. So since that time, of course, things have changed. But my initial reaction was, "This seems fun."
Field: Okay, has that reaction simmered?
Zefo: It's simmered down a little bit. So, you know, it's another potential AI solution that can have enormous benefits, but also significant risks. So the challenge now, in our case at least, and I'm sure in other companies' cases, is to quickly come up with guidance. Because I understand why people are doing what I wanted to do, right? They're just playing around to see if it can, in their case, improve their productivity. And if you see your neighbor using it, and let's say it's speeding up his coding and improving its quality, then you're going to want to do the same, right? So I understand that. And so our initial reaction was, "Let's figure out what the immediate rules of the road are." The initial guidance will be clear. With that, we'll figure out what the biggest potential use cases are, and move forward accordingly. We did not try to ban it or anything like that.
Field: So to this point, how have you dealt with questions about artificial intelligence and machine learning?
Zefo: Well, I think that the ChatGPT thing is a really good example, because you have to move quickly, right? You don't want to delay research. For me, that's not going to work. So, quickly, you have to do an environmental scan, both externally - where are things going on this, what are the risks - and internally - what are the use cases? Because don't forget the upside, right? And then you have to figure out your cross-functional team. We can't do this alone; we need to get the engineers involved, we've got to get all these people who have a stake, and just quickly roll out what we have to do on a rolling thunder basis. But there's also a longer-term AI governance that you want to put in place as well. It's not all just, "Oh, my gosh, I've got to whack this mole right now," right? There's a much longer-term thing; we know where this is going. We can do these things. We've done them before in the privacy field - you know, governance over data uses is what we do for a living. And so this is just, to me, another instantiation of that.
Field: And a lot more moles, because this is representative of a generation of solutions that's coming up. So I want to ask you about the questions that this generation of solutions raises. We'll start with the ethical concerns.
Zefo: Yeah. So there are, I think, well-founded ethical concerns. We already know that. I don't think AI is taking over the world. I don't think it's sentient. I don't think people need to worry about that right now. So, you know, simmer down on that. But the difference here is that you've got this sort of fusion of massive processing power, big data and new technologies, and you have, hovering over all of it, data privacy concerns. So, you know, what are you supposed to do about that? And I think the answer is not banning it or anything like that. The other concerns need to be accounted for in a proper structural governance. But the problem is one of scale. It's replicating human bias at a massive potential scale. If you've got one loan officer in a bank who's wrongly declined loans, that's very different than thousands of decisions being generated at one time. If it's looking backward at data that's already bad, that's what people are worried about. But I would say there is also the upside of much more scalable good that can come out of it. So it's another balancing test, which again, we're used to in the privacy field. Is the impact going to be positive or negative? And is it worth the trade-off?
Field: What about the explainability challenges? Talk about that.
Zefo: People expect too much on this, right? So you've got a design that you build, you've got inputs of data, you've got an algorithm, you've got outputs. That step between the algorithm and the output is where things get a little magical and mystical. And if you want deep learning to do its job, you're not going to be able to easily explain it, especially to a layperson. And even on the human side - I don't know why, but my dad, who never went to college, decided to take a logic course at the same time I was taking a logic course in college, and he was struggling. And he would say, "I don't understand how this works" - I'm trying to explain basically the math of logic to him. And it's like a joke: either you get it or you don't. And when you're trying to explain how it works in a mind, it's extremely difficult even for humans. And so I think the expectation that AI is somehow going to be perfect is wrong. What we should be comparing it to is: how does it compare to the human solution? You know, is it making things better for us? And then on the flip side, is it mitigating the harms? Is it going to be perfect - perfectly explainable, perfectly workable? It's not. So we've got to buckle our seatbelts and get ready for some failures and some other negative impacts to get the positive.
Field: What about the legal and operational challenges?
Zefo: Well, there are a lot, but they're not insurmountable. Obviously, people have already been talking about the IP challenges on both sides. So, the Supreme Court just said, "You can't patent purely AI work," and the Copyright Office has already said the same. So you already have to worry about whether you're going to own what you are using an AI system for. But on the flip side, you also don't know where this may have come from - it may even be infringing. So all these people putting in, you know, musicians' voices and things - you're not going to be able to own that either. Unless, like, you go with Grimes, who's now said that she will allow you to co-own music. So you might be infringing, you might not be able to own; those are the problems there. And then, of course, trade secrets. Don't be putting your meeting minutes in a third-party software tool. Would you do that with a third party's admin assistant: "Here, take all my trade secrets and all my notes"? You would never think of doing that. So people have to think about that as they would a human being. And so there's the trade secret problem. And then, of course, all the data privacy problems, which I think are the root of what people are most scared about, because it's going to be the impact on individual people and what can be harmful to them.
Field: Is this a new conversation, or is everything old new again?
Zefo: I think it's both. So my first foray into AI was when I was still at Intel Corporation. We had commercial products, and we were worrying about, you know, what kind of solutions we could come up with - purpose-built silicon and such, in the cloud. You know, what are people going to want to use? And we were talking about fixing undersea pipes and things like that. And we weren't that worried about people at the time, right? This was probably 2017 or something. But then GDPR comes along with its automated decision-making rules. And then we're like, "Okay, that's something to contend with." But that's like five years now that we've been contending with that. So that's a conversation anybody who's using algorithms in their business in Europe has already had to be thinking about: Is it high risk? Do you need any human review? Is it having a legal or significant impact on people? So that's not new. What is new, again, is what I said: the scalability of it all. And the impact it can have - both negative and positive. I think that is what's new in the conversation, and how fast it's coming. I don't think people expected something like ChatGPT, and for it to take off almost overnight.
Field: How have you approached adding AI governance to your role?
Zefo: First of all, I've leveraged a lot of what I've learned running global teams and, you know, bringing order to chaos at all my other jobs. This is a skill set that you can reuse over and over. And to be honest - and don't laugh - I start with principles. People say, "Oh, principles don't get you all the way there." Of course they don't. But you know, when you are in a situation where the law and the ethics are just a swirl, and it's hard to know what the right thing to do is, if you come up with a principle-based solution - much like I did with privacy at Uber; I was hired pre-IPO - you get the thing going in the right direction, so people will be on the right side, I would guess, at least 80% of the time, right? So I took the Fair Information Practice Principles - that's, you know, the foundation from which GDPR arose - and it's the same with AI: there are a lot of commonalities in what people want to see in principles. And if you start there, people's lens is now "is this bad?" and not "let me do whatever I can with this; let me do what's fun." Then I think you're already on the right side. And from that flow processes and procedures and everything else. And you get that cross-functional team together. I started an AI executive governance team because I wanted a framework and high-level oversight, but not to impinge on innovation in a way that was bad. And from that flows - you know, we had a lot of listening sessions with people, so they can play a part in how these processes and policies come out in their roles. And it's got to be that specific; it's going to flow down to the particular instantiations of it, and not some omnibus law that predetermines what's risky, because out of context, you can't really tell whether what the AI is doing is the risky part or not.
Field: So this gives you an opportunity to expand your influence.
Zefo: It definitely does. And as I said, privacy pros - we already have the relationships across the company, we already know how to govern data. We already have these principles and how to bring a program to life across the world. And so why not us? I think we're the perfect people, and you just deepen those relationships you already had with the knowledge base that you can bring to them.
Field: In a role like that, what's your advice to privacy officers as they tackle these issues in their own roles?
Zefo: I'm going to go with Nike and say, "Just do it." I just started a movement; no one asked me to. I went to my contacts - my executive contacts and the next level down - and I said, "This is what I want to do." We started it, we kicked it off, people show up, they're excited. They're so happy to be asked. It's amazing. They get to play a part in how this rolls out. We're not telling them from the top down. And that's just a whole exciting new thing. We have people attending all the listening sessions just because they like to hear what other people have to say.
Field: And that is a blessing. Ruby, thank you so much. Pleasure to talk with you. The topic has been AI. We just heard from Ruby Zefo with Uber Technologies for Information Security Media Group. I'm Tom Field. Thank you for giving us your time and attention today. We're very grateful.