Managing Cyber Risk in a Technology-Dependent World
Fred Cohen on Cyber's Positive Impact and the Future's Distributed Information Society
Complexity is the enemy of security, and cyber technology grows more complex every day. Have we created a problem space in computing so complicated that we will forever be unable to safely operate in it for its intended purposes? Fred Cohen, CEO of Management Analytics, says that's unlikely.
Cohen admits, however, that information technologists often pile "crap on crap," and adds, "If you pile enough crap on enough crap, eventually it all falls down." But there is a replacement cycle, he says. And overall, "things have gotten better for humanity" because of the progress made in "information technology and, increasingly, cybernetic technology: communication, sensors, actuators and control."
In this episode of "Cybersecurity Unplugged," Cohen discusses:
- The need to manage complexity in order to keep it reasonably safe, especially in light of our ever-increasing dependence on information technology;
- His definition of risk and the "model-based situation, anticipation and constraint" we use to try to manage it;
- Why zero trust is a misnomer that "would destroy our ability to do anything" and should be replaced by the term "managed trust."
Cohen leads Management Analytics, an assessment, planning, advisory services, litigation support and angel investment company with a long pedigree in cybersecurity. He coined the term "computer virus" and wrote the first computer virus program in November 1983. Cohen led the team that defined the information assurance program as it relates to critical infrastructure protection. He has done seminal research in the use of deception for information protection, and the protection techniques he pioneered help defend more than three-quarters of all the computers in the world, including the actual core technologies used in antivirus mechanisms and other trusted platform modules. Cohen is a leader in advancing the science of digital forensic evidence examination and has been an information protection consultant and industry analyst for many years.
Steve King: [00:13] Good day, everyone. This is Steve King. I'm the managing director of CyberTheory, and this is our weekly podcast in which we explore all of the wonders of cybersecurity. And today, I am fortunate to have with me Fred Cohen, who is the CEO of Management Analytics, an assessment, planning, advisory services, litigation support and angel investment company with a long pedigree in cybersecurity. Fred is down in Pebble Beach, California. So, all you folks that wish you were on a golf course can only dream about that. And in fact, Fred was the guy who, as a graduate student at USC, coined the term computer virus and actually wrote the first computer virus program back in November of 1983. Fred went on to earn his doctorate in electrical engineering from USC to go with his master's in information science from the University of Pittsburgh and his undergrad in electrical engineering from Carnegie Mellon. Fred led the team that defined the information assurance program as it relates to critical infrastructure protection. He did seminal research in the use of deception for information protection, is a leader in advancing the science of digital forensic evidence examination and has been a top-flight information protection consultant and industry analyst for many years. His accomplishments are too numerous to recite, as I could be here all day, but they are legendary, and the protection techniques he pioneered now help to defend more than three-quarters of all the computers in the world, including the actual core technologies used in antivirus mechanisms and other trusted platform modules. To say that Fred knows something about computers and security is an understatement at best. So, welcome, Fred. Thanks for joining us.
Fred Cohen: [02:36] Well, thank you for that extraordinarily complimentary read of the information I've put on the internet.
King: [02:45] You and I have shared opinions about a lot of subjects in cybersecurity as we've gotten to know each other online here. And the one I obsess about the most these days is what I call the complexity problem. And my question for you is: have we created a problem space in computing so complicated that we will forever be unable to safely operate in it for its intended purposes?
Cohen: [03:17] Well, of course, as I think you probably agree, complexity is the enemy of security. Whether we've created a situation that forever will be unresolvable, I think that's unlikely. I have a positive view of the world. If you look at the progress of the world in terms of information technology and, increasingly, cybernetic technology (communication, sensors, actuators and control), things have gotten better for humanity. People are living longer, they're safer, they're able to do more things more easily than they were ever able to do before all this technology came into being. So the net effect of cyber technology to date is positive for humanity. It has its potential negatives. But in terms of complexity, what we've done largely is pile crap on crap. And if you pile enough crap on enough crap, eventually it all falls down. At the same time, a lot of it gets replaced. So there is a replacement cycle in information technology. A good example of that is the Trusted Platform Module, which is an integrity mechanism that's available in many of the systems out there. That's something that has improved the integrity of systems without changing all the other problems that they have. So because of the ability to partition things, it's entirely possible that we could replace thing after thing. Another good example is the web, which has created lots of problems and complexity, but it has also enabled a different modality than loading all the programs into one computer system and having them all run there. When they're not all running in the same place together, it's safer. So my point is you need to manage complexity in order to keep it reasonably safe. But safe for what purpose? Take a game that lets me do paintings; I'm not a professional painter, I'm just trying to splash things on the board. If that game fails for some reason, or somehow the information leaks, it's no big consequence. We have to adjust what we do based on the consequence.
The larger problem today, I think, is that the interdependency chain is so long and tricky and so out of control. This is something that technologies like the SBOM, the software bill of materials, and the DBOM, the digital bill of materials, are intended to work toward correcting: being able to trace everything back to its source, so that when you detect something that's become corrupted, you can fix the supply chain and do it efficiently. That's an example of a response approach instead of a complete preventive approach. So I think there are trade-offs. And that was a long-winded answer to your relatively simple question.
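Cohen's point about tracing a corrupted component back through the supply chain can be sketched as a walk over SBOM-style dependency records. This is a hypothetical Python illustration; the artifact names and the `affected_by` helper are invented for the example and don't correspond to any specific SBOM tooling:

```python
# Hypothetical illustration of SBOM-style provenance tracing:
# given "artifact depends on component" records, find everything
# downstream of a corrupted component, i.e., every artifact that
# transitively includes it and may need rebuilding or replacement.

def affected_by(corrupted, depends_on):
    """Return all artifacts that transitively depend on `corrupted`.

    depends_on: dict mapping artifact -> list of direct dependencies.
    """
    # Invert the graph: component -> artifacts that consume it.
    consumers = {}
    for artifact, deps in depends_on.items():
        for dep in deps:
            consumers.setdefault(dep, set()).add(artifact)

    # Breadth-first walk upward from the corrupted component.
    affected, frontier = set(), [corrupted]
    while frontier:
        node = frontier.pop()
        for consumer in consumers.get(node, ()):
            if consumer not in affected:
                affected.add(consumer)
                frontier.append(consumer)
    return affected

# Toy bill of materials: two apps built on a library that bundles
# a compression package.
sbom = {
    "app-1": ["libfoo", "openssl"],
    "app-2": ["libfoo"],
    "libfoo": ["zlib"],
}
print(affected_by("zlib", sbom))  # → {'libfoo', 'app-1', 'app-2'}
```

Inverting the dependency graph and walking it upward is the response approach Cohen describes: given one corrupted component, you identify exactly which artifacts need fixing, rather than trying to prevent all corruption in advance.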
King: [06:18] No, and it's only a partial answer. It's not a simple problem. And I agree with what you said, and I appreciate that. You've also said that it's likely that, over time, the people who design information systems in schools, government agencies, businesses and everywhere else will build even more dependency on information infrastructure into our society. The people of the United States are so dependent today that in some cases we literally cannot survive without information systems. And that's not my opinion alone, of course; I think it's yours, and I think people in government hold it as well. So, as we follow this new road of dependency, we may soon reach the point of no return. The question is: are we there yet? And if so, what are we doing to make sure that those systems aren't corrupted, or in danger of being disrupted?
Cohen: [07:27] So, I think that might be a little bit beyond the claim that I would make. When you look at survival, you have to breathe. If you can't breathe, you're in trouble in about a minute, 60 seconds, maybe 120 seconds, two minutes. If you don't have water for about two days, you're going to die. If you don't have food for about two weeks, you're likely to die. And the scale goes up from there. People lived without electrical power for all of time, and we can do it again if we have to. But without water, we've got a serious problem. And the problem is that clean water is supplied largely by technology, and this has been true since Rome, so it's not exactly new. We've always had dependencies, and the dependencies drive more toward the issue of civil society than toward the survival of individuals. What happens historically is that there are those storms in the north-central United States, and the power goes out. The farmers are out there, and they're on their own, and they don't have enough power, and their cows are going to die eventually. So the people who go out to fix the power lines drive out there, and eventually, if it's been too long, some farmer will come out with a gun pointed at them and say, "Fix mine now." That's what happens. Civil society breaks down if it becomes untenable to survive for long enough. At the same time, we still have wars around the world, we have plenty of people living without electricity, or living on not very much, and so forth. So whether we will have a massive die-off or something like that, that's a different issue from the information technology thing. The other issue of civil breakdown is that if you lose the monetary system, it's a huge problem, because there's no way to exchange value, and the vast majority of people in the United States today are not farming. They don't have anything they can trade for something of value to somebody else.
So, knowledge that we have, our ability to apply our expertise is of little value in that context. On the other hand, I do have a degree in electrical engineering and I know how to make motors work. So, there are lots of things we can do to survive as people and as small groups, but the large-scale society is a huge problem.
King: [09:59] Yeah, it is. And societal breakdown is kind of the point. I mean, whether it's water, whether it's heat, whether it's any form of energy. And as you say, we've lived without electricity for literally thousands of years as a species, and we can certainly do it again for some time, but civil society probably can't. And that's where, when you remove barriers of any kind, we're left with the behavior we're now witnessing in many major cities throughout the United States. I worry that we've built a house of glass that is so glassy that it only takes one stone to create that kind of chaos. I'm not sure who's going to enforce laws and, if I can use the phrase, "protect society" during the consequences of that sort of thing happening.
Cohen: [11:16] Yeah, I don't think it's that dire. You know, we looked at this in depth. I was at Sandia National Laboratories just before Y2K, and slightly after it. And one of the things I participated in was the President's Commission on Critical Infrastructure Protection; as part of the technology component of that, we looked at power and water and other infrastructure. And people seem to forget basics. Water runs downhill. So, basically, if you have water sitting up in a tank somewhere and all the power goes off and all the automatic controls go off, human beings can walk out there and turn valves. And that's still true today. Even if the valves don't have a handy way to turn them, which all of them today typically do, that doesn't change the fact that somebody can come out and change the valve and actually make the water flow. Getting the water back up to fill the tank after it starts to empty, that's a different issue that requires a pump. But we can make hand pumps as well. So it's not at the level of dire where we're going to run out of water. And, in fact, if you look at water systems, water systems are highly distributed. Basically, every little town has a water system; I think there are well over 100,000 different water systems in the United States, and then there's groundwater and so forth. Take my house. You talk about all these things, and I've thought through these for my life and my house. In my house, we're on granite, and the granite has water that flows through it naturally. So we have a pump that pumps the water out into our front yard, and then out from there, down to wherever it goes next. But that water can pump into a tank, and we have a tank. So I can pump the water into the tank, and then I have to purify the water; I can boil it, and so forth. Lots of people have that condition. When you talk about power infrastructure, it used to be much more centralized than it is today, especially with solar power.
In California, for example, solar power is effectively required on lots of things. So we have solar on our house, and we don't have a battery, but that's all right. During the day, if necessary, I can get power out of it. And I can get old car batteries and charge them up if I have to. So we can live the lifestyle pretty well. Now, admittedly, if you're in Las Vegas, you're not going to be able to dig a hole and get water. So there's going to be migration, and there already is migration because of climate issues. And water, in particular, is becoming a huge problem. On the other hand, we have a massive excess of water in the southeastern part of the United States, typically associated with climate change; but whatever the cause is, we could make pipes that pump the water across the country, and we could run those pipes by solar, build it distributed, with no centralized control at all, everything locally automated. So it's not that we can't do it. And it's not that we aren't already starting to do it; we are. But central control has its efficiencies. The inefficiency in the power grid owing to central control has to do with distribution: you lose a lot of power when you send it through the wires from place to place. And then there's another challenge associated with that: maintaining all those wires is problematic, so they start forest fires and they have outages and so forth. But all of this is addressable. And it's in the process of changing, because we're creating a more distributed society in the information arena as well as in the power arena. In the water arena, it's already highly distributed, and so forth.
King: [15:00] Yeah, you've been around cybersecurity since near the beginning. In fact, one could argue you were part of it, you've seen a lot of changes, yet, you've also seen a lot of the same things repeating themselves, which is what drives me crazy. What is, in your estimation, our greatest risk today and one of the most significant threats we haven't yet gotten our hands around?
Cohen: [15:26] So, that's one of my favorite questions. And this is one of those places where I think we're likely to part company until you hear what I say. Risk is the R word. It's a four-letter word that ends in K. It's a word that people misuse all the time. Risk is defined in various ways, but the most lucid definition is something like uncertainty about the future. So when you talk about a high risk, what you're talking about is a large range of possible futures. And that includes good futures and bad futures. So when you talk about the biggest risk, you're talking about the biggest variance in the difference between our expectations and what's actually going to happen. That's a tricky business. We use something called model-based situation, anticipation and constraint to try to manage risk, and we do that whether we know we're doing it or not. That's the name that I gave it 20-some-odd years ago, and I've been describing it since. The issue has to do with what is anticipated and constrained by the acts that we take, not just now but over time. And then there's the unconstrained future that we anticipate, that we can't keep from happening because there are other actors out there. And then there's the unanticipated future. In terms of the unanticipated future, I haven't been surprised by anything I've heard in cybersecurity since the 1980s. And when you talk about the same old stuff, it's all the same old stuff. It's like bad code. People make mistakes, and those mistakes turn out to have knock-on consequences: input overruns, failure to check input, inconsistency between code, incompatibility between modules. There's a list of these things that I wrote and published in the late 1990s. I think it's 140 different attack methods and 84 or so different defense methods. And I don't see anything that it's missing even today. I haven't had to add anything. And even when I wrote it, it wasn't new; it came from going through history and making sure that we did our job.
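Cohen's definition of risk as uncertainty about the future, a spread of good and bad outcomes rather than just a chance of loss, can be made concrete with a small sketch. The scenario probabilities and outcome values below are invented purely for illustration:

```python
# A minimal sketch of risk as uncertainty about the future: given
# possible futures with probabilities and outcome values (good or
# bad), "risk" is the spread of outcomes, not just the chance of
# loss. All scenario numbers here are made up for the example.

from math import sqrt

def risk_profile(scenarios):
    """scenarios: list of (probability, outcome_value) pairs."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    mean = sum(p * v for p, v in scenarios)
    variance = sum(p * (v - mean) ** 2 for p, v in scenarios)
    return mean, sqrt(variance)  # expectation and spread (std dev)

# A "high risk" venture can have the same expectation as a safe
# one, with a much wider range of futures in both directions.
safe = [(1.0, 10.0)]
risky = [(0.5, 110.0), (0.5, -90.0)]

print(risk_profile(safe))   # → (10.0, 0.0)
print(risk_profile(risky))  # → (10.0, 100.0)
```

Both ventures have the same expected outcome; the second is "high risk" only because its possible futures are spread far wider, including very good and very bad ones, which is exactly the variance framing Cohen uses.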
I think the biggest problem we have is twofold. One of them is that we don't have people who bother to learn the history, and read about what happened before, and take it into account. So it's just ignorance of history. We don't teach it properly in the schools, and the books don't go into the history of all this stuff. You'd have to read all these articles that people wrote 50 years ago, which are not on the internet in video form. So you actually have to go and read them. And people don't like to do that, or want to do that, or they're not told to do that. So there's that problem of history. Reasonable and prudent is this thing that involves people with expertise making decisions in light of the facts. So the history is there, and the second thing we're missing is that reasonable and prudent part, where knowledgeable people are engaged to help make decisions about what to do. When I say reasonable and prudent, that's what is not negligent. When you're talking about unreasonable, that's like having sensor after sensor all pointing at the same spot up on a wall, not differentiated from each other. That's just unreasonable. It's too much. It's beyond what's reasonable. And the other side of it is imprudent. This is like where you have a million dollars sitting on a table behind a regular piece of glass, facing an alley with no curtain drawn, and there's a rock in the alley, and there's an alarm system with no response. Somebody walks along and says, "Oh, look, there's all that money. Here's a rock." They throw the rock through the window, grab all the money, put it in a sack and walk away. They're done. That's the imprudent side. So somewhere between imprudent and unreasonable there's reasonable and prudent, and that's a judgment call. And we're not doing a good job of making those judgment calls. And then the third leg of that stool is what I'll call bad management, for lack of anything else.
The decision-making and the execution of those decisions: I'll frame them in a corporate environment. At home, you can do whatever you want. But a corporate environment is a control system, a multi-level control system. It takes information from outside that should inform decision-makers; they should have a decision-making process; they should be sharing information with outside sources and exchanges if they're at the top of the hierarchy, whatever that may be; and then they should be sending information down and getting information up, and using that to make those decisions. And they should make good decisions, with a sound basis, in a timely manner. That should happen at every level, so that at the next level down, they get information from their executive management, and they're doing whatever they're doing, and they're also taking into account what's happening out in the world that they can see, communicating outward, listening when there are other sensors and things going on, communicating to their people and their systems what should be happening, responding in a reasonable fashion and reporting back up. These control systems are how we manage: Plan, Do, Check, Act, from the ISO standards. And we don't really have those in place. So we're just not managing it in a reasonable way. There's nothing new here, there's nothing tricky. It's not something that we can't do. You look at every attack that you see, and people say, "A highly sophisticated attack." And you look at it and say, that's not sophisticated. I saw that in 1992. Same old stuff.
King: [21:35] And so, speaking of school, you've taught for years, and you've seen a lot of cybersecurity education programs and curricula and so forth. I guess what I'd like is your assessment of where we are today, in not just the delivery but the design of those systems and curricula. And if you'd also like to tell me what you think about how we're either succeeding or failing at the dissemination and distribution of that education to the people who need it, I'd appreciate that as well.
Cohen: [22:18] Sure, of course, you're in the ed business as well. That's one of the key things.
King: [22:24] We've been designing ... so, full disclosure, we've been designing our CyberEd.io system for a year. The reason it took so long, for the most part, was that we had great difficulty finding a platform that we thought matched our expectations and requirements. And then, once we did, the rest of it was sort of easy in terms of defining what the content should be and where the emphasis should be. And I say easy because it's a) my opinion and b) that opinion is supported by lots of ... we have an advisory board of 41 CISOs, I think it is, who participated: "Hey, if I were to do this, I'd want to do it this way. Here's where the emphasis needs to be," et cetera. I have my own view of where that emphasis needs to be. And the things that are being taught are not being taught in the competitive offerings that are in the marketplace.
Cohen: [23:35] So I'll give some full disclosure too. But first, Moodle is this open-source learning management system, and that's a place a lot of people start as a platform, because you can customize it and a lot of the basic stuff is covered. It does save a lot of time and effort, so that's not a bad one for people who want to do that. That's the learning management system at the baseline. You need to do better than that if you want a system that does things like laboratory environments. What you want, ultimately, for successful education is mixed mode: combining visual with sound, with motion, with activities that people perform, with written work, so that they get the information in the different modes in which they can see things, and they fuse it all together. Education also takes time, and I want to differentiate between training and education as a starting point. Typically, when we talk about education, we're talking about preparation ahead of the time of activity; when we talk about training, we're usually talking about something much more specific that has to do with: when you see this, do that. So you have trained response, as opposed to thoughtful processes. And in any process like cybersecurity, in any substantial entity, it's a cooperative process. It's not just one person. So you also have to have education, and then practice, and working together with others in a collaborative way. So it's a more complex thing to do education well than to do training well. Having said that, and continuing my full disclosure: some years ago, probably four years ago by now, I got together with a couple of folks at ICE-AI, and they had a learning management system they were developing. So we started to develop cybersecurity courses for that learning management system. That's now available, and I think you can actually get credits from Webster University for some of the courses in that curriculum.
And that's an interesting one, because before that, along with Tom Johnson, I was working at the University of New Haven in 2000 to create a graduate educational program for cybersecurity, and then a master's and a PhD, and a couple of other programs in there. We got licensed and then acquired by Webster University, and we brought the programs there. So, at Webster, starting around 2001, and Tom was the real person doing most of the work there, that program got built and expanded. We had laboratories that you could use remotely; we had some physical laboratories as well as informational laboratories. And it got to the point where they had 250 students at any given time. I believe at that time, that was the largest master's program in cybersecurity anywhere in the world, and I'm pretty sure it expanded from there. So, a very successful, growing program. The problem, then, when we start to look at it, is how many experts of what quality do we need, and the answer comes out to be that 250 at a time won't get anywhere close; you can't even start. So, there are different modes of doing this. You can try university education as it historically has been formed, where you have PhDs, and they have grad students, and they generate more PhDs and more master's-level expertise, and then undergraduates who are learning at the undergraduate level. To develop that in enough volume, you need to generate those professors. And generating a professor, from the time they start their first cybersecurity education at the beginning of college to when they have their PhD and are out there and start teaching, hopefully with some practical experience, we're talking about 15-20 years. And each PhD can only take on maybe at most a dozen more PhDs a year and have them be good.
So that means that over that period of years, if you count it all up from the number we started with, you're not going to get anywhere near the hundreds of thousands per year needed to educate in the U.S. And the U.S., as a portion of the population of the world, is 300-plus million out of 7 billion, so that means you need 20 times that. You just can't get there from here in a timely fashion with that educational approach. And that's why we started to look more seriously at online education, where we can amplify the higher-quality expertise in larger numbers. You won't get the same quality of education out of that program, but you will be able to get the volume. So you can still run the university programs and use those for the top end. But then, for the rest of it, you have this not-quite-as-good educational experience, and you can start getting the volume up. When you get to the online platforms, which is what you've done and what ICE has done, you can easily educate millions of people every year and get them through one year's worth of education, if you pick the right things to educate on. And you want to fuse that with hands-on experience, because cybersecurity, first of all, is an enormously broad field, but also, it's not just something where you think and talk. Stuff happens, and you have to respond. So you need to learn how to respond, how to be good at it, how to be fast at it. And then the other thing is, when I was younger, when you were younger, the total number of people with different kinds of expertise involved was relatively small; you didn't have to know that much compared to today. Today, suppose you're going to work at the bottom level of technology, addressing specific attack mechanisms operating on specific platforms, and trying to mitigate those without going through a standard clean-everything process.
It's an enormous amount of knowledge, more than any one person can ever have, more than 100 people can ever have, to deal with that. So what that means is there's going to have to be a lot of specialization. You're going to have to train people: beyond the educational process, where they understand the overarching issues, they need the training on the specific platform. And that training doesn't last, so you have to continue to retrain them. That means certification programs. The idea behind the CISSP, the Certified Information Systems Security Professional, and the CISM, the Certified Information Security Manager, and the other certificates, when they first created those programs, was to fix that problem in the same way we have professional engineers: to ultimately get to some sort of a licensing system and an educational process. I was relatively critical of it early on. But the one thing I noticed is that everybody I've talked to who has a CISSP knows the words that I'm using. So, getting the basic language down is one of the fundamental problems we face. Because of the heavy marketing in cybersecurity, everybody's trying to create new words. Unlike the well-established engineering fields where I grew up, where words have meaning and they don't change, that's not what we have in cybersecurity. We have a new one every day; it's a breakthrough in marketing technology. I don't think you want to get into a discussion of zero trust, but that would be an example.
King: [31:40] Sure. And there's a reason for that that's very practical. If you want to stay in business, you had better figure out how your product maps to a zero trust solution for whomever you're pitching to.
Cohen: [31:58] We'll agree to disagree there. Because, as you're well aware, when I hear the words zero trust, I explain that we cannot live in a world of zero trust. Number one, if you actually achieved it, you would destroy our ability to do anything. But second of all, the definitions that each person has of zero trust seem to be different from each other. And the definitions that are asserted as widely accepted are internally inconsistent and unachievable. So that's why I struggle against that misnomer. Because I have this view, and you're well aware of it, that when we create those misnomers and those misimpressions, and we use them to hype things, we get into the hype cycle, and people spend more money on more stuff that doesn't get the job done. But it's also an unethical and immoral disservice. It violates the codes of ethics: if you're an engineer, or a member of the IEEE or those organizations, they have codes of ethics, and the codes of ethics say, "No, you can't mislead the public. You need to communicate in ways that are honest and sincere and don't mislead people." So those are my feelings about it. And I'm not ignorant of marketing. I've been an industry analyst for many years. I worked at Burton Group, where we surveyed all this stuff every year and wrote reports, and I struggled against it then, and I struggle against it now. We used to call it a breakthrough in marketing technology. So I understand it. But that doesn't remove my responsibility, in my mind, for trying to defend against the misnomer and the misimpression and the misleading of people with regard to what we can do and what we should be doing. I think what we need to do is manage trust, and we should call it managed trust. All of the techniques of so-called zero trust are trust management approaches. They're not zero trust approaches at all. But okay, that's my view. Tell me yours; I'm happy to hear your view.
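Cohen's "managed trust" framing can be sketched as policy code: every request is evaluated against how much trust has been established in the requester versus how much the consequence of the action demands, rather than trust being driven to zero. The principals, actions and numeric trust levels below are invented for illustration; real trust-management systems are far richer:

```python
# A toy sketch of "managed trust": access decisions compare the
# trust established in a principal against the trust an action's
# consequences demand. All names and levels here are hypothetical.

TRUST = {  # trust established per principal (e.g., via attestation)
    "alice-laptop": 3,
    "build-server": 4,
    "guest-kiosk": 1,
}

REQUIRED = {  # trust demanded, scaled to the consequence of failure
    "read-public-wiki": 1,
    "deploy-to-prod": 4,
}

def authorize(principal, action):
    """Allow an action only if established trust meets the need."""
    have = TRUST.get(principal, 0)   # unknown principals: no trust
    need = REQUIRED.get(action, 5)   # unknown actions: demand most
    return have >= need

print(authorize("guest-kiosk", "read-public-wiki"))  # → True
print(authorize("alice-laptop", "deploy-to-prod"))   # → False
print(authorize("build-server", "deploy-to-prod"))   # → True
```

Note that the system never operates at literally zero trust: even the low-consequence wiki read proceeds only because some trust was established in the kiosk. The policy manages trust; it doesn't eliminate it.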
King: [34:14] My views are exactly the same. We've got a semantic issue, and with that same issue you will find folks like Chase Cunningham and John Kindervag and Eve Maler, three of the folks who were part of the development of the whole zero trust notion, which was done at Forrester, which is essentially a marketing company. So I suppose it shouldn't surprise you, me or anybody else that we ended up with a misnomer, as you point out, like zero trust, for what is actually managed trust. In terms of having an issue with that, I don't. And we've gone so far as to invest in, and establish, the CyberTheory Institute as a way to push back against bogus marketing around that topic, and to present only the truth around what we mean by zero trust. And we do that with 17 people who are part of that, serving in fellowship roles and running around the country and the world, trying to explain what it is we mean by, let's call it, managed trust. Now, it would be silly of me or anybody to change the name from zero trust to managed trust, for a number of reasons. So we've gone with zero trust. But just to be clear, between you and me, we don't disagree at all about what the objective of our program is, what we think is reasonable and practical and doable, and the way you go about it.
Cohen: [36:15] I understand and appreciate that, but I'm going to continue to push you and all these other people, and it's okay for me to tilt at windmills, the quixotic approach. The challenge is, among other things in cybersecurity these days, we're in a struggle of influence operations and deception: disinformation, misinformation, frauds, lies, all these things that focus on trying to get things into people's minds that are favorable to the person doing the influencing and harmful to the people being influenced inappropriately. When I talked about information protection long ago, it was about keeping people from harm. And this is an example of the harm of disinformation: human beings being harmed by having misimpressions, the destruction of civil society and other similar things. So the reason I take this position, which some people think is hard over, is that I think this is an example of disinformation that we're letting in the door. Instead of standing up to it and being an example of how to stand against disinformation of all sorts, we're saying, "Our disinformation is okay, because it's convenient, and we can make money at it." And that is the antithesis of what I think is the appropriate approach. Now, I understand people are different. We're now having a monthly group call at all.net, my website. We sponsor these things, so anybody who wants to can join; they just have to be on the mailing list so they can get the URL. We had one on trust last month, with, among other people, Ron Ross from NIST, who knows a lot about this, and a lot of different people from across the spectrum. And one of the people I respect very greatly in this field, Doug Simmons, was there. He's been a consultant for many years; he was the head of consulting in cybersecurity at Gartner for many years, and he worked with me at Burton Group. I've known him for 20-plus years.
And he points out that with this expression, they're able to get more executives to move further toward doing a better job of it, by getting them to adopt some of these techniques, and that is helpful to that end. So I understand that, I appreciate it, and I respect it. I just disagree with it, which is fine.
King: [39:03] Absolutely fine. However, Fred, and I'm conscious of the time here, since I see we're 15 minutes over our allotment, I think what this gives rise to is another, at least half-hour, session between you and me, to let me try to explain what we're trying to accomplish within what we call zero trust. You will see that it's absolutely no different from your thesis, or contention, about what we should be doing from an information protection design point of view. I could be wrong, but that'd be a great way to find out.
Cohen: [39:50] I'm well aware of the technologies involved, and I think most of those technologies are good things to apply, depending on the circumstance. So I don't have any disagreement about most of the attempts to implement positive change through identity management and disaggregation of risks and verification and authentication and so forth. But again, the misnomer aside, identity management is almost entirely what the subject is about. So I agree the techniques are useful: a subset of the 100-plus techniques, well, 100-plus answers to questions, each of which involves multiple techniques, that constitute what I would call a reasonable and prudent approach to cybersecurity. So I don't think you're going to explain to me anything that I don't already agree with. My fundamental disagreement is with the underlying basis of the notion of zero trust and the misuse of the terminology.
King: [40:56] Okay, well, that makes it much simpler, then.
Cohen: [40:59] But that doesn't mean we shouldn't have another call some other time. I've got to tell you, Steve, LinkedIn and these other media sites are poor places to express things, because you get so little information across. In our discussion, it takes a couple of minutes of talking to get a point across, and doing it there is even worse. That's why I write articles every month: because I want to express myself in a more in-depth way, subject to the ability of my fingers to keep pushing buttons.
King: [41:34] Well, why don't we take zero trust just on the face of it? And that'll be our topic for next time.
Cohen: [41:49] It's going to be a boring discussion, if all we do is agree.
King: [41:53] Well, I can't imagine, Fred, that you and I will ever agree on everything, but we've gotten much better. Thank you to our audience for spending 45 minutes of their day; hopefully, there was good stuff here for them. I thought it was illuminating and useful, and it's always great to hear Fred's mind at work. I respect and admire him immensely. We will go at it again in a couple of months, and we'll talk about removing excessive trust from our networks and see where we go with that.