This episode has been automatically transcribed by AI; please excuse any typos or grammatical errors.
Steve King 00:13
Good day, everyone. This is Steve King, managing director at CyberTheory. We are running our podcast today around a topic we call secrets in code. Today's episode will focus on zero-day supply chain vulnerabilities. With me today is Moshe Zioni, the VP of security research at Apiiro, an early-stage cybersecurity company founded in 2019, whose purpose is to help security and development teams proactively fix risk across the software supply chain before releasing to the cloud, which is very cool in my estimation. Backed by Greylock and Kleiner Perkins with a $35 million A round, I think they are well on the way to a market leadership position in the space. Some of what they've done so far: they are the current winner of the pretty prestigious RSA Innovation Sandbox award, they were named a Gartner 2021 Cool Vendor in DevSecOps, they detected a zero-day supply chain security vulnerability in the Kubernetes-based Argo CD platform, and they have been a frequent contributor to the NIST 800-218 Secure Software Development Framework. Moshe has been researching security for over 20 years across multiple industries, specializing in penetration testing, detection algorithms, and incident response. He is a constant contributor to the hacking community and has been a co-founder of the Shabak On security conference for the past six years. So welcome to the show, Moshe. I'm glad you could join me today.
Moshe Zioni 02:08
Thank you, Steve. I'm very happy to be here. Thank you for having me.
Steve King 02:11
Sure. Let's jump right in. We all know that traditional AppSec is failing modern enterprises and that we've got many hidden risks in open source and API security. In fact, you published a report, I think entitled Secrets in Code, which eloquently describes the business and industry impact of your research, along with some actionable insights for practitioners. Can you give us an overview of that?
Moshe Zioni 02:40
Sure. So as a backdrop, secrets in code is something that many developers and security professionals have been pointing out in recent years, but of course it is as old as code itself. Simply put, developers are putting strings or artifacts into their code that are there without a real reason, or at least without a secure reason, when they could do the same thing with a secure string or one of the alternatives we have today, like vaults. Instead, they are using hard-coded secrets. A secret can be a password or a token that can be used against a cloud service, something in that spirit. Sometimes they neglect it in the code, and once that code is open-sourced to the world, another hacker can pick it up from the source itself and use it for their own good. The permissions or authorization you get from those tokens of course varies between suppliers and providers, but in general the most common examples are tokens to a specific API service that can give you credentials to access the organization's cloud services and cloud resources. So this is the backdrop of why we went through the research and eventually produced the report you just mentioned. In this report we took something like 20 different organizations of different scales and from different industries, and across those organizations we scanned all of their commits pretty rigorously. Commits are the individual pieces of code that are pushed into a repository, and we reached two million commits overall. From those commits we got a very good grasp of how secrets behave in code, how developers wrongly put their secrets in their code, and what we can learn from those kinds of behaviors: is there something you can point out as a pattern? The result, of course, is the report, and as you can guess there are some patterns that are most interesting to explore and to feed into the decision-making processes of security professionals and organizations once they have their strategic plan put into place.
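To make the pattern Moshe describes concrete, here is a minimal Python sketch of a hard-coded secret next to the safer alternative of resolving the token at runtime from the environment or a vault; the variable names and placeholder value are hypothetical, not taken from the report.

```python
import os

# The anti-pattern: a secret hard-coded into source and committed with the code.
# (The value below is a made-up placeholder, not a real credential.)
API_TOKEN = "sk_live_EXAMPLE_0000000000000000"

def auth_header_insecure() -> dict:
    # Anyone who can read the repository can reuse this token.
    return {"Authorization": f"Bearer {API_TOKEN}"}

# The safer pattern: resolve the secret at runtime, typically injected by a
# secrets manager or vault, so nothing sensitive ever lands in a commit.
def auth_header() -> dict:
    token = os.environ.get("MY_SERVICE_API_TOKEN")  # hypothetical variable name
    if token is None:
        raise RuntimeError("MY_SERVICE_API_TOKEN is not set; fetch it from your secrets manager")
    return {"Authorization": f"Bearer {token}"}
```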
Steve King 05:21
Yeah. And are there quite a few downstream dependencies on other open source programs that are called by some of these APIs, and other open source code that no one has any idea what it is? I guess the question is, how do we vet that? Is it even possible to vet the percentage of code that we reuse from these libraries?
Moshe Zioni 05:52
Wow, that's a great question, and of course a very complex answer; I'll try to do it briefly. The short answer is that you can at least assess the risk of the specific packages or dependencies that you use and import into your code. There is a limit to it, of course, because everything can be seen as a risk. What we are proposing, and we actually have an open source project for that named Dependency Combobulator, is to do exactly that: take into account multiple intelligence feeds and the metadata of the packages and try to assess the risk of using a given import or package. There are different ways to go about this kind of intelligence over packages: you can scan them, or you can actually go through a code review practice with them, but that is of course a very laborious and resource-expensive effort to apply to every open source dependency you are using, and that number is only accumulating over time and, from our perspective, never going down. We all see the trend of using more and more open source, and there is good reason for that: it saves a lot of time, it becomes a standard, and by that you can implement and produce better and faster software for production. So we don't see a retraction from this trend; quite the opposite.
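As an illustration of the kind of metadata-based assessment Moshe is describing, and not the actual Dependency Combobulator logic, here is a toy Python sketch that scores a package's risk from a few commonly cited signals; the metadata fields, weights, and thresholds are all assumptions made for the example.

```python
from datetime import datetime, timezone

def score_package_risk(metadata: dict) -> int:
    """Toy risk score for a dependency based on its metadata (higher = riskier).
    The fields and thresholds are illustrative, not any product's real scoring."""
    score = 0
    # Very young packages are a common vehicle for typosquatting and dependency confusion.
    created = datetime.fromisoformat(metadata["created_at"]).replace(tzinfo=timezone.utc)
    if (datetime.now(timezone.utc) - created).days < 30:
        score += 3
    # A single maintainer means a weaker bus factor and account-takeover posture.
    if metadata.get("maintainer_count", 0) <= 1:
        score += 2
    # Known vulnerabilities already reported against the package.
    score += 2 * len(metadata.get("known_vulnerabilities", []))
    # Install-time scripts can run arbitrary code on the build machine.
    if metadata.get("has_install_scripts", False):
        score += 2
    return score

# Hand-made example metadata, purely for demonstration.
pkg = {
    "created_at": "2021-01-15T00:00:00",
    "maintainer_count": 1,
    "known_vulnerabilities": [],
    "has_install_scripts": True,
}
print(score_package_risk(pkg))  # prints 4 with the values above
```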
Steve King 07:29
Yeah, and I understand the need. If we're driving so desperately toward digitalization and the fourth industrial revolution and all of that, I see the need for agile development, of course. But at some point, to do it in a safe context, doesn't the cost far outweigh the benefit? It's amazing to me. I know you guys have also developed some best practices when it comes to ethically reporting and patching these vulnerabilities. Can you help our audience understand what a few of these might be? And do they include, if we run into a secret, for example, or a dependency you're working on, alerting the DevSecOps team? How does that work?
Moshe Zioni 08:25
Again, this is a very good point in both cases, once you find a vulnerability or you find a secret, which can be seen as a subset of a vulnerability in code, some kind of weakness that you are exposing. In general, yes, there is a responsible disclosure process. If you are internal to the organization, this should be easy for you: you should contact your immediate AppSec engineer or AppSec representative and let them know, so that they can respond to this kind of incident. They need to, first of all, remediate, meaning revoke the token, rotate it in a more secure way, and fix the code to support that. Dependencies are much the same: if you find a dependency with a vulnerability, you report it to your closest representative. If you are external to the organization, that's a bit more complicated, but fortunately we have many processes around that. It's collectively called responsible disclosure, meaning you are disclosing a vulnerability, or maybe a weakness such as the secret we mentioned, to an organization: hey, listen, you have this kind of issue. You also sometimes explain why it is an issue and what kind of business impact the issue has on the organization. Once you have that, you fill out a short report, maybe an email; maybe they have some kind of bug bounty program, which is just another way to support these disclosures. By that you can disclose this kind of information safely to the organization. More mature organizations will have a security contact on their front page just for security matters, and of course every respectable corporation will have this kind of process one way or another.
Steve King 10:25
Yeah. And I assume that means we want to work only with mature organizations that have ways of interacting and being contacted, to make sure we're able to do that responsible disclosure and have them act on it. Right?
Moshe Zioni 10:44
Yeah, absolutely. This is one measurement you can use. Take the dependencies we just mentioned: to measure whether a dependency is mature enough in terms of security, you can see whether there were any vulnerabilities in the past, and you can see whether they have a process installed for contacting their security advisory or security board. By that you can assess at least their seriousness and their maturity in terms of security processes. This is a great indicator.
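A minimal sketch of one such maturity check, assuming the dependency is hosted on GitHub: look for a published SECURITY.md through the public contents API. The owner and repository names in the usage line are placeholders, and in practice you would combine this with other signals such as past advisories.

```python
import urllib.error
import urllib.request

def has_security_policy(owner: str, repo: str) -> bool:
    """Return True if the GitHub repository publishes a SECURITY.md at its root,
    one rough signal of the disclosure maturity discussed above."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/SECURITY.md"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no SECURITY.md published
            return False
        raise  # rate limiting or other errors should surface, not be swallowed

if __name__ == "__main__":
    # Placeholder names; substitute the dependency you are evaluating.
    print(has_security_policy("example-org", "example-repo"))
```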
Steve King 11:14
Yeah, I must agree. So are you attempting to do that in an automated context, or do you simply return the discovered dependency to a manual process where people then have to look it up?
Moshe Zioni 11:30
So we do both; it really depends on what the customer needs, and you can set it up as you will. If you'd like, it can just be an alert that notifies you about this kind of discrepancy, maybe a vulnerability found in a dependency, so you'll be able to act on it manually. For many vulnerabilities there are also automation processes in place, so you can set it to be automatic and just forget about it. Most organizations will have some kind of mix: for high-impact vulnerabilities, excuse me, high impact on the business, they would like to assess them manually, because either way things can break. For example, if you just need to update a dependency version, you will need to have a human test it first; maybe in the future that will be even better, and we'll be able to reduce that kind of effort as well. But currently, every application with high business impact will have to have some kind of manual analysis and manual testing before releasing it to a stable state. You can choose, at least for the time being, to take something as a beta for testing, or maybe something more cutting edge; someone more comfortable with that risk-return tradeoff can automatically update to the latest version and just use it as is.
Steve King 12:55
Yeah, I got it. Ransomware continues to be a thorn in everybody's side and is growing like crazy, for all the obvious reasons. You've got advice on how organizations can best mitigate future ransomware attacks, specifically around supply chain and open source security. I know a lot of people who would love to hear the answer to that question: how do you mitigate future ransomware attacks?
Moshe Zioni 13:22
When we are discussing ransomware, or if we can generalize it a bit to any kind of malware activity: malware can be directed and implemented in many ways, not just as ransomware, though I agree with you that ransomware is the most prominent attack vector once you have a foothold in the organization. What we are foreseeing and what we are proposing, especially around supply chain ransomware attacks, is to defend your code as early as you can. There is a trend called shift left, meaning you want as much of that validation done as soon as possible, not just once you are going to production. The second rule of thumb is that closer to the actual production systems, you can lock down the versions of the specific dependencies you have. By that, even if someone has, let's say, a man-in-the-middle attack over your dependencies, you'll be able to validate by the signature and the fingerprint of those dependencies that you are actually getting what you're expecting. For example, a very common mistake that can potentially lead to those kinds of attacks is leaving the dependency free to pull down the latest version instead of the specific version you know is safe to use; by that, every time a build runs, it requests the latest version without acknowledging what certificate or fingerprint that version should have. The fix is called version locking: you lock the version, and on many package managers you can also add the actual fingerprint of the package. By that you ensure that at least you won't be harmed by a new kind of attack through the supply chain, through dependencies, if that makes sense.
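A small sketch of the version-locking idea, verifying a fetched artifact against a pinned SHA-256 fingerprint before it is used; the file name and digest are placeholders. Package managers offer the same protection natively, for example pip's --require-hashes mode or the integrity fields in npm's package-lock.json.

```python
import hashlib
from pathlib import Path

# Pinned fingerprints for the exact artifacts a build is allowed to use.
# The file name and digest below are placeholders, not a real package.
PINNED = {
    "some_library-1.4.2.tar.gz":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_artifact(path: Path) -> None:
    """Refuse any dependency artifact that is not pinned or whose hash doesn't match."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = PINNED.get(path.name)
    if expected is None:
        raise ValueError(f"{path.name} is not in the pinned allow-list")
    if digest != expected:
        raise ValueError(f"fingerprint mismatch for {path.name}: got {digest}")
    print(f"{path.name} matches its pinned fingerprint")
```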
Steve King 15:27
Okay. How much post-sales support and training do you guys have to provide to get your customers to fully extract value from the solution?
Moshe Zioni 15:42
I would say not much. First of all, we are in very close contact with our customers; as a startup, of course, we have the agility to fit their needs pretty quickly. There is a rule of thumb that if something doesn't make sense the first time you look at it, maybe it will make sense the third or fourth time, but that is something we refrain from. We try to make the system approachable, meaning the user experience itself should reflect organizations' native flows rather than forcing organizations into our will, our own processes, and what we think they should do. The second thing we do is that the whole system is interconnected with your current processes, so it won't make up new processes if you don't want it to. The workflows we can build for you are automatic and fit your ticketing system and your instant messaging systems like Slack, Teams, and so on. By that we live within the ecosystem instead of instructing it.
Steve King 16:45
Do you think you can scale that as you grow?
Moshe Zioni 16:49
Absolutely. The way we are doing that: first of all, we are cloud native ourselves, so if we have any kind of scalability requirement it is pretty easy to handle; DevOps teams are pretty used to that. We are also always preparing ourselves to do much more than we are currently handling. And of course we are taking on more and more customers, and we have huge customers in our portfolio, so we are pretty confident in that. But we are always checking those kinds of assumptions; we don't want anyone to be held back by resources or anything similar. The process itself is pretty easy: you can be ramped up onto the Apiiro platform in less than a day, sometimes even in a matter of a few hours, depending on your size, and the analysis itself will kick in as soon as possible, so you will have your repositories analyzed.
Steve King 17:53
What size customer is your ideal prospect or your ideal end user, in terms of number of people? Obviously they have to have a DevSecOps team; how big does that have to be?
Moshe Zioni 18:09
Yeah, so this is the funny thing. First of all, we are seeing a lot of different customers in terms of structure. Sometimes they will have their own DevSecOps team, sometimes they will have a DevOps team and not a DevSecOps team, and sometimes they won't have either and maybe just a single AppSec engineer or AppSec professional to go about the work of application security. The whole purpose of the Apiiro system is to save you those kinds of resources. Let's say that before, you needed ten people to exercise application security throughout your supply chain; Apiiro diminishes that number to a single digit. At the low end, the purpose is to cut the clutter of alerts and alarms, all the bells and whistles that go off every time, down to the minimum, very focused amount you need, dealing with deduplication and with automation of those processes. So in general, our idea of an ideal organization is one that has at least one application security person; that can be a DevSecOps engineer, a DevOps engineer, or an AppSec professional. In terms of number of developers, you can go up to the hundreds of thousands; in general the whole idea is that the system is scalable. We learn as much as we can from developer behavior, so if you have more developers, that creates more value. But even if you have only a few tens of developers, you are still going to get very valuable information and insights about who is doing what and how, what the timeline of each material change in the code is, what kind of code impacts you more than something else, and the risk that every code commit contributes to your repositories. And of course you decide what to do with it, and we aid you with our workflows and automations around remediation and measurement.
Steve King 20:21
Yeah, I see. And that's got to be one of your key value propositions as well, right? People don't have to stand up a whole DevSecOps team; if they don't have one, that's fine too, because you're actually doing that work.
Moshe Zioni 20:38
Exactly. We have some very good indications of that from customers, who have applauded us on several occasions. In recent months everyone has had to deal with those kinds of high-impact CVEs, vulnerabilities with very high impact. Instead of spending hours, maybe days, maybe a week, when some customers said their peers in the industry spent two weeks to discover all of the weaknesses they had, it took them, with far fewer application security professionals, only a few hours to have all the information they needed to mitigate and to spot every weakness and every vulnerability discussed in those kinds of events. So this is very good assurance of the impact and of the philosophy we are taking with our platform.
Steve King 21:31
Yeah, sounds like it. That's great. We've talked about numbers a little bit here. On the difference between private and public repositories, you discovered, I don't know, something like eight times the number of exposed secrets in private ones. Can you give our listeners the difference between private and public repositories, and explain why we'd see eight times the number of exposed secrets in private repositories?
Moshe Zioni 22:00
Yeah, sure. There is a technical answer to that, and there is, I would say, a psychological aspect to it. First of all, the technical answer: a public repository is something that you are, not surprisingly, opening up to the world and to the public, so everyone can see your code. The reasons for that vary; sometimes it's something you would like to share with the community, maybe some kind of support for customers that you have yourself, or an open source repository that you are maintaining. Private repositories, which, the funny thing is, are much more common in organizations than public ones, hold your code that you don't want to expose to the world. So that is the technical aspect of private versus public repositories. The other aspect is more at the psychological and organizational level: what you do with those private repositories. Those private repositories hold your crown jewels, and another difference is that they may have a different threat actor attacking them or influencing their risk. What we found in the research is that, as you said, you have eight times the number of secrets in those private repositories, and this is the first report of any kind that has covered internal repositories to this breadth. You can correlate that with the fact that developers in every organization feel much safer keeping their code within their own realm, so secrets can slip in much more heavily; they never expect those secrets to go out, so they assume it is safer and maybe that they don't need to act on it as furiously as they would on public repositories. But this is completely false. First of all, many incidents that we have encountered and helped with convey the message that some of these exposures begin in private repositories, but then sometime in the future that code snippet, or maybe the whole repository, becomes public. The second thing is that even if those repositories are private, that doesn't mean no one except a specific developer can see them; quite the opposite. In those organizations many people have that kind of access, and something like a snippet can slip through: someone can copy and paste something to an insecure device. Maybe the most notorious case of the past year was the Twitch leak: the streaming service was hacked sometime in 2021, toward the end of 2021, and we saw the leak itself, a few gigabytes of code, and how many secrets there were in Twitch's code, which pretty much confirms these aspects. It doesn't mean that Twitch is any different from any other implementation; it just confirms the fact that those kinds of secrets are much more prevalent in internal repositories.
Steve King 25:19
Wow. You know, as things get more complicated, the human factor gets more important, doesn't it? Across the board, whether it's server configurations, or open source code, or the kinds of mistakes that humans just naturally make; people are people. So it's always interesting to me. It is also interesting that you said over a third of the secrets your research detected happened in the first quarter of the year. What is the correlation between that time of year and the number of secrets?
Moshe Zioni 26:01
Yeah, I'm happy to bring that up, because for me it's maybe the most revealing fact from the report, and maybe the most surprising to many. But when you think about it, what the report actually stated is that around 34% of the secrets that were found were added to those repositories during the first few months, the first quarter, of the year. The research itself spanned multiple years, and we saw a very clear cadence from the beginning of the year to the end of it, some kind of sine wave throughout. We discussed the correlation we found with experts and with some of the organizations themselves. By the way, I haven't mentioned until now that the report itself has been vetted, validated, and discussed with 15 different external experts in the field of application security; some of them are our customers, and some of them are champions of application security globally. They reviewed it and gave their insights as well. Part of what we heard is that many organizations have this kind of rotation cadence for secrets within the organization. Quite naturally, maybe at the beginning of the year, maybe at some other point in the fiscal year, secrets need to be rotated because you are renewing licenses. Or maybe they just had a very good year and very aggressive recruitment, so they have many more new employees, and new developers make many more mistakes, another fact that we put in the report itself, by the way. So we see this kind of seasonality, first of all because of organizational cadences outside of secrets that affect secrets indirectly. We can also think of the holidays, especially the US holidays happening at the end of the year; something along those lines, the holiday time people take and then return from, can also have an effect. Maybe it overburdens the application security team, which is always under the stress of accomplishing more, so they have less time for code reviews and can't really stop the whole flood of secrets at those times of year. Those are of course assumptions and correlations, and we can't really prove them one to one, but we see these correlations pretty strongly, especially the seasonality and rotation factors that I mentioned.
Steve King 28:41
Yeah, that makes sense. I'd love to get a copy of that report if it's now public; perhaps you can email me a version. That'd be great; it's worth promoting for sure. This is a huge problem. It's right up there in my mind with all of the other complicating factors: our networks being way too complicated at the moment, and our approach relying way too much on the human factor. I think we're near the end of our time here, and I wanted to have you confirm what I think is a brief way to summarize Apiiro: you guys discover, remediate, and measure every API, service, dependency, and piece of sensitive data in the CI/CD pipeline to map the application attack surface, right?
Moshe Zioni 29:42
Right, together with contextual knowledge about the risks themselves: what is the material change, what kind of technologies are you using, whether the actual code change affects authorization, authentication, storage, or anything along those lines, and much more. All this contextual knowledge gives us the power to really recommend and to score risks according to the norms of your organization, and not just by something ad hoc that is agnostic to your kind of organization. Context is everything, and it's no different with these kinds of risks.
Steve King 30:15
Yeah, sure. And this all happens pre-production, right? Before entry into the production stream and into the cloud. Okay, correct. So who are some of your more notable customers that folks would recognize? And what competitors would folks expect to find when looking for a code risk platform? Is that a category, by the way, code risk platform? Is that a Gartner thing, or did you guys coin it?
Moshe Zioni 30:46
I don't think it's a Gartner category; the closest Gartner thing is CNAPP, the cloud-native application protection platform. I can mention a few customers, though of course I can't mention every customer that we have. Just to name a few, we have Platica, Chegg, TripActions, Imperva, Rivian, MindGeek, Rakuten, and many more on our platform. If you notice the whole line there, these are diverse customers from many industries, of any shape and size. That, of course, gives us a lot of joy, working with those kinds of big customers that know how to run application security programs, and they enjoy the Apiiro platform that gives them this kind of contextual power.
Steve King 31:37
Yeah, I'm sure. In terms of competitors, I know you guys are early. Have there been a bunch of competitors sort of creeping up, or do you have any serious competitors that you worry about?
Moshe Zioni 31:51
I think it's too early to really designate a competitor. There are a lot of cloud-related startups and solutions, but everyone is doing their thing very differently, and we are not excluded from that. So I don't see anyone as a direct competitor, but the area is still fresh. Let me put it that way: ask me again in one year.
Steve King 32:19
You know, I will. I believe I'll have you back in a year and we'll have the same conversation and see where you are, which is great. I mean, when you sold to Imperva, there must have been competitors there that you beat out, right?
Moshe Zioni 32:37
Again, we have a very unique approach and philosophy to the market and to application security in general. To be honest, the first time I heard from the founders, Idan Plotnik and Yonatan, about the company and about the solution, my jaw just dropped. As a veteran of the application security industry, this was not just news but earth-shaking, a paradigm shift in the way organizations should deal with application security from now on. And so much time after that, I still feel there is no competitor at the same scale and the same maturity, with nothing even close to the same method that we are looking into. That's why I'm struggling to find the direct competitor that you are looking for.
Steve King 33:26
Yeah, no, I know. I don't believe you're being evasive at all; I think you're right. I don't know any competitors here. That's why when Alex originally contacted me, I was floored. I was like, can this be for real? Because you're absolutely right: this is a solution I haven't seen before, and it is revolutionary, absolutely, in terms of security by design. No question about it. So thank you, Moshe, for taking the time out of what I'm sure is a crazy schedule to join us today. This is Moshe Zioni, the VP of security research at Apiiro, and we will ask you to come back, not in a year but maybe in six months, to have another one of these and see what's happened in the market. We're heading into a challenging moment over the next few months, but cybersecurity is not going to stop, so people still need to protect their PII and IP and all the rest of it. I'm sure you will have a fantastically successful quarter.
Moshe Zioni 34:41
Thank you very much, Steve. I'm looking forward to the next invitation. It was a very pleasant discussion, with great questions. Thank you very much.
Steve King 34:49
Good, thank you. And thank you to our listeners for joining us for another one of our unplugged reviews of the stuff that matters in cybersecurity, technology, and our new digital landscape. Until next time, I'm your host, Steve King, signing out.