What’s VEX got to do, got to do with it?
Seems like every time I talk to someone or do research on Software Bill of Materials, I encounter VEX – Vulnerability Exploitability eXchange – and I never really understood what it was used for.
I knew it had something to do with understanding the vulnerabilities that exist in the components we list in an SBOM, but why does the format or concept exist? After all, we already have ways of exchanging vulnerability information, like a Bill of Vulnerabilities or Vulnerability Disclosure Reports, right?
Well, VEX represents another approach to sharing vulnerability information. Beyond being a concept, it offers a format specifically designed to describe the exploitability of a vulnerability, encompassing crucial details such as attack vectors, exploit complexity, and the impact of a vulnerability.
Just because you have a component with a vulnerability doesn’t mean that the application itself is affected. It’s quite possible that the component only has one vulnerable method – and it may not even be used by your application.
Understanding this context around a vulnerability enables security practitioners, researchers, and vendors to assess and prioritize remediation efforts more effectively.
In this episode, I’ll be talking once again to Steve Springett from the CycloneDX project and we’ll be diving into the topic of Vulnerability Exploitability eXchange.
We’ll gain a deeper understanding of how VEX fits into the broader landscape of information exchange and Software Bill of Materials, and how it contributes to our collective efforts in building safer and more resilient software systems.
Welcome back, to daBOM.
Hey, let’s get started. Welcome back to daBOM. I’m here with Steve Springett once again. We’re going to be talking about VEX today.
Vulnerability. Exploitability. eXchange.
Steve, tell us again about yourself, in case someone is just tuning into the podcast.
Steve Springett. I’ve been doing software supply chain security for entirely too long; it’s been about 11 years so far. I am the leader of the OWASP Dependency-Track project. I believe it’s the very first platform to ingest, consume, and analyze bills of materials, including software bills of materials, for vulnerability and license-type issues.
That project was started in 2013, just over 10 years ago. We’re actually celebrating our 10th anniversary on that project.
I’m also the leader and creator of the OWASP CycloneDX project, which is a bill of materials format designed primarily for cyber security use cases, though there are obviously important non-cyber use cases as well, which we support.
And finally, I’m also a lead and co-author of the OWASP Software Component Verification Standard, which is a way for organizations to measure and improve their software supply chain assurance.
What I actually get paid to do is lead a team of architects here at ServiceNow. I’m the Director of Product Security, and we try to help the 4,000-plus developers build secure and resilient software.
We’re here today to talk about VEX. What is this magical format for capturing vulnerabilities?
Yeah, I wish it wasn’t a format, but we’ll get into that later.
VEX is, at its core, actually really simple: are you affected by this vulnerable component or not? Applications are constructed primarily of open source components. Depending on the stats, upwards of 80% of a project’s code base could be open source. And when vulnerabilities affect those open source components, a lot of the time your projects are not actually going to be affected by those otherwise vulnerable components.
Maybe the execution path isn’t called, maybe there are mitigating controls already in place. There are a number of reasons why an otherwise vulnerable component is not going to be exploitable in the context of a given product.
The premise for VEX actually came about early in the NTIA days, when NTIA was discussing software bills of materials and the software transparency movement in general. The concern among vendors was that if they were going to be transparent about the components in their inventory, then, given the fact that most of the vulnerable components are not going to be exploitable, the vendors also needed a way to communicate that they were not actually affected by those vulnerabilities.
When you provide SBOMs and that full transparency, the consumers of that SBOM will ultimately analyze it. As a vendor, you don’t necessarily want to increase your support costs through that transparency, right? You want to companion that SBOM with something else, and they came up with VEX as that something else: a way to communicate that, yes, I’m using this component, but no, I’m not actually vulnerable to it. So at its core, that’s really what it was intended to do.
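To make that concrete, here is a minimal sketch of what such a companion statement can look like, built as a CycloneDX-style VEX document in Python. The product, the `detail` text, and the choice of component are illustrative examples; the `analysis` field names and enumerated values follow the CycloneDX vulnerability schema (spec version 1.4 and later).

```python
import json

# A minimal, illustrative CycloneDX-style VEX document. The product ships
# a vulnerable log4j-core, but asserts it is NOT affected because the
# vulnerable code path is never reached.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [
        {
            "id": "CVE-2021-44228",  # Log4Shell, used here as an example
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "The vulnerable JNDI lookup is never invoked by this product.",
            },
            "affects": [
                {"ref": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"}
            ],
        }
    ],
}

print(json.dumps(vex, indent=2))
```

CycloneDX analysis states include values such as `exploitable`, `in_triage`, `false_positive`, and `not_affected`, and justifications such as `code_not_reachable` or `protected_by_mitigating_control` capture the "why" behind a non-exploitability claim.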
Sounds like a compensating control. You’re listing out: hey, this doesn’t affect me. There’s a vulnerability here, but there’s some kind of containment of that vulnerability, or maybe it’s not in the call chain, or something like that.
You said this isn’t a format. What is VEX then if it’s not a format?
CISA seems to have some conflicting advice. There’s a document that was published by CISA in April 2022 that outlines the minimum data elements of VEX. That was then extended in June 2022. More recently, they’ve actually created a format, even though there are already two formats in use today. There’s CSAF, which is used by some fairly large software vendors, the likes of Red Hat, Oracle, and a few others. And then there’s CycloneDX, which has been around for quite some time and has had the ability to communicate non-exploitability going back to 2019.
But more recently, what VEX really did was give us the ability to say why something wasn’t exploitable. That justification, I think, is really the value add that VEX provides. You already have two formats, and it looks as though CISA is trying to create yet another, formal format, even though technically speaking, CISA is not a standards body. That’s going to be interesting for them.
So VEX is both a concept and a format to support that concept?
That is correct, yeah. The CISA documentation from 2022 does not outline a format, and specifically states that CycloneDX and CSAF are, in fact, compatible with VEX.
More recently (I don’t think it has been published yet), there is a Minimum Requirements for Vulnerability Exploitability eXchange document. It’s fairly lengthy, about 20 pages or so, and it actually outlines a formal format, with language that could be used in creating a future standard. Because, again, CISA is not a standards body, but I think the language in the document could in fact be used to create a standard in the future.
We talked about the problems that VEX is supposed to solve. Are we solving these today, or is it still really early in the process or early in the life of this?
It depends. In the case of Oracle and Red Hat, yeah, their CSAF feeds actually do include the exploitability information, which is fantastic. VEX is already used to a limited extent, but I don’t know how much trading is actually happening. Nor do I know how much trading of SBOMs is actually happening. I do know that if you look at Maven Central, for example, I think there are in excess of 30,000 SBOMs up there, most of which are CycloneDX.
SBOMs are certainly being distributed, especially for open source projects. VEX is one of these things where, if you are a procurement department and you are requiring SBOMs, I would absolutely also require VEX or some kind of exploitability statement to go along with them.
As an end consumer, you really care about risk. The idea behind VEX is to say whether or not you are exploitable, and the justifications for why you are not. But the world isn’t necessarily black and white like that. The world is made up of all different shades of gray.
One of the justifications in VEX is that inline mitigations are present. A mitigation is not remediation. It doesn’t mean that you’ve actually fixed the issue. It means that you’ve reduced the likelihood and/or risk of that issue becoming a problem, and VEX doesn’t communicate that.
Interesting. So we have the SBOM defining the components, files, fonts: the composition of an application. We have VEX talking about the vulnerabilities in the components that are in that SBOM. And then we have this contextual information that’s almost like a conversation starter: information that helps clarify how the organization generating it is dealing with its vulnerabilities, its mitigations, and that kind of thing.
If it requires manual intervention or manual nurturing of that SBOM or of that information, is that going to introduce more work effort for organizations generating these things?
I’ll give you a two-part answer. The vulnerabilities obviously need to be triaged by the vendors, at least the responsible vendors, to identify whether or not a particular vulnerability actually affects their product. That’s just the cost of doing business.
One of the interesting things with VEX, and it’s one of these things where I don’t think it’s scalable, is that VEX is also designed to be used without an SBOM. So you can use a VEX without having the software bill of materials. And many in the VEX community are stating that with VEX you actually assume that you’re vulnerable until the vendor tells you otherwise. I don’t think these two concepts are compatible, and I’ll tell you why.
There are over 200,000 vulnerabilities in the NVD. If I use a VEX without a corresponding SBOM, that means I literally have to account for the 200,000-plus vulnerabilities in the NVD and say that, no, I’m not affected by 99.9% of those things. You’d have to have every single vendor do that.
I think that’s a massive waste of time. I don’t think it’s scalable, and it’s going to drive up cost tremendously. There’s guidance needed from CISA here, too: VEX does both affected and not affected. If I only provide affected VEXes, then how is that different from an advisory, and when do I use one or the other?
It’s really confusing messaging coming out of CISA right now, so a lot of people are waiting on guidance, because it’s not overly clear.
Some of the folks that I’ve been talking to over the course of this podcast have said, okay, we have multiple software bill of materials. We have one for the application, we have one for the container or infrastructure that it’s running on, we have one for the deployment tools that build that software. And now we have the VEX, which is possibly describing vulnerabilities in all three of those. That’s a big inventory package to provide to someone.
Do you see that getting simpler or do you see the guidance that we’re waiting for from CISA and all these different organizations helping make that smaller or more consumable?
I hope so, but I’m not overly optimistic.
The assumption early on with NTIA, carried forward to CISA, is that when you supply an SBOM, it’s going to be analyzed. That is a very big assumption, and it requires the creation of tools, especially ones designed for enterprise scale, so that you can consume the SBOM, analyze it, surface any vulnerabilities or other kinds of risk, and then apply the VEX on top of it. And every single consumer would have to do the exact same thing.
The whole notion of SBOMs being analyzed by the consumer, I think, is problematic from the start. Most people simply want to know what’s in your stuff, and what their risk of using your stuff is. I don’t know the percentage of consumers who are actually willing to go through the work of analyzing the tens of thousands or more bills of materials from all the vendors they do business with, as well as applying VEX on top of that. It’s a massive challenge.
I am more in favor of the easy button. The easy button is: give me a list of ingredients, give me the SBOM, and then assert what vulnerabilities you have and the exploitability of those vulnerabilities in that given product. If you do that, the consumers don’t actually have to analyze anything. They can consume it, they can store it, but they don’t have to go through the process of analyzing anything, because the vendor has told them: here’s my inventory, here are the vulnerabilities for that inventory, and here’s the exploitability of each.
We’re getting flooded with tons of different formats, tons of different requirements, tons of draft information and working groups. It sounds like you think that we might be overthinking a lot of this, and maybe we just start with treating SBOMs as inventory management.
To go back to something that you said: responsible vendors do these things, so this sort of falls in the responsible-vendor area. But when we think about those responsible vendors, they might have 20,000 vulnerabilities and only get to triage the top 2% of them. If that goes into VEX, there might be a huge gap there.
What are your thoughts on what we can do to really keep those gaps smaller, or keep the interaction smaller?
Focusing on risk is a win-win for everyone. If you are the defender in your organization, you don’t necessarily care so much about whether something is exploitable or not; you actually care about the risk.
Right now we’re not communicating that. Your typical enterprise is going to have tens or hundreds of thousands, or even sometimes millions, of vulnerabilities in their environment. They all have a criticality or severity, but it’s really hard for these organizations to determine risk. If the vendors actually focused on risk rather than just raw data, I think that would help consumers out tremendously.
I haven’t shopped this around to many folks, but if you think about what VEX really is at its core, you’ve got statuses: affected, not affected, et cetera. You’ve got different justifications and a few other things. If you think about what those things really are, this could have been as simple as a vector, say, for example, CVSS or the OWASP Risk Rating, where you have a few different categories, predetermined choices within each of those categories, and an algorithm behind them to actually communicate risk to you.
That would’ve been a much better use of time than trying to create a format, in my opinion, and it would’ve actually served the needs of the defenders much better.
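As an illustration of that vector idea, here is a small sketch based on the OWASP Risk Rating Methodology: factors scored 0-9 are averaged into likelihood and impact levels, which are then combined through a likelihood-by-impact matrix. The factor scores in the usage line are hypothetical.

```python
def owasp_risk(likelihood_factors, impact_factors):
    """Combine 0-9 factor scores into an overall OWASP-style severity."""
    def level(score):
        # OWASP thresholds: 0 to <3 is LOW, 3 to <6 is MEDIUM, 6 to 9 is HIGH.
        if score < 3:
            return "LOW"
        if score < 6:
            return "MEDIUM"
        return "HIGH"

    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)

    # Overall severity from the likelihood x impact matrix.
    matrix = {
        ("LOW", "LOW"): "Note",       ("LOW", "MEDIUM"): "Low",       ("LOW", "HIGH"): "Medium",
        ("MEDIUM", "LOW"): "Low",     ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
        ("HIGH", "LOW"): "Medium",    ("HIGH", "MEDIUM"): "High",     ("HIGH", "HIGH"): "Critical",
    }
    return matrix[(level(likelihood), level(impact))]

# Hypothetical scores for one vulnerability in one product:
print(owasp_risk([5, 7, 6, 8], [7, 8, 6, 7]))  # high likelihood, high impact -> Critical
```

The point of the sketch is that a handful of predetermined choices plus a fixed algorithm yields a risk statement a defender can act on directly, rather than raw status data they still have to interpret.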
In a world where we’re increasingly going cloud native, I think that, again, was another missed opportunity, right? Much less of our software, year after year, is traditional, installed, on-prem type software. Most of the software we consume is in the cloud.
The NSA also has guidance that basically says the same thing. They don’t call it a VDR, but it is an assertion of the vulnerabilities affecting a given product. That is the NSA’s recommendation, and both of those agencies seem to contradict the guidance from CISA, which is problematic. The US government probably needs to align its agencies, because they all seem to be saying slightly different things.
The case for VEX is especially hard with all these unanswered questions about VEX being used without an SBOM, when there are 200,000-plus vulnerabilities in the NVD. What are you going to do, say there are 200,000-plus vulnerabilities you’re not exploitable by? Every single vendor would need to do that. I don’t think that’s scalable, especially when you’ve got tens of thousands of vulnerabilities reported every single year. That’s a lot of additional work on vendors.
So before any government-type requirement for either VDR or VEX, especially VEX, a lot of guidance needs to be drafted first, because right now it’s just absent.
As consumers or providers of SBOMs, is this the right time to really be thinking about VEX and generating these documents? Or should we really back off and wait a little bit until the dust settles around the construction site?
There are many people in the security space that question, not the intent of VEX, because the intent of VEX is great, but how you operationalize this stuff. There are a lot of unanswered questions. VDR is fairly straightforward. You can do that today. It’s not overly hard. A lot of your SCA tools produce this stuff, especially the ones with call graph capabilities.
That is really easy to do today. If you want to communicate that kind of information out to your customers, which is really useful, then that is certainly viable today.
VEX is a little bit more questionable. There’s still a lot of guidance that I think CISA needs to put together in order for organizations to operationalize it.
How do we start, as consumers or, again, as vendors? How do we start generating these things if we want to go down that road?
I’m seeing the emergence of more open source tools that do this: actually analyzing the call stack, or the data flow, between a vulnerable function in a library and your application. If there is a match, meaning that the call flow is possible, then you can automatically create a VEX statement that says that you’re affected and that you’re exploitable.
The interesting thing is that the absence of data flow confirmation is not necessarily proof that you’re not exploitable. That actually does require some human research. But there are at least freely available tools to help organizations get started, which I think is really important.
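That asymmetry can be made explicit in tooling. Here is a hypothetical sketch: a confirmed reachable path is positive evidence that can safely auto-emit an "affected" statement, while an inconclusive result should route to human triage rather than auto-asserting non-exploitability. The state names follow CycloneDX’s analysis states; the function itself is illustrative, not part of any real tool.

```python
def vex_state_from_reachability(path_found):
    """Map a reachability-analysis result to a VEX analysis state.

    path_found is True when a call/data-flow path from the application
    into the vulnerable function was confirmed, and False or None when
    no path was found or the analysis was inconclusive.
    """
    if path_found:
        # A confirmed path is positive evidence: safe to automate.
        return "exploitable"
    # No path found is NOT proof of non-exploitability. A human must
    # review before this can be upgraded to "not_affected" with a
    # justification such as "code_not_reachable".
    return "in_triage"
```

Only human review (or stronger evidence) should move an `in_triage` finding to `not_affected`, which is exactly the manual research Steve describes.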
I’m going to take this into the future and just throw something out there, and then I want to hear your thoughts on the future of VEX and VDR and where things go from here.
We hear a lot about AI, ChatGPT, all the AI-related tools out there. We start throwing our code at this, and then start throwing our vulnerability information at it, hoping someday to see it come back and say: based on your code, there are no vulnerabilities here, or it’s not exploitable, and then have a human-readable explanation or justification for that mitigation.
I do a lot of threat modeling in my day job at ServiceNow. I threw a sample threat model at ChatGPT and it sounded okay, but when you actually read through it and understood what it was spitting back at you, you realized that, yeah, this is not overly intelligent.
I think ML in general, if we limit the scope of what we ask it to do, can do some really great things. For example, I mentioned earlier that the lack of data flow analysis confirmation is not necessarily an indication that something is not exploitable; it’s only that you haven’t found it. You haven’t found it because most static analyzers actually rely on rules, rules that other human beings have to create.
But what if we didn’t have to create those rules? What if we could actually have a very task-specific ML algorithm do that for us, and account for some of the complexities of modern software, like polyglot analysis, where you’re having to traverse multiple languages in a single data flow?
Most static analyzers don’t do that today, but there’s no reason why a highly tuned ML couldn’t do it for us. I think there are certainly opportunities to get exploitability right, as long as we give the ML narrowly focused, very specific tasks, instead of an open-ended thing like ChatGPT, which fails miserably on a lot of different tasks.
Your opinion five years from now, as we close out this podcast, where’s VEX? Where’s VDR?
I think it will be an integral part of vulnerability management. VDR especially will be useful for procurement. Procurement, outside of cyber, needs an easy button. We’ll eventually get there, but it’s going to take a while to get consensus and to be able to do something at scale.
The information sharing problems need to be addressed first and foremost, however. We can debate all we want about SBOM formats and VEX versus VDR and all the rest. None of that really matters if you don’t have the ability to share things.
So hopefully in five years we’ll actually have standard mechanisms in place where organizations, vendors, consumers, et cetera, can actually share SBOMs, VDRs, and VEX documents, because right now that’s not really available.
This episode of daBOM was created by me, DJ Schleen, with help from sound engineer Pokie Huang and Executive Producer Mark Miller. This show is recorded in Golden, Colorado, and is part of Sourced Network Productions. We use Captivate.fm as our distribution platform and Descript for spoken text editing.
You can subscribe to daBOM on your favorite podcast platform. We’ll be releasing a new episode every Tuesday at 9:00 AM. I’ll see you next week as we continue to diffuse daBOM.
Steve educates teams on the strategy and specifics of developing secure software.
He practices security at every stage of the development lifecycle by leading sessions on threat modeling, secure architecture and design, static/dynamic/component analysis, offensive research, and defensive programming techniques.
Steve’s passionate about helping organizations identify and reduce risk from the use of third-party and open source components. He is an open source advocate who leads the OWASP Dependency-Track project and the OWASP Software Component Verification Standard (SCVS), and is the Chair of the OWASP CycloneDX Core Working Group.