IoT Village - The Joy of Coordinating Vulnerability Disclosure
Formal Metadata

Title: IoT Village - The Joy of Coordinating Vulnerability Disclosure
Number of Parts: 374
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/50732 (DOI)
Transcript: English (auto-generated)
00:00
From your perspective, what does CVD mean to you and kind of what are your motivations when you're working on an issue and you need to disclose something? So I think the main point of CVD is, of course, to protect users.
00:24
And my main thought is a bit historical. So I've been around in the 90s where disclosure was you find a mailing list where other hackers hang out and you zero-day everybody and nobody really cares. And that became Responsible Disclosure, which is the predecessor for coordinated vulnerability
00:46
disclosure. The problem with Responsible Disclosure is that it was really made to shame hackers that didn't follow the desired protocol of industry types. And the current iteration, coordinated vulnerability disclosure, is instead of trying to be coercive,
01:08
trying to actually work with people that want to submit issues. And it is much, much improved over that. And it does a much better job of making sure that every party to a vulnerability disclosure,
01:26
that means vendors, customers and hackers have their needs met. So that's what I think coordinated vulnerability disclosure is for me.
01:42
Awesome. Daniel, another researcher perspective. What do you think about coordinated vulnerability disclosure? So we are interested in understanding how things work, and sometimes that involves understanding how things work that are not supposed to happen, which we then call a vulnerability.
02:02
And of course, then we want to figure out the truth behind it. And that involves talking to the vendor. Very often they tell us that we got something slightly wrong and that is an opportunity to correct these mistakes before we submit the paper or publish the paper.
02:25
But also, from my perspective, it's the only ethical thing to do to protect the customers, to protect people who are using these products. And for me, I'm not in this game for finding vulnerabilities as a motivation by itself.
02:45
My main motivation is finding the truth, finding a better understanding of how things work. That's awesome. Katie, you've had a lot of experience monkeying around with CVD.
03:03
What does CVD mean to you? Hold on a minute. All right. So, CVD means to me... So when I think about it, I think of it from a risk-based perspective. And what CRob is alluding to here is that my background is government in nature.
03:25
So I spent about 15 years in the US government, 12 years of that in the US Air Force, and then several other years at the Department of Homeland Security, where I ran the vulnerability disclosure programs that most people are familiar with.
03:41
So like the NIST NVD program, the Carnegie Mellon CERT/CC program, and the MITRE CVE program. I was the sponsor for those programs. So in a single year, 2017, we coordinated and disclosed 14,800 vulnerabilities for public disclosure. That's 14,000 IT vulnerabilities and about 800 ICS vulnerabilities.
04:05
In the following two years, 2018 and 2019, we coordinated and disclosed over 20,000 cybersecurity vulnerabilities. So I've kind of seen things from lots of different perspectives. And when I think about CVD, I think about balancing risk, right?
04:21
So CVD is really a process. And every time you're going through this process, understand that every organization is going to treat it differently, because there isn't a standardized, one-size-fits-all kind of thing here. There are differences that are going to happen across the coordination stack. So if you're looking at digital services, it's going to be different from software, which is going to be different from open source,
04:42
which is going to be different from hardware or ICS, like all the differences are going to come into play there. But the overarching sort of thing, the big takeaway here is that CVD is balancing risk. It's all about making sure that there is an opportunity for the product vendor
05:01
to fix the problem before an adversary has the opportunity to take advantage of that. So like, it's all about protecting the end user. And I think everybody that I've ever met in the entire ecosystem is all focused on that. Like we may be speaking different languages. We may talk past each other, but I think that everyone is trying to protect the end user.
05:21
So to me, that's what CVD is about. That's awesome. And remind me to circle back to you later in our talk to talk a little about some of the psychology behind some of this. I would love to hear my dearest friend Lisa's thoughts on coordinating vulnerability disclosure and
05:42
kind of what some of the motivations are that you've seen, or kind of what's behind your modus operandi. Yeah, so I think I've seen a sort of an evolution of what we do now, where I think Heartbleed was sort of the big start of where we paid attention, realized we
06:04
all need to come together a little bit, and then Spectre and Meltdown sort of brought us even closer. So my thought is that not only do we work with the researchers, but we work across the industry to, you know, make sure that we're all doing our best to protect our customers.
06:22
So, like Katie was talking about, it's really our end users that are our focus, and making sure that we have the right people involved to be able to solve the issue and then provide a security update, so that our customers can get it and be best protected. And I didn't say CVD.
06:41
Oh, you just did. I know. Yeah, yeah, yeah. So Omar, you've been doing this for a while, and you work for a very large company, so I'm sure you get the opportunity to see a lot of different vulnerabilities. So what does the whole process mean, and what are some of the motivations you've seen?
07:00
Yeah, I think, as pretty much everybody has summarized very well, it's protecting the end consumer at the end of the day, but I see it as a very, very complex ecosystem that we're trying to actually solve. And Katie mentioned on one side hardware, on another side software; then if you decompose that, you have open source, which
07:22
we're going to probably talk about a lot in here, and you have things that, you know, perhaps you cannot even control. I have had, I guess, the pleasure, or displeasure in some cases, where we're looking at vulnerabilities and, in some cases, whenever we try to actually solve the issues, the companies don't even exist anymore.
07:42
That's another predicament, you know, that in some cases we actually have to take into consideration, and it's a fairly complex ecosystem. And at the end of the day, what we have to put our heads together on is how we can modernize our practice so that not only, yes, we deal with a single vulnerability, we reproduce it, we fix it, we find the patch, but how can we accelerate that process?
08:08
One, everybody's talking somewhat the same vocabulary, or at least they can understand each other; and two, we also understand the overall risk, right, and how to prioritize things and so on.
08:23
So it's fairly complex; we could actually talk about it for quite some time, and that's what we're here for. But those are some of the initial perspectives that I want to share. Thank you. I appreciate that. I want to talk about something we touched on in one of our prep calls. We have Anders and Daniel, who come at this research from slightly different angles.
08:47
So maybe, and you two gentlemen can figure out who wants to start first, but thinking about the perspective of academia versus a professional bug hunter, a security researcher,
09:00
and maybe you can describe some of the particulars in either area, and what are some of the differences you might see between those two different types of research? So from the industry side, or rather for the bug hunter, professional or not, there is very often an element of good old-fashioned fun.
09:28
I started out hacking things not because I wanted to achieve anything with it, just because it was great fun. When you become a pro with it, there are different motivations. So if you work for a
09:45
company and you work on their products, obviously your motivation is to make those products secure. And sometimes, in my past job, I did some research that was sponsored by that company, but that was essentially me doing what I like to do.
10:04
Their end of it was attention to the company's competencies, and my end of course was having a bit of fun. So that is probably pretty typical of what hackers get out of hacking.
10:22
Yeah, from the more academic side, it's more of advancing the field, the knowledge in the field. And there you try to figure out how things work and what the implications there are, what the security implications of certain understandings are. For instance, understanding when you can execute a certain piece of code and that does something that is not intended, what are the implications?
10:51
And this is interesting from a security perspective when it enables someone to do something that they shouldn't be allowed to do. Yes, often this involves disclosing a vulnerability to a vendor, but I would say
11:06
that, for instance, bug bounties, which might be very relevant for people from industry, play a smaller role in academia, I think, because you have to participate in CVD
11:25
anyway because if you wouldn't, then you would get a lot of problems in the academic community. There's a lot of peer pressure that you participate in this because that's the only ethical approach to handling vulnerabilities.
11:42
At the same time, if you keep them secret for too long, then there's also the question on why did you keep them secret for so long? Maybe also some perspective that I can share. In academia, you see more and more often the fact that academic publications are easier to publish if you have a CVE.
12:06
I don't think that's a good thing because I think that a CVE does not necessarily describe that a research result really brings you new insights.
12:23
A CVE is an identifier for a vulnerability, not an identifier that says this is something with a new insight. Also, if you participated in CVD and you mentioned this in the paper, this
12:40
also gives you bonus points, at least that's my impression, toward getting the paper published. Of course, it's good if you participate in CVD, but my feeling is also that this is being pushed a bit too hard. It's now overemphasized in our community.
13:06
What about having an icon or a fun name? Of course, I'm really a big fan of logos and names and anything that is fun, but of course, there are multiple layers here.
13:27
For instance, we had this paper, Hello from the Other Side, which had this name not just by coincidence. It was about a covert channel in the cloud where we send data through the cache from one virtual machine to the other.
13:43
We really went crazy on that, so we built an entire TCP stack on top of that and tunneled an SSH session through the cache covert channel for whatever reason. We then sent a music video through this cache covert channel. Of course, we couldn't just take
14:03
any music video, so we had to make our own parody of Hello from the Other Side. This is just fun. I mean, we are just a bunch of people and we like to have fun. Once you've finished your project, it's really nice to close this up with something nice and funny.
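As an aside, the mechanism behind a cache covert channel like the one described can be sketched in miniature: a sender modulates a shared resource (for example, by evicting a cache line), and a receiver times its own memory accesses and decodes bits with a latency threshold. The following is only an illustrative simulation; the latency numbers, noise model, and threshold are invented, and a real cross-VM channel would rely on hardware primitives such as flush+reload with cycle-accurate timers.

```python
import random

random.seed(0)  # deterministic noise for the demo

# Invented numbers: typical cache-hit latency, cache-miss latency,
# and the decision threshold between them (all in nanoseconds).
HIT_NS, MISS_NS, THRESHOLD_NS = 40, 200, 120

def send_bit(bit):
    """Sender: a 1-bit evicts the shared line, so the receiver sees a miss."""
    base = MISS_NS if bit else HIT_NS
    return base + random.gauss(0, 10)  # simulated measurement noise

def recv_bit(latency_ns):
    """Receiver: latency above the threshold decodes as 1."""
    return 1 if latency_ns > THRESHOLD_NS else 0

def transmit(data: bytes) -> bytes:
    """Serialize bytes LSB-first, pass each bit through the noisy channel,
    and reassemble the decoded bits into bytes."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    decoded = [recv_bit(send_bit(b)) for b in bits]
    out = bytearray()
    for i in range(0, len(decoded), 8):
        out.append(sum(bit << j for j, bit in enumerate(decoded[i:i + 8])))
    return bytes(out)

print(transmit(b"hello"))  # with this noise margin, decodes back to b'hello'
```

Building a reliable byte stream on top of such a noisy bit channel, let alone tunneling TCP or SSH over it as the panelists did, is exactly why framing and error handling layers are needed.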
14:24
Logos on the other side also help you communicate about the issue. I realized that every time I create slides, my favorite slides are the slides that don't
14:40
have any words on them, just pictures, logos, and icons, because then I can follow the speaker. I don't have to read because I'm very bad at reading. I can't read and listen at the same time, and I bet many people can't do that. So if there's only icons and symbols and logos, I can follow what the person is saying and look at the images at the same time.
15:05
I'm always annoyed if I have to speak about the vulnerability and I don't have a logo for it, because then I have to put text on the slide. Since Daniel brought it up, let me ask the other panelists, how do you all feel about branded vulnerabilities?
15:22
Do you like logos? Do you find that helpful to you and your constituents? Oh, I'm sorry, Daniel, no. When there's a logo or a PR push or everything, it grabs the media's attention so quickly. And sometimes the issue is not that severe, or is very hard to actually exploit, and it gives a little bit
15:50
of extra fear to our customers, and it also pushes us to maybe reprioritize it ahead of other issues which are more severe to our customers.
16:03
So although I think I get the point, that it's easy to talk about and they grab the media's attention really quickly, if you put a caveat about the real severity of the issue in there, maybe that would help a little bit.
16:24
Yeah, so in our team we often have this discussion: should we have a logo? Should we have a website, or shouldn't we? And for most of our papers and most of the vulnerabilities that we discovered, we don't, because we just say they are not significant enough.
16:43
And I think that's also the responsibility of the researcher to first assess, is this something that's significant that I need this additional PR or is it not? Right. Well, then you're doing it the right way. So yeah, cheers. Maybe, but it's also, I mean, I will have a different view on what
17:04
is really important than you have, right? Everyone will have a different view on that. So something that I think is really, really important, you might say, well, it's not really exploitable in our use case. Right. We don't ever want to downplay a researcher's work. I mean, it's important, whatever
17:20
they do, and we especially appreciate doing CVD with them and not being zero-dayed. But yeah, it's more a matter of, especially in a bigger industry when you're trying to protect your customers best, how do you approach it? There is a fight for customer attention. And for customers to make the right security decisions, they need to be attentive to the right things.
17:50
And logos and hype in the press distort that, and we have bad security outcomes for our customers due to that.
18:02
Then of course, there's also the fact that even fixed vulnerabilities are sometimes a cause of suffering for system administrators running around in the basement patching systems and the like. And to some extent, I sometimes find overhyped bugs disrespectful to those people.
18:25
Yeah. So in my case, I don't have an issue with logos or anything, even though in a previous call and a couple of exchanges, Daniel, I alluded to that. As a matter of fact, at Cisco we even had the first vulnerability with emojis. So we have emojis, logos, and everything else.
18:47
So at the end of the day, if it helps to have an alias for a vulnerability and bring some awareness, I'm perfectly okay with that. The only challenge that I'm seeing, or opportunity, is in some cases whenever we write up the information, even as vendors;
19:06
in my case, we have a research institution at Cisco called Talos, and we find vulnerabilities in other people's products. So in that case, yes, we also have logos, and we put them out there, and we put out blog posts and so on.
19:23
But what we need to do collectively is to make sure that the media is not either downplaying it or blowing it out of proportion. So we have to have a balance. And we all say the media, the media, the media, but it's our responsibility how we create our security advisories.
19:43
And it's both ways. Even though we're vendors here, we have to point fingers at ourselves. How clear are our advisories? Can we have some collateral so that the end consumer actually knows what the implications are? And we work with the researcher to make sure that we all understand what the problem is.
20:04
And in a previous conversation, I also mentioned that the biggest nerd fights in history happen whenever it comes to CVSS scoring, and whenever it comes to risk. And at the end of the day, whether it's CVSS or whenever we
20:21
come up with some new ways and everything, we have to have some type of way of saying, yeah, you have to jump right now and fix this vulnerability versus the other hundred criticals that we also depend on. And especially nowadays, since it's not so much a vulnerability coming from, like, a Cisco or an Intel or Red Hat or anything else; it's open source.
20:42
By the time I'm done boring you to death here, probably three more super-important CVEs have been disclosed that we don't know about. So that's the type of balance and the type of, I guess, for a corny way of saying it, orchestration that needs to take place in CVD.
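For readers unfamiliar with what those nerd fights are about: the CVSS v3.1 base score itself is deterministic arithmetic over a handful of metric weights; the arguments are about choosing the inputs, not the math. Below is a hedged sketch of the common Scope:Unchanged case, with weights taken from the CVSS v3.1 specification (the spec's Roundup function is simplified here to a plain one-decimal ceiling).

```python
import math

# CVSS v3.1 metric weights (Scope:Unchanged case only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},   # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                         # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},              # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                         # User Interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},              # C/I/A impact
}

def roundup(x):
    """Smallest one-decimal value >= x (simplified CVSS 'Roundup')."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a Scope:Unchanged vector."""
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    # ISS: combined confidentiality/integrity/availability impact.
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

The fights, of course, are over whether a given bug is really AC:L or PR:N, and over whether any single score captures the risk to a particular deployment.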
21:04
I want to add another point here regarding the overhyping. Having a logo, having a website for something, for a vulnerability, for a research result, does not necessarily mean that you overhype it.
21:21
We had experiences where we just put papers on arXiv without any website, without any logo or anything. And the media picked it up and reported about it, and not necessarily correctly in all cases. We had this, for instance, for the Take A Way paper, where we analyzed some side channels on AMD CPUs, and the media picked it up.
21:51
We intentionally didn't make any website or logo, but the media picked it up, and the reporting was not entirely wrong, but it was, I would say, definitely overhyped.
22:08
And we discussed this also in our team and the conclusion that we had was that in some cases we should have a website, even if it's not a vulnerability that we want to hype because it's not that significant, but just to have a very clear message which says how relevant is it.
22:28
And I think that's the important part: we also always had on our websites a short three-sentence summary, like: what is it, who has to care about this, and what can an attacker do?
22:43
Yeah, I think it's great that you do that, and it's good advice. I would love to see it be more adopted in the researcher world, for sure. I think it would help out. So before I move on to my next question, I'm just going to say, when I retire, all of you
23:04
in vendor world are doomed, because I'm going to offer my services to Lreg for free, writing up crazy headlines. But to touch on a point that Daniel made, he invoked the Katie Moussouris clause. Let's talk about bug bounties and CVD.
23:22
Oh, Katie, maybe you should take that one. Actually, because our friend Katie Trimble-Noble here has a lot to do with that in her organization, could you maybe talk about how coordinated disclosure interacts, both good and bad, with bug bounty? Yeah, so I love Katie Mo. First off, let me just say that it is an honor to be confused with Katie Mo.
23:50
My hair color is like, you know, getting close to hers too now. I don't know. I think I have been accepted to conferences because they thought that I was she. When I showed up, I don't think they were nearly as excited
24:01
to see me instead of her. But so, yeah, bug bounties. I'm trying to be serious here about bug bounties. So bug bounties are a tool, right? They're a tool in a toolkit, and they are there to incentivize, but they're a part of a well-structured product security portfolio.
24:28
They're not the whole thing. And there are different motivations that will help people from different perspectives. So in the academic world, for instance, bug bounties may not weigh as much because a lot of academic institutions don't allow
24:43
them to accept the cold, hard cash, right? If you are a professional bug hunter, you know, that might be how you pay your mortgage, and so there's a lot more tied to that. But the problem becomes that bug bounties are a wonderful tool, and they're great, but you have to have a good program in
25:02
place already to accept that information and to be able to execute on it. And there are so many questions about it: How do you award? How do you manage it? Are we going to tie the payments to CVSS scores? Are we going to tie the payments to a well-thought-out proof of concept?
25:22
There are so many pieces that go into it. And so I'd say bug bounty is not one-size-fits-all. It's going to vary from organization to organization, and some organizations are going to have different timelines, different pieces. Yeah, it's complex. So bug bounty is not the be-all and end-all. Bug bounty is a tool.
25:42
It's a tool in your toolkit. It's a great tool; I love the tool, because I'm the director of it at Intel. But it's not everything. Vulnerability disclosure is more than bug bounty alone. Yeah, and when you think about it, too, there are a lot of smaller companies that probably, you know, don't have that, or don't have
26:04
the funding for it. But that doesn't mean that they're not wanting to work with researchers; they just can't get the budget to do it. So do your best to try to figure out how to reach out to those companies. Hopefully they have a web page or an email address, you know, secure@, psirt@, security@. I know we struggle a little bit. But, you
26:28
know, there's plenty of us around that could help find the right contacts, because we've sort of come together from all over the place. And we're sort of an expanding group here. I like it.
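One concrete aid for that secure@ versus psirt@ versus security@ guessing game is RFC 9116, which standardizes a machine-readable security.txt file served at /.well-known/security.txt. A minimal example, where the example.com addresses are placeholders:

```text
# Served at https://example.com/.well-known/security.txt (RFC 9116)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the only required fields; the rest are optional hints for researchers trying to report responsibly.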
26:45
Bug bounties are one of the wonderful signs of how the industry has changed. Back in the 90s, disclosure was often rewarded with lawsuits. And now people in the industry are working with researchers and have started actually paying the
27:05
researchers for their efforts. So I'm very much thumbs-up on bug bounties, especially because they show how the industry has changed in how it views hackers. Hackers are nowadays helpers, not the enemy.
27:20
Yeah, I thought you were going to say t-shirts for a minute there. In the early days, I just wanted the t-shirt. Oh, earlier than that, it was lawsuits. Lawsuits, then it became t-shirts, and then maybe stickers, and now it's financial.
27:44
Awesome stickers. So, we've touched on it a little bit; let's spend some time on it now that we're past the top of the hour. Let's talk about coordinated vulnerability disclosure in an open source context.
28:00
So I'll go last, but let our esteemed panel here, maybe, whether it's our researchers or our vendor friends, kind of describe what works out really well and what are some challenges for you within the open source world, which makes up about 90% of all software now.
28:24
Hey, Jerry, open source one. Ah! I guess I start. So basically, that's top of mind for me, and it sounds corny whenever
28:47
people say, hey, what keeps you up at night and everything else? It's actually open source right now. And it's not so much of the predicament of using open source. We have to use and embrace and contribute to open source. I mean, I'm a super big fan of that.
29:02
The challenge when it comes to open source is that it can be anything, right? It's like IoT; it can be anything, right? And it's critical infrastructure for a lot of things, right? So to give you somewhat of a real-life realization that we had
29:21
a while back, probably about four or five years ago, when Heartbleed came, we were looking, in the industry, right? Not only Cisco, but a whole bunch of other companies, right? At what to prioritize as far as actually giving funding, doing research, and so on. So we said, okay, OpenSSL and things like that, which are super important for us, there's a lot of people looking at them right now.
29:49
So can we look at things that perhaps are actually critical infrastructure that nobody actually has probably taken a look at it, right? We're going down the list. And as a matter of fact, that number two is perfectly fine.
30:02
There are two guys that work on OpenSSL that get paid to do it, yes. Indeed. And the example that I was going to go to is NTP, the Network Time Protocol, NTP specifically. And there are also two guys there that don't get paid, which is bad, right? It's not even their full-time job, right?
30:21
Well, I guess now, you fast-forward all these years, it's a little bit better, right? So it's a lot better. And it's not that the problem is the poor guys that are actually contributing to the code. It's a matter of scalability; it's a matter of even running static analysis on these things, which didn't even exist five years ago for these components.
30:42
Now, of course, we're modernizing our ways; even if you submit things on GitHub, there are a lot of things actually happening. We're getting better, for sure. It's just that it's getting way more complicated, right? And more people are not only, of course, contributing, but using it more than contributing, which is the other predicament.
31:02
And we were talking about bug bounties; in some cases they're actually amazing. I'm a big fan of bug bounties, right? The challenge is that in some cases, we also don't think about: does this affect other vendors? Am I actually finding some type of vulnerability that,
31:22
perhaps, yes, it's a SQL injection or cross-site scripting, which is actually pretty common, but if I'm doing some fuzzing and I crash an application, what is the underlying issue, right? And in some cases, it's actually a common component, even though it goes back and forth, and it hasn't been shared. And then two or three months down the road, you say, oh, but this CVE was not shared with other vendors that are also affected.
31:46
So we go back, and in some cases you actually see people trying to find the same, or having found the same, CVEs, and they reported them a different way, but there was no coordination. So those are the things that are crucial for us to be successful, and not only among vendors, but actually downstream and upstream as well.
32:05
Especially when it comes to IoT, that's number one. Yeah, so Omar, you brought up a few points. One is: who do you bring in ahead of time? And I think we're seeing that even if you're the competition, we still want to work together in the background here.
32:28
Our whole goal is to protect our customers, and we often have the same customers, even if we're competitors. So we like to make sure that we're all working together, and the idea is when do you bring someone in, and it's not only the researchers.
32:44
Okay, CRob, you've got your hand raised, go ahead. Can I tell you a secret? Yes. In open source, we have worked with our competitors for 25-plus years. Yeah, yeah, well, I would have to say, you know, with some of those other vendors in the industry, it wasn't always so nice.
33:09
We're getting better. But I think, you know, not only should the researcher be thinking about who else is affected; when we are on the receiving end, we should also be thinking about who else is affected and bring them in.
33:23
I recently worked on an issue, a very recent one, where we were in Keybase, and the researchers were right in there with the industry people who were fixing the issue, and there was great coordination amongst the group there.
33:42
It was pretty awesome to see how much we've evolved. How do you do that, though, with open source, when everything's out in the open? It's super easy. Well, I guess I'll chime in before we let other folks. I found it very interesting how some vulnerabilities have been patched wide in the open under the cover of some other patch.
34:12
I don't know what you're talking about at all. So in general, open source, there is no single definition of what open source is.
34:28
You could have two young ladies in Bangladesh that have an amazing idea, and they're just trying to get this creativity out there to share with the world. You could have large corporations like a lot of the folks represented here.
34:40
I mean, you could have academics. So open source is a lot of different things, and there's really no one definition or moniker that works with every community. But if you're thinking about the types of open source that might make up a product or some kind of a cloud offering, you're going to have the high end of the spectrum, with things like the kernel group, which is very mature and organized, or Kubernetes.
35:07
And then you start moving down to Apache Foundation, these other large communities. And then you get down to something where it's a single person, a couple people that are just playing around. And it's hard to put any kind of structure or process to all these different models,
35:25
because a lot of people that code for open source are doing it for free, for no remuneration from large companies that make a lot of money off of it. But they do it for free because they love kind of adding value and expressing their creativity.
35:41
And as Daniel mentioned, open source is very creative. And in some of the larger projects, we do have ways to privately take in data. So we can take in a private bug report; that's been well established within the community for a long time. But when you're looking at a less mature community or package or library,
36:03
they might not have that capability. And that's where the larger big brothers and big sisters, like a Red Hat or a SUSE or a Canonical, step in and try to mentor these smaller projects, to help them get these good practices set up so they can take in a private report. Because sometimes, a lot of my team, we track between 3,000 and 5,000 vulnerabilities a year out of 450,000 packages.
36:30
That's a lot. Not all of them really bubble up to the level where they need the attention of a Spectre/Meltdown or a Heartbleed or a BlueBorne, all these other big-name things.
36:45
But they still need to get fixed. And what open source is really good at is you identify a problem, the team attacks it collaboratively, they develop a solution and release that update very quickly. And they don't like to spend a lot of time making a big deal out of it because they've already moved on to the next feature,
37:04
the next big thing. But yeah, it can be challenging, but there are definitely ways to do it. There are methods to do it. There are groups that allow this. I hear a lot of bleeping in the background. What's going on, folks? I'm wondering when you're going to change your hat again. Sorry.
37:24
There we go. Big red. All right. Let me get this panel back under order. When you're thinking about doing a CVD, what are some things you might want to prepare for?
37:44
What can you do to help make that coordination very successful? And we'll start off with Lisa because she has a lot of ideas. So from your perspective, what will make you successful when you're getting ready to do this multi-party coordination?
38:02
So I guess it depends on how familiar you are with it. If I was a researcher and I wasn't too familiar, I would potentially utilize CERT or something like that, if I felt like it was going to cross over more than one company in the industry, just to have them help.
38:23
Because I think it could be overwhelming to figure out what to do if company A wants this date to get it fixed, and company B wants another date, and company C wants another date. So I think that's one avenue a researcher could use.
38:43
But I think the idea is to figure out, when you start, what are your rules? What's your embargo date? How many days do you actually want to give the company or vendor to fix the issue? Typically it's 90 days. I would prefer that as a company.
39:03
I think it's respectful to do that, especially when things get more difficult. So think about your rules and what you want to do and how you want to approach it. But I'll pause there because I know we're running out of time to get other people in. Anders, what are your thoughts about how you could make CVD successful?
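[Editor's note: the embargo negotiation Lisa describes, where companies A, B, and C each want a different fix date under a 90-day default, can be sketched as a small scheduling helper. This is a hypothetical illustration, not any coordinator's actual tooling; the function name and the "latest request, capped at the hard limit" policy are assumptions for the example.]

```python
from datetime import date, timedelta

# Common researcher default mentioned in the discussion above.
HARD_LIMIT_DAYS = 90

def disclosure_date(report_date, vendor_windows):
    """Pick one public disclosure date for a multi-party CVD.

    vendor_windows maps vendor name -> days that vendor requested
    to fix the issue. Policy (an assumption for this sketch): honor
    the latest vendor request, but never exceed the researcher's
    hard embargo limit.
    """
    hard_limit = report_date + timedelta(days=HARD_LIMIT_DAYS)
    requested = [report_date + timedelta(days=days)
                 for days in vendor_windows.values()]
    return min(max(requested), hard_limit)

reported = date(2020, 8, 1)
vendors = {"Company A": 45, "Company B": 60, "Company C": 120}
# Company C asked for 120 days, so the date is capped at the 90-day limit.
print(disclosure_date(reported, vendors))
```

One policy knob worth noting: whether the cap is negotiable (some programs extend embargoes for hard-to-patch issues) is exactly the judgment call the panel says a researcher should decide up front.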
39:23
Three things. Use it as a tool not only for vendors but also for hackers. Listen to what vendors are saying or what hackers are saying and try to make the best out of it.
39:45
And be aware that there's often a lot of complexities involved with it. Be patient. Omar, what are your thoughts on how you could make CVD successful? I think that I'm going to capitalize on something that Daniel mentioned in a previous call.
40:06
And in some cases, when you provide data to a vendor or whoever you're coordinating with, you first have to make it easy to understand. But at the same time, it's about getting the right people in the right place at the right time
40:23
to make sure that you understand the technical implications of a given vulnerability, instead of running around. Because even if you have the 90 days or 60 days or 100 days, getting that streamlined, and modernizing the way that we actually exchange information among all the affected parties, that's number one.
40:43
And Art Manion, who is actually a good friend of mine, leads the CVD team. He's the first one who will tell you we cannot scale. This is a big ecosystem. Just having a Rolodex of all the vendors that actually use a component is foolish.
41:01
We will never, ever, ever be able to have a complete list. But what we have to think about is what Lisa mentioned: in some cases, we are working with competitors even more than within our own companies. For example, I work with Lisa a lot. I work with Juniper even more, in some cases more than I work with Cisco,
41:21
whenever it comes to fixing a vulnerability, let's say in BGP or OSPF or whatever the case might be. So that type of thinking, oh, I'm not going to talk to these competitors, that's a fallacy from 20 years ago. And then the last one is the reality that, at the end of the day, as I tell my guys at Cisco,
41:44
whenever I push a button and publish a CVE out there, the first ones actually reading that CVE are the bad guys. And probably at the same time that you're committing open source code, that's going to be the case too. So what we have to do is think about how we can exchange information with the consumer,
42:06
and with downstream and upstream providers, in a more modern way. I know that sounds corny, but can we actually make it machine readable? We're trying to do that. So in the case that we actually have that Rolodex of people and everything else, we get some tooling,
42:24
and that's what Art and the folks at CERT are also doing. We're creating tools that allow us to do that. Even two years ago that didn't exist; it wasn't even a thought. So we have to move faster. That's the number-one thing that I want to say.
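[Editor's note: the machine-readable exchange Omar describes can be illustrated with a minimal JSON advisory. The field layout below is loosely modeled on the OASIS CSAF 2.0 advisory format; the advisory contents, CVE number, and product names are invented for illustration.]

```python
import json

# A toy advisory, loosely CSAF-shaped: document metadata plus a list
# of vulnerabilities, each naming the affected products. All values
# here are fabricated examples.
advisory_json = """
{
  "document": {
    "title": "Example multi-vendor routing advisory",
    "tracking": {"id": "EXAMPLE-2020-0001"}
  },
  "vulnerabilities": [
    {
      "cve": "CVE-2020-00000",
      "product_status": {
        "known_affected": ["vendor-a/routerd-1.2", "vendor-b/bgpd-3.4"]
      }
    }
  ]
}
"""

advisory = json.loads(advisory_json)

# A downstream consumer can triage without a human reading prose:
for vuln in advisory["vulnerabilities"]:
    affected = vuln["product_status"]["known_affected"]
    print(vuln["cve"], "affects", len(affected), "products:",
          ", ".join(affected))
```

The point of the sketch is the workflow, not the schema details: once advisories are structured data rather than PDFs, the "Rolodex" lookup of who consumes an affected component can be automated.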
42:41
So as I close, I'll say Art Manion has the best facial hair in the industry. I love Art. I want to thank our panel here. This was a great hour together. I really appreciate your time and expertise. We're going to be hanging out for a little bit on this awesome Discord channel that my kids made fun of me all week about, because I've had it up.
43:05
Thank you, everybody, for coming, and thank you for your attention here, panelists. And thank you to the audience and to DEF CON for having us here at the IoT Village. Enjoy your day and enjoy the rest of the con. We've got some great stuff lined up for the rest of the weekend. CVD! All the way! Oh yeah! We're out!