
ETHICS VILLAGE - Responsible Disclosure Panel


Formal Metadata

Title
ETHICS VILLAGE - Responsible Disclosure Panel
Number of Parts
322
Author
Et al.
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
In today's climate of data breaches and information leaks, how do we in the infosec community disclose the vulnerabilities we discover responsibly? Who are we responsible to? Can we set a standard practice that is ethical, fair and effective? These and other questions will be discussed by some familiar faces on our Responsible Disclosure Panel.
Transcript: English (auto-generated)
important. They're important. I'm Katie Moussouris. If you want to know who I am, I don't know. I'll tell you later. Is this on? Hello. I'm in the room. Chris Wysopal, CTO and founder of
Veracode. I've been doing disclosure stuff since the L0pht days in the 90s. Was a buddy of RFPolicy 1.0. And Render Man, I hack stuff, lately sex toys, but also recently decided to not disclose to certain agencies and countries and such. So. Alright, I
accidentally ceded my time because they were like serious introductions. So here we go being super serious as I always am. Um, Katie Moussouris, founder and CEO of Luta Security. Um, I launched Hack the Pentagon, made it so that, uh,
our people were no longer thrown in jail for that and instead paid some money. I launched Microsoft's bug bounty programs, wrote Microsoft's Vulnerability Coordination Policy, created Microsoft Vulnerability Research, created Symantec Vulnerability Research, wrote Symantec's Vulnerability Disclosure Policy, co-author, co-editor of the ISO Standards for Vulnerability Disclosure, Vulnerability Handling Processes, former pen
tester at @stake. Okay. There will be a test later. Alright, so we're going to take questions. Who was first at Congress, Katie or Weld Pond? Weld, for sure. 1998? Yeah, he was 1998. I was 2018. It only took 20 years for them to invite me. Oh,
okay. We're making some progress. It's progress. Okay, so. It wasn't the cat ears. They're vestigial. So, um, in deference to your employer, would you like me to read one of the questions that were approved by your employer or should I just shoot from the
hip? Yeah, okay, so. It's only us and Bruce and a camera. So, um, so I'm Big Easy. A couple of these folks up here are my friends. I hang out with them. You know, we see
each other at conferences and things like that. To start off the Ethics Village, I wanted to have a conversation not necessarily about responsible disclosure. I want to have a conversation about the ethics of responsible disclosure. So, when we're talking about
this, we're not necessarily talking about responsible disclosure. We're talking about whether or not it is still ethical to responsibly disclose. That is the question that we would like to explore. And if anybody in the panel would like to step down now, get the fuck out. Can you define what you mean by responsible disclosure? Is that, like,
tell the vendor and allow them to fix it before you tell the world? Is that your simple definition? That's right. I think that it is no longer ethical to maintain this practice as a security researcher. I'm an ICS SCADA security researcher. I work at the University of Illinois. I've been there for seven years; before that, I did
financial. I've been a part of responsible disclosure and a lot of things ICS SCADA. And I'm not speaking for my employer or anybody else. So, I'm going to fuck off with virus. But the reason I wanted to have a panel was to just talk about this issue
of, you know, Microsoft has a bug bounty program. I don't work there. I am not going to point this finger at anybody who may have used to work at Microsoft. Okay? But I'm saying, like, as an example, Cisco might have a vulnerability disclosure program or big companies have. Well, come on. They have these programs. And I can remember back,
going back to the 90s, I started one of the first Internet service providers in the state of Kentucky. And back then, when people were hung up on the modems, we used to just knock them off with the blue screen of death. And I remember when responsible disclosure
first came up, and we were hoping to try and get somebody like Rainforest Puppy to come for this. Maybe next Ethics Village, we can get him out to discuss this. But I really started to think about this. And as I invited panelists, and then I saw Render Man last night. And luckily, we were both still sober enough to remember that this panel existed. I had to shake him a little bit. You really should be here at 3 o'clock. Because I
caught your take on Twitter a few weeks ago about responsible disclosure. I really think that the question really is, as security researchers, should we still responsibly disclose? Okay. Katie looks like she's really anxious to talk. So go ahead. I really
am. Responsible. You keep using that word. I do not think it means what you think it means. No, seriously, uh, we, we stopped using it in the ISO standard. We stopped using it in Microsoft, and that is thanks to a conversation with Jake Kouns, who is formerly Open Security Foundation. Now he runs, uh, Risk Based Security. But Jake Kouns came up to me
after I was on a responsible disclosure panel at RSA in 2010. And he was like, can we stop using that word, it has moral judgment, blah, blah, blah. Okay, so, with that word in mind, I know what you mean. It's that whole, uh, you know, it's that whole thing of letting the vendor, uh, letting the vendor take their time to patch before you go public
with it. Having had to write Microsoft's policy, we wrote it with three roles in mind: finder of the vulnerability, coordinator of the vulnerability in multi-party instances, like Meltdown/Spectre type of things, and vendor, receiving the vulnerability report. It was very important to me as the creator of Microsoft Vulnerability Research that there was no language in there that said we wouldn't release details of the security
vulnerability before a patch was ready because it was important to, to be able to pull the trigger, pull the ripcord, if there was evidence of attacks and whatnot. That is a nuanced difference between Microsoft's policy and Google's policy, which has a strict deadline of disclosure. So there's ways that you can deal with this, and I absolutely
made sure that Microsoft never used that word while I was there. Alright. Okay. So, thank you, Katie. Okay. So, you got anything to say, Bruce? I'm glad that Katie was able to strike that for Microsoft. I mean, if we're going to use the word, the responsibility lies with the people that wrote the code and the vendor, and
that's the way I've always held it. You're responsible for the shit that you write, and if it's bad, it's your problem. I'm sorry. Bruce, can you slow down your words just a little bit? We've got some lag. Bruce, can you slow down your words? You don't talk real slow there. Oh, that's perfect. That's perfect. Talk like you're from South Carolina, or from
the South. I'm from New Orleans, and we have a drawl. You need a drawl because we're lagging a little bit. Alright. Well, I will keep it brief. I think that the responsibility lies with the vendor, and ask if this is a party. You're responsible for writing the code. You're responsible
for maintaining your code, and at the end of the day, you bear that burden. So, I'm trying to push the ball forward, striking it from the disclosure debate, and trying to enforce some accountability on the vendors to do the right thing and push the ball forward when it comes to actually running stuff that doesn't suck.
Okay. So, what about the right side over there? Do you have anything to add? I think that the big thing right now is, like, we all consume a lot of these services and such that we're finding vulnerabilities in, and we have a stake in this. Previously, I found a bunch of stuff with air traffic control systems, and it's really weird
giving a talk about that when you have to fly home afterwards. So, my ass is in the game. I have a stake in this, but I think lately, and where my position comes from, is that you could be helping various authorities or regimes or whatever, but you're now
able to facilitate things that you might have other ethical issues with, and it becomes this thing of, yeah, you could be saving life and limb, but you could also be making something more secure that could be then turned against people. So, it's tricky. Okay. Well, so, when I look at it, I think there's a short-term benefit to disclosing
issues and there's a long-term benefit. You know, the short-term benefit is that one little bug gets fixed, and, you know, there's certainly some benefit to that. The product gets a little bit better, but the long-term benefit is that companies will start to realize
if they're going to get all these point whack-a-mole vulnerabilities that they're going to have to deal with, they're going to have to come up with a better process for dealing with them, and then they're going to realize it's even cheaper to use the
techniques that researchers are using themselves and fix the things. So, like with Parisa's talk at the keynote for Black Hat, she talked about the 90-day, why they had that 90-day drop-dead date where they'll disclose, and the reasoning behind that was it just
would force vendors to get better at responding, right? And what's that? Well, a lot of companies have lawyers, too. I mean, yeah, individuals, yes, but the compliance rate of fixing within
90 days is now at 98%. So, over the time that they've done that, they've gotten a set of vendors to be able to respond within 90 days, and the fact that Google did that means that you could do the same thing, too, and they'd probably be able to respond within 90 days. So, I look at this as a long-term game to doing this, too, of getting
vendors to have an expectation of doing the right thing. Well, this now leads into the second part of my question. Would you like a chance to hurt the first one before I put in the second part? Alright, so I say this whole thing is irrelevant because this assumes two factors. One, it assumes that the vendor is ethical. Two, it assumes that
it is always legal to act subjectively ethically as the finder, and I can sit here and tell you all kinds of scenarios where that is horseshit. I work in M&A. If we don't buy the company, the bugs get burned. I end up sitting on a bunch of 0-day. If we don't do that, everybody goes to jail for insider trading. I've also done pen tests where
we found a bug in a vendor's product, and we turn the bugs over, and then that client chooses to sell the bugs, and a non-zero number of those bugs have actually killed human beings. Like, these are things I've seen. So, the concept of debating a generic ethical approach to discovery and/or disclosure, like doing the right thing is always legal,
is fundamentally flawed. So, and then... That is absolutely correct. Thanks. But the second part of my question is, what I feel and why I am trying to forward the idea that it's no longer ethical to use the disclosure process is what vendors are doing
with automatic updates. So, I have no choice as a user but to accept the automatic update and have no idea what other feature changes a vendor are pushing on me. So, you have
an important security update. Now, all of a sudden, you've lost a lot of features in the software that you originally bought, and then you got a lot of features you didn't want, like, oh, I'm going to start reading your email and targeting advertisement based on the content of your email. Oh, and you just signed off on that in the EULA, and vendors are using these, this automatic update to push features out into users.
Or rescind features. Hmm? Or rescind features. Or rescind features, exactly. And then I think that's why, as researchers... what does a researcher do when I submit that the process is fundamentally dysfunctional?
But that's the process of updates. Yeah, but the update is the symptom. The update is one of the symptoms. So, the vendor is like, oh, I can use this mechanism to change the product to suit my needs economically, but not necessarily, oh, this is a security
update. You always see these things. You have a security update, and then all of a sudden, things change fundamentally in the software that you use to where it's not the same anymore. I think the fact is that they're bundling these things, that in order to get the security update, you have to accept this other stuff that you may not want. Exactly. I don't understand how that's the
discloser's problem. Yeah. Because you have no control over that. Some companies are going to do it, and some are not. What are you going to do? You're going to research how they're bundling anti-features with fixes before you disclose? I mean, that's making it too hard for the discloser to even make a decision. But you'd also decide that if the company's a dick about it and does stuff like that, you can say, I don't want to deal with
you. I mean, the only way I could see what you're talking about being relevant is in an extremely activist perspective where you're saying, all right, I'm just going to drop 0-day time after time on this vendor without telling anybody to force them to release updates that only fix the security flaw and don't package other shit, which, okay, I mean, I like hostile play. I'm cool with that.
Well, that's the way we did it in the 90s. This is an ethics talk. No, that's exactly why this is an ethics talk because I want to know, what do we think? Everybody's got cards. What do you think? Is what vendors are doing with the process ethical or unethical?
So you're advocating splitting up the security updates from all of their updates, right? That's kind of where you're going? Yeah, I think that's one thing. I don't want to see feature changes in security updates. That sounds great, but I don't think
this has fucking all to do with it. Yeah, so something that actually freaked me out when I was working at Microsoft, a CISO of a major, not utility company, but let's just say a very important company, whatever, they basically said, look, we'll do automatic updates for our corporate IT for Windows, but there's an XP controller on a smelting
device that is basically the investment has to last us for 50 years, so we are never patching that in any way, shape, or form. They air gapped it. Good luck with that. But essentially, the customer was like, we don't apply updates in certain scenarios.
And then another weird thing that I learned was that some customers basically wanted to, instead of update Tuesday, do updates quarterly. They were asking because it cost them to do the testing of the fixes, so it's weird what people will actually accept in the end use scenarios, and that was something that even as a vendor, we were surprised by
our customers. We also were surprised that they were not willing to give up XP. We basically kept trying to kick them off and make support that much more expensive, and they just were like, oh, thanks for telling us that instead of $25 million extended support contract, it will be $50 million next year. Thanks for telling us. We'll just shred it into our budget, like super weird stuff like that. So anyway, this is all beyond the
control of the discloser, right, of the person who found it. And can we bring open source into this? Are you going to make open source teams do separate updates too? They have been working on an update, and they get a security fix in there, and you're going to make them do extra work to come out with a separate update?
I don't have any control. I'm just a moderator of a panel. No, I'm just saying this is more work for the development organization if they're working towards a new release, it's easier to stream in the security fixes. So there's going to be a higher cost for security fixes if it needs to be separate. The ethical debate is a little different on open source, right? Because if you find
a bug in something open source, what's stopping you from also writing the patch? Not every vulnerability researcher is a good software engineer. It doesn't have to be a good patch, but yeah. It'll be a shitty patch. It'll be a shitty patch. In the guise of anything is better than nothing, if you find the bug, by definition,
if it's open source, there's nothing stopping you from fixing the bug. So can I ask a question to the panel, since there's people sitting around who've been doing this a long-ass time, and I'm not trying to call anyone old. What was the first real auto update that was released?
To the best of my recollection, I did work for Symantec under contract to evaluate the auto-updating of virus definitions, I think in 2000. When did auto-updating come into vogue, and has there ever been a separation between functionality and security updates in auto-updating?
Windows 98 is the earliest one I can remember. Yeah, that's pretty close, I think. Well, and so Windows or actually Microsoft had several different updating mechanisms. It's not like each individual team there was at war with each other and just wrote their own shit or anything. Nothing like that ever occurred. But anyway, there was at some point, there was some point, like I think 16 different
updaters from different teams, so weird stuff was happening, and then they finally unified. Yeah, exactly, right? And then they finally unified under one system, but even that was, you know, flawed. Okay, so I also want this to be interactive because you guys showed up here for some reason.
And we have some pretty badass panelists. Does anybody have any questions? About the topic. Well, can, this is going to be, come up here.
I don't need a microphone. Well, for the video. Thanks. So you were talking about the, you know, responsible or we want to call it coordinated disclosure,
whatever we want to call it. What's the alternative if we're saying coordinated responsible disclosure is unethical? Are we talking full disclosure or what sort of alternatives would you or the panel recommend or suggest? I recommend we acknowledge arms dealing as arms dealing and leave it at that.
What is the virus disclosure policy? I'm an arms dealer, fuck off. That's my disclosure policy. I mean, unarmed society is a polite society and I have yet to hear a policy yet that makes the society unarmed.
Okay, so now this Canadian needs to speak up. So for me, it's a case of, okay, at least, you know, if I find something, I can't leave it alone. I have to at least tell somebody, you know, do best efforts.
Sometimes that takes an ungodly amount of effort to even find out who the hell to talk to. Like, you know, finding a vulnerability in air traffic control, there's, you know, it's a global standard, but, you know, ICAO doesn't necessarily have teeth to enforce anything and it's got to be adopted by everybody.
I was like, where do you even start with something like that? So I was like, okay, I'm giving a talk at DEFCON where I'm saying, here's my evidence. I can't prove that they've mitigated any of these threats. Please prove me wrong. Six years later, DHS actually proved I wasn't crazy, which is an odd position to be in. But at the same time, I'm looking at it now and I'm thinking, I potentially helped them secure things that could be now used against us.
Because yes, my ass is on a plane flying home. Those similar kinds of systems, though, are used to drop bombs on people. Talk about a quandary. So it's one of those, you also have to look at what the system is.
If it's some, you know, little IoT gadget or something like that, you know, maybe a little bit of PII or something like that. That's one thing. But as you said, you know, everything's a dual use. So if there's the potential that the technology or the owner of the technology use it negatively, I'm like going to reconsider.
So it's, yeah, drink. So, you know, I think I want to use the word responsible on the vendor side because I just don't think it's talked about enough in that context. And, you know, as an example, vendors have been making it more onerous to try to report a bug.
You know, some of them make you go through the bug bounty program if they have one with all these terms and conditions. There was a good blog post by someone from Project Zero about trying to report a bug to Samsung. And the number of different click-throughs they were supposed to go through. In order to report a bug, you had to agree not to ever disclose the bug until they fixed it.
Like that's ridiculous, right? So they decided we can't sign off on that because we have a 90-day policy, that's our policy, 90-day disclosure. So how can I submit a bug saying I won't do that? So eventually they routed around their whole form-based system and found someone to send an email to.
But that's not always easy when it's a foreign company, right? Like finding the right person to speak to when it's a Korean company is not that easy. So I think vendors are making it increasingly difficult for people to disclose to them. They're starting to game the system more.
Or it's just they don't realize they need to take reports. Like I've been doing, like I said, with a bunch of sex toy vendors that have basically failed to realize, hey, we're hardware manufacturers making a manually operated device. Now they've added connectivity. They're a software company. They don't think of themselves that way.
So trying to wake them up to this process because, like we've all found at one point or another, there's, who the hell do you email? There are also sex toy vendors who are selling people's data in volume. I mean, you know, everything's, you know, information is valuable, but. So the idea, the question was, right, what's the alternative to coordinated vulnerability disclosure?
So there isn't any one answer to it. I mean, I think the thing is flexibility in the process and being able to kind of gauge what your principles are as the finder, what your principles are ideally as a responsible vendor.
Because I agree with Weld here. The responsibility for dealing with these vulnerabilities, short term and long term, is absolutely on the vendor. The vendor has had a huge advantage legally with lobbyists and the Computer Fraud and Abuse Act and, before the exemptions, the Digital Millennium Copyright Act. They had a huge advantage in being able to threaten and intimidate and silence researchers.
And one of the main things when I first was asked to be involved with the ISO standards, it started actually trying to define the rules of researchers. And I was like, excuse me, I don't know any hackers who strive for ISO compliance. So can we please make this about the vendors and what they should be doing and stuff.
So that's why the ISO standards are like that. I do think that deadlines are actually important. The original, you know, all original vulnerability disclosure policies set expectations for deadlines. That is the norm. And what I've seen in, especially in media coverage, people freaking out about people disclosing the presence of a bug,
not even the full technical details and confusing that and then, again, blaming the researcher. It's like, kill the messenger a little harder, why don't you? I think part of my whole life's mission, I mean, this is my 19th year coming to this town for this purpose.
Yes, I am old. This is what happens when your hair goes gray. Where do you plug an AC charger in on Bruce? Right, but look, the point here is that is the alternative something else? Is it always better to do full disclosure without waiting? Is it always better to wait forever? I don't think either of those is the answer. I think reasonable deadlines are important.
Setting expectations is important. And then no matter what you do, reasonable people are going to disagree about it. I'm actually super curious. I mean, like, so many bugs I've been a part of, where the parent organization that owns the company... Alcohol abuse.
The parent organization that owns the company that I find the bug in decides, you know what, this bug adds too much risk, we're going to sell the company. Okay, well, I'm under NDA because it's a pen test, so I can't tell anybody. But that company doesn't exist anymore for me to pressure to fix it. What happens? Or when I'm on an M&A gig and we don't buy the company, and now we're sitting on a bunch of silent 0-day,
some of which is in very large companies like the kind that get targeted by spy agencies. What happens to those bugs? Or bugs where you legit disclose to a vendor, the vendor is forthright and says that they're going to take care of it, and they issue a fake patch and then sell their own bug to an intelligence organization. Also a scenario I've been through.
What is the response to these gray spaces? I'm curious. There's always a whistleblower. It's a gear drop. So the response to these gray areas is I have to put myself personally at risk simply because I know something? I owe the world this? No, I'm saying it's an option. It's still up to you to decide what you do.
Fair enough. To drop details to a reporter or some other interested party. Yes, the Snowden option is an option. I will never take it, but it exists. There's a question in the audience. This is a question.
It raises a question. I was recently contacted by a security researcher who claimed to have found a flaw in our product that led to, as far as they were concerned, a full compromise. They would not disclose the details without a considerable fee that we would have to pay them in Bitcoin.
Obviously, this represents… That's not in the ISO standard. Hold it. Well, let me recap the question for the audience. He said that there was a security researcher who contacted him with a bug,
a supposed bug that he never disclosed any details of, and they asked for Bitcoin. Lots of Bitcoin. Lots of Bitcoin. Even though it's gone down a little bit today, it still was a lot probably. Oh, yeah. This was a cool building. He was trying to buy a house, and this particular researcher,
and I use the word very questionably here, trying to be respectful of the profession, but I don't think he is. He has a history of doing this. He would have gone so far as to use other people's research in order to try to get the magic game. Did he say he was going to drop it publicly if you didn't pay him?
Yes, he did. Okay. Yeah, so, I mean, that's just… That's a very good one. Yeah. Can I just say that's not a researcher. That's a criminal. Well, it's not. Obviously, that is literal extortion. Like, that is the definition of extortion. So I don't think that, I mean, is extortion unethical?
I think so, right? Yeah, I think we've got that one. Nice place here. Wouldn't it be bad if something burned down? I mean, but you bring up a point that in the increase of bug bounties, a lot of researchers are asking, and some of them are doing straight-up extortion like that, and some of them are simply asking if there is a bug bounty present,
and they're not accompanying that with a threat, and what I deal with a lot is organizations who confuse those two. They're like, how dare this researcher ask if we have a bug bounty program? And I'm like, well, they did this work, and they're just asking, so do you or don't you, you know, type of thing. Did they threaten to do anything with the information? No? Well, then they're not threatening you.
That's not extortion. I've seen even where language and culture issues get in the way. There was an interesting graphic I saw the other day of where the bug bounties were coming from and then who was fulfilling them, and there's a lot of overseas bug hunters, and I've had numerous exchanges where it was clear we were having a language breakdown,
and a person could read it as extortion, or you could read it as do you have a bug bounty that I can access, and so it's a pretty nuanced line there, I think. Yeah, I had to testify before Congress about this a little bit, you know, so just a little. So the Uber data breach, yeah, just a little testimony.
A little bit cat ears. So the whole Uber data breach, 50 million records downloaded by a Florida man. Why is it always a Florida man? Exactly. But what happened was, you know, Uber, this guy emailed them.
He didn't actually know about the bug bounty program. They referred him to the bug bounty program saying, oh, we have one over here, friendly researcher who's telling us about a flaw, and, you know, the maximum payment was $10,000 for the bug bounty. Literally the emails that were released, he's like, yeah, I was thinking more six figures. So he successfully extorted them for ten times the amount of their regular bug bounty,
and Uber during that hearing actually said, yes, that was an extortion payment, and we should not have laundered it through our bug bounty program. So absolutely, like, there are ethical lines that were breached in that, and I think Uber took responsibility for it, and I think that was the right thing for them to do.
I have a question, and I'm going to change the subject just a little bit and maybe turn this around. I had a career at a power utility before I was at the University of Illinois, and I had the, you know, a really cool job where I hung out with hackers and I hacked the shit out of my SCADA system. So I got to disclose a lot of 0-day, some of which is still out there a decade later.
Now, the question, because I heard the word criminal earlier, when a vendor comes in and threatens the career of everybody that is working at a place, says, oh, if you want to be a consultant when you retire,
you should really forget about this and just let this go. Should that be criminal? And is that illegal? I don't know. Coercion from people in authority to suppress bugs.
If we're going to talk about criminalization in policy versus private, how about we talk about why the hell is it a standard practice that the Oval Office decides they get to move into the private sector afterwards? We're going to talk about criminal. We'll talk about it at that level. Yeah, well, I mean, it's, you know, personally, as a discloser of many vulnerabilities that I know are still out there,
you know, from an ethical perspective, how long do you wait when civilization teeters in the balance? I made this panel to ask this question.
So what can you live with? So I think part of it… 15 years? Well, I'll have a much better talk for ShmooCon, Bruce. So I disclosed a bug to Microsoft. This was around like 2003 or something. And they took a whole year to fix it because it was in their file auditing system,
which helps them get C2 compliance. So the existence of the bug made everyone who had to use C2-compliant Windows non-compliant, which is why C2 compliance is ridiculous. But it was a bug where if you used hard linking in NTFS, you bypassed the auditing system.
Seemed to me like an easy thing to fix. They said we had to completely rewrite the auditing system to have like new flags on every single file. It wasn't something that we could just easily update. We'll do it with NT service pack 3 or 4, I don't know, or Windows 2000 service pack 3 or 4.
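[For readers unfamiliar with the mechanism being described, here is a minimal sketch, assuming a Windows machine, an NTFS volume, and Python 3, with hypothetical file paths. It only shows that a hard link creates a second name for the same file data; the claim that the pre-fix auditing, keyed on the original path, would not log access made through the new name is the panelist's description and is not verified here.]

# Minimal sketch of NTFS hard linking (assumptions: Windows, same NTFS volume, Python 3).
# The paths are hypothetical; os.link() creates a second directory entry for the same file.
import os

original = r"C:\audited\secret.txt"    # hypothetical file that an audit rule watches by path
alias = r"C:\scratch\other-name.txt"   # hypothetical second name on the same volume

os.link(original, alias)               # both names now refer to the same underlying file data
with open(alias, "rb") as f:           # read via the alias rather than the audited path
    data = f.read()
print(len(data))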
And they gave a good explanation to me. They gave me a good technical explanation. They gave me status along the way every couple months that they were actually working on the problem. And they said there was just no way to fix it faster. And the fact that they gave me confidence that they were actually acting in good faith. Some bugs just do take that long. And so I waited, right?
But if they were completely silent with me, they didn't acknowledge I gave them the bug, or they were completely silent with me, I would have no idea that they're ever going to fix it. So why not disclose? So I think that's part of it is that vendor finder communication can give you confidence to wait. That is in the ISO standard.
Yeah. It's all about good faith. Like, you know, did you try your best to disclose the vulnerability? You know, did you pound on enough doors, ring enough phones? Sometimes, yeah, you hit a wall, and it's really difficult. And you need to just say like, there's a problem here that needs to get fixed.
If the only way I need to, you know, that I can do this is to, you know, send up a flare and set something on fire here. It may need to be, may be necessary. But again, it's, if you've got a case where, yeah, they seem to be a little slow, but because they're having to gut half the system and start fresh, you know, you have to take that into consideration.
I think it'd be unethical on the side of the researcher to say, nope, nope, only 90 days. Triple shift, you know, like, no, it doesn't work that way. Why would the researcher have to, in your words, try their best to resolve these things? It's not, the onus should not be on the researcher. No, but before they, you know, drop 0-day public, to at least have given it a fair shot
to have tried to report it and do the... If a fair shot even exists. Yeah, that's right. So many organizations actually don't have any clear way to report them. You were giving the, you know, example even with ones that have bug bounty programs.
They don't have a clear way to report. So, yeah, I mean, it shouldn't be on the researcher to try and find a contact via Twitter. It's just not a function. It's not. I mean, I'll put myself on the spot. I'm sitting on a bug right now in the largest healthcare management software in the world, like, I have actual confirmation that 93.1% of the entire agency, the entire industry uses it.
And it was a pen test for a company that owned a company that owned a company and about halfway up the chain, like day two into my job, they said, you know what, this is actually a massive, massive risk. We're just going to sell the company so we don't have to deal with this bug. So they did. They never told the vendor. I can't legally tell anybody. I'm just sitting on this bug.
I mean, this is like money. This is like millions, plural, of dollars, right? There is no... Like, what is my ethical response? What is the threat to life and limb? Do you have anything, Bruce, on this? I was just going to ask if... Is it okay to drop 0-day? I mean, to Render Man's statement before, like, why is it on the researcher?
Like, is it okay just to drop 0-day? I think there's some nuanced answers, but in general, can I just blanket drop it and feel okay about myself? Yes, and it was ruled that code is speech. So at least in the United States, they can't go after you for that. You know, it's, I think, where some problems may arise are things where the laws,
especially in the United States, will allow companies to go after the researchers for doing so regardless, right? And even the legal threat and the threats to the employer, that was what you brought up as well. But, I mean, who in this room has dropped 0-day?
Put your hand up. Yeah, my hand's up. Me and Katie did it together. Yeah, we freaking did it, right? That didn't come out right. We dropped 0-day. We dropped 0-day. And it's like that was, see, the thing was, it was so good. No, what happened was we, this was a carryover from @stake days.
We had an advisory. We were trying to contact the vendor for, I think, four months, no response. We called them on the phone. I hate phones. And like, you know, we had email threads for four months. So transition, we got bought by Symantec and everything like that. Eventually, we're like, we're going to publish a non-detailed version of this just to warn users of the threat.
Because especially as a, you know, security defense company, we had the right to protect our customers from a vulnerability we knew about, right? So we dropped 0-day. And oh my God, the angry mails. And I think, I wish I had saved that voicemail now because there were swears I did not know, and that is unique for me.
Was that Arabic? Well, I think part of it was this is the first time this company had to deal with this type of issue, which is, there's always a first time for every company. They called us irresponsible. You always remember your first time. That was true. There's always a first time.
So I think that's, if you're the finder and dealing with a company for the first time, it's going to be a lot more work. Because they're not going to have any way to even communicate with you, really. And think about that. At the time, Symantec was the largest security software vendor in the world. We were threatened by the vendor for dropping a non-detailed advisory to let people
know that something they had in their possession was insecure and that the vendor hadn't responded. So think about how hard it is for an individual researcher to deal with this. It is not, the onus is not on the researcher. This brings up a question that came from the back. Medical devices. So you have a 0-day on a medical device like a pacemaker or an insulin pump or something like that.
And you try to responsibly disclose this vulnerability to the maker and the manufacturer. So where does the line get drawn if nothing is done about this? I think you can send that to the FDA and they're going to take care of it.
Not true from experience. Really? Yes. What did you say? Not true? Somebody who actually has relevant current experience in that exact area needs to talk. So my name is Steve Christey Coley. I work for the MITRE Corporation.
We provide subject matter expertise to the FDA in exactly this area. The mic doesn't reach back there. It's tied up. Come on up. Really worth it. I can do it. This is what I want for the next 15 minutes. Come on up here so they can hear you.
So my name is Steve Christey Coley. I work at MITRE, supporting FDA, providing subject matter expertise in the area of medical devices when they receive vulnerabilities. And so FDA has regulatory authority over medical devices when it comes to safety.
And of course, cybersecurity can have an impact on safety. I've seen them wield their influence, which is much easier than in apparently the unregulated world of software. So anybody who's had any difficulties, certainly come to me or you can reach out to the
FDA as well. They are literally here. You can go to the biohacking village and meet them. Yes, there has been some critique of their practices in the past, but they've been doing
as much as possible to make it better. I think that's an important point, is that everybody was really, really terrible or so terrible they didn't even exist on the scale of terrible to not terrible. And I think that we should be encouraging, especially the regulatory authorities who
regulate vendors when they're making progress. Agreed, not a good process before, not good outcomes, but we should be encouraging those regulatory authorities to come down on the vendors who they regulate. When they want to. Well, we just encourage them to keep going. Push, push, push. Yeah, I mean, it's a case of if you are making something that has the ability to affect
life and limb, that's going to have to have some responsibility attached to it. Okay. So, I talked to some Congress people last night and they asked how can we more, you know, have greater engagement with the researchers, how can we, you know, help them and everything. And I said go after the vendors, Congress critters. This is what you should be doing.
Write some more laws that actually apply to the vendors and regulate the vendors and reform the Computer Fraud and Abuse Act. Well, and incentivize the vendors, too. It can't all be sticks. There's got to be carrots there. Yeah, I mean, can we just get a no-fault disclosure mechanism? I mean, because one of the… No-fault disclosure mechanism.
That's a great idea. I mean, like, I'm a sitting example of, like, yeah, sure, all the things you guys said. Also, none of those will help me right now. Right, right. So, like, I'm just putting myself on the spotlight. I think maybe a vulnerability researcher should be treated like a common carrier, like in telephone systems, you know, agnostic.
This is not anything other than what has been discovered, and they shouldn't be faulted for anything that they've discovered because they're not necessarily the root cause of that. It was some mistake made at the vendor. And you were asking me about a question. It was pulling on my pants, so I probably shouldn't get a question. I was just trying to take them off.
Just one. No, I think some people on the panel will know this is probably a question I would have been uncomfortable asking maybe six or seven months ago, so I apologize. It might be controversial. But I think from a regulatory perspective, when something affects life and limb, I'm curious what role the government should play, not only just for vendors but also for researchers, because not all speech is protected, right?
So if you're disclosing something that you found that could cost someone their lives or cause them serious personal damage, not companies but individuals, like, what role does the government play in that? Well, one, I think you're assuming that a researcher is going to necessarily know all the uses of the code that they found the vulnerability in, right?
They are not necessarily going to know that something they found over here is actually also code reuse used in some kind of life and limb scenario. So I think there's a degree of having to just put it this way. The researcher already did a bunch of free labor, right? Having them understand all the use cases of the code,
I don't think that's in scope. And, frankly, I think what we need to think about is, like, what is the real threat to life and limb? Is it full disclosure or is it non-disclosure of discoverable bugs? And I think that's the biggest danger, honestly. It's really nuanced, right?
I think, like, we're making this black and white, but I think the question that I'm really interested in is, does the role of the government include any regulation around researchers and how they – I think there's a comparable for that, right? Because if you look at public safety, it's illegal to walk into a room and yell fire if there's no fire because you'd incite a panic. If there is a fire, you're not lying, so you're fine.
There's no rule that says you have to yell fire if you see fire, right? So just because we may subjectively decide that ethics exist on both sides, it doesn't mean that it's the job of policy to police both sides. Where I think government needs to step in more so is – so over at B-Sides, there was a talk at the underground track
about the federal FBI Cyber Ninja Program, basically providing access to FBI people and their technical operations so that before you pull the trigger on something and go all the way,
you let them know. They're actually working with the EFF. Kurt from the EFF was there with Russ from the FBI, where the analogy they drew was you find a big bag of drugs on the street and you're like, I should probably drop these off at the police station. Said no one black ever.