Biohacking Village - Biohacking Risks
Formal Metadata
Number of Parts: 374
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/49885 (DOI)
Transcript: English (auto-generated)
00:00
Hello, my name is Bill Doherty. I'm the CISO of Omada Health. Joining me today is Patrick Curry, our Senior Director of Compliance, and today we're going to talk about threat modeling in digital healthcare. Patrick and I are the co-authors of the includes no dirt threat model, which we'll be discussing today. If you'd like to follow along with us,
00:23
you can go to includesnodirt.com and download our white paper on that threat model. We've also got some specific exhibits that we've created just for this discussion. That's at includesnodirt.com slash defcon.pdf. And thank you for paying attention and watching our talk here today. Standard disclaimer, we have put together a discussion that is based
00:47
on some real world concepts, but this is not a real world situation. So any information we share today should not be construed as being related to the products or services
01:03
of our employer, Omada Health, or any of our partners or our customers. Now that's out of the way, we can dive into it. A little bit of background on us. Omada Health is a digital healthcare company. And what is digital healthcare really? It is the combination of technology and clinical expertise and humans to deliver better health outcomes. And we've been
01:28
doing this for about nine years now, I think. And we really focus on digital care made human. So it's not just the machine that you're interacting with. We do it a little bit
01:41
differently. What we do is we try to partner up devices, applications with remote monitoring and specialists who can provide assistance in our specific diseases to help improve
02:01
health outcomes. And we do that through behavior change, remote monitoring with digital devices, care delivery, lab diagnostics, medication tracking, and a whole bunch of backend systems that are necessities in healthcare like outreach to patients, enrollment and eligibility,
02:25
billing and reporting. That's all of the messy stuff behind the scenes. We have four programs: type 2 diabetes prevention, type 2 diabetes treatment, hypertension. And last
02:41
year we added behavioral health, so treating anxiety and depression. Recently we just bought a company called Physera that does digital physical therapy, so for musculoskeletal or pain treatment. So that's us. We're going to talk today a little bit about SAM. SAM is a
03:04
representative of a real participant in our program. She's not real, but SAM would be a participant who has type 2 diabetes and is using our program to try to better manage her disease state. And she does that through tracking her blood values. That information
03:23
gets shared with her coach. Her coach is then giving her advice on meals and exercise and potentially talking about her insulin levels, things like that. And ultimately what we're driving towards is behavior change. We're trying to give our participants lifestyle improvements
03:46
that will help them better manage their chronic diseases. And we're going to come back to SAM and how she's managing her diseases in a little bit in the context of a threat model. But this is what OMADA does. We do whole person health care. And we do this with connected devices
04:04
and lesson plans and coaching and all kinds of stuff. And we do a lot of it. We have, since our inception, served more than 350,000 participants. We have over a thousand satisfied customers. And one of the largest data sets in
04:25
behavioral health, as of last week, we had over 80 million weigh-ins from our digital connected scales. And our participants really seem to like our program. We have a 92% CSAT. So that's enough about OMADA. We shared that with you because we want you to understand
04:42
who we are and why we came to do this. So why should we do threat models? Patrick and I started this about two years ago. In health care, we are required to do annual risk assessments. The problem with that is nobody ever tells you how. And we've been
05:02
doing them for a couple of years. And we decided that we needed to up our game. And the reason we needed to up our game is because we were doing kind of a typical risk assessment process where we'd sit in a room and we'd just think about things that could go wrong and then we'd assess our risks. And the reality about all things in security and compliance is everybody
05:25
has got a plan until they step into the ring and the first punch comes. And then your plans go to hell. And we knew we had blind spots. And we wanted to get rid of those blind spots. I love this cartoon, by the way. On the left side, this is typically how we would deal with
05:41
things in health care. We're going to encrypt the laptop because HIPAA says that all the data has to be encrypted at rest. And then what would actually happen is somebody would force us to reveal our password anyway. Every time we give a talk on this, we update this slide.
06:00
And sadly, I'm never out of companies that have had major breaches in the last six months to update. But these are examples of really bad things that have happened. Health care is by far the number one most breached industry, but everybody gets breached. And the underlying factor for all of these companies is they all had really, really good smart security teams
06:25
that were working really hard, that had lots of controls and lots of vendors and lots of stuff in place to try to protect their systems, and yet they still had problems. And the reason they had problems is because they had blind spots. And so threat modeling
06:43
is a way to try to eliminate some of those blind spots. And the fundamental truism in our business is nobody ever says thank you for the work you did to prevent the disaster that never happened. So there's no A for effort here. But doing threat modeling and doing them
07:07
consistently will over time improve your security and your compliance and your privacy. And it is by far the right thing to do. So let's define it a little bit. And in order to really talk about this, we have to have a taxonomy. We have to all be using the same
07:23
language. Lots of people interchange the word threat with risk. We do that too, accidentally. But we had to come to a common language. And when Patrick and I were working on this model, we were using the same word to mean different things. So we eventually wrote it down. This is our taxonomy. First thing is a system. And a system is anything you want to model.
07:46
Lots of threat modeling focuses on applications and software. And that is certainly a system that can be modeled. But so can a business process or a network or a vendor. And the defining characteristic of it is we want to protect it from specific threats. We just did our annual
08:06
risk assessment and just completed it. And this time around, we modeled 26 business processes end to end. So systems typically have defined borders. You know what the entry point into the system is, you know what the exit is, and you can then model it for threats. Those borders
08:26
are sometimes called trust boundaries, which are areas where principals can interact. Sometimes they're called attack surfaces. The key point is understanding all of the areas
08:41
that an attack or risk or a threat can come from. Vulnerability is a weakness in your system. Vulnerabilities are things that can be exploited. So if you have a weak password policy, that is a vulnerability. It can be exploited. If you leave your front door unlocked, that is a vulnerability. That doesn't actually mean that someone will breach your password or open
09:02
your door, but it is vulnerable for exploitation. A threat is an actor. A threat can be a person. It can be an employee of a third party. It could be its own business process. It could be a piece of code. And threats exploit vulnerabilities, and we call that an attack
09:24
vector in our taxonomy. Risk in this world then is the bad outcome that results when a threat exploits a vulnerability. And we can then measure risks by measuring the likelihood of it happening. That's the probability. And the impact of, or the cost, if it does happen,
09:45
that's the impact. And that's typically how people think about risks is they, you'll see this often, people trying to measure the impact by putting a dollar amount on a probability and that gets you to an adjusted risk score. And then we talk about inherent risks and residual
10:02
risks, and that's often how risk assessments are done. In our taxonomy controls are things we do to reduce the probability or the impact of a risk. So if your door is unlocked, that is a vulnerability. The key and lock is a control, and you can lock your door. That doesn't
10:24
necessarily mean that nobody will open your front door. It just has lessened the probability of it. It might have increased the impact, by the way. So controls have, there's no panacea to them, but we do need to model what are the risks and then what are
10:46
the controls, and when we do that, we can then figure out what are the residual risks. Threat modeling is just an analysis. It's a way of systematically going through and looking at vulnerabilities and controls and threats against a defined list of risks,
11:00
and defined list of risks is really important because we can sit around and talk about every bad outcome under the sun. A meteor may strike the planet, but that really isn't a risk we're going to try to go model as threat modelers. And then lastly, action items.
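As a rough illustration of the taxonomy just described, here is a minimal sketch of how likelihood and impact turn into an inherent score and how a control drives the residual score; all names and numbers are hypothetical, not from the talk's worksheet.

```python
# Minimal sketch of the taxonomy: risk = likelihood x impact, and a control
# lowers likelihood (or impact) to leave a residual risk. Numbers are hypothetical.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on a simple 1-5 likelihood and 1-5 impact scale."""
    return likelihood * impact

# Vulnerability: front door unlocked. Threat: opportunistic intruder.
inherent = risk_score(likelihood=4, impact=3)

# Control: a lock on the door reduces the likelihood, not the impact.
residual = risk_score(likelihood=2, impact=3)

print(f"inherent={inherent}, residual={residual}")
# An action item would be any new control that pushes the residual score lower still.
```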
11:22
Action items are the result. They're what we're trying to get out of a threat model: we've looked at all the bad things that could happen, we've measured the probability and impact, we've assessed the controls we have, and now we've got a whole bunch of work that we want someone to go do to reduce the risk, and we're going to reduce the risk by
11:41
creating new controls that either reduce the probability or the impact. So that's our taxonomy. If I can add to that while you change slides on that, one thing that was super critical for us is just exactly that taxonomy. Coming from different disciplines, from IT security and from healthcare compliance, we spoke very different languages when it came to risk
12:04
and threats, and realizing that, reconciling that, and making sure that we had a consistent discussion was really important for us to be able to make breakthroughs on this. So if you decide to adopt this model and go forward with it, don't underestimate how
12:20
important it is to create that taxonomy when you're speaking to your risk organization or your compliance team or your privacy office. Getting on the same page is really important. Absolutely. I could not agree more, and thank you for jumping in so I could take a drink. We would love to think that we were the inventors of all threat models and the geniuses
12:43
who wrote this down. The truth is we're not. There are lots of very, very good threat models out there in the ether, and we borrowed heavily from them. And so we wanted to walk you through some of those traditional threat models so that you would have these resources available to you
13:01
to go do your own research and hopefully take what we've done, take what these other people have done, and apply that into your own business, whether it's in healthcare or any other. So our starting point was this wonderful book here by Adam Shostack. I think he may be talking
13:20
at Black Hat or DEF CON this week on threat modeling. It is fantastic. If you don't own it, I highly recommend it. He didn't pay me to say that. And we'll talk a little bit more about what's in that. But that's really from the software design standpoint. On the privacy design side, there's this model called LINDDUN. And again, it's excellent.
13:45
So Adam's book largely focuses on the STRIDE threat model. This is something that came out of Microsoft. And this was a way of getting software engineers to assess the major threats to applications. And they had narrowed it down to really six areas. So spoofing, so somebody
14:08
illegally accessing an application, tampering, somebody modifying the data, repudiation, somebody performing an act and we couldn't figure out who it was, elevation of privilege,
14:21
somebody gaining credentials that they shouldn't have, denial of service, shutting it down, or information disclosure, which is what Patrick and I worry about a lot, which is breaching information. That's the STRIDE model. It's excellent. Please go read about it. If you haven't already done so, get Adam's book. On the privacy side,
14:45
the LINDDUN model is also excellent. And when we started researching this, what became really apparent to us, and we'll talk more about this as well, is that sometimes privacy and security are polar opposites of each other. So in the STRIDE model, we're worried about repudiation.
15:06
Can somebody do something and then deny they did it? In privacy, we were worried about non-repudiation. Can I do something anonymously? And so we borrowed heavily from the LINDDUN model as well. But there's some other models out there that are also good. Bruce Schneier
15:21
wrote extensively about attack trees. Attack trees is one way of brainstorming where you start with an objective, like I want to open a safe, and then you walk down a tree of all the ways you would do that. So how could I open the safe? Well, I'd have to learn the combo. That'd be one way. Or I could cut it open. And to learn the combo, how would I do that?
15:43
And you walk down that, and then you start figuring out what is possible, what's not possible. And then once you've done that kind of a model, you can then insert controls in there to break up the attack tree. Kill chains came out of the military. And again, it's a way of modeling
16:04
what needs to be done for somebody to execute an attack. And if you interrupt any step of the chain, you can impact or possibly prevent the attack. Both excellent models. Security scorecards. These came, or excuse me, security cards. These came out of the
16:23
University of Washington. This is a way of training threat modelers on how to do threat modeling. It's actually a deck of cards. And I have a deck that's at my office that's under lockdown from COVID. But it talks about a lot about motivations and resources and methods. And it's really just a training mechanism, but they're worth checking out.
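As a minimal sketch of the attack-tree idea mentioned a moment ago, using the hypothetical safe-opening example; the nodes and the feasibility flags are illustrative only, not from any real model.

```python
# Minimal attack-tree sketch for the "open the safe" example.
# Each node is a goal; children are ways to achieve it. All nodes are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    goal: str
    children: list = field(default_factory=list)
    feasible: bool = True  # flip to False once a control breaks this branch

tree = Node("Open the safe", [
    Node("Learn the combination", [
        Node("Find it written down"),
        Node("Coerce someone who knows it", feasible=False),  # control: policy/training
    ]),
    Node("Cut the safe open"),
])

def feasible_paths(node, path=()):
    """List the root-to-leaf attack paths that a control has not broken yet."""
    if not node.feasible:
        return
    path = path + (node.goal,)
    if not node.children:
        yield " -> ".join(path)
    for child in node.children:
        yield from feasible_paths(child, path)

print(list(feasible_paths(tree)))
```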
16:43
And there are a ton of other models. I found this white paper from Carnegie Mellon on threat modeling. It's excellent. I highly recommend the PASTA model just because I like the name. But there's lots of models out there. But none of them really fit what Patrick
17:02
and I wanted to do, which was we wanted a single approach that we could look at our software applications and our vendors and our business processes and deal with all the intersections between compliance and security and privacy and ultimately reduce the risk of
17:20
our organization. Lots of people do brainstorming. Brainstorming is the simplest form of threat modeling. It has its place. We do it, too. But it also has its limitations. And as we said earlier, we don't model everything. We model the things that are applicable
17:43
to our system. And this is kind of a typical traditional threat model. We talked about Sam earlier. We're going to talk about Sam again later. But this is a situation, a diagram of how a continuous glucometer, a wearable glucometer might work, might interface with our company. And
18:05
we draw this on a whiteboard and then say, OK, if we were going to attack this, how would we do it? And everybody starts drawing things. And we say, well, I'd do a man-in-the-middle on the application, or I'd do a denial of service on the API, or I'd breach the partner, or I'd figure out some way to hack the device to harm the
18:22
participant. And that is one way of doing threat modeling. But it ignores lots of things. It ignores the motivations and capabilities of the attacker. It ignores the objectives of the system. It also ignores all the controls we already have in place.
18:45
Because we've already done things like we've got TLS written up here. That's a control that encrypt the traffic. So we want something that doesn't ignore all that and is a little more structured than just a couple of smart people and a whiteboard. And threat models done via
19:06
brainstorming, they're limited by your imagination and failures of imagination lead to blind spots. So this was the problem we were trying to solve. And now Patrick is going to talk about what we actually did and specifically the includes no dirt model.
19:22
Yeah, exactly. Thanks, Bill. So exactly that's what we were trying to solve for. There's a lot of dimensions to that. I think the one comment I would make on the brainstorming thing, and I think we've seen this before in prior practices, if your thoughts are limited or if you don't think of something, you don't actually expose that in your conversation. So what this process
19:43
actually allows us to do is force us to think of things that may not be top of mind when we're actually doing the work. And that regimentation and that process actually drives us to that one. Okay, next slide. So what we were looking for in coming up with this process: something that was easy for a non-SME to understand, and something that would be easy for
20:06
someone to perform. So something that we could give to a non-expert, say someone on my team or on the privacy team and have them not only understand what we were after and what we were trying to do, but actually something that they could deliver and run through in a fairly short
20:22
amount of time. We wanted something that was flexible and repeatable, something that we didn't have to shift every single time that we did the questions and something that we could do over and over and over again. We wanted something that was usable anywhere. We didn't want to have to design something that was great for business process, but really crappy for IT structures or
20:42
vice versa on that. And of course, since we were putting a lot of effort into this, we wanted it to be memorable because why not when you're building it? And some creative uses of anagram generators actually got us where we are. So next slide. So what we created with this process, it's a systematized approach to analyzing risks that pays a couple of different
21:03
dividends. One is it's systematized and it's easy to execute like we were just discussing. It's also interestingly started to be a key for us to explain how we think about risks. So it structures educational conversations when we have them with staff so that they understand what we're trying to do. It's a repeatable process with objective scoring. So another
21:25
huge win there. Something we can do over and over again, sometimes on the same system to see changes or see how things evolve. And it gives us an objective score with some weights that we're going to see in the example we'll show you that help us compare across risks or across even
21:40
domains in what we're looking at, how we think about risks and what do we do first? There's never enough time. There's never enough resource. How do you focus your time? We wanted a system-centered approach, so something that is focused on the thing that you're modeling, not on the process itself. We tried to kind of bridge the gap between say having
22:04
a STRIDE and a LINDDUN for different things and create something consistent. Bill mentioned we focused on established controls and that's really important. If we've tested the control already and we're sure it works and in our audit practice we know it's actually running,
22:20
then we don't actually have to include it in the model. And in the example we'll show you, we've gone through a few things and eliminated them because they either don't apply or we know that they work. Lastly, we wanted to have a model that covers all of the domains we think are important: privacy, security, and compliance. So not having three different models or three different versions and being able to include different regulatory regimes. Next slide.
22:45
All right. What we created with includes no dirt, the model that we have. Your mileage may vary on this one. We designed it for our own uses at Omada Health. And if you're in the healthcare space, it may be directly applicable with the questions that we have. You may have
23:02
to swap out some of the specific regulatory questions. If you're in an adjacent industry, it may need some questions adapted. Please, by all means, take the questions and modify them to your own use to target exactly what you're trying to get to. Next slide. All right. And one last, the teaser, it is the includes no dirt model. And it has been
23:22
arranged to actually be memorable. So: identifiability, non-repudiation, clinical error, linkability, unlicensed activity, denial of service, elevation of privilege, spoofing, non-compliance to policy, overuse. Specifically, we were thinking of overuse of information and data as it really pertains to the HIPAA space that we're in. DIRT: data error,
23:45
information disclosure, repudiation, and tampering. So all those parts play together to make the model that we're using. Next slide. So for every risk, there's a property and a goal, and it comes from a specific place. So you skip down a couple. Clinical error,
24:02
the risk is clinical error, a clinician making a mistake that may otherwise have been prevented. The property or the goal of that is the application of correct clinical standards. So making sure that the clinician both knows what they're doing and actually can do them at the moment in time where they actually need to do it. That's in the realm of compliance,
24:21
and we've sorted on this slide the specific things as to what the goal of what we're trying to do is and where it comes from. Next slide. Now, you may have noticed that some of these things overlap, and that's where proper judgment by the risk assessors as you go through this
24:40
is important. And I don't think I can say that strongly enough: as the title slide says, the risks that apply depend on the system being modeled. So some of these, you'll have to look at what you're trying to do and figure out, does this apply? Does this not apply? How does it apply? And some things you may just factor out as you go through it. So these three in
25:02
particular are complicated because they are very related, and Bill alluded to it at the beginning. Security and privacy sometimes are at opposite ends of what they're trying to do, and these tend to reflect it here. So, identifiability, the property of a system that allows actions to
25:20
be traced to a specific user. So there the objective or the goal is anonymity, making sure that that's not actually possible. The risk of non-repudiation: non-repudiation is the process by which it's proven that a user took an action. The goal there is plausible deniability, so that
25:42
it isn't clear whether someone did something or not. The risk is repudiation, and here what we're trying to get to is non-repudiation, so where you can actually prove whether an action happened or not. So these of note actually came from different parts, some from STRIDE, some from
26:05
LINDDUN, so privacy and security blended together. On the next slide, the next couple will actually unpack that a little bit. So, deconflicting these goals can be kind of complicated on this. Different stakeholders will have different needs for these things. So, for example, with anonymity and identifiability, there's that risk goal. There are some times where you're
26:25
building a system where that absolutely is required. We've got a whistle here for a whistleblower. If I'm designing, for example, a software application that holds anonymous reporting for whistleblowers, because that's required under healthcare compliance rules and other codes as well,
26:42
anonymity is really important. But then for other goals and other people in the system, less important. So a hacker, for example, they're in the middle, really relies on plausible deniability, that for what they're trying to do, either ethically or otherwise, making sure that there's that deniability as part of it. Repudiation and non-repudiation as well.
27:07
Let's get into a little bit more of a specific example on the next slide, though. So let's say Human Resources wants to build, or wants an employee complaint application that lets employees report sexual harassment. Lots of different goals, lots of different stakeholders,
27:22
and figuring out how to balance between them gets really important. So employees want to be able to report, and if they want to have anonymity, that their anonymity is protected. So if they want to say something anonymously, they can, and no one will know who it is. Well, HR wants to help ensure anonymity, but also wants to make sure that there's less of a possibility of abuse
27:43
in the system, that you don't have one person anonymously reporting the same thing over and over again to drive a larger depression. IT needs to provision and deprovision administrative access to this, but they have absolutely no need to see the complaints that are registered. Security runs DLP on every laptop, logs who has access to the application, important from
28:05
a security perspective, but then that becomes potentially challenging depending on the level of access and the fact that if they're provisioning it and they log who accesses it, they have a proxy for who's actually reporting things. The legal department wants to be able to document complaints and collect evidence to take action, which is somewhat
28:27
the opposite of anonymity. It's hard to take action and have a cause that comes out if we don't know actually who did something. So you have lots of different people running the system, running in a system with different needs, different roles, and those roles will conflict. So we can't
28:46
completely make everything anonymous because then security and IT can't do their work. Legal won't be able to document complaints, and the harassment, if in fact it is actually occurring, will keep continuing because there's no way to investigate it. So all of these
29:01
parts play together and need to be balanced in the work that you're doing when you do the modeling. All right, I think it's back to you. All right, so let's give it a try. Time to come back to Sam. So Sam is a typical patient with type 2 diabetes and she's been using
29:26
for years a blood glucometer, which means she's constantly pricking her finger, taking blood readings. We know that she would benefit from having a continuous glucometer, which
29:40
is a wearable that automatically sends readings to her coach so that we get greater telemetry and we can respond quicker. So we want to introduce CGMs into our product set, but we want to do it safely. So what Patrick and I did is we filled out an includes no
30:03
dirt threat model on the concept of a CGM to try to help us figure out where we need to pay attention. And again, you can download the one we've filled out at includesnodirt.com slash defcon.pdf. I highly encourage you to do so. We talked about brainstorming, the includes
30:24
no dirt threat model. We've included a structured brainstorming worksheet that allows us to go through a system and kind of helps guide where we go. So again, I'm going to come back to our diagram.
30:41
In our diagram, we have a wearable glucometer. The glucometer syncs to an application on the patient's cell phone, which then transmits the data to the partner. The partner then sends it to our endpoint and it gets stored in our database. It then sends
31:04
information back to our application. It also surfaces that information to the coach. And you'll see here that in addition to the CGM, the participant still has a BGM. So still does the occasional finger sticks. And that information is also being sent to us. So
31:24
we've got two sources now of blood sugar data. Okay. So this is the diagram we're going to be working off of. In our worksheet that we provided you, it's highly structured. One of the first things we do is we mark which threats we think apply. Who are the actors that are involved here?
31:44
So certainly the participant is involved in this whole process and the coach is involved, but we've got a vendor. We've got potentially other partners. We're going to be doing claims on this and billing and reporting. So there's business processes, there's people.
32:03
For this one, we're not so worried about natural disasters. We're not so worried about geopolitical unrest, but other threat models, those might come into play. And we do some brainstorming on vulnerabilities. So what are areas that could be vulnerable? They
32:23
get an incorrect reading or the service becomes unavailable or the coach misinterprets the data. Now up in the right hand corner there, I've got a little diagram where I show the questionnaire and also the structured brainstorming. And this is an iterative process. We sometimes start with
32:41
the questionnaire and we sometimes start with the worksheet, but it's typical when we are doing one of these on a complex system, we are going back and forth. So we'll be going through the questionnaire, which is highly structured and that will trigger us to go, oh, wait, because we've said no on this question, we think that there's a vulnerability there,
33:02
let's go write that down on our worksheet on vulnerabilities. Let's go ask somebody to get more information. And we go through this process until we think we've got the questionnaire complete and the worksheet complete. And so we did that. And when we did that, we were able to take those
33:24
vulnerabilities that I've listed here, there's five of them in our example, and we're mapping those to specific areas in the includes no dirt model. So you'll see like anonymity, we don't want anonymity, it doesn't apply in this one, but clinical error certainly does.
33:43
And denial of service does and spoofing does. So in the interest of time in our presentation, we're not going to go through all of our answers for every risk, but we're going to go through the answers we did for the risks that apply and talk about why they apply.
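A minimal sketch of that mapping step: brainstormed vulnerabilities on the worksheet tied to INCLUDES NO DIRT categories. The wording below paraphrases the CGM example and is not the literal worksheet.

```python
# Sketch: worksheet vulnerabilities mapped to INCLUDES NO DIRT categories.
# The vulnerability wording paraphrases the CGM example; it is not the literal worksheet.

vulnerability_map = {
    "CGM reports an incorrect glucose reading": ["clinical error", "data error"],
    "Sync service between the partner and us becomes unavailable": ["denial of service"],
    "Coach misinterprets the telemetry data": ["clinical error"],
    "Readings submitted as a different participant": ["spoofing", "tampering"],
    "PHI exposed while crossing the partner boundary": ["information disclosure"],
}

# Invert the map to see which categories collect the most brainstormed weaknesses.
by_category = {}
for vuln, categories in vulnerability_map.items():
    for cat in categories:
        by_category.setdefault(cat, []).append(vuln)

for cat in sorted(by_category):
    print(f"{cat}: {len(by_category[cat])} vulnerability(ies)")
```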
34:03
And again, you can download our example and see our answers on all of them. So the factors that apply are clinical error, unlicensed activity, denial of service, spoofing, non-compliance, data error, information disclosure, repudiation, and tampering. These are the things we're worried
34:20
about in this particular threat model. So Patrick, let's start with clinical error. Sure. So what we did in this one to make it a little clearer is for the next few slides, on the left side of the slide is a snapshot of the answers. And all of these answers are in the materials that we put on the includes no dirt website under the DEFCON link. On the right side,
34:45
we've clarified a little bit about what these actually mean in the context of the CGM work that we did for this specific example. It can be a little hard to read through those. So we've kind of pulled out what we think the important answers are. So here for clinical activities, what we're thinking about in the specific example of this continuous glucose monitor is,
35:04
are we doing something that relates to the treatment of a patient? Well, of course we are in this particular example. So the specific question that we probe into here is, does this system or process, does this combination of things have either inbuilt controls that
35:20
prevent something from happening in the first place, or other review-based controls so that, if something does happen, we'd be able to find and correct it as quickly as we can. So the answer here is yes. And it's a little complicated because it's both the CGM, a device that is created by a partner of ours, and the software that we create for coaches
35:40
to be able to use that information. Each one of those has specific detective and preventive controls that operate in its own environment. And this is a great example of what Bill just said related to the iterative nature of this. When we hit this question, it's like, okay, wait, that's both the device and the software. How does that work? And we had to
36:02
fork a little bit, came back, yep, that's exactly right. Questions 3.2 and 3.3, we delve down a little bit into some additional control work. And another thing I would say here is not all controls are technical. In this particular case with clinical error,
36:20
part of preventing clinical errors is ensuring both the proper training of your clinicians. So being able to say, yeah, everybody that has access to this was properly trained, I almost said licensure, that's coming soon. And that the delivery and the quality of the deliveries is important and up to the standards we set in our clinical practice guidelines.
36:40
So that becomes a review process and a quality check that our clinicians will do on the staff to make sure by reviewing their output that things are going the way we want it to. All right, next slide. So one thing I want to say before we move to the next slide, you'll see there on the left-hand side on question 3.0, when we answered yes, it gets one point for that. If we'd answered no, we can skip the rest of the questions
37:06
on clinical and move on to question four. So those two things are really important. At the end of this, we're going to total up all the points and that will drive a risk score. But being able to skip a whole bank of questions when they don't apply means you can go through
37:22
the model much faster. So for simple systems that don't involve clinical, don't involve patient activities, we can maybe model them, as we said, very quickly, like 15 minutes. For something that's complicated, and this would be a fairly complicated one, it might take us several hours.
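A minimal sketch of that skip-and-score mechanic: a gating question that scores a point and opens a section, follow-ups that are only counted when the gate applies, and a weighted total that maps to a rank. The questions, point values, and thresholds here are hypothetical stand-ins, not the real questionnaire.

```python
# Sketch of the questionnaire mechanic: gating questions, skip logic, weighted total.
# Every question, point value, and threshold here is a hypothetical stand-in.

sections = [
    {
        "gate_question": "3.0 Does this system relate to treatment of a patient?",
        "gate_applies": True,      # "yes" scores a point and opens the follow-ups
        "gate_points": 1,
        "followups": [             # points already reflect the answers given
            ("3.2 Preventive controls built in?", 0),
            ("3.3 Detective controls rely on manual review only?", 2),
        ],
    },
    {
        "gate_question": "4.0 Does this process require licensed personnel?",
        "gate_applies": False,     # "no" lets us skip this whole bank of questions
        "gate_points": 1,
        "followups": [("4.1 Credentialing verified annually?", 2)],
    },
]

total = 0
for section in sections:
    if not section["gate_applies"]:
        continue                   # skip the entire section, as described in the talk
    total += section["gate_points"]
    total += sum(points for _question, points in section["followups"])

rank = "low" if total <= 2 else "medium" if total <= 6 else "high"
print(f"total score = {total}, rank = {rank}")
```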
37:42
But the model fits whichever size and whichever system, whichever level of complexity we're dealing with. Definitely, thanks for pointing that out. That weighting is really important because it helps to also create that apples to apples comparison that we talked about a little bit earlier. Second one, unlicensed activity, and I spoiled it a little bit ago just talking about
38:04
it. Does the work that we do in this case require licensure, either a site license or personnel licenses for the people that are delivering care? Just like the last question, yes, it does. And it's complicated because different parts of this thing require different
38:21
licensure. So the CGM manufacturer requires licensure by various federal and state authorities. And our clinicians internal to us require credentialing to make sure they're able to deliver the coaching that's appropriate for diabetes. So there are national standards for that.
38:41
Also important here is the fact that we rely on other people in these things. And just like we talked about, if we've tested the control, we know it works. We don't have to bring that into the discussion here. We may, in diligence and checking, acknowledge that yes, our business partners have the appropriate licensure, but we don't have to dig
39:01
into that to make sure it's as robust as it is. Our contracting processes make sure that those exist. So we use that as effectively a control and we focus on the things that are important to us, which is our own internal clinicians. I think you're next. The denial of service. So this is where we start looking at how mission critical the
39:20
system is that we are modeling. For a connected glucometer, the availability of the entire system is very important. And if any piece of that system is having a problem, the connectivity between the device and the partner, the partner and us, us and the participant isn't working, then it's going to have a significant impact on the effectiveness of getting that telemetry data
39:44
to the coach, back to the participant and being able to make decisions on that data. So when we go through our model, we ask, is it a mission critical system? If it is, that raises the point value. And then we look at how we define targets and what
40:00
are those targets and how are they enforced? And again, Patrick said, not all controls are technical. For the partner, they've got technical controls to ensure their availability. For us, we've got contractual controls where we define an availability target with monitoring and penalties. And that's how we manage the risk on our side.
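A minimal sketch of that contractual availability-target control: monitoring output checked against the agreed target. The 99.9% target and the outage figure below are hypothetical, not from any real contract.

```python
# Sketch: checking measured availability against a contractual target.
# The target and the outage minutes are hypothetical values.

TARGET = 0.999                       # availability target written into the vendor contract
MINUTES_IN_MONTH = 30 * 24 * 60

outage_minutes = 50                  # downtime observed by monitoring this month
measured = 1 - outage_minutes / MINUTES_IN_MONTH

if measured < TARGET:
    print(f"SLA missed: {measured:.4%} < {TARGET:.1%}; review with the vendor / apply penalties")
else:
    print(f"SLA met: {measured:.4%}")
```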
40:28
Spoofing, we want to make sure that we are getting the right data. And we want to make sure that only the right people can access that data. And so spoofing as a threat is where
40:42
we look and model in authentication. And this is a good example of where we can rely on existing controls. So we have defined authentication levels for our participants. We base it on
41:02
NIST SP 800-63B, and they're defined as AAL level one. Our coaches are defined as level two, which means that they not only have to have a username and password, but a second factor. We test those. We know they work. So as long as the system is going to use those controls we've already defined, we can check those boxes and move on. We don't need to spend a lot of time
41:26
detailing how authentication for this particular system is going to work, because it's a client of the greater system within our care delivery.
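A minimal sketch of reusing that pre-defined authentication control: roles mapped to NIST SP 800-63B assurance levels, with a second factor enforced at AAL2. The role names and the check itself are hypothetical illustrations, not the production system.

```python
# Sketch: reusing an already-tested authentication control.
# Roles map to NIST SP 800-63B Authenticator Assurance Levels; AAL2 requires a second factor.
# Role names and this check are hypothetical illustrations.

REQUIRED_AAL = {
    "participant": 1,   # password-based login
    "coach": 2,         # password plus a second factor
}

def login_allowed(role: str, password_ok: bool, second_factor_ok: bool) -> bool:
    """Allow the session only if it meets the AAL required for the role."""
    if not password_ok:
        return False
    if REQUIRED_AAL[role] >= 2 and not second_factor_ok:
        return False
    return True

print(login_allowed("participant", password_ok=True, second_factor_ok=False))  # True
print(login_allowed("coach", password_ok=True, second_factor_ok=False))        # False
```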
41:44
All right. Noncompliance. So not surprisingly, when we have to address a particular business process or system as a HIPAA covered entity, there's lots and lots of legal requirements that get attached to something. This is probably the worst example of the complexity here, because for this combination of devices and software that we're building,
42:01
it's everything from HIPAA, privacy policies, the terms of use both for the device and for our software application. There's a number of healthcare compliance issues. There would be FDA obligations for our business partners, the contracts that Bill just mentioned. So because this is clinical in nature, it relates to a device. There's patient data involved with it. This one's
42:23
particularly complicated. As I mentioned kind of early on, when we were talking about the problems that brainstorming can create, this question is specifically designed to bring up those non-obvious things that you may not have top of mind when you're actually doing it. If you're adapting this questionnaire, certain things that you want to target, definitely adding
42:44
it to this list is important because for example here, terms of use may not have been something I would have thought about, but it has to include what we're trying to include in this specific example. So that was important to kind of drive through that. And this also, if you look down at the very bottom left of the screen, we can also check through the
43:03
applicability of some of the credentials that we have. We're a SOC 2 and HITRUST certified organization; that doesn't apply in this particular case because of the nature of what we're trying to do. Okay, next one I think is mine as well. Data error. So here in this particular example, we're digging heavily into data integrity for the medical process and the clinical
43:25
record-keeping that a process like this would create. So we're essentially creating part of a medical record on glucose monitoring and glucose management for our participants in the program. It's really important to make sure that this is ingested and maintained in an accurate and viable way.
43:42
Again, here in other includes no dirt models that we've actually done, we've tested some of that. We've tested, for example, the APIs that we do data ingestion with. So we can kind of check that off and go, yeah, it's acting as we intended it to and move on and focus the mitigation control work that we're trying to do here for other things. So information disclosure,
44:06
this is where we're worried about confidentiality of the system. We've got rules within HIPAA, within our customer contracts on how we protect data to make sure that it isn't disclosed where it's
44:22
not supposed to be. And again, here we're largely consuming controls that we've already tested previously. So HIPAA requires us to encrypt PHI at rest and in transit. So we can ask, are we doing that? And if so, how? And since those are really well-known
44:44
patterns for us, we can accept them and we can move on. It doesn't actually require a ton of discussion. Down at 12.6, data locality. This is a really good reminder for us. We have obligations to keep all of our data within the United States,
45:02
processed, stored, and accessed. And so, especially when we're talking about a third-party vendor, this is a good reminder of, hey, let's make sure we know where their data centers are and where the data goes as it traverses its way to us. Repudiation, we've talked a lot about repudiation already. Does it require non-repudiation?
45:25
Yes. What are those mechanisms? And again, in question 13.3, there's lots of mechanisms that we have in place, but we want to make sure we address them. How are user activities being logged? Do we have accurate timestamps? How long are logs retained? Things like that.
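A minimal sketch of the kind of audit record those repudiation questions probe for: who did what, when, with a UTC timestamp and a retention period attached. The field names and the retention value are hypothetical.

```python
# Sketch: a structured audit-log record supporting non-repudiation.
# Field names and the retention period are hypothetical illustrations.

import json
from datetime import datetime, timezone

RETENTION_DAYS = 2555  # e.g. a seven-year retention requirement (hypothetical)

def audit_event(actor: str, action: str, target: str) -> str:
    """Build a UTC-timestamped log line that ties an action to a specific user."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "target": target,
        "retention_days": RETENTION_DAYS,
    })

print(audit_event("coach_42", "viewed_glucose_readings", "participant_1001"))
```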
45:46
Answers like those let us know that this particular system is going to fit into our overall framework. And tampering, we don't want anyone to mess with the data. So, again, what are all the
46:04
mechanisms in place to prevent tampering? Now, for this particular system, there's some interesting tampering things we need to deal with, like the chain of custody of the device between the manufacturer and the DME and the DME shipping it to the participant. And also,
46:22
how do we make sure that the device that gets shipped gets assigned to the appropriate patient in our data model? And that's a fulfillment question, because we have to make sure that every device that gets shipped, that serial number comes to us, assigned to the correct
46:41
person. And if we don't, that then makes its way back up to not just tampering, but to data integrity and who has access to it. So, again, the model is iterative. It lets us go through it, and it reminds us to check how are all these things being addressed for this.
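A minimal sketch of the fulfillment check described above: every shipped device serial number has to come back assigned to the correct participant in our data model. The records below are hypothetical stand-ins.

```python
# Sketch: verifying shipped CGM serial numbers against the participant assignments
# recorded in the data model. All records below are hypothetical.

shipped = {      # serial number -> participant the fulfillment vendor shipped to
    "SN-1001": "participant_A",
    "SN-1002": "participant_B",
    "SN-1003": "participant_C",
}
assigned = {     # serial number -> participant recorded in our data model
    "SN-1001": "participant_A",
    "SN-1002": "participant_C",   # mismatch: readings would land on the wrong chart
}

for serial, shipped_to in shipped.items():
    recorded = assigned.get(serial)
    if recorded is None:
        print(f"{serial}: shipped but never assigned in our data model")
    elif recorded != shipped_to:
        print(f"{serial}: assigned to {recorded}, but shipped to {shipped_to}")
```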
47:03
It is a very structured way of brainstorming. And we get to the end, we get a score. Patrick, you want to talk about this? Oh, sure. Yeah. So, as Bill said, as we get to the end, we get a score. So, the product of all the numbers that you saw on the slide. So, for example, when we've weighted the first element as one, when we're talking about clinical
47:25
controls in place or not, all of those add up together. And in the particular governance, risk, and compliance system that we use, we can weight the scores. So, some are stronger than others. But essentially, the product of that turns into a total score. We can rank that
47:41
total score as a low, medium, or high. Again, to be able to focus our efforts and make sure that we know kind of, is this something we need to address immediately in the grand scheme of things? Is this something we can actually wait on for a while because it's not as critical as other things that we're looking at? On the right side, you actually see the list of action items. You saw that in a prior slide as far as how those work. Here's the specific
48:05
action items for this one. So, for example, for this particular model, creating clear instructions for participants on device calibration can help with data integrity issues, because if someone enters data incorrectly, it's not clear the treatment
48:21
would apply correctly. So, this addresses the clinical vulnerability issue. Backups to BGM and CGM also touches on some of the same issues. Sometimes that's required just because of the nature of the BGM and so forth with all of these. Each one of those action items is
48:40
designed to address one or more vulnerabilities. And that's part of the process of this is everything that you've identified should have an action item at the end of it to make sure you're hitting everything you need to from a control perspective. And we've done a lot of these. I mentioned earlier that we launched a behavioral health application and when we did that, Patrick and I did a threat model and I think we came up with 19 action items.
49:06
And those were specific things that we wanted to ingest into the system before we went live. So, we did that at the very early stages as we were just planning,
49:21
which was six months before the launch, which meant that we had as risk assessors the ability to have a meaningful impact on the security and privacy and compliance of that application before it ever launched. And everybody involved also understood because they went through the process with us why. They knew why we had those action items and what was the specific
49:46
vulnerability we were trying to address. That latter is very important. You may have groups that let's just say are not necessarily as inclined to be helpful when working with the risk assessing organizations. We had, in this particular case, this behavioral health example,
50:04
some of our developers come back to us and say, oh, we get it now why this is important after they've executed through the process. So, it becomes educational as well as helpful just as a reminder as to why we're doing it. So, a couple of points to wrap this up.
50:21
Vendor management, the threats you aren't seeing can also kill you. I use this example. This was a letter that Quest Diagnostics sent out about a year ago on a breach. And the important thing about this is not that Quest sent it out, but that they sent it out because one of their
50:43
vendors had a problem. And that vendor was acting as Quest's BAA, but Quest ultimately got sued for this breach. So, when you're doing threat modeling and risk assessments, it's important not
51:00
only to look at your own systems, but your third parties as well. And we use the same methodology, the same checklist to assess all of our vendors. Now, we have lots of vendors. We are a SaaS first company. We've got SaaS all over the place. When somebody comes to us with a new vendor and let's say it's a project management tool, we can go through this checklist
51:21
pretty darn quickly because it doesn't have clinical error and it doesn't require licensure. But when somebody comes to us with a new device vendor, we're going to go through this same model very, very carefully. And so, when we use threat models to assess vendors,
51:42
it's the same basic questionnaire. We're doing that checklist. We may or may not also do brainstorming, but we then use that to influence our legal terms. So, if we determine the vendor is going to be mission critical, well, that means we have to tell the legal department
52:03
to make sure we have an SLA in the contract. And if we're worried about encryption of data at rest, then we have to include that term in the contract and we have to assess that vendor to make sure that they are doing things we want. We put legal terms in to say they have
52:22
to keep our data in the United States, but we then also verify where their data centers are. So, this model works for assessing vendors and it works very, very well. All right. So, when do you actually use this? We talked a lot about different possibilities for it and we've got a chart here that addresses a little bit of when to use it. As you can kind
52:44
of tell from our examples, we use it all over the place. Initiations of significant projects, the behavioral health example that Bill just did, vendor acquisition for the first time, and then annual assessments, both from a risk assessment perspective and from a vendor assessment perspective. Bill mentioned earlier that we did it for
53:03
26 material business processes on the risk assessment we just completed. I can't overstate how interesting and helpful that was this year because traditionally from a compliance and a privacy perspective, risk assessments are really brainstorming in nature.
53:20
This forced more rigor than I think we'd even seen last year when we did it a little bit closer to this way. And it took out the potential for missing things because you're not asking the questions in a regimented way. And that was a huge win, resulted in a lot more action items for us to do, but that's a good thing in the grand scheme of things. There's a lot more
53:40
because we're aware of it for us to look at. On demand too. There are times when, sometimes from an audit perspective, things just crop up and you think, maybe we should take a look at that in more detail. There's the regimented way to actually look through it. All right. It does exist in a continuum of activity from a risk
54:01
assessing perspective. And we've got a little bit of the dimensions that we think about it here. So threat modeling, in the upper right, really seems to be best practice when it's a new process or something that we're encountering for the first time, where we have no idea about the risk on it. So risk unknown in a new process. On the left side, there are things that we
54:20
have an idea about, like processes that we're actually doing and either how much we know about it or how much we don't know about it. So on an annual basis for the last few years, what our risks are as a company are pretty stable. We kind of know the general categories of risk. So we may not necessarily know with an existing product, how it shifted over time.
54:43
Take a look at that from an audit perspective. And these overlap. In general, when a threat model designs a control, we'll have to retest that control at some point. So that in our world still stays in my universe. That may get handed off to an internal audit group for them to be able to test that control eventually.
55:01
But it's all related. It does create a completely virtuous circle, I guess you could say, from a control management perspective. So, final thoughts on all of this. When you are in security, or compliance, or a risk-assessing organization, your job is actually to say yes. We security practitioners,
55:24
we get a bad rap because people think we always say no. Our job is actually to figure out how to enable and empower the business. And so really, Patrick and I firmly believe that it's our job to say yes safely. And one of the ways we can say yes safely is to go through
55:43
a regimented process of assessing risks, and then coming up with action items and saying yes, it's fine to bring this new vendor on, it's fine to do this new process, but here are our recommendations for the ways to harden it to improve the security and compliance
56:01
and privacy of that system. And with that, thank you for listening to us. It's been our pleasure to talk to you. And we look forward to the Q&A portion here at DEFCON Biohacking Village.