
Why Patients Should Hack Med Tech

Abstract
What do Apple, John Deere and Wahl Shavers have in common with med-tech companies? They all insist that if you were able to mod their stuff, you would kill yourself and/or someone else... and they've all demonstrated, time and again, that they are unfit to have the final say over how the tools you depend on should work. As right to repair and other interoperability movements gain prominence, med-tech wants us to think that it's too life-or-death for modding. We think that med-tech is too life-or-death NOT to be open, accountable and configurable by the people who depend on it. Hear two hacker doctors and a tech activist talk about who's on the right side of history and how the people on the wrong side of history are trying to turn you into a walking inkjet printer, locked into an app store.
Transcript: English (auto-generated)
Saturday, first talk of the day, let's give him a great, big DEFCON welcome. Come on, let's hear it, yeah! Yeah, woo! All right, hey, welcome everybody, we're really appreciative. You know, I think it's wild just to think about how far DEFCON's come. We're at DEFCON 30, how insane is that? You look around, walking through the halls,
there's marble and this is just so interesting to see where we've come. So I think it's worth a second to just look around, see who's there, see how much we've grown and appreciate that like, we have just begun to burn down all the shit that's wrong in this world and fix it with our hacker minds.
So guys, give yourself a round of applause, like that is important and you being here is part of that and we really appreciate that. Thank you so much for being with us, especially at 10 am on a Saturday, which in DEFCON time is like 3 in the morning in the real world, so really appreciate it. My name is Jeff, my friends call me Replicant. I'm a pediatrician and an anesthesiologist
and I do some security research about clinical medicine technology with this guy. I'm Christian Dameff, quaddi is my handle. I'm an emergency medicine physician and security researcher. And I'm Cory Doctorow, I've worked with the Electronic Frontier Foundation for 20 years in different capacities. I'm currently the special advisor.
I write science fiction novels and I'm on the computer science faculty at the Open University and the library science faculty at the University of North Carolina. So a little known fact, quaddi actually brought me to my first DEFCON, which I think was DEFCON 18 or 19, back when we were baby medical students. And I was thinking the other day about what has happened in medicine since then.
I mean, it seems like the 10 plus years have gone by quickly, but in medicine, that's a lifetime. And there have been some incredible advances in some of the ways we used our technology to treat patients that we can really look at with two lenses, a lens of promise for the future and the incredible achievements we can make,
but also one of peril of a future in which we want to avoid really significant concerns and tensions with privacy and autonomy. And so just to kind of give you a little bit of a consideration for that, we have gene therapies now that treat diseases that were fatal to children when I was in my pediatric residency. And that's incredible.
That's incredible promise. Looking at it the other way, though, if you don't have insurance, you got to pay half a million dollars a year to keep your kid alive, right? We're going to talk about artificial pancreases, incredible technology that hackers have pioneered that can help to improve your diabetic control and really increase your quality of life. But we can see a future where some people may be forced to use a black box
that they don't fully understand, that collects data they can't access, and that depends on DRM-shackled consumables they need in order for it to keep functioning. Or a future in which telemedicine is available to everybody and people can see a primary care provider and they can use personalized medicine to customize treatments for them.
You know, you can see a flip side of that coin where private equity is buying up your primary care provider and Amazon's acquiring your doctor's offices and you're starting to get targeted ads based on the data they're mining from you. So I really want to explore those tensions today as we talk. And we thought no better a person to help us talk about this
than the activist Cory Doctorow. So we're gonna let Cory take it over for now. Give it up for Cory, give it up. Thank you all. So I want to talk about a group of people who rely on medtech and also rely on modifying medtech and some of the ways that their own safety has been weaponized against them
and some of the stuff that's come out that's made life better for people who rely on it. So I'm talking about people who use power wheelchairs which are a significant part of the 50 billion dollar durable medical equipment market. There's about 3 million Americans who use powered wheelchairs. It's the complex rehab tech area of Medicare and Medicare is pretty dysfunctional in this regard.
They very narrowly interpreted their mandate and so if you use a power wheelchair and you rely on Medicare to provide it to you, Medicare will only give you an indoor powered wheelchair. Although many of us like to leave our homes and they will also refuse to cover any preventative maintenance.
So this is a recipe for disaster. You have a chair that's being used in ways that it's not supposed to be used and you can't perform preventative maintenance on it. One problem of the way that Medicare procures these chairs is that they go to lowest bidders, and the way to generate a low bid is to have economies of scale. And so two private equity roll-ups, a company called Numotion
and another company called National Seating and Mobility have bought virtually all the other companies that make powered wheelchairs. So people who use power wheelchairs buy from one of those two companies typically and private equity firms, they have a common playbook which is to load up their acquisitions with a lot of debt and then squeeze them to service that debt.
They pay themselves a special dividend on acquisition and then to make good on that debt, they then have to squeeze them. One of the areas they've squeezed is by cutting service and so parts are billed at very high prices and it takes a very long time to get serviced. So all of this was examined in detail in a report that came out this spring
called Stranded, from the Public Interest Research Group, or PIRG. Stranded found that 93% of the power wheelchair users they surveyed had had a need for service in the last year. 62% of them had waited four weeks for that service and 40% of them had waited seven or more weeks for service.
Yeah, seven or more weeks. Sorry, I thought that was months. No, seven or more weeks and you have to understand that in some instances this meant not only that you couldn't leave your home but possibly that you couldn't leave your bed. So it makes it very hard to have a family life, have a personal life, do shopping, maintain your job and do all of these things.
So the question is why can't people who are literally stuck in bed for seven weeks waiting for a part, why can't they just fix their own chairs? And partly that's because the parts stream itself has been starved by the duopoly. So in Stranded, PIRG collected stories from people who use power wheelchairs
about their problems getting parts and service. They found multiple people reporting that the $6 inner tube that their chair used cost $300 as a Medicare billed part and that it would take six to eight weeks to procure. So you would have a flat for six to eight weeks
while you waited for your chair to get fixed. There is an instance of a $20 power button, the button that literally turns the chair on, that cost $500 and took four months. But even where people can get parts, they do: they source them from eBay and they source them from Amazon. And there's one great story about a couple who were having trouble sourcing a stability wheel,
and then their son looked at it and was like, that's just a skateboard wheel. And he showed them how to buy that wheel, with cool orange glitter and whatever, and they just replaced it, right? So sometimes you can just fix your chair by going around these companies, treating them as damage and routing around them. But sometimes you get blocked by digital rights management.
So these chairs use digital rights management to restrict access to their management consoles. That means that you can't get diagnostic information out of them. It also means that you can't make routine adjustments. So for example, there's often a delay built into the steering mechanisms. As you get more proficient with your chair, you might want to reduce that delay. You can't do that on your own.
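To make that concrete: here is a purely hypothetical sketch of the kind of trivial tuning being described. No real wheelchair exposes an interface like this; every name and default below is invented for illustration.

```python
# Hypothetical illustration only -- no real wheelchair API. This is the sort
# of parameter change a rider might want as they get more proficient, and
# that a DRM-locked management console puts behind a vendor dongle.
from dataclasses import dataclass

@dataclass
class ChairTuning:
    steering_delay_ms: int = 400   # conservative factory default (invented)
    motor_torque_pct: int = 80     # derated default (invented)

def tune_for_experienced_rider(t: ChairTuning) -> ChairTuning:
    """Shorten the joystick response delay and restore some torque."""
    t.steering_delay_ms = max(100, t.steering_delay_ms - 200)
    t.motor_torque_pct = min(100, t.motor_torque_pct + 15)
    return t

print(tune_for_experienced_rider(ChairTuning()))
# ChairTuning(steering_delay_ms=200, motor_torque_pct=95)
```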
Also, if you change the pressure in your tires for different terrain and you want to adjust the torque in the motor, that's also an alteration you need a security dongle for. So the good news is that Colorado in June passed HB22-1031, the Consumer Right to Repair Powered Wheelchairs Act, which substantially fixes a lot of these things.
They did run up against a really important problem though, which is that removing DRM is a felony under federal law. Section 1201 of the Digital Millennium Copyright Act provides for a five-year prison sentence and a $500,000 fine for providing a tool to bypass DRM. And so they couldn't authorize people who wanted to fix wheelchairs,
or who used wheelchairs and wanted to fix them, to make or provision each other with tools that would allow them to effect these repairs. So instead, they did an end run around it and they ordered the wheelchair companies to just provide the tools that would allow people to read out the diagnostics and so on. This is a good solution, but really it's not enough.
And so I just want to finish by saying that at the Electronic Frontier Foundation, we're representing Matthew Green and Andrew "bunnie" Huang in a lawsuit to overturn Section 1201 of the DMCA. And finally to say that, as I noted, there are some deep structural problems that make it hard for people to use powered wheelchairs, right?
There's the duopoly, there's Medicare only paying for indoor chairs and not supporting preventative maintenance, and better repair doesn't solve any of those problems, but it does fix wheelchairs, right? And that in itself is something worth doing, and we can walk and chew gum. We need to do both. How many people out here were surprised by that story?
Have you guys experienced modern healthcare? Raise your hand if you've been frustrated with the inefficiencies, the lack of communication, the broken insurance system. Yeah, okay. We can all relate to that, absolutely.
As clinicians, we relate to that. And as patients ourselves, we do. We're going to transition a little bit to another very strong theme in modern medicine, which is just that we often are unaware of how the tools we use actually function. Replicant, take us away. So my day job, I'm an anesthesiologist,
and I somewhat facetiously say that I hack people's brains. So I turn you off and somebody pokes you with a hot knife and I turn you back on again. If it's a neurosurgery, we blow on it before we put it back in. But it's widely accepted among our profession that it's less than ideal to wake up during surgery, right? And so one of the things that we use as a monitor that helps us,
in addition to a couple of other variables, keep track of how deep a patient is under anesthesia. It's called a BIS monitor. We've used it for about the last 20 years. It has been the topic of thousands of academic papers that really investigate how anesthetics even work. So it's something that we're all very familiar with. And without getting too much into the weeds, this is a monitor.
The name BIS is derived from how it works. It takes the electrical signals of the brain, the EEG, and processes them to produce a unitless, dimensionless number that people can trend. So at 20, the patient's nice and deep under anesthesia; at 80, they're about to wake up and sue you for malpractice. For a long time, that name BIS stuck because most people understood this to be something that looks at what's called the bispectral index, which doesn't matter here; it's just one way you can analyze that EEG. Last year, a really awesome doc at Harvard, Christopher Connor, reverse engineered these previously proprietary algorithms. Nobody had ever really seen under the hood here. He reversed those algorithms and showed that this device actually isn't producing a bispectral index at all. It's looking at a completely different aspect of the EEG, which is a little unusual, because a lot of the research that we've based on and conducted with this monitor has operated under a base assumption that the manufacturer never bothered to correct. And it's a little unfortunate, because in situations like this, when we're using clinical devices without fully understanding where they're getting their information, how they're producing it, and how we use it, we miss opportunities to innovate. We miss opportunities to use these devices in different situations, or to say that they may be less than ideal for a particular use context.
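For a flavor of what a depth-of-anesthesia algorithm does, here is a toy spectral index in the spirit of the beta-ratio measure described in the open anesthesia literature. It's a sketch under those assumptions, emphatically not the proprietary algorithm Connor reversed:

```python
# Toy "depth of anesthesia" index (illustrative only): compare EEG power in a
# high band, prominent when awake, against a mid band, then squash to 0-100.
import numpy as np
from scipy.signal import welch

def toy_depth_index(eeg, fs=128):
    """Map one EEG epoch (1-D array sampled at fs Hz) to a trendable number."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    hi = psd[(freqs >= 30) & (freqs <= 47)].sum()
    mid = psd[(freqs >= 11) & (freqs <= 20)].sum()
    ratio = np.log(hi / mid)               # cf. the published "beta ratio"
    return float(np.clip(50 + 25 * ratio, 0, 100))

# Synthetic stand-in for a 30-second awake-ish epoch: broadband noise.
rng = np.random.default_rng(1)
print(toy_depth_index(rng.normal(size=30 * 128)))
```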
And I just don't really understand why we have to live in a paradigm where these things are so locked down and proprietary. And if this is a concern for clinical devices that are relatively reliable, that we've used for the last 20 years, I'm even more worried about this coming tsunami of clinical AI and ML algorithms. If I had a Dogecoin for every Silicon Valley guy at a medical conference who said,
I've got this AI algorithm that's going to revolutionize the way you practice medicine and save you billions, I'd have like four dollars. So Cory, why shouldn't we lift our hands and welcome our new clinical AI overlords? So I think that this is probably an audience that is
well up on all the different ways that ML can go very wrong. We have a whole village here at DEFCON where you can see people giving ML all kinds of hallucinations and tricking it in lots of ways, deliberately and accidentally. It sometimes has some weird failure modes. And of course, that's true of people, right? People make mistakes, people have biases, and so on.
But there's one thing about a number that's given to you by software that is, I think, more dangerous than a number that's given to you by a human, which is the degree of trust we put into it. If you take a process that would normally take someone aback and have them say, wait a second, that can't be right,
and you have a computer emit that as a precise number instead of as a kind of squishy judgment, you can empiricism-wash your weird ideas. And people go, yeah, I guess algorithms can't be racist. There's no such thing as racist math. To which I say: meet my friend the phrenologist, he'd like to measure your skull with his calipers.
So I think that when you combine the already difficult situation in which people often defer to medical professionals about things that they are uniquely situated to describe because they're part of their subjective response to their pathologies, and then you add a computer in the mix that says, no, everything is fine,
it becomes very hard to imagine how patients are going to be able to exert bodily autonomy and autonomy over their care. And I did want to add about that awesome paper about the BIS monitor that this audience, I think, will appreciate. The guy who reversed the BIS monitor, one of the ways that he was able to do this is by building an emulator, which turned out to be really easy
because the core DSP in the BIS monitor is a TI DSP that's used widely in video game systems, so he could use MAME, which is just great. That's rad. So we've talked about algorithmic bias and empiricism-washing and how we're all kind of really aware that these algorithms can have bias
unintentionally sometimes, just solely based on the data that you put into it or the training set or the demographic composition of the individuals that comprise it. But then it becomes even more concerning, and a lot of this talk is talking about dystopian futures, when you consider adversarial machine learning.
And so if you're not familiar with that, many of you are, think about ways in which an intelligent adversary could attack machine learning algorithms to manipulate the outcome, right? They could attack classifications. They could attack training sets, change what the ground truth is,
and design an attack that manipulates the outcome of the algorithm. That could be done in a variety of really scary ways for a lot of really scary purposes. If it's to manipulate you into buying something, you can see a financial motivation. In the healthcare space, you can imagine organizations, companies, entities doing that to compete with one another,
to make a rival's AI algorithm less effective, for example. There are even some papers out there that are quite concerning, where it's not necessarily the entire population that may be impacted by adversarial machine learning; you can craft attacks whose outcome only impacts a certain group of people.
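To make that concrete, here is a minimal sketch in the spirit of the fast gradient sign method, run against a toy linear "risk score" model. The model, the features, and epsilon are all invented for illustration; nothing here comes from a real clinical system:

```python
# Toy evasion attack: nudge a patient's features so a linear "risk" model
# flips its output. For a linear model the gradient w.r.t. the input is just
# the weight vector, so the FGSM-style step is x - epsilon * sign(w).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # "trained" weights (hypothetical)
b = -0.5
x = rng.normal(size=8)            # one patient's features (synthetic)

print("original risk:   ", sigmoid(w @ x + b))

epsilon = 0.3
x_adv = x - epsilon * np.sign(w)  # push every feature toward "low risk"
print("adversarial risk:", sigmoid(w @ x_adv + b))
```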
Terrifying implications there, and this is where I'm going to have a little bit of a call to action. This whole talk is kind of a call to action, but of all the people in the world who are best suited to understand the perils of this and be equipped to help defend the future of humanity here, I think hackers are probably right there at the top, right?
So two things. One, continue the transparency that we've talked about, this BIS monitor as an example, about how we as hackers are generally in support of far more transparency, especially with these algorithms that touch every aspect of our life, and then also that we possess a unique skill set.
One that can understand how malicious adversaries can attack these systems, how we can defend against them, and how we can better secure the infrastructure that will then hopefully, with the promise of a lot of this technology, give us huge insights into clinical care, right? Improve treatments, new medications that can completely
do away with pathologies, in ways we never thought would be possible. And so that kind of peril and promise: we need you out there to make sure that the things that would thwart that promising future don't come to fruition. So we're hopeful in that. Let's switch a little bit from the doom and gloom to flip the script
and talk about what happens when hackers pwn themselves and are actually able to take the initiative and innovate on some of this stuff. Yeah, absolutely. So as you probably know, 1 in 10 Americans has diabetes, 92 million Americans are pre-diabetic, and diabetes, while anyone can get it, disproportionately falls on marginalized people.
It's a disease of poverty. And so people who have diabetes structurally find it difficult to demand high quality care and to push back against abusive practices by medtech firms. So in 2013, some people with diabetes decided to do something about this. Two hackers, Dana Lewis and John Costik,
took a continuous glucose monitor and figured out a way to hook it up directly to an insulin pump, and wrote an algorithm that monitored your blood sugar, tried to predict where it was going, and dosed you with insulin as you went along. And the closed-loop artificial pancreas was born.
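A minimal sketch of that closed-loop idea, assuming a toy linear prediction and an invented sensitivity factor. This is not OpenAPS code, and it is nowhere near a safe dosing algorithm; real loopers layer insulin-on-board tracking, carb models, and hard safety limits on top:

```python
# Toy closed-loop step (illustrative only, NOT a real dosing algorithm):
# extrapolate recent CGM readings, then suggest a correction toward target.
TARGET_MGDL = 110   # target glucose (illustrative)
ISF = 50            # insulin sensitivity factor, mg/dL per unit (invented)

def correction_units(readings_mgdl, interval_min=5, horizon_min=30):
    """Linear extrapolation of the CGM trend, then a conservative correction."""
    slope = (readings_mgdl[-1] - readings_mgdl[0]) / (
        (len(readings_mgdl) - 1) * interval_min)          # mg/dL per minute
    predicted = readings_mgdl[-1] + slope * horizon_min
    if predicted <= TARGET_MGDL:
        return 0.0               # never dose insulin toward a low
    return round((predicted - TARGET_MGDL) / ISF, 2)

print(correction_units([140, 150, 163, 171, 180]))  # rising -> 2.6 units
print(correction_units([120, 110, 104, 98, 90]))    # falling -> 0.0
```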
They call themselves loopers. And they gather on a platform called openaps.org. A lot of the people who built these tools early on were parents of young children. So my friend Sul Cajaro is a video game developer. He worked on a bunch of Salarkey games in the old days. His young son, who was two years old at the time,
had just been diagnosed with type 1 diabetes and was in daycare. And the people who worked at the daycare were very diligent and caring, but they weren't experts in managing diabetes. And so he wanted to be able to oversee, partially automate, and correct,
and get alerts on his son's insulin levels, blood sugar levels. And so he became a core developer on the looping tools. And there's a lot of hacker overlap with this looping stuff. And it's one of these great examples of hackers helping normies,
where the stuff that we build for ourselves ends up sort of leaking out into the rest of the world. And there's a reason that hackers want to build looping software. And it's not because they're too lazy to manage their blood sugar. It's because doing a routine task perfectly all the time is why we have computers. There's a reason we replace all of our routine tasks as hackers with shell scripts.
Do you remember when Unix systems used to ship without a pre-built cron job that rotated the log files, and they would just crash every three days because no one could remember to rotate their log files? And replacing the routine things in your life with a shell script is especially important if when you screw it up, it's hard for you to think right.
And if you screw up your blood sugar, it can impair your cognition. So we lost a dear friend last year. Excuse me, I always get choked up at this point. But Dan Kaminsky died last year. He had diabetes. He had management problems with it. He was in lockdown. He was isolated. Other people couldn't see what was going on. And you can see how even someone as brilliant as our pal Dan
couldn't manage a routine task perfectly all the time and could experience a literally fatal cascading failure, which is why we love this stuff. So you have these hackers who are hacking hardware, hacking software, making their own algorithms. And to do this stuff, they need to rely entirely on jailbroken hardware
so they can effect these changes. So I'll bring it back to you guys. Yeah, I mean, a question that we very commonly get is: this sounds awesome, so why aren't more doctors recommending these systems to their patients? Why aren't more patients coming and asking for this type of care? And we just want to kind of hit a little bit on some of the reasons why we need to do some more work. So the first is education.
Again, these were not tools and technologies that existed even 10 years ago when we were training, let alone the endocrinologist who's been out in practice for 30 years, right? And even as widely adopted as these are in the hacker community, I think Loop, which is one of the biggest platforms, has about 9,000 people using it. And there are 1.9 million type 1 diabetics. So we really have a lot of work to do to sort of raise awareness there.
Doctors who learn about this are going to just inherently worry about its clinical efficacy and safety. Put aside the fact that we give diabetics a vial of 100 units of insulin and tell them to go figure it out on their own. But people are going to say, oh, can't they screw it up if they set this up themselves? And we're starting to just now get some really interesting data
to support the efficacy of these different types of devices. There was a really interesting paper published last year by some folks at Stanford and Loop in Miami that was really unique in its design and kind of demonstrated the promise of decentralized clinical trials. They basically just found people who were signing up for Loop. So Loop is one of the systems. And they said, hey, we're just going to pull some data from you
if that's OK with you. Give us your baseline data. We're going to see how you do over the next six months. But we're not going to tell you how to use this tech. And all of the patients who were in this study had to work with the community, troubleshoot guidelines on their own. So it was really not a very paternalistic platform to say, you need to follow this protocol exactly. But the results were pretty incredible. Patients after using this closed loop system
were able to spend longer time in a normal blood sugar range. They were able to avoid really significant low blood sugar episodes. And there were no episodes of the more feared complication, DKA. So really impressive technology that we're starting to see is efficacious and is pretty low risk that people can use. One thing I do want to kind of comment on some of these studies
and in the population in general is that it does sort of tend to reflect some of the inequities that we previously discussed. So the 500 or so patients in this study, 90% of them were white. 85% of them were college educated or higher. 70% of them made more than $100,000 a year. And 95% had private insurance. So there's work for us as we continue to push some of these open source platforms
to make sure that the inequity that pervades modern medicine and formal clinical trials doesn't persist in these spaces. Lastly, we'll just talk really quickly about this idea of risk. So in order to give something to a patient, you need to have a discussion with them. It's called informed consent. We talk about the risk. We talk about the benefits. We make sure that you understand those.
Quaddi and I are working on developing this concept of a cyber informed consent that we'd be happy to talk about with people if they're interested. But if your doctor is going to be giving you these connected technologies, we'd like to think that they should be able to have that kind of discussion about the potential risks. And there are risks that arise from the connected nature of these technologies. Yeah, I just wanted to quickly poll the audience here. Who here has gone into a hospital and had a procedure done,
and a doctor or someone else has gone to them and talked to them about the risk? Okay, you might have an infection or you lose blood or anything like that. Raise your hand. Okay, how many people out there have ever had someone talk to them about the risks of their privacy or security of connected medical devices when they get them? Anyone?
Yeah, that's a problem, right? But it's a problem in so many really interesting ways. First of all, it's a problem because it's not the right thing that should happen, right? We should be informed of these things. Two, often you do not have a choice, right? What dictates what insulin pump you get when you're a diabetic isn't necessarily a free market thing where you can go and decide
and look along all of them and say this is the one I want because it's the one that protects my privacy the most or it's the one that has the greatest hardware features. You don't have those choices in our modern healthcare system. It's an insurance thing. And the last thing I would say is about cybersecurity. I know, I said cyber. God, and this isn't a whiskey, but I'm not supposed to say that at DEFCON.
But the other interesting thing is that the people telling you about the risks have no idea what the hell they're talking about. How many of your doctors could even articulate basic security and privacy concepts? So the people tasked with asking you to consent to this don't themselves understand it,
don't talk to you about it, and there are structural reasons why that's a really hard nut to crack. And so changing that requires education, awareness, and a lot of other things. We're going to also talk about the elephant in the room now,
which is a lot of the features that the Looper community was able to accomplish with this technology were because the devices themselves were vulnerable. That is an interesting prospect and I have a hypothesis here. I want to just do a little thought experiment in the audience
because I think this is a unique composition. Who here would wear a vulnerable insulin pump infusing insulin into their body? It has Bluetooth connectivity with no authentication, where you can change the settings just by connecting to it. Who here would wear that device and rely on it to take care of them every single day?
Please raise your hand high. Yeah. Sorry, let me put a little caveat on it. Forgive me. You can wear this device; it's vulnerable, but it allows you to have all those cool features that the Looper community has made. Please raise your hand. There's a question about whether there's a way to secure it and have all the feature sets.
We're going to get to that in a minute, but the question to the audience is who would wear a vulnerable pump if it allowed you to have all these cool features that the Looper community is making?
Raise your hand. Who here would never wear a vulnerable pump that had no authentication? Well, so this is the elephant in the room. This is the seeming conflict between the Looper community and the security space. We've had multiple security researchers over the years discuss real vulnerabilities
in these connected medical devices that pose really scary safety risks, not just to your privacy, but to your physiology, to your well-being. The FDA, in my opinion, has done a fantastic job over the last 10 years pushing device manufacturers to do the right thing about security, right? They've done the post-market guidance.
They have a pre-market guidance document that says if you want to market a device in the United States, you have to be this tall on the security side. You have to do these basic practices. And I will say this. The FDA also is the first to the podium to defend security researchers when device manufacturers try to take a shit on them, right?
When device manufacturers try to intimidate or do less than desirable things to security researchers, you know, the FDA is at the podium defending this work. But they're tasked with the safety of medical devices, right? And so if there's an issue, if there's a vulnerability in a medical device,
they might issue a recall. Then those devices are no longer going to be in the community for Loopers to innovate on, right? This is the seeming conflict. And one of the points of this talk was to try to say we want our cake
and we want to eat it too. You know, what we should be doing instead of fighting between the Looper community against the FDA is instead using this as an example of a market failure. That consumers, patients, parents of type one diabetics want access to the data.
They want better control over the technology that keeps them alive. And that's something that they demand and they deserve. At the same time, we want secure medical devices that don't have hard coded passwords. What we should be doing is pushing device manufacturers to do the right thing.
And what we also don't want to do in this dystopian future is allow for documents like the pre-market guidance, recommendations that devices should be secure to be a reason medical device manufacturers cite as to why they can't be open. Why you can't have access to your own damn data.
Because they'll say, oh, we can't expand the attack surface. Why don't we just challenge that paradigm and say, why don't you employ better secure development practices? Retool your security infrastructure to be both open, transparent, allowing for patients to have access to that data, and then far more defensible from a security posture.
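As one minimal sketch of what "open and defensible" could mean, with every name below invented: leave telemetry reads open to the patient, and require settings changes to carry an HMAC under a key the patient holds, rather than a hard-coded vendor password:

```python
# Sketch: open data access plus authenticated control (hypothetical design).
import hmac, hashlib, json

PATIENT_KEY = b"provisioned-to-the-patient"   # not hard-coded vendor-wide

def read_telemetry(state):
    """Open: your own data, no gatekeeping."""
    return json.dumps(state["glucose_history"])

def apply_setting(state, command: bytes, tag: bytes):
    """Defensible: reject any settings change without a valid patient MAC."""
    expected = hmac.new(PATIENT_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise PermissionError("unauthenticated settings change rejected")
    state.update(json.loads(command))

state = {"glucose_history": [140, 150, 163], "max_bolus_units": 5}
print(read_telemetry(state))                        # always readable
cmd = json.dumps({"max_bolus_units": 6}).encode()
apply_setting(state, cmd, hmac.new(PATIENT_KEY, cmd, hashlib.sha256).digest())
print(state["max_bolus_units"])                     # 6
```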
Is that fair to say? Is that the best thing we could do? Yeah? But we need you out there telling people that it should happen. So I want to close this out by talking about how this all interacts with cyber law and the stuff EFF does and competition.
So medtech companies don't like that patients are jailbreaking. And they use laws like the Digital Millennium Copyright Act to take down software that allows you to change your firmware. It might allow you to change your firmware to make it more vulnerable. It also might allow you to change your firmware to make it more secure.
So for example, Abbott Labs got GitHub to take down LibreLink, which was software for the Libre II glucose monitor in 2019, citing the Digital Millennium Copyright Act. And they argue that this was about patient safety. A thing that is a common motif in medtech research, especially around medical implants,
and especially around medical implants for people with diabetes, is that firms do not respond in a timely fashion, or sometimes at all, to really significant vulnerability disclosures about their products when they're not being used by loopers to give themselves a better healthcare experience. So for example, in 2019, a couple of hackers,
Billy Rios and Jonathan Butts, did a responsible disclosure to Medtronic to tell them that their MiniMed Paradigm pump was super vulnerable. And they sat on that vuln for two years. And then last year at Black Hat, the researchers who revealed the vuln released a proof of concept called a universal remote for killing people.
And that's what actually got Medtronic to take action. So this is a pattern that's repeated across all the major hardware vendors, including companies like Johnson & Johnson. This track record of foot dragging when there are issues that are live threats to patients, but leaping to action when there's a live threat to profits.
In the shareholder communications that these firms make, they're pretty blatant about what they want to do. They want to build closed ecosystems for closed loops, where you have a single vendor providing the algorithm, glucose monitor, the pump, and sometimes they talk about proprietary consumables, either proprietary formulations or proprietary packaging for the formulations.
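A hypothetical sketch of the lock-in mechanics being described here, with all names invented and no particular vendor implied: the pump challenges the cartridge, and only consumables that hold the vendor's secret can answer. Nothing about the handshake makes the insulin safer:

```python
# Challenge-response consumable check (hypothetical "inkjet tactic" sketch).
import hmac, hashlib, os

VENDOR_SECRET = b"burned-into-official-cartridges"   # invented

class Cartridge:
    def __init__(self, secret=b""):
        self.secret = secret
    def answer(self, challenge: bytes) -> bytes:
        return hmac.new(self.secret, challenge, hashlib.sha256).digest()

def pump_accepts(cartridge: Cartridge) -> bool:
    challenge = os.urandom(16)
    expected = hmac.new(VENDOR_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, cartridge.answer(challenge))

print(pump_accepts(Cartridge(VENDOR_SECRET)))   # True: "genuine" consumable
print(pump_accepts(Cartridge(b"third-party")))  # False: refill rejected
```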
Basically, they want to turn your artificial pancreas into an inkjet printer and use all the printer tactics that we have historically seen in the printer world. And this has triggered an absolute inferno of mergers and acquisitions activity as private equity companies see a lot of potential upside
from building these closed ecosystems. And the closed ecosystems beget more closed ecosystems. So there's a great advocate for this stuff, a woman who calls herself the savvy diabetic. Her name is Joanne Milo. And in June, she sent a letter to the FDA, objecting to the merger of a glucose monitor company called Dexcom with a pump company called Insulet.
And although this was super anti-competitive, there's a reason Dexcom was doing it, which is that Medtronic had bought another insulin pump company and locked out Dexcom. They did that with a company called Companion. And Tandem had also blocked them. So here you have this company that makes the glucose monitor
and is watching all the insulin pumps get bought up by other glucose monitor companies and locked down. So they're like, we have to have an insulin pump company too. But for all that they're the victim in this, they're also a terrible company with a long history of threatening their patients when the patients take action to try and extract their own data.
So none of these firms have their patients' backs all the time. And if we allow them to merge and create these closed ecosystems, it comes at the expense of patients who have idiosyncratic problems with their health that they want to resolve by mixing and matching pumps and algorithms, consumables and monitors.
It also harms them in that it makes the supply chain brittle because if your pump only works with one glucose monitor and that glucose monitor can't be found because of a supply chain problem, then your whole pump breaks down. We saw what single sourcing vendors did during the pandemic and after the pandemic with things like the baby formula shortage. So it is true when these firms say patients might harm themselves
by modifying their devices. It is true. And it is true when their security researchers or security staff say, we are only locking these down because we want to help our patients. It's true that that's a thing that they do. And it's something that comes up a lot when I speak at DEFCON and in other hacker forums
about competition more broadly. I work on the competition team at EFF and we're talking about dismantling big tech and letting smaller firms enter the market. And oftentimes, I'll speak at an event like this and someone will come up and say, you have no idea the eye-wateringly terrible stuff I block every day in my job at Apple or Facebook or Google. And they're absolutely telling the truth.
But the thing that they need to recognize, that we all need to recognize, is your boss will pay you to defend me from his enemies. Your boss is never going to pay you to defend me from your boss. And this is why, ultimately, if we're going to have an arbiter that decides what mods are safe and which ones aren't, it can't be the manufacturers.
There's a role for the FDA to show up and say, no, don't do that, you'll kill yourself. But if we rely on the manufacturers to do it, sometimes they'll be sincere, sometimes they'll be talking out of their ass. And what they really mean is, don't do that or you're going to spook our shareholders. And we shouldn't have to figure out which one they mean. We should have access to a democratically accountable system that tells us what the truth is.
Can we just clap on that? On that note, we have a little less than 10 minutes. If anybody has any questions, we'd love to get a couple of those out of the way. We've got a mic back there. Folks are able to get in line. And thanks so much for coming out.
You talk about these acquisitions, and why isn't the FTC getting involved in antitrust? Yeah, you know what? I got good news for you about that. So the FTC for 40 years took a nap. The official doctrine on antitrust enforcement in America
and most of the world for 40 years has been something called consumer welfare, which basically ignored all monopoly problems and allowed, for example, Microsoft to corner 95% of the OS market and lots and lots of mergers. Two companies make all the beer in the world. There's one professional wrestling league. All the glasses are made by the company that makes all the frames and owns all the retailers. It's terrible.
But five years ago, a law student named Lina Khan published an astoundingly good Yale Law Journal paper called Amazon's Antitrust Paradox that demolished the arguments for antitrust forbearance. Today, that law student of five years ago is the chairwoman of the FTC. She has promulgated amazing new guidelines to block future mergers,
and she's just announced antitrust scrutiny of privacy practices by firms, bypassing the deadlock in Congress and promising to regulate firms on privacy directly through the administrative branch. She is a hero. She needs our support. There is a public listening session on September the 8th that the FTC is holding on privacy. You can go and intervene.
Your position as technologists is really going to make a difference. We are in a moment in which we have better news on antitrust than we have had since I was 10 years old. I cannot overstate how fucking great the antitrust picture is right now. It is amazeballs.
Hi. I have a question about trust, as somebody who lives and breathes this: I'm a BiPAP user. Closer to the mic, please. Hi. As a BiPAP user, what you've been describing as sort of the dystopian future is already kind of reality for me. Philips Respironics has been killing people
knowingly for years, and it took them three years to actually recall their products. Similarly, ResMed: if you don't pay them enough, they're not going to tell you that you have Cheyne-Stokes breathing, even though it's just a bit flip that you need to do to unlock that. How can we actually have trust in the medical institutions today?
That's a really good question. Sunshine is the best disinfectant. And Cory, I mean, why is this black box, proprietary aspect of some of these things seen as like a competitive business practice? Why is there this ouroboros of, oh, if we can't keep these things secret, we can't innovate? So I think the problem is, again, antitrust.
I think that when you have an industry dominated by like five firms, if they all settle on the same convenient bullshit, like if I told you about it, I'd have to kill you, no one who's credible, which is to say no one who works for a comparably sized firm steps up and says, no, no, no, wait, that's nonsense. We absolutely can share this information with people.
Not only that, but when firms are very concentrated, they have a lot of money to spend on lobbying. So the way I think that you get good regulation is by making sure that firms are neither too big to fail nor too big to jail. People looked at that photo of Donald Trump at the top of Trump Tower with all the people who run all the tech companies around a table in 2016 and said, isn't that terrible
that they're meeting with Donald Trump? And I'm like, you know what's really terrible? That they fit around one goddamn table, right? Because if they can fit around a table, they will sit around a table. And when they sit around the table, they're gonna figure out how to screw us. And so this is why as a prerequisite, it's not enough, but it is an absolute prerequisite for good regulation.
Firms, sectors, have to be diverse, with firms who will blow the whistle on each other when they're telling convenient commercial lies. So, I saw the five minute mark, so I'll try and make this quick. I have one of the Abbott FreeStyle Libres, whatever the hell it's called.
It's gen one, it's NFC only. And your question about what I trust this with, I don't need insulin, thank God, yet. But there's a gen two with Bluetooth. I'm not getting that. So I have to upload my data to the cloud as a necessity to inform my doctor. Also, I had to advocate myself to get this.
And then I was gonna pay out of pocket. It just happens to be on my insurance. So I'm very lucky. I'm very privileged to have insurance. So that's kind of where we're at with diabetics in general. Yeah, that was my... I had another question, but I don't think there's enough time, so I'll let other people go. Thank you. Well, I think that, you know, one of the things alt firmware can do is let you disable the Bluetooth on your device.
And that's one of the reasons that we should support alt firmware. Real quick addendum: also, the other thing that I could have done with that hacked firmware is not have to replace this every two weeks. Right. I'm fortunate that I have good health insurance.
However, my insurance dictates what devices I can use. So I'm wondering if you have any comments on where the insurance companies fit in all of this. And if you follow the money, there's quite a bit of money in the medical insurance business.
That's a really great question and comment. And I mean, I'm sorry that you're in the position of so many other people, which is you may want to select the device for a particular reason, but you can't because of your insurance. You know, one of the interesting arguments I've heard or opportunities is to talk about how these devices pose risks to some of those payers.
And what am I talking about? Health insurance is going to care if they have to spend more money on you. They also care how much the devices are that they have to spend to treat your illness. And so if we can talk about risks to privacy, risks to security, and what the ultimate outcome of that would be from their bottom line,
it may be a persuasive argument. They also are very commonly looking for reasons to try to save a buck. And so as these devices become more and more connected, and more and more vulnerabilities are found in the wild, their recalls can be quite costly. That'll eventually hit insurance companies' bottom lines.
And I could see them in the future doing a risk calculation to say, well, this pump might cost 50 bucks more than this other, but it's more durable. It's more secure. We're having less headaches with this. And so more and more as we can, they themselves can realize that, I think we'll move in a better direction, but also you as a patient should let them know
that this is really important. Being a voice and an advocate for choice, and for privacy and security being part of that choice, is very persuasive to the people that regulate insurance companies as well. And so it's gonna be a long haul; keep going, but raise your voice so that we can change that dynamic. And then also just try to advocate broadly,
so that it isn't so stark a contrast between device manufacturers. Really, we wanna raise the security and privacy of the entire device ecosystem. That would help all people across all types of insurance. Yeah, we don't wanna create a market where, like, if you're lucky, your insurance lets you eat in the restaurant
where they're forced to wash their hands after using the toilet, but otherwise they don't. We just wanna eliminate the restaurants where they cook your food without washing their hands. Like that's what we actually want. We just wanna abolish the bad devices. What an appetizing metaphor to end on. We got 90 seconds before we turn into a pumpkin, so I don't want someone to ask a great question that we can't answer.
So thank you guys so much for coming. We're gonna head over here. Anybody in line who wants to come and find us, we'd be happy to talk outside. Really appreciate everyone coming. Have a great con. I'll be at the EFF booth later as well.