BIO HACKING VILLAGE - DAY ONE
Formal Metadata
Title | BIO HACKING VILLAGE - DAY ONE
Series Title | DEF CON 26
Part | 173 / 322
Number of Parts | 322
License | CC Attribution 3.0 Unported: You may use, adapt, copy, distribute, and make the work or its content publicly available, in altered or unaltered form, for any legal purpose, provided you credit the author/rights holder in the manner they specify.
Identifiers | 10.5446/39806 (DOI)
Language | English
Transcript: English (automatically generated)
00:00
Hi, hi, it's 10 a.m. guys, hi. Okay, so quick and dirty intro is, I remember you, hey. Quick and dirty intro of the Biohacking Village is that this is our fourth year. Last year also, I talk fast, please let me know if I talk too fast.
00:22
Fourth year, first year we had nine talks, second year we had 27 talks, two demos, third year we had 37 talks, three demos, and I came to the realization that we weren't really encompassing the whole medical ecosystem. So what we did this year was, we have a talks village next door in Palermo,
00:41
we have a medical device hacking village with four different companies that brought their medical devices. We have BD, we have ICU Medical, Philips, and Thermo Fisher, as well as antique medical devices, in case you wanna look at those. And in the village next door to that, in Siena, we have an implant village, implant slash wet lab. So you can go on the website,
01:01
which is village, V-I-L-L-A-G-E, B.io, and sign up for the implants, and that's going on today, all day. Saturday morning, Saturday afternoon, we're having a wet lab with Michael, who is doing a talk tomorrow,
01:21
as well as a badge workshop, the badge this year. We changed it a little bit, we did it based on microfluidics. Do you know what that is? Some of you do. If you don't, come over tomorrow, the badge maker is giving a talk about it, and he's gonna have a lab to show you guys how to use it, really awesome. And then Sunday, we're going back to implants.
01:41
I think that's all I got, so I'm throwing it back to you. Hi, great news, I'm losing my voice, perfect timing. I'm just gonna start off by saying a huge thank you to Nina, all the other organizers of the biohacking village, and particularly our sign language interpreter, who I'm hoping is gonna make me sound a lot smarter.
02:02
Please, I really, please. Okay, so before I get started, I'm just gonna have a quick disclaimer. I think there's a decent chance that there'll be people in the room who have participated in perhaps some of the disclosures I'm gonna talk about,
02:20
or who work in organizations involved in them. You may well have more information about the disclosures than I do. I'm sharing sort of an external perspective on any that I wasn't directly involved in. It's really just sort of like, here's the lessons that I grabbed based on what I saw happen, so don't judge me, too harshly anyway.
02:41
And if you wanna kind of educate me after the fact, I'd love to learn more about it, but please don't stand up and start screaming at me. I don't react super well to that. I get sweary, and then it's really awkward for an interpreter. She has to deal with that. So who am I, other than just being sweary and British? I'm Jen Ellis.
03:00
I work at Rapid7, and my general sort of thing that I do is I try and figure out how we can create positive social change around security, since security is a societal issue. So as a lot of that, we do a lot of vulnerability research, which we disclose through coordination to try and help people really understand the real risks.
03:22
We also work with the government. We try and get policy changed to reflect better cyber security practices. This is actually me testifying to Congress, where I brandished a vulnerable toy at them, which succeeded in making them think I was completely insane, so that was good.
03:40
In terms of vulnerability disclosure, I have worked in vulnerability disclosure for, I don't know, eight years maybe, a bit more than that. I've probably worked on a couple of hundred vulnerability disclosures in that time. The thing that's sort of interesting for me, about me, on this topic, is that I started off on the reputation management side.
04:03
So I was the person in the technology vendor organization who would be like, oh, this looks like it's bad. We should kill it with fire. And so to begin with, I was like, oh, researchers are bad news, we don't like them, that's terrifying. And then I kind of had my come to Jesus moment, I got converted, and now I testify to Congress
04:23
about how we can protect security research. And I have some of my best friends as security researchers. So I've done a lot of vulnerability disclosure. And one in particular that I did in 2016, which I'm sure is your main memory from 2016, because not much happened that year.
04:42
I worked on a particular vulnerability disclosure that was in the medical device sector. The reason that I did that was this guy here, who some of you may know, Jay Radcliffe. Jay is a type one diabetic. And in 2011, his primary care physician recommended
05:03
that he move on to an insulin pump that would be connected to his body. And because he works in security, he thought, well, if I'm gonna connect this damn thing to my body, I'd kind of like to know what the deal is with it, how secure it is. So he did a bunch of research on it.
05:20
And at that time, he became somewhat known in the security community as a medical device researcher. He also had a lot of learnings from that experience. It wasn't his favorite thing ever, because what happened was there was this huge news hype about it, and understandably, patients got really concerned.
05:41
And back in 2011, medical device research wasn't being done a huge amount. The press still sort of went into a frenzy over it. There wasn't a lot of sophistication. And so there was this huge hype cycle. Jay ended up with patients' parents reaching out to him, saying, hey, my kid has one of these devices
06:03
attached to them, should we take it out? That's a really awful position for a security researcher to be put in, and it was actually quite traumatic for Jay to be put in that situation. He's not a doctor. He's not the person who should be making that decision. And so the great thing for me working with him in 2016
06:20
was he brought all that experience and that knowledge and that thoughtfulness with him when he again started to look at an insulin pump. And again, it was because his doctor said, hey, I know you didn't want to have one last time, but you should reconsider. So there we were five years forward, and we were looking at the Johnson & Johnson Animas OneTouch Ping.
06:42
Now, the way that this bad boy works is there is a device that connects to your body that delivers insulin, and then there is a remote control that you would carry with you that communicates with that device, and the remote control monitors your insulin level, and then it tells the device,
07:02
hey, it's time to release insulin. It communicates via radio frequency, and that radio frequency can be either disrupted or it can be spoofed. So you can either withhold insulin delivery or you could potentially push a fatal dose. That was Jay's research.
07:20
That's what he discovered. So he came to us. He at the time worked with Rapid7, doesn't any longer, but he did at the time, came to us and said, this is what I found. And we were like, okay, well, we should do something about this. We should go out. But we knew that there were some challenges. Vulnerability disclosures are not always super popular,
07:41
and so we had a good idea that this one in particular might cause some heartache. It was gonna be a little bit different. Even with the experience that I have and that a bunch of Rapid7 people have around vulnerability disclosure, the number of them that we've done, I mean, the team that we have have worked on hundreds and hundreds.
08:01
We knew that this one was gonna be a little different, and there were a few reasons for that. I mean, the first was, we were talking about something that involved life and death. And we didn't wanna go crazy and say people will definitely die. That's it, we're definitely gonna kill people with this thing. Because the reality is that for an attack like this,
08:21
you have to be within a proximity, you have to have the right technology, you have to know that the individual has the device, and you have to know what to do about that. So you're talking about super-targeted attacks, and your average person doesn't have the right profile to be targeted by that. So we knew likelihood was low, but potential risk was high.
08:42
And we tried to balance those things. Still, when you're talking about something that can result in death, it just results in you handling it very differently to how you might if you were talking about something that's to do with networking, for example. Another thing that made this a little bit different, a little bit challenging was,
09:01
I don't know if you've heard of Johnson & Johnson, but they're kind of a big deal. Rapid7, on the other hand, is, you know, we're growing, right? We're growing. So we were kind of aware that we were gonna be reaching out to this organization that is hundreds of thousands of people would have a very complex organization internally,
09:23
probably would have like a varying amount of security knowledge internally. We knew that they would have an army of lawyers, an army of comms people who were much like me in my old job, where I was like, kill it with fire. And so we thought that was all gonna be quite difficult to deal with. We also knew that they operated
09:41
in a highly regulated environment, and that that would probably make them more defensive about getting a vulnerability disclosure, because there's a lot of fear about what their regulator is gonna do, how they're gonna respond. And when we were doing this disclosure, it was not that long after the FDA had come out
10:04
with their post-market guidance. So we knew that there was like, definitely gonna be tension around that. And then the last thing that really kind of changed the dynamic for us was we knew that with Johnson & Johnson, typically when somebody kind of knocks on their door and says, hey, there's a problem in the product, they're like, and here are 500 of my closest friends,
10:22
and we have a class action suit for you. And they're not really kind of knocking on the door and saying, hey, we found a problem in the product, and we wanna help you fix it. So we knew that there was a likelihood that they would probably, one, not have a reason to trust us, and two, err on the side of protectivism
10:42
and conservatism and basically being like, here are our lawyers, please do chat with them. So we were kind of, we were apprehensive going into the disclosure. We got lucky, we got super lucky. And the reason we got lucky is because of a guy called Colin Morgan, who some of you may know.
11:01
By the way, the subtitle of these slides is an ode to Jay Radcliffe and Colin Morgan, who I love. And they basically took us through this process. Colin works in the security team at Johnson & Johnson. He had been engaged with the guys from I Am The Cavalry, some of whom are in this room. Quick thank you to them for everything that they do.
11:23
And he had had his own come to Jesus moment where he'd been like, hey, we make stuff that impacts people's healthcare and their safety. And so we should probably think about security from the ground up in that. And so he had been on a two year journey inside Johnson & Johnson.
11:41
I made it sound like he went on a journey to the center of the earth. And maybe he did, he probably fought great battles. And he was trying to get Johnson & Johnson to change their processes, build a vulnerability handling program, and he succeeded. So he got them to build this program. We happened, just coincidentally,
12:02
to knock on their door a week before that program launched, which was either really great timing or really terrible timing, depending on your perspective. He, we ended up being his test case, essentially, is what happened. And so there was a lot of education through the process. But because Colin was there, and because Colin
12:22
was very understanding of what vulnerability research is all about, he understood what our intent was, he gave us the benefit of the doubt, he pushed the engineering team to really take it seriously. Because we had that person internally who could be that advocate for us, it made all the difference. It was a massive game changer.
12:41
And what it meant was that through the process we were able to constantly come back to this unifying point of how do we protect the patients? Once we had got the idea that there was a level playing ground that we both recognized we wanted to protect patients, it actually, to an extent, didn't matter that the details were difficult.
13:02
And we would argue over the details, and frankly, Colin and I had some pretty late night phone calls so that we could vent at each other before we got on the call with all the rest of the team the next day, where we could be really calm and be like, this is what we think we should do. But because we had that opportunity and we had that trust, we were able to really prioritize
13:20
what was best for the patients. That doesn't mean there weren't surprises along the way. One of the coolest things was that after J&J had verified the vulnerability, they decided that they would proactively communicate with patients, which I think the FDA told us that that was the first US medical device manufacturer
13:41
that proactively communicated with patients about a cyber security risk. So that was great, and we were super excited when they told us they wanted to do that. And then they told us that they were sending out letters, and I felt like I had gone back in time. I didn't realize that letters were still a thing,
14:00
and I didn't really know how to process that or plan around it. So we had some interesting conversations about at what point does a thing become public? Because they wanted to have us do our part of it at the point that the last letter was received. And I was like, but the first person who gets the letter is gonna take a photo of it and go on Twitter. Like, have you met social media?
14:23
Which for some of Johnson & Johnson is possibly a no. So again, because we had built this sort of relationship and this trust, and we had been really kind of partnering with them on timing and all those kinds of things,
14:42
we were able to solve this problem and come to an agreement. They actually pushed out, they like changed the way that they do the letter sending to fit in with our recommendations. And sure enough, the first wave went out, someone took a picture of it and stuck it on Twitter, and I was kind of like, I'm just gonna send this to you and not comment on it, just leave it there.
15:02
And so it was great. We managed it together. And as I said, like every time that we disagreed, and there were times where we were like, no, it's this way, no, it's that way. Our unifying thought was always how do we protect people best? We didn't wanna cause panic. We didn't wanna create an opportunity for adversaries.
15:21
And so we were able to kind of come together to save the world, if you will. Because we took that approach, it meant that when we went out with the story, J&J were able to control the message. They were able to go out with this really positive message. And that meant that the press covered it as like a sort of affirmative action
15:41
from Johnson & Johnson that looked very positive towards the patients. And in fact, the response that we got was very positive. People thanked us for taking a really thoughtful approach. The advice was not, hey, you should rip this thing out of your body. We were really clear from the beginning about that. Actually, for people who are interested, the advice was don't use the remote control.
16:01
You didn't need the remote control for the pump to work. You would just have to manually control the pump, but it would still work fine. So that's what the advice was. And we got a lot of people who reached out to us and thanked us for the approach that we took on that. And we didn't have hysteria among patients. Nobody asked Jay if they should take their kids off the thing.
16:21
He didn't have to deal with the trauma of that. And the FDA were also super positive about how all of this went. I'm just gonna give you a second to read this. I normally hate telling people to read a slide, but I think me reading it out would be a little bit weird. The net of it is that the FDA basically said that this is the exemplar that they want
16:42
other manufacturers and researchers to follow because of the way that it minimized patient impact. So as a quick recap, what did we learn from the process? I cannot emphasize enough the importance of investing time in building trust and empathy,
17:01
regardless of whether you are on the manufacturer side or the researcher side. You need to approach the table with the benefit of the doubt and figure out how to get to common ground. For us, that common ground was all around what's best for patients. It will be different in every scenario, but I think that you just need to identify what it can be and use that as your guiding principle.
17:23
Another big learning for us that we were really clear on is that just because a thing can cause harm, it doesn't mean it will, and that we wanted to really avoid creating fear, uncertainty, and doubt. We didn't want to be sensationalist in the way that we communicated this out. The whole incident with the letters
17:42
highlighted to us this whole thing about expectations and being really clear on the detail. If we hadn't gone into every detail and really kind of zeroed in on it, then they would have sent out letters and we would have had no clue what was going on, and the next thing we know, we would have lost control of the narrative and it could have blown up on us.
18:03
This is a note to researchers. Rapid7, as I said, we do a lot of vulnerability disclosure. We have a published process. The process is we go to a vendor and we tell them about it, and 15 days later, we go to CERT and then CERT's clock starts, and CERT's clock is 45 days.
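That published clock is easy to express as plain date arithmetic. A sketch follows; only the 15- and 45-day windows come from the talk, and the start date below is hypothetical:

```python
from datetime import date, timedelta

# Rapid7's published clock as described: 15 days with the vendor,
# then CERT's 45-day clock, for 60 days total before going public.
VENDOR_WINDOW = timedelta(days=15)
CERT_WINDOW = timedelta(days=45)

def disclosure_dates(vendor_notified: date):
    """Return (cert_notified, public_date) under the published timeline."""
    cert_notified = vendor_notified + VENDOR_WINDOW
    public_date = cert_notified + CERT_WINDOW
    return cert_notified, public_date

# Hypothetical start date for illustration.
cert, public = disclosure_dates(date(2016, 4, 1))
print(cert, public)  # 60 days apart from the vendor notification
```

In practice, as the talk notes, the clock is a floor rather than a hard deadline when the vendor is engaged.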
18:20
So in total, our published timeline is 60 days before we will go public. However, and this is on our website, if a vendor is engaged with us and we see that they're taking it seriously and they're working on it, we'll obviously try and be flexible with them. We have no desire and there is no benefit for anybody here in sort of preempting
18:42
and going out before people are ready, particularly when you're talking about something like a medical device. Because Johnson & Johnson were engaged in the beginning and we knew that they were taking it seriously, we were happy to wait. So in total, it took about four months from beginning to end, which I don't think is a particularly bad timeline. I think that's a completely reasonable one
19:00
given what we were dealing with. I think for researchers, you have to decide where your line is. I'm gonna talk a little bit later on about a disclosure that happened yesterday where I think the timeline has demonstrably been too long. And so it is hard as a researcher to decide how much leeway to give, I was about to say rope, but I'm gonna back off from that,
19:21
how much leeway to give the manufacturer on timing. And I think you have to judge it based on, one, the potential harms, and two, how engaged you really think they are and how seriously you really believe they're taking it. So public disclosure can be handled in a way that does not cause trauma.
19:40
I think that the numbers six and seven here were probably the biggest learnings for Johnson & Johnson. I think that they would never have anticipated that this could be the case going into it based on their prior experience of generally bad things, whether it's medical devices or not. And so I think this one was a pretty big learning on their side. So that's the J&J one that I worked on personally.
20:03
Now I'm gonna talk about some others because at the same time that this was going on, this was also going on. I'm expecting that some of you are fairly familiar with this. And so I'm not gonna pick up on it too much and go into what Muddy Waters did or didn't do
20:22
and who was in the right and who was in the wrong on this. What I will say is the research itself was done by another entity, Not Muddy Waters. And once they had that research, they gave it to Muddy Waters, they sold it to Muddy Waters, and Muddy Waters then used it to short St. Jude's stock.
20:42
I think the lesson here that I would take for researchers is there are lots of researchers who find things and they don't know how to handle disclosure. They don't want to handle disclosure. They're afraid of legal repercussions. They're afraid of taking on a big vendor. They have a day job that doesn't allow time for it.
21:01
They discovered it during the course of their work and their employer doesn't want them to do a disclosure. There are all sorts of reasons that researchers don't wanna do disclosures themselves. We recently did a disclosure in an electronic medical record system on behalf of a researcher who had discovered it in the course of doing his job and his employer didn't want him to be the one
21:21
who kind of stepped out into the limelight and took it on. I would just say that as a researcher, if you are going to hand your research off to a third party, just be really careful with who that third party is and make sure they're aligned with your goals. Make sure that you are not handing it off to somebody who's gonna handle it in a way that you perhaps wouldn't have wanted
21:41
and that you're not gonna lose control of the narrative in a way that feels concerning to you. There are lots of third party bodies that you can work with. For example, there's ICSSert, which I understand is now no longer ICSSert. It's now the end kick, but the function is still there.
22:01
It's just under a different name. So you can go to the end kick. They'll coordinate disclosure for you. You could reach out to the FDA and they'll coordinate disclosure for you. There's all sorts of things that you can do. If you want to hand it off to somebody else, there's also ZDI if you wanna look at another body.
22:20
The I Am The Cavalry folks will be happy to help you. You can reach out to them. And you can also check and see whether the vendor has a bug bounty program. And if so, you could reach out to the company that manages that. So there's lots and lots of different ways you can do it that means that you won't necessarily have to lose control. The other thing I think that we can take
22:41
as a lesson from this is that not everybody is motivated by a concern for patient care. This one happened, I wanna say around about March this year. Philips disclosed a number of vulnerabilities in their imaging systems.
23:03
And I kind of wanted to talk about this one because this has privacy concerns more than harm concerns. And I think there's a lot of dialogue in the media, particularly around medical device vulnerabilities that make it seem like everything is a plot twist from Homeland.
23:22
And the reality is a lot of it's not anything to do with that. But it can still have a really big impact. The other thing I wanted to flag about this is that Philips takes vulnerability research really seriously. They're actually pretty sophisticated in the way they handle this. And because they do it habitually and they've built up really great processes,
23:41
it's become kind of business as usual. And I mean that with the highest possible esteem. I don't mean that as like, oh, having vulnerabilities in Philips is business as usual. Everybody has vulnerabilities in their technology because they're made by humans. And as we know, to err is human. I think the key here is that
24:02
they've got to the point where they've taken the hysteria out of doing it. And so they've got to a point where what they're showing is just real transparency to their customers, real accountability to their customers. And they're demonstrating that responsibility in addressing these issues really quickly. I think that's awesome. I think that's where we should all strive to get to,
24:21
frankly. GE, this happened also in the spring, I believe. This was a series of technologies from GE that have hard-coded or default passwords. I like this one because that is a thing that we have known about for a really long time
24:42
is a no-no in security. But people still do it, right? And I think what that highlights is that generally speaking, there's a pretty big disconnect still between engineering teams and security. And I think that there's a lot to be done. Like if you in this room work at a medical device manufacturer, the thing I would urge you to do
25:01
is take Colin's journey and figure out how do you go to engineering and build security in from the ground up? How do you do secure by design? How do you educate them on things like why hard-coded passwords are a really bad idea? And I think that we are seeing progress in that. There are people who have taken that journey,
25:21
who've taken on that battle. And there are lots of people who really, really care about the product that they build and how it impacts their customers. And so they want to get this right, but there's still a way to go. Even organizations as sophisticated as GE, who've been around for a really long time, still have challenges with this stuff.
25:42
The other thing that I like about this example is the researcher who disclosed this is a guy called Scott Erven. You guys might know Scott. He does a lot of research in medical. He's been doing medical device research for a really long time, absolutely as long as Jay has. They were a couple of the first people doing it. And when Scott started, I think he would be fine
26:03
with me saying this if he were in the room, he was pretty bombastic about it. And he was pretty gung-ho with the vendors about how to address this stuff. And that was really terrifying for them. And so what he would get is he would go to vendors and he had all this goodwill and he wanted to help them fix it
26:21
and solve the problem. It was all very well-intended. And they would be like, oh God. And they would take a massive step back and then they would disengage. And then the process would become slow and unwieldy and not great. It was hard to build the trust that we talked about and the empathy. And Scott has like massively changed that. He's built his credibility in this space.
26:41
He's built an approach that is based on building credibility and trust. And now he has these great relationships where he can go out and talk about this stuff and really see people taking it seriously and responding to it really quickly. And I have a lot of respect for that, like a lot of kudos to Scott for doing that. So my last example that I'm gonna go through
27:01
is not a company, it's a person. Again, somebody who's been doing medical device research for a really, really long time. And I'm sure that lots of people in the room know about Billy. He's the founder of Whitescope. Yesterday, Billy released some research with his research partner, Jonathan Butts from QED.
27:21
And the research was on Medtronic pacemakers. Now, the thing that's interesting about this one is that Billy and Jonathan reached out to Medtronic two years ago. It took Medtronic 10 or 11 months to even verify the vulnerabilities. Even with my, hey, let's give everybody
27:41
the benefit of the doubt and, like, let's take time and make sure we do this right. Basically, I sound like a goddamn hippie. Even with that, I would say I think that 10 or 11 months is an outrageous amount of time to verify vulnerabilities, particularly when you're working with researchers of the caliber of Billy and Jonathan, who I'm sure were absolutely walking them through the process and investing time and effort
28:01
in helping them understand the risk and what happened. I'm sure that they had video demos. I'm sure they had great proofs of concept. So verifying taking that long is kind of outrageous. And now, here we are, two years after initial vulnerability disclosure, and those issues have still not been addressed. And I think, you know,
28:21
you can understand technology is complex. The stuff takes a really long time to develop. It takes a long time to fix if the issues are profound. However, there needs to be a really strong focus on it. There needs to be a real response of taking it seriously. And I think that this is a situation where,
28:41
although the vendor has signals that they're interested in vulnerability disclosure, they have a vulnerability disclosure path on their website, all that kind of stuff, it seems as though, and again, this is me third-party looking in on it from the outside. I don't know what the difficulties are that they deal with internally, but it does seem as though that commitment
29:01
to handling the vulnerability disclosures and actually like internalizing them into the product is perhaps not as strong as it seems from their website. And I would really encourage them to change that. Okay, so a recap of the learnings that we've had from the third-party disclosures.
29:20
Again, be careful who you partner with. Not all medical device disclosures are going to be, or vulnerabilities are gonna be a matter of life or death. I'm really, really happy to say that that's the case. As you build your experience, they will become less disruptive, which is great because you want less disruption for your customers. I mean, ultimately, that's the goal, right,
29:42
is make your customers happy. Don't get sued. Everybody's a winner. Like, that's good stuff. ICS-CERT can help manage the process, although it's NCCIC now; I'm sorry, apparently I was a little behind the times on that one. NCCIC, for those who don't know, is spelled N-C-C-I-C. And if you Google it, there'll be information
30:01
about how to work with them and disclose to them. There are a lot of known security problems that continue to arise. Going back to Johnson & Johnson and the one that we worked on, the root cause of the issue was that that communication was not encrypted.
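As a minimal illustration of the kind of protection that was missing, here is a sketch of authenticating a remote command with a shared key using an HMAC. This is a toy, not the actual pump protocol; the command format and key handling are invented for the example, and it shows authentication rather than full encryption:

```python
import hashlib
import hmac
import os

# Hypothetical 256-bit key shared between the remote control and the pump.
SECRET_KEY = os.urandom(32)

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can reject forged commands."""
    return command + hmac.new(key, command, hashlib.sha256).digest()

def verify_command(message: bytes, key: bytes = SECRET_KEY):
    """Return the command if its tag verifies, otherwise None."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(tag, expected) else None

msg = sign_command(b"BOLUS:2u")             # hypothetical command format
tampered = msg[:-1] + bytes([msg[-1] ^ 1])  # flip one bit in the tag
print(verify_command(msg))       # b'BOLUS:2u'
print(verify_command(tampered))  # None
```

A real device would also need replay protection and key management, but even this much stops an attacker from forging commands over the air.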
30:21
But people have known about encryption for a pretty long time. It's kind of a thing. I mean, it's enough of a thing that the government wants us to backdoor it. So there's no excuse not to build encryption in, particularly when you're dealing with something this sensitive. So the other thing that I wanted to talk about is
30:41
with all the examples I've given, not one of them has required a reauthorization through the FDA of the product. And there is this great myth that goes around that I hear all the time of, oh, we can't address this issue because we'll have to go back through reauthorization. That's just not true in most cases.
31:01
And then the last one is, having a vulnerability disclosure email address on your website or a form to fill in is not the same as having an actual process or program where things will really get prioritized and get done internally. And you need both. A welcome mat doesn't really amount to much
31:20
if there's like no house on the other side of it. So with all of those learnings in mind, why does all of this matter? I'm guessing that everybody is familiar with what this is. If not, this is what WannaCry looked like. So you can probably tell from my accent that I'm British. WannaCry hit hard.
31:43
And when WannaCry hit, there were a lot of hospitals in the UK that closed. 80-odd hospitals closed. A good proportion of those, I think 60-ish, had been hit, but they didn't know to what extent they were gonna suffer. They had no way of knowing what level of exposure they had
32:02
or how bad it would be because they literally didn't know how vulnerable they were, what vulnerabilities they had in their environment. And that's, from what I understand from talking to healthcare organizations, that's actually super common. There's just a real lack of understanding of what they have and what's going on. And as a result, when situations like this arise,
32:23
the outcome of it is pretty dire. It's more, it's disproportionately bad. And so you end up with a situation where hospitals closed, people get turned away, they didn't get operations they needed because they didn't know if they were vulnerable. This issue is so big and so important that the FDA has spent a whole bunch of time working on it.
32:42
They brought out their post-market guidance a few years ago well before WannaCry. WannaCry has added a lot more scrutiny from governments, both in the US and the UK, around the world. But even before that happened, the FDA had been working on this post-market guidance,
33:03
encouraging vendors to behave in certain ways around security. They continue to push forward on this stuff. I would just like to say, I don't know if anyone here is from the FDA, but I would like to say a massive thank you to them. The work that they've done, and particularly the way they've approached it, they've partnered with the security community to make sure they're getting it right,
33:21
getting the right expertise into the conversation. They have really done great things, and they're not now resting on their laurels and going, hey, we have a document, it's all good, which is something that sometimes I like to do. Instead, they are looking for what's the next thing they can do, how can they push this further, how can they do more?
33:41
And I think that's awesome. Another reason to take this seriously is because, oh look, we have a whole village on it. It kind of seems as though people care about vulnerability disclosures in medical devices. I'm hoping that's the case. Otherwise, this keynote is going to have been a bit of a disaster.
34:01
But hopefully you guys care about this, and this is a topic that you're interested in, and I think there are lots of people who care about it. I think that the media also care about it, which means that lots of other people care about it because they read about it, and it all sounds super sci-fi and scary, and that makes people go, ooh, what's going on? And again, it's in Homeland, so that makes people care about it.
34:21
So it's all like a serious thing. On top of that, I'm actually gonna say, does anybody know what this is? Okay, great, the Library of Congress. Does anybody know why I have a slide with the Library of Congress on it? Other than Josh, who's sitting on the front row nodding. Patents, it's a close guess.
34:44
Yeah? Yes, very good. Okay, so the Librarian of Congress makes the final decision on DMCA exemptions. And three years ago, in 2015, the Librarian of Congress said, we shall have an exemption to the DMCA
35:04
on security research, provided it's done within this sort of box of what it should look like, which is basically like, do it in a safe testing environment, that kind of stuff. And that was a real game changer, because all of a sudden, researchers who'd been sitting on vulnerabilities
35:20
and afraid of disclosing them were suddenly like, oh, hey, I'm not gonna get arrested, I should totally go and disclose this. And it was a wake-up call for medical device manufacturers who no longer had that handy DMCA stick to beat people with. And so that was a really good thing. And it's been part of changing the ecosystem. And this exemption is going to,
35:42
this is a bold statement to make, but I'm gonna say it, this exemption is going to get re-approved. It's gonna roll over. It may even go further. So as a medical device manufacturer, you're gonna continue to have people knocking on your door, and you need to know that. Again, though, you're not alone, because there is an organization
36:01
that has a very out-of-date logo, apparently, that should say NCCIC, and you can work with them. They have guidelines that you can follow on how to do this stuff. I think the big thing here is that we are evolving. Like, we have definitely made progress in this field. In the years since Jay did his initial research,
36:22
which was 2011, this field has changed to a point of being unrecognizable from what it was. And people now are so much more thoughtful about how they do this. We see there are medical device manufacturers like Philips, like J&J, who have really great practices in place now.
36:41
They've got really good at how they do this. And that is a really major thing. That is a really positive thing. The environment has changed in the best possible way, and it's continuing to evolve. Again, things like the biohacking village play a huge role in that. People like you coming together and looking at this stuff and talking about it, sharing information,
37:00
helps to continue to move us forward. And so I want to thank all of you for doing that. I think it's really awesome. And I think it means that together, we'll be able to embrace the future that we all dreamt of, where technology is safe to use and we fly around in bubbles. Thank you.
37:24
Okay, so for people who have any questions, please find the microphone there in the light, and we'll be able to answer any questions you may have. Hopefully. I told you they wouldn't have questions.
37:41
What's next for Medtronic? What's next for Medtronic? Yeah, obviously they're not doing anything after two years. What's the next step to get them to take action? I think that, sorry, the question was, what's next for Medtronic? Medtronic, they haven't got where they need to be in the two years since Billy and Jonathan's
38:02
original disclosure. Where do they go from here? I think that the answer is, I think there are two answers. There's the cynical one, and then there's my optimistic care-bear one. The care-bear one is that my big hope is that they will continue to work on the issue,
38:22
that they'll continue to take it seriously, they'll continue to invest time and effort on it. The cynical one is, DHS and the FDA are involved, and the FDA is the regulatory authority in this space, and I believe that they will evaluate whether action needs to be taken, and they'll push Medtronic accordingly.
38:46
Any other questions? Do you think more regulatory frameworks are coming down the road? Do you think there's a push from the industry
39:01
to say, from the consumer side, there's a lot of consumer groups, and the lawyers are definitely all over this. So do you see that six months from now, a new regulatory framework is going to be announced that is really gonna push the medical vendors? Yeah, so the question was,
39:21
do I think that there'll be new regulatory frameworks coming? Will there be more legislation, et cetera? I think there's certainly an awful lot of discussion about it, both in Congress and in the administration. So, Congress is looking at legislation
39:40
around things like IoT procurement, labeling for products, basic security hygiene measures for privacy, and for protection of gates farm, and there are bills that come out around those things fairly frequently,
40:01
and they're increasing in intensity and interest and support. Every time there's a new major milestone headline, those bills come up again, and they get discussed more. So that's the hill piece. And then, in the administration, as I said, the FDA is really focused on cybersecurity.
40:24
Suzanne Schwartz and Seth Carmody, who are here somewhere, they led the effort on the post-market guidance. They're absolutely phenomenal, really knowledgeable, work really closely with the community, and they're definitely looking at how they can continue to help with the issues.
40:40
And so, you have to balance, right? Nobody wants to create a lot of very burdensome regulation that holds innovation back and that hurts patients that way, right? We recognize connected technologies are a thing because there's huge benefits. I don't think anybody thinks otherwise. Patients definitely benefit from using these technologies,
41:02
but they have to be able to do so safely. And so, the FDA is trying to balance those two elements together, and they're likely to kind of go in some direction. I feel like Mr. Corman here wants to share something, too. You can go out there and do something in the dark tonight. Okay, great, so what he was just saying, Suzanne is gonna be around at eight o'clock tonight
41:21
at D0 No H4rm if anybody wants to talk to her about what she's working on and where they're going with this in terms of regulation. Thank you, Josh. That's here? Oh, okay, it's in the main track. Octavius 9? My god, this is so great. Who else has information they want me to share?
41:43
If you own the white Ford that is part of the company. Any other questions? Thank you. Thank you. So now, without further ado,
42:02
our next speaker has been with the Biohacking Village since the beginning, focusing on the interface between computation and biohacking. He is currently working on a doctorate in environmental engineering, focusing on microbial carbonates to isolate pollutants. Back at the dawn of time,
42:20
he was a graduate historical researcher focusing on WMD warfare that fell into the IT field in order to make ends meet. Now, without further ado, I'd like to introduce my friend, Mr. Brinley. Thank you, Biohacking Village attendees.
42:41
Today's topic is Blue Team Bio Using Kill Chain Methodology to Stop Bioterrorism. Quick little notes, who am I? As we mentioned, I am a PhD student in environmental engineering at a very ripe old age. I've got 20 years in the IT industry, because you gotta make rent somehow.
43:02
This is my fourth year at the DEFCON Biohacking Village. And in my first year, we had some issues, technically. So we've got the sacrificial virtual rubber chicken for good luck. Today's agenda slide, we're gonna go through bioterrorism, get a quick definition, talk about some genetic engineering methods,
43:21
talk about the actual kill chain methodology as applied in information security, and then talk about the various steps that I'm seeing as a biology kill chain method, recon, development, weaponization, staging, delivery, and then we'll have our conclusions. So, why this talk?
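The kill-chain stages just listed can be modeled as a simple ordered enumeration. The stage names are from the talk; everything else in this sketch is illustrative:

```python
from enum import IntEnum

class BioKillChain(IntEnum):
    """The talk's proposed biology kill-chain stages, in order."""
    RECON = 1
    DEVELOPMENT = 2
    WEAPONIZATION = 3
    STAGING = 4
    DELIVERY = 5

def earliest_stage(observed):
    """Defenders want to interrupt at the earliest stage they can observe."""
    return min(observed)

stage = earliest_stage([BioKillChain.STAGING, BioKillChain.DEVELOPMENT])
print(stage.name)  # DEVELOPMENT
```

Ordering the stages this way mirrors how the information-security kill chain is used: the earlier in the chain you can detect and disrupt, the cheaper the defense.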
43:41
Well, first off, we're now at the point where gene engineering is actively being used to alter human DNA in the field. And last year, there was a main track talk by the chief medical officer of Intel, Dr. John Sotos, regarding genetic diseases to guide digital hacks
44:01
of the human genome. And he presents a nightmare situation about gene engineering techniques which are cheap, perfectly reliable, and easily accessible. His thought was basically, if you made genetic engineering as easy as computer programming,
44:24
and if you had replication as reliable as you have with digital copying, you could run into some real issues. So, the analog is actually malicious code development. That's why he brought it to DEF CON to discuss.
44:42
And in keeping with that, I thought, well, I work on Blue Team. I thought his talk was maybe a little alarmist, but he said, you know, why don't we put our heads together and start thinking of ways to counteract that? And I went, well, gee, we have a methodology in place
45:01
already, several methodologies, to deal with malicious code development and the effects of malicious code. So, let's start exploring directly applying that to this potential situation before it happens. We've got a 30-year head start. Let's take advantage of it. Some current technical news that plays in with this.
45:24
Recently, the big worry has been whether CRISPR-Cas9 should be described as a weapon of mass destruction in and of itself, due to the possible effects it could have if used to basically genetically engineer
45:43
either pathogens or alterations to human DNA and spread them out in the public. It turns out that CRISPR-Cas9 might not be as fault-proof as initially feared. You wind up getting coding errors
46:01
when you implement CRISPR-Cas9 genetic change. So, that would be our first little bit of technical news. On the other hand, the alarming one in this list is that some Canadian researchers last year reconstituted an extinct horse pox virus
46:22
for $100,000 using mail-order DNA. And this is exactly the situation Dr. Santos is worried about because we're talking, pox viruses are classically horrible. And $100,000, no questions asked.
46:41
I suspect actually these were university researchers. If this was somebody off the street without university affiliation, it would probably have been much more difficult, but it still plays up the insider threat aspect in any security paradigm. The folks who you expect to be doing things properly
47:03
are the folks that have the greatest access to do things maliciously. So, when you're dealing in bioterrorism, the first place you start worrying about is the faculty at the local microbiology department
47:21
and the opportunity and the ability to sneak something out. So, let's go to a definition of bioterrorism so we can get our terms clear. Bioterrorism is the use of harmful biological agents to generate a political response. I'm separating this out for purposes of this talk from biowarfare.
47:42
In this case, we're assuming that bioterrorism does not have direct state support, possibly indirect state support, but this is not a nation state's official biowarfare program. The reason for that is if you're someplace and you run into your own country's biowarfare program,
48:01
you pretty much have two choices, live with it or don't. If you run into some other country's biowarfare program, okay, you're gonna tell your country, hey, I've discovered a biowarfare program over at this country, and they're gonna either say we know about it or that's interesting. You're going to spend a long time talking
48:20
with some very polite gentlemen to describe exactly how you came across this information. And as that points out, you're dealing in national security issues, local sovereignty issues. If it's a biowarfare program, the locals are either read in or it's authorized,
48:41
you're not gonna get any leeway. And so we've just pointed out the lack of viable options. Bioterrorism presumes that your local PD, your local national police and security apparatus are not involved and would like very much to prevent this. So as an individual concerned biohacker,
49:02
you have a lot more options for who you can notify and what sort of actions they're going to take. Now, with the new genetic engineering methods, we have a new concern, which is designer or custom pathogens. And we're gonna see how that works in when we're talking about gene modification.
49:21
Now for bioterrorism, there are three main possible targets, crops, animals and people. Basically, anything that's biological that relates to life in the target area is up for grabs. But what we're doing is we're functionally using biological techniques to change the bits,
49:44
what you could call the bits of the genetic code to a configuration that's not the original one. So what it really looks like is a genome data problem. And much like dealing with, say, computer viruses and other forms of malware,
50:02
you've got a pervasive threat that will either cause harm or alter content, and it can be customized. So really, instead of necessarily a medical biological paradigm, you can sit there and switch to guarding this from an information security perspective.
50:23
And that's what we're gonna try to do here. Now real quick, we wanna go into some of the methods that are used to alter the genome. First off, you have to have proper information, and that is available mostly online, mostly open source. There are a number of databases, the BLAST database, the FASTA database, and the PSI Protein Classifier.
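Records pulled from those databases typically come back in FASTA format, so working with them starts with parsing. A minimal sketch follows; the example records are invented:

```python
def parse_fasta(text: str) -> dict:
    """Parse FASTA text into {header: sequence}. Minimal sketch, no validation."""
    records, header = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):       # a '>' line starts a new record
            header = line[1:]
            records[header] = []
        elif header is not None:       # sequence lines belong to the last header
            records[header].append(line)
    return {h: "".join(parts) for h, parts in records.items()}

# Hypothetical two-record snippet in the format those databases return.
sample = """>seq1 hypothetical fragment
ATGGCC
TTAG
>seq2 another fragment
GATTACA
"""
print(parse_fasta(sample))
```

Real tooling (BLAST clients, Biopython, and the like) handles this and much more, but the format itself is simple enough that the data-handling analogy to plain text files is direct.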
50:42
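Data pulled from these databases, or produced by the sequencers discussed below, is commonly exchanged in the plain-text FASTA format. A minimal parser sketch; the record contents are invented:

```python
# Minimal FASTA parser: a '>' line starts a new record, and the
# following lines (possibly wrapped) are that record's sequence.
def parse_fasta(text):
    records = {}
    header = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            header = line[1:]
            records[header] = ""
        elif header is not None:
            records[header] += line
    return records

sample = """>toy_record not real data
ATGCGT
ACGTAA
"""
print(parse_fasta(sample))  # {'toy_record not real data': 'ATGCGTACGTAA'}
```

In practice you would use an established library rather than rolling your own, but the point is how simple the data layer is: the genome really is handled as plain text.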
You can sit there, put in an entry on, I want to find the human genome code for Tay-Sachs disease, and post that in, it'll bring it right up. I wanna see the regular genome code for
51:01
Arabidopsis, which is a model plant, or a fruit fly, or any other common research subject animal or plant. It'll pop right up. Additionally, that's great for when you're trying to customize down to a species. If you're trying to customize down to a person
51:22
or a subset of people, you can go out now and pick up genome capture devices, like the MinION, which was demonstrated actually first at a biohacking village four years ago. A little device about the size of a USB stick,
51:42
take a sample, put it in there, it basically had a little PCR that would amplify the DNA, do its sequencing, spit out the code, or the analysis. $1,000 got you three uses, and then reload kits were at an additional cost.
52:02
There's an additional one called QIAGEN that's available. So now you have the ability to go look up a code that you want to implant someplace, and go look up the code of the person or subgroup of people that you want to plant it into. This is a little alarming if you look at things
52:21
from a security by obscurity standpoint. 10 years ago, before the Human Genome Project fully completed, we didn't have this as a possible action. So after you've got your information, you wanna go to synthesis. Pretty much all that's done for custom genetic code
52:40
is oligonucleotide synthesis. There were like four different methods that have been generated over the years. Currently it's the phosphoramidite method. Basically you're running in a cycle: you de-block your growing nucleotide chain, couple it up with the next phosphoramidite, put in a cap, oxidize it, lather, rinse, repeat until you're done
53:01
making that section of genetic code that you want. It's generally done commercially because unless you're really highly skilled personally as a process chemist, you wanna have this done commercially. And the nice thing is, at least for some of the
53:21
more dangerous code segments, and going back to the pox viruses and those Canadian researchers, generally they won't synthesize known harmful gene code. That's why I said if you're dealing with a university credential, you might be able to get somebody to synthesize something
53:41
that they otherwise wouldn't, because I'm a legit researcher, this is what I do. There's also something called MAGE, which is multiplex automated genome engineering, which came out in 2009 from the Wyss Institute at Harvard. Basically you take a little chunk of single strand DNA and you electroporate, which electrically stimulates cells
54:04
in order to get that single strand DNA through the cell membrane and into the nucleus. It anneals and melds in at some point, and the cell recovers. It's a two and a half hour cycle, and you can get up to 50 edits. And usually what this is used for
54:21
is to create edited genome diversity for doing tests. But this also means that you can get a batch of gene samples picked, worked up, filter out the one you want, and then simply go and replicate that, and have that cell replicate itself over and over by conventional methods.
54:43
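That diversify-then-select workflow, create edited diversity, filter for the variant you want, then replicate the winner, can be caricatured as a selection loop. This is a toy model of the selection logic only, with invented sequence, motif, and batch size, not a model of any real biology:

```python
import random

BASES = "ACGT"

def mutate(genome, edits, rng):
    """Apply `edits` random single-base substitutions to a genome string."""
    seq = list(genome)
    for pos in rng.sample(range(len(seq)), edits):
        seq[pos] = rng.choice(BASES.replace(seq[pos], ""))
    return "".join(seq)

def diversify_and_select(genome, batch, wanted, rng):
    """One MAGE-like cycle: make a diverse batch of variants, then keep
    one that contains the `wanted` motif, if any such variant appeared."""
    for _ in range(batch):
        variant = mutate(genome, edits=3, rng=rng)
        if wanted in variant:
            return variant  # "replicate" this one by conventional growth
    return None

rng = random.Random(0)
winner = diversify_and_select("A" * 30, batch=1000, wanted="AGA", rng=rng)
print(winner)
```

In the real process the diversity comes from targeted oligos rather than random substitutions; the point is just the diversify, filter, replicate structure the talk describes.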
Basically at that point you're just cell breeding. Finally we have to deal with our transfer methods because we've got our chunks of genetic code that we know that we want. Basic gene editing, all forms are pretty much the same overall process.
55:01
You pick a target location and you define a binding domain that your method's gonna bind to in order to say this is right before where we want to insert the custom code. You induce a double strand DNA break at that cleavage domain, you add your programmed DNA,
55:23
and you anneal it. And usually there are off target effects. Basically you don't get a perfect match up. Because you're trying to do this in bulk, it might bind at multiple places and make multiple strand breaks and insert this DNA in multiple spots.
55:41
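The binding-domain logic just described falls straight out of string matching, and so does the off-target problem: if the domain you picked is not unique in the genome, the payload lands at every match. A toy illustration with invented sequences:

```python
def edit_genome(genome, binding_domain, payload):
    """Insert `payload` immediately after every occurrence of
    `binding_domain`. More than one insertion site means off-target
    edits happened alongside the intended one."""
    pieces = genome.split(binding_domain)
    sites = len(pieces) - 1
    edited = (binding_domain + payload).join(pieces)
    return edited, sites

# The chosen domain happens to occur twice: one intended edit, one off-target.
edited, sites = edit_genome("AAATTTGGGAAATTTCCC", "AAATTT", "XXX")
print(sites)   # 2
print(edited)  # AAATTTXXXGGGAAATTTXXXCCC
```

The length of the binding domain is the trade-off the talk walks through next: a longer domain is less likely to repeat, which is exactly why TALEN's larger target domain gives fewer off-target effects.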
And that would give you a one way function, because you can't pull them back out as easily as you put them in; you take the risk of taking out the change that you wanted to make in order to take out the changes you don't want. And the thought then becomes that the more accurate you make your
56:03
transfer method, the easier it is actually to reverse your transfer method. The fewer strand breaks and the fewer off target effects, the easier it is to remove one or not have any to deal with. And if it's just the one that was put in,
56:21
you can just take it back out if you want to. Three types are ZFN, zinc finger nuclease. Basically it's a binding domain that's got its cleavage domain and it's defined by having the zinc ion in it. Delivered as a plasmid and if your target domain wasn't unique, it'll bind in multiple places
56:40
and you'll have multiple gene changes. You've got TALEN, which is transcription activator-like effector nuclease; it's used for gene therapy, delivered as an mRNA, and you can combine it with other methodologies. It's generally slower to build a proper TALEN because you've got a larger target domain.
57:01
That also makes it very, very granular and very specific, so you don't have many off target effects. And then finally the most recent one that's been getting all the news is CRISPR, clustered regularly interspaced short palindromic repeats. It's faster to generate your plasmid or other transfer method than either ZFN or TALEN
57:25
and it'll cut at any location because it's using a small chunk of guide RNA in the plasmid, but it does have a large number of off target effects, as we saw in the earlier slide where we were talking about the fact
57:42
that there is possible toxicity from off target effects. After you've got your transfer method picked out, you pick a vector. Usually it'll be either an adeno- or retrovirus, and you basically inject it around the cell,
58:03
have it infect the cell, your change takes place. Your worry at that point is that you might have some overexpression, because you're putting a large dose in and hoping that it gets to the right cells, and maybe you get two or three doses per cell.
58:21
There are some non viral vectors that you can use that cause less immune response. So you can actually literally just inject in this raw DNA. You can put it in a polymer or a fat cover. The whole point is just to get it through the cell membrane into the nucleus. Now, in reality, as I said,
58:41
I'm using a computer hacking paradigm to try and fight this. And the advantages in this case, for doing code changes, are that for the computer person, the programmer, there's less formal training required. You can edit as you see your project develop.
59:00
There's a lot less infrastructure needed. Code doesn't mutate until somebody gets a hold of it and changes it. And since you're dealing in generally electronic bits and bytes, the consequences when you're caught are a lot lighter, which is why for 30 years, people have been running rampant as computer hackers.
59:21
The two advantages for a biological attacker are you have concrete effects. You're doing real damage if you want to do real damage. Not to say that a computer hacker can't cause real damage and physical effects, but in this case, it's direct harm to the organism.
59:43
And you can select existing pathogens in the wild and possibly alter them in order to get your effect. Now, to deal with computer attacks, we've recently gone
01:00:00
from a perimeter only defense to something called the kill chain framework. And the idea for the kill chain is you want to identify all the steps required to have a successful attack. And that starts off at reconnaissance and ends with persistence. And it goes through, you know, recon, weaponization, delivery of the malware, exploitation, and it goes all the way through to
01:00:31
actions on the target and persistence. And, you know, as I said, the goal for kill chain analysis is to identify malicious action as early as possible and choose to react to the
01:00:44
threat, or to allow the threat to continue so you can study the methodology and possibly apply it elsewhere, and then react to the threat. And it's not just confined to computer security. There have been variants developed for missile defense, basically against anti-ship missiles. You know,
01:01:08
you have a radar ping, you go through a decision framework, you gather more information to determine whether this is something that's hostile or not. You choose how you're going to react to this potential hostile circumstance. You reevaluate and continue on. And we
01:01:26
actually have an analysis and reaction cycle for this. You know, gather information, evaluate it, choose your reactions, implement your reactions, gather more information, and keep repeating the cycle until the threat's
01:01:45
neutralized or you can't gather any more new information. Now, there's some cautions in dealing with the kill chain framework instead of a strictly perimeter-based defense. You've got limited resources to apply to all of your defenses, so you're going to have some
01:02:03
restraints on both your monitoring and your response actions. The earlier you react, the more chance you have of tipping off your opponent. And time spent gathering information reduces time spent reacting. You know, it's a tradeoff. But if we apply this to biology, we can form a
01:02:23
kill chain, which is similar, starting at recon, going through the development and testing phase, going to weaponization, production, delivery, and we can stop at delivery in this case because after delivery, the advantage that a biological threat has is at that point it's
01:02:42
been delivered. It is now self-acting. Either it'll work or it won't. Pretty much the only follow-on action for the opponent is do we repeat the cycle? So at that point, you know,
01:03:00
you're dealing in post-action issues, and I'm taking those out of the kill chain because they don't really add to the kill chain analysis. And what you find out is that at any given spot in the kill chain, for a malicious actor, secrecy is inversely proportional to the scale and effort put into that phase. If you
01:03:25
spend a lot of time on recon, you have a larger chance of having your guy get caught snooping. If you spend a lot of effort in production, that means that you're having a lot of resources consumed and you have a facility and there's a greater chance, you know,
01:03:44
that that will draw somebody's attention. So especially for non-state actors, you get some serious trade-offs that build up. When you're a state actor, once again, this is why I was excepting out the biowarfare programs, it doesn't matter. You control the local police.
01:04:04
You have the resources at your disposal. It's okay; we can just move on with this. But if you're a small, motivated group that's trying to do something against the local authorities, you've got to make some decisions. And we can take
01:04:23
advantage of the decisions that they choose. So our phase one is recon. What's recon? It's getting information either on the target area or the target organisms. So collection of biomaterial, collection of targeting information. Biomaterial collection can be either direct
01:04:42
sampling or acquisition of waste products. And it is the easiest step to complete. Following on in recon, you've got target selection. This is where you've got direct observation and the use of open source intelligence by your malicious actor.
01:05:02
And it's very similar to stalking. And in fact, the choke point for that then is treat it like stalking. You've got anti-stalking laws, anti-trespassing laws that you can leverage. Also, you can leverage the malicious actor's personal chatter. Motivated people
01:05:20
tend to talk a lot about things that they're motivated by; human nature. There's an old SNL gag from when Eddie Murphy decided to retire the Buckwheat character. It was right around the time that John Hinckley shot President Reagan, so they were doing that whole thing. And one of the questions was, did you believe that the man who shot Buckwheat
01:05:40
would shoot Buckwheat? And the answer, for everybody there, was, oh yeah, that was all he ever talked about. So chatter comes in very early. But it is a lot harder to prevent than it is to detect, which is just like recon in cyber. You've pretty much got a one-to-one match there. The next step is development and testing, which is
01:06:06
where you're developing the desired genetic payload you want for your attack. Do you want to do gene transfer, or do you want to do direct damage? You need to test to see whether or not it's going to have the desired effect. And you can do that either in
01:06:20
cell culture or in crop samples. Note that if you're doing it in crop samples, it's a lot more noticeable. You've got multiple payload options, generalized plague, a targeted plague, which in this case is confined to a subset of a population. There are viruses that are associated with cancer risk. You could try to cause an autoimmune response,
01:06:45
or you can try to inflict trait modification. Now, the downside on the defense is the gene sequencers are becoming less expensive, and commercial gene sequencing is now available
01:07:03
commonly. As I said, they do monitor for known pathogen sequences. And if a group's going to try to grow these themselves, I know some biochemistry students who can tell you the exquisite agony of trying to keep mammalian cells from catching
01:07:25
viruses that they did not want and dying. It absolutely drives them nuts. So it's difficult for a small group to do. The choke point in this case are reagents. Everything has to go using nucleoside phosphoramidites, and there was a presentation
01:07:43
last year by Meow-Ludo Meow-Meow. You can look it up on YouTube; it was actually a very good presentation that went directly into this. Also, you've got your polymerases. Both of these you have to order in. You're not going to make them yourself unless you are one whale of a process chemist. Step three would be weaponization, and this would be
01:08:03
marrying your payload to your desired transport vector, and that pathogen then has to retain its virulence, its specificity, and the ability to turn it off, because it's no use as a political tool if you let it loose and you can't turn it back off. Then all you
01:08:23
are is an omnicidal maniac. This is difficult for nation states to do, and that difficulty relates to basically making it so that it stays on target, does not mutate, and your own population is not affected. Big choke point in this case for non-bio warfare
01:08:45
programs is usable facilities. Two years ago, I did a presentation on biosafety, and we're talking biosafety level three and level four facilities. If you start building
01:09:00
one, you will have the locals immediately taking a strong interest in you because nobody has a need for a level four biosafety facility outside a nation state bio warfare program. At this point, you're also looking at marrying transmission agents, whether it's
01:09:21
a prion or a virus or a parasite or a host organism's cell, to your desired effect, and it partly sets how you're going to transmit and deliver this. Your transmission selections are airborne, waterborne, contact, injection; it partly depends on the pathogen, partly
01:09:44
depends on how you're planning on delivering this. Step four would be production. This is where you're trying to make your pathogen in bulk. This is where you start dealing mostly in accomplices. You can get through phases one through three as a solo operator,
01:10:03
but at phase four, you're looking at generally where accomplices start coming in, and accomplices multiply the number of opportunities for the defense to catch something, and it generally will require additional facilities beyond your initial development facilities because
01:10:25
now you're talking bulk process chemistry, bulk process biological activities. You've got culture media issues, cross-contamination issues to worry about. The analog here would be trying to do any form of illegal substance production, from your stills to your illegal
01:10:49
drug labs, except for in this case, it requires better technique, higher product purity, and you're going to have more difficulty in post-processing. Once again, the real bio warfare program
01:11:05
has the cooperation of locals. They're not trying to sneak this around. They have dedicated facilities. It's nowhere near as difficult for a nation state. Finally, we've got delivery and infection, and this covers your delivery logistics and attack planning.
01:11:20
If you deliver your product and nobody catches it, you have failed. The key issues for the defense are target access and cracking the bad guy's operational security. Once again, chatter kicks in. You're dealing with a group of motivated people who want to talk about what motivates them. As for target access, most of the interesting targets
01:11:44
for political activity are generally guarded. Basically, you're trying to sort out what a potential malicious actor is going to try to do. Aerosol delivery is easiest, but it also tends to burn away quickest. Injection is hardest, but
01:12:04
it's the best targeted. As it turns out, the more specific you're targeting, the harder it is to deliver. Part of the thing that the defense gets to do is just monitor known or likely targets for activity based on what you're hearing from chatter. Operational
01:12:23
security is a bear in general, and the moment you catch one of the accomplices, the prisoner's dilemma kicks in. As the defense, you want to intercept and leverage the communications between the planning cell and the delivery cell as much as possible, because that way
01:12:42
you can walk it right back. There are no current examples of an engineered biological attack, and in fact, there are very limited examples of non-state biological attacks. The two that I've got here are the 1984 Rajneeshee salmonella attack and the 2001 Amerithrax attacks, which are the only two biological attacks in the past 50 years in the United States
01:13:05
that I'm aware of. In both cases, I'm going to explain which steps in the kill chain kind of got skipped over, and how you can analogize them. So for the Rajneeshee incident, the goal was to interfere with county elections in a county in Oregon to
01:13:23
allow some favored candidates to win and seize local power. The result was 751 cases of food poisoning, and they lost the election anyway. For recon, the Rajneeshee religious group selected restaurants with salad bars. They also tried to do some direct delivery
01:13:45
of pathogens in drinking water to specific personnel. They skipped development because they were using Salmonella purchased from a medical supply company for testing cultures,
01:14:00
and they just re-cultured that in bulk in order to get their pathogen. Weaponization is also skipped because they're using a wild-type bioagent. For production, as I said, they cultured bulk amounts in an operationally controlled medical lab and basically took that
01:14:22
culture, put it in the salad dressing at the salad bars, put it in water, and they had limited effect. The Amerithrax attacks occurred in late 2001, after the 9/11 attacks, and the goal, as near as it's understood right now, seems to have been to save the
01:14:42
anthrax vaccine program that was under development. There was a lot of work going into a false flag aspect, attempting to pin the threat on radical Muslims, and it was targeted against high-visibility targets, media operations, and congressmen. There were five collateral
01:15:02
deaths and 17 injuries, and on the one hand, it was successful because the anthrax vaccine program funding got restored. On the other hand, since the perpetrator
01:15:23
committed suicide, and he was an employee presumably doing this to save his own job, you don't save your job if you shoot yourself while you're under investigation, so you can argue that's a failure. Recon was publicly available mailing addresses,
01:15:40
development was skipped again because it was leveraging an actual biowarfare agent cribbed from the Ames anthrax biodefense research strain. Weaponization was also skipped because you're using the active biowarfare agent. Dr. Ivins cultured his agents
01:16:05
for two years from existing research stock and mailed it out to his targets, which is why US mail nowadays to certain high priority targets goes through a considerably greater
01:16:20
number of steps to treat potential bio hazards. In conclusion, you can see that we can actually apply that kill chain methodology to a bio threat. As I said, you can find these bio cyber similarities, and we've got 20 plus years of cyber defense techniques and knowledge
01:16:41
available for us to leverage. We've got a head start, so let's not waste it. And we've got it in an environment where, on the defense, we're at a greater disadvantage in cyber than we would be against a bioterrorist. Basically, it's like
01:17:02
stepping down from playing against the NFL to the local high school team. We've got our advantages. We should move now to really leverage these, build out policies, build out a knowledge base, and build out plans, and move forward. I've got some references,
01:17:22
basically available. Hopefully these will be up. Are there any questions? I can answer questions outside. Yeah. If you do have questions for Ms. Grimley, please come outside. Thank
01:17:42
you very much. Thank you very much to the Biohacking Village for allowing us to be with you here today. It is much more than simply me, but
01:18:01
you will, in fact, be speaking with my co-panelists, Dr. Suzanne Schwartz from the FDA, Commissioner Rebecca Slaughter from the Federal Trade Commission, and Professor Stephanie Powell from West Point. We have a little bit of technological excitement going on because one of our three panelists
01:18:20
appears with us today in cyborg form. As with every live demo, excitement will occur, but we'll hopefully be able to make everything work. Before I introduce in carbon form our, well, two out of three carbon forms of our esteemed panelists, I'd like
01:18:43
to share a little bit of introductory information with you on this concept of the Internet of Bodies, which may be unfamiliar to many of you. This is a term that I've started using, coming out of some of my earlier work. Basically, the idea is connecting
01:19:02
the Internet of Things into the world of body-attached, body-embedded, and body-melded devices. As we're cybering all the things in the Internet of Things, we are also starting to creep into the human body as a platform for development and applications. The defining
01:19:24
features of the current Internet of Things, which is no news to this audience, we rely on Internet connectivity gratuitously. We think everything is better with Bluetooth and bacon. There are extreme levels of known vulnerabilities that are already causing harms. We have enabled new methods of data collection, aggregation, and repurposing,
01:19:45
and this in particular has led to privacy problems. And consumers' ability to opt out of these additional methods of data collection and leveraging is diminishing. I have a very deep voicemail. Okay, that's very exciting.
01:20:05
In particular, we have hidden price terms that are appearing in the way that products are being sold. So you can't really tell whose security is better just by looking at the product in a store, for example. And so self-help on the part of consumers who
01:20:24
struggle to find the power on button sometimes is relatively limited. So one of the challenges that you see in policy spaces frequently is the definitions of security and privacy get blended. But at the end of the day, and I don't need to define these terms for this community, but at the end of the day in terms of a policy set of
01:20:43
discussions, what we are asking or should be asking in security policy is whether Alice's system successfully defended against Eve's attacks, was the failure foreseeable, and what were the legal duties that pertained in terms of fixing the harm that occurred
01:21:04
and preventing it. With privacy, we have negotiated, idiosyncratic promises and reasonable expectations. The default setting, whatever was set first, tends to be treated as good.
01:21:31
So as a consequence, we have different questions that are being asked in security and in privacy. And this distinction is lost in many cases in policy circles and in legal
01:21:44
discussions. And so when you run into your friendly neighborhood lawyer, make sure they understand the distinction between security and privacy. So this starts to matter as a legal matter, in part because of reciprocal security vulnerability. So as we all know in this
01:22:00
room, you know, think about Mirai, right? It was pointed at Reddit and Twitter, but next time it could be a nuclear power plant, an electrical grid, et cetera. So the same types of vulnerabilities and compromises that exist in the private sector can ultimately impact the public sector and vice versa. So we have the Internet of Things starting to do real
01:22:20
damage. Just by way of memory lane, some of you undoubtedly remember the April Fool's joke of 2013, haha, toaster.io, haha, hilarious. 2017, we have articles about toasters breaking the Internet. We have BBC warning about compromises on toasters. And we are in the world where computer code is starting to directly impact humans. And we forget
01:22:45
sometimes, though not in this room, that code is written by humans, for humans, and that mistakes happen often. You undoubtedly know about Alexa ordering dollhouses. This is the story of the first smart home in Germany, where Professor Raul Rojas had
01:23:04
accidentally DoSed his own house because of one light bulb. He thought it was kind of hilarious. His computer scientist wife, less so. We know that malware is increasingly user-friendly in terms of being able to deploy it. And Mirai has caused many
01:23:22
problems. And sometimes the creators of these technologies lose control of them. In particular, in terms of this room, we're all very aware of the epidemic of malware spreading around hospitals and ransomware. And that's where we start to see the connection between code and human bodies. So we have the risk to physical safety
01:23:49
happening as a result of computer code. And so here we start to see the blue screen of
01:24:02
death. Undoubtedly, many of you know about the incident where a patient's heart procedure was interrupted because of a virus scan. We have cases where fitness apps are revealing military secrets, that is, where human bodies are deployed. And we have the Internet of Things transitioning into this type of Internet of Bodies where
01:24:24
human flesh is connected and reliant upon the Internet for some aspect of its functionality or safety. And that starts to change the stakes, especially in a legal context. So the Internet of Bodies, if we were to put a definition on it, is the
01:24:40
creeping technological reliance and vulnerability of human bodies on software, hardware, and the Internet for their integrity and functionality. So in other words, all of the unfixed problems of IoT are now blending with the unresolved legal and ethical messiness of prior generations of med tech and legal issues related to the human
01:25:06
body. And that is complicated. So we've had and are having three stages of these IOB technologies. The first stage we're all very familiar with, the quantified self, the Fitbits, the Google Glasses, the Apple Watches, et cetera. It's
01:25:23
optional. The risk is primarily driven by repurposing of data, privacy and security harms. But generally we're not talking about physical safety directly being impacted, the Strava location disclosure incidents notwithstanding. The second
01:25:42
generation of Internet of Bodies technologies we're already seeing. So we're an early second generation. Digital pills that report the progression of the pill and its release of medicine from the inside of our stomachs, pacemakers that are hardwired into our
01:26:00
bodies, cochlear implants that communicate using Bluetooth, digital prosthetics, limbs that rely on Internet connectivity for some aspect of their functionality or otherwise communicate with an external machine, artificial pancreases, the first one of which has already been approved, and of course Internet connected hospital equipment often keeps
01:26:22
human bodies alive. So these technologies are less optional. They are connected to the physicality of the human body. Physical harms are entirely possible as a consequence. And the next generation we're already starting to see in medical trials, stage 2.5, second
01:26:42
generation, late second generation, things like brain prosthetics where you have an external computer that can modify the sensation to, say, a patient's spine or a doctor can remotely recalibrate the degree of sensitivity of an implanted device. The third stage
01:27:00
we are not quite at, and it's the situation where you have in theory melding of the human body and the externalized cloud, ether, brain, and so forth. So we're
01:27:26
in the second stage now. Google Glass is still around. It's on factory floors. DoD is building an Iron Man suit that sometimes is powered in ways that make decisions that
01:27:42
override the human inside, potentially, based on early reports. And so we see these buggy bits and buggy bodies potentially leading to physical harm. So imagine Mirai not on DVRs or cameras. Imagine it on a set of injected contact lenses, just for fun. You put
01:28:01
lenses in for augmented reality gaming, and they get harnessed in a botnet. Botnets of body parts are a reasonably anticipated consequence here. Imagine WannaCry on artificial pancreases: senior citizens not knowing how to send Bitcoin to the person holding their
01:28:21
pancreas hostage. This is unfortunately somewhat foreseeable based on the way that we know medical device manufacturing has progressed, and we have now gotten to the point where there is excellent active consideration of these issues by the FDA. We saw the first
01:28:46
recall over a security issue. And we know that these are the types of issues that we will see continuing into the future. In particular, we'll hear from one of our panelists about the implications for criminal law. So these transfers of technology from outside the body
01:29:05
to the inside of the body will have tremendous legal implications. This is a case where someone was convicted based on data from his own pacemaker. And we know that companies through their patent filings, patent, meaning patent, P-A-T-E-N-T, that's what I'm saying,
01:29:28
that they are experimenting in research and development with various kinds of inside-the-eye devices, whether to directly enhance your ability to perceive, or for gaming, or for
01:29:44
recording the reality that is around you. So some of it is the self-archival concern and interest that existed with quantified self, but now it's less visible to the outside person who may be the object of the recording. I'll just throw some of these ideas out there and then shift quickly to our esteemed group of panelists.
01:30:05
The big question is what we are left with. These third generation technologies in particular start to raise questions about where our own minds end and where someone else's ideas begin. And not
01:30:24
to be too philosophical, but Kant has this idea of autonomy, and we think about the autonomy of the human being. There's also another idea that he has called heautonomy, and it's about self-governance. It's about the ability to think through things in, for lack
01:30:43
of a better term, a disconnected way, to ensure that it's really your processing of the ideas internally to prepare you for exercises of autonomy, to act in ways that are consistent with your own moral processing once you are in the world and making decisions,
01:31:01
whether it is voting or it is the way that you treat other people, or your conduct as a professional in a corporate environment. So autonomy, self-governance, really can't be exercised, Kant says, without this prior internal self-governance, this heautonomy. And when you have your brain always on and connected to a cloud, the query is whether you can
01:31:26
ever really be sure that the ideas are fully yours. Okay, and so we know that these experiments are underway, neural lace, cortical interfaces, and so this brings us to a host of legal questions, which I will just hint at with this slide and then leave you in
01:31:45
the capable hands of our panelists. We have regulatory questions about where different agencies' authority overlaps and how to ideally ensure that consumers are not hurt. With the first generation IOB, those devices that were deemed to be lifestyle devices,
01:32:04
those were primarily regulated, well, I won't say regulated, enforcement conduct was carried out by the Federal Trade Commission. The FDA deemed many of those devices, those first generation devices, to not be medical devices, and so therefore was adopting a hands-off
01:32:23
approach. But second generation devices, where things go inside the body, that's starting to be a different story. With contracts, think about every EULA you've ever clicked on on a website. Now imagine that that EULA that you're clicking on is attached to the injected lenses in your eyeballs that have an internet connection. The stakes start to be a
01:32:42
little different. If you can't understand what you're clicking yes on, what does that mean in terms of the possible harm to your vision, your participation in a botnet of eyeballs that could take down a power grid? This sounds like sci-fi, but yet if you look at past attacks, as we all know, it's unfortunately not that far-fetched. Patent
01:33:07
assertion entities, trolls, have been very aggressive in enforcing their IP rights. What will that mean when the allegedly infringing technology is in a body part? How much can legal
01:33:21
process force you to stop using a device, for example? In bankruptcy, whose benefit is served when these devices and the contract rights that spring from them are sold off, open question, and how to build civil recourse in torts. We know that civil cases around
01:33:42
security have been slow to emerge, so these are open questions for us, too. Where is the line between augmentation and medical correction? What are our new tech baselines? And who are the winners and the losers? How are we changing our relationship of our bodies to the rest of the world? So we're entering a stage where we have an internet of situated
01:34:03
things and bodies, and what this really asks, and this is where I'll leave you, it asks us to ask ourselves and our communities, what is our ideal of the human body and the next generation of technology? Are human bodies a bug or a feature? Are they something that
01:34:24
we need to get rid of and replace with robotic parts, or are they something that we need to preserve and extend with tools, essentially in their current form? Depending on who you ask, you have a different set of responses on this sliding scale of techno-humanity, if
01:34:41
you will. Some people, of course, think that we're all just a simulation, in which case, just pass me the cheese and the wine and let's call it a day, but if you are somewhere else on the scale and you think that some human-machine symbiosis point is the ideal place to stop, then you're going to have a different set of policy and
01:35:04
legal prescriptions for the way that we want to build the next generation of technology and security than the people who believe in a post-humanist ideal, for example. And we don't even have a consistent definition for some of these stopping points. So this
01:35:22
is a bigger discussion over whether human bodies are a good thing or just a last generation operating system that needs to be replaced. So with that, I will stop here and bring up our esteemed panelists. And to those of you who have my phone number, please
01:35:41
don't text me during this because one of the panelists is on Skype. So I will ask each of our panelists to introduce themselves briefly and describe how you are connected to issues of security and bodily integrity. Suzanne, can I ask you to go first while we futz around here?
01:36:17
All right. There we go. We're good. Can you hear me? Can you hear me?
01:36:26
Good afternoon. My name is Suzanne Schwartz. I'm at the FDA's Center for Devices and Radiological Health. I'm here with one of my colleagues from the center, Dr. Seth Carmody, who's sitting
01:36:42
up in the front row as well. Our team at FDA has taken on the role of medical device cybersecurity from the FDA's policy, response, outreach, and education perspective. So just a couple of introductory words about FDA. FDA's mission specifically is to
01:37:07
protect and promote the public health. FDA has multiple centers within the large organization. Several of those centers are specifically what we call product centers such as ours, Center for Devices and Radiological Health. And what that means in terms of the
01:37:27
authority, we have the authority to review and to regulate medical devices, those devices that are going to be coming on the market, as well as the authority and the post market
01:37:43
sense to make sure that those devices which are being used, which are deployed, which are available for patients, that they are remaining safe and effective and that if there are any concerns around those devices, that those concerns are further analyzed, that
01:38:04
information is looked at carefully and then the appropriate next steps are taken based upon that information. So medical device cybersecurity as with the framing that Andrea gave, as you can imagine with a lot of the newer advances, the extraordinary therapies
01:38:22
that we see in treatments, interventions and diagnoses that are available today contain computers, are connected, are interoperable, are interconnected and they present some challenges from a security perspective. We have had to address and we continue to
01:38:43
address those that are the legacy devices, devices that were built and developed and put on the market and are actually in clinical use in hospitals years ago and those devices, many of them were not built with the security that we would want to see in them today and yet
01:39:02
they are performing extremely important life functions and they're also huge investments by hospitals and healthcare organizations. And then we have, of course, the newer, the novel technologies and the opportunity really to be able to make sure that before these devices actually get out there in clinical use, that a very careful, very thoughtful, rigorous
01:39:26
and appropriate security approach is taken with respect to threat modeling and the kinds of assessments that need to be done to assure that by design the security is built into the
01:39:40
device. I will stop right there by way of introduction and let's try, I'm going to see how we're doing with our slide board connection here. Okay. Becca, could you tell us, okay, can we hear? Yes. Okay. I think we're good. So would you be kind enough to tell
01:40:05
us a little bit about the mission of the FTC, the approach that the FTC has taken to protecting consumers in terms of the different internet connected devices such as the lifestyle devices, the Fitbits, et cetera, and a little bit about the FTC's perspective
01:40:22
generally towards security and privacy enforcement. Yeah, sure. Hi, my name is Rebecca Slaughter. I'm a commissioner on the Federal Trade Commission and I'm going to apologize in advance. Not only am I not with you in person, I am here in person with my four-month-old baby who I think I have asleep, but that's always a dicey proposition. So let me tell
01:40:45
you a little bit about the FTC and data security and the internet of bodies. So the FTC's mission is to ensure a fair and competitive marketplace in two ways. We protect consumers
01:41:00
from unfair methods of competition and we protect consumers from unfair and deceptive acts or practices affecting commerce. So internet of bodies really sounds more in what we think of as our consumer protection mission, the unfair and deceptive acts or practices section of what we do. As it sounds, unfair and deceptive acts and practices can be
01:41:25
divided into unfair acts and practices and deceptive acts and practices. I will tackle deceptive first because it's easier. A deceptive act or practice is one that deceives consumers. So in terms of data privacy and security, that means basically misrepresenting
01:41:47
what your device does, how you will protect the device, basically lies to consumers about your product. That's true for any product, including internet of bodies products. Unfairness
01:42:05
is a little bit more complicated, but failure to maintain reasonable security can be an unfair act or practice in certain circumstances. Reasonableness is a flexible standard that depends on the
01:42:23
sophistication of the business, the type of data in question, how sensitive it is, factors like that. When we are evaluating unfairness, we basically have to determine whether an act causes or is likely to cause substantial harm to consumers, whether
01:42:41
that harm is not reasonably avoidable by consumers, and whether the harm is not outweighed by countervailing benefits to consumers or to competition. So thinking about this in terms of the internet of bodies, our general mission could be applied in terms of failure to accurately describe information practices. So what information is being collected, how
01:43:04
long it's being stored, who has access to it. Failure to reasonably secure sensitive data. So for example, if you're thinking about a medical device that records your geolocation or heart rhythm or something like that, if that's not kept secure. Safety or health
01:43:26
risks that arise from devices, such as susceptibility to hacking that impacts critical functions, like Andrea was mentioning in her introduction. Those are all areas where I could envision FTC enforcement under our unfair and deceptive practices authority. Okay. Great. Thank you. Stephanie, tell us how
01:43:51
these kinds of issues might play out in a criminal context. Sure. And because I work for the Army Cyber Institute, I have to say that these are my personal views and
01:44:00
they're not the views of the United States Army or the U.S. government. Okay. So Andrea has introduced us to this concept of internet of bodies, and you've heard about some of IOB's propensity to damage human bodies and minds, and that we should start considering what the appropriate consumer safety responses are, legal and otherwise. I'm going to add
01:44:26
another layer to all of this. Like it or not, law enforcement is going to want this IOB generated data under certain circumstances as well. And while I do not intend to spend my time
01:44:40
talking about the crypto wars and the going dark debate, and let me just say I am not even buying the rhetoric of going dark for purposes of this conversation, but I am sure you all appreciate that arguments from very respected researchers have been made that
01:45:01
say look law enforcement, even if you're having challenges intercepting text messages or voice calls, there's all this metadata out there that's coming from the internet of things. Well, guess what? It's going to come from the internet of bodies as well. So as we
01:45:22
contemplate the various generations of IOB that Andrea identified, body external, body internal, and body melded, we have to think about how the fourth and fifth amendments of our constitution may regulate law enforcement access to this data. Now as you all probably
01:45:46
recall, the fourth amendment protects us from unreasonable searches and seizures. But for the fourth amendment to be triggered as a protection, we have to actually have a search. A
01:46:02
search must occur. And a search is a legal term of art. If there's not a search, there's no fourth amendment protection. And a lot of IOT metadata arguably falls outside of the scope
01:46:20
of the fourth amendment because of something called third party doctrine. And that in its most simple terms and in its most aggressive interpretation on the part of the government says, well if you share certain kinds of metadata with various third parties, you lose
01:46:41
protection in that data. Now some of you may say, wait a minute, the Supreme Court just came down with the wonderful Carpenter decision, right? If you're familiar, this happened end of June of this year. Been a long litigated issue, the issue of whether cell phone location data could be acquired by law enforcement without a warrant. Well, in this
01:47:10
case, we're talking about historical cell data. And the court said that we have a reasonable expectation of privacy in the whole of our movements. And the court basically said that acquiring
01:47:24
seven days or more of location data is a Fourth Amendment search. And generally speaking, there are some exceptions. But when you have a Fourth Amendment search, because the Fourth Amendment protects against unreasonable searches, the way we most often
01:47:43
make it reasonable is by getting a warrant. And so now I don't want to be too optimistic, because there are things the case didn't address, like tower dumps or the use of stingrays. But I think it's fair to say it is a very positive opinion in terms of technologies
01:48:03
that have the ability to track us and certainly to do long term tracking. How that case plays out over time remains to be seen. And so how that case may apply to IOB generated data also remains to be seen. I don't want to be overly optimistic and say, well, look, that data is
01:48:28
being generated from a device inside your body. Granted, very concerning from a privacy, maybe even a security perspective, but it remains to be seen how much Carpenter will help or
01:48:42
give guidance with respect to that kind of data. So let's look at a real situation, a real case that Andrea brought up, to see why law enforcement would, concretely, want this data. All right. So we had a guy, you know, whose house had a fire, and he reported to the
01:49:11
authorities that this was all a surprise that, you know, in fact he was, he was busting open windows and throwing things out of windows and, you know, yelling at people
01:49:22
in the house, get out, get out, there's a fire. And well, unfortunately his cat died too. But law enforcement for whatever reason, you know, didn't necessarily believe what he was telling them. And so a body internal device under Andrea's framework, a pacemaker, was,
01:49:43
well, a little bit of a snitch, if you will. And again, I'm going from news reports, not an actual court record, but according to news reports, a doctor who could interpret this pacemaker data said that the readings did not match someone who was
01:50:05
surprised by the fire, who was under stress and, you know, trying to get people and belongings out of the house. And so we've taken the liberty of just tweaking with Casey Green's wonderful little cartoon here and, you know, our friend may think, may try to
01:50:27
look like everything is fine, he may tell authorities otherwise, but his pacemaker data may give him away. What might the Fifth Amendment say about a case like these
01:50:46
kinds of IOB data? Now, you probably remember that the Fifth Amendment, among other things, it's a good amendment, basically says that no one shall be compelled to be a
01:51:00
witness against him or herself in a proceeding. And it's not just a specific criminal proceeding. If you're in, you know, if you're subpoenaed to testify in a civil case and what you say might incriminate you, that would be protected under the Fifth Amendment. But of course, there are always elements. You can't just say, well, I take the Fifth
01:51:23
and have that be over. For something to truly be covered, protected by the Fifth Amendment, there are three elements that must apply. It must be compelled, if you're going to law enforcement voluntarily, then you're not, you know, you're not being
01:51:41
forced against your will. The information to be gained via your communication must be incriminatory. If it doesn't incriminate you, then the Fifth Amendment doesn't cover it. And it also must be a testimonial communication or act.
01:52:00
And that is normally the sort of critical element that gets litigated. Again, not quite as applicable here, but I am sure this community follows, you know, whether or not the Fifth Amendment would protect the password to your, the encrypted password to your
01:52:24
smartphone. And my answer to you, and I'm happy to discuss it more later, is a lawyer's answer, well, it depends. Under certain circumstances, it might be testimonial. Under certain circumstances, it might not be. Now in this particular case, this is data
01:52:42
that's being acquired presumably from a third party. So we don't have a Fourth Amendment issue, and it's not testimonial. So our friend up here is kind of out of luck with respect to his pacemaker data. This was, you know, a fairly straightforward
01:53:09
example. As we start to consider the third generation body melded that Andrea talked about, and we think about how technologies may meld and reveal things about our
01:53:25
thoughts, it's certainly reasonable to ask how our thoughts are protected by the Fourth or Fifth Amendment from various kinds of brain scans. For example, you know, if a perpetrator breaks into a house and the occupant and owner of the house grabs a hammer
01:53:46
and starts to hit the perpetrator on the head, the perpetrator, you know, then is able to overpower the owner of the house and unfortunately kills the owner. And all of that's caught on video. If law enforcement, you know, can't make out the face of the
01:54:03
perpetrator, but they find a suspect in enough time, and he or she has been hit hard enough, maybe they want to examine, through some kind of scan, internal damage that
01:54:23
Amendment? Certainly under the Fifth Amendment, it would be a very hard sell, I would say, because that kind of, that kind of examination would be more like just a fingerprint
01:54:41
or DNA, or, if there were, you know, external wounds; it's identification, it's not really about testimony. What if, however, there's a more advanced technology, and if
01:55:01
some investigators or scientists were to show that alleged perpetrator pictures, you know, still pictures taken from the video of the attack, and you could read how the alleged perpetrator was reacting. Would the Fourth Amendment cover that?
01:55:22
Maybe even more interesting question, would the Fifth Amendment cover that? Does the Fifth Amendment reach as far as those kinds of mental privacy issues? Suffice it to say that these issues are not completely settled, and that criminal procedure nerds, and I like to call
01:55:44
myself one of them, we're thinking through these kinds of things, and I think we're only going to need to think more about them as these three generations of IOB technology become more prevalent in our world.
01:56:01
Thank you, thank you very much for those insightful comments. And so, let me get our third panelist back on her call.
01:56:36
Okay, we are back in business now, and before we give it back to our third panelist,
01:56:47
let's start taking any questions from the audience, if there are any. Okay, let's start up here, and we'll just, yes. [Inaudible audience question.]
01:57:21
I don't think contracts are going to defeat the Fourth or Fifth Amendment, I should say; I don't think contracts are going to defeat law enforcement. Other things might, like encryption. Right. So, that case, as you well know, ended in a really interesting way.
01:57:47
Frankly, the FBI was able to hire a third party vendor to break into the phone. The question, though, is important because it's still being litigated. Right. In that
01:58:02
case, the San Bernardino shooter was dead. So, there was no way to try and compel him to provide his password. However, there are sort of two lines of cases that are developing. One from the Eleventh Circuit, where you had a situation where an
01:58:25
individual was suspected of having a lot of child pornography, and law enforcement was able to get into some of his devices, but not all of them. They couldn't, because he
01:58:41
used TrueCrypt on certain external hard drives. And when law enforcement sought to compel him to open up those hard drives, in other words, to decrypt them, he raised a Fifth Amendment privilege. The Eleventh Circuit
01:59:02
basically determined that in that case, partially because of the use of TrueCrypt, the government agents couldn't really say whether there were actual files on those drives, or, you know, just a bunch of nothing, so to speak. Was
01:59:25
TrueCrypt essentially creating potential evidence that was undecipherable, to the point where law enforcement couldn't even offer a reasonable prediction that such files existed? In that case, the Eleventh Circuit determined that the decryption would be testimonial. In
01:59:50
another case, in the Third Circuit, not a hundred percent the same facts, but close enough for purposes of our discussion.
02:00:00
The Third Circuit looked at the situation, and they did this in footnotes, so it's called dicta, but it's pretty strong dicta. They said, unlike the Eleventh Circuit, the government didn't have to show that it was likely that those kinds of files
02:00:21
were on the external drives. All the government had to show was that the defendant, in this case it was a defendant, knew the password, and in both of these cases, what they were talking about was something called the foregone conclusion doctrine.
02:00:43
And basically what that means is if you are merely asking someone to do something and the government sort of already knows that this exists, in other words, the act of decrypting doesn't give the government any new kind of testimonial information or admission,
02:01:02
then it falls outside of what is considered testimonial under the Fifth Amendment. But it is not, it is not a settled area of law. [Audience question, partially inaudible, about devices that augment us.]
02:01:22
Can you give me, can you give me an example? Sure, say I had a device that could make you smarter and faster, in those examples.
02:01:43
Yeah, and I think what we're pushing towards is a frontier that, honestly speaking, we have not been working through, not to say that we shouldn't be, but we're not there yet. We're certainly not there yet. I'd say that, and one thing to really kind of
02:02:02
complete the introduction on FDA, there are certain definitions that are statutory definitions and legal definitions that give the FDA authority and kind of prescribe, if you will, what is a medical device
02:02:21
and what are drugs and what are biologics and that's where our remit is as a public health agency in terms of really, again, coming back to that assurance of safety and effectiveness. So when we talk about medical devices, and I'm not using the exact statutory language here,
02:02:43
but we're talking about devices that are diagnosing, treating, or mitigating illness. In other words, the statute has very clear language around the device providing something
02:03:03
to help a patient who is injured or ill, providing a cure, diagnosing a disease, diagnosing an injury, treating and mitigating. And I think that the lines really start to blur when we get into these areas of augmentation
02:03:20
or enhancement, if it's not specific to dealing with an injury or an underlying illness or, and again, here I'm using language very generically, an underlying defect of sorts, right?
02:03:41
That an individual, as a patient, is in need of addressing in some manner. Yes, please. So my sound cut out a little bit, but I think the question was about internet of bodies devices
02:04:03
that would say augment your intelligence or do something like that, and I want to point out that that's an area where the FTC has taken and would take a careful look to make sure that the claims that someone makes about a device, whether it's an internet of bodies device
02:04:21
or an over-the-counter pill that's not regulated by the FDA or anything else, something we would look at carefully to make sure that the claims are substantiated. You can't lie about the benefits of your product or the existence of scientific proof behind it.
02:04:41
So we've brought a fair amount of enforcement actions against makers of products that are supposed to help people's health, like a mobile app that claimed it could detect melanoma, or a mobile app that claimed it could treat acne.
02:05:00
And where those claims can't be substantiated, that would constitute a deceptive act that the FTC could enforce against even where the device or product is not regulated by the FDA. And I should have said at the beginning, as Stephanie did, that I am also speaking for myself and not for the Commission in this context.
02:05:21
Sorry for that omission. Good afternoon. One question I had, which I have been weighing in here and hasn't come up, for example in the case of the arson situation, is discrimination. So the man who had the pacemaker was in a situation where he had no choice
02:05:40
but to have a pacemaker to survive. And because of that, one could argue, for example, that he was discriminated against. Let's say he had a medical device, and they were able to pull this data on him, whereas for a different person who doesn't have this device, they could never have pulled that data. So I just wanted to open up that question. Beyond just Fifth Amendment implications of self-incrimination,
02:06:02
what is your take on discrimination being a factor? So, it's a very interesting question. I mean, if I am just looking at Fourth and Fifth Amendment doctrine, it, to the best of my knowledge,
02:06:23
doesn't recognize discrimination. That doesn't mean that, you know, communities of color are often, or communities that don't have certain kinds of resources often are
02:06:43
targeted more by law enforcement activities than others. I don't know that the Fourth and Fifth Amendment are going to be the ways to address that. And I do very much take the concern
02:07:02
that you don't generally have a choice about whether to use a pacemaker. Nevertheless, traditional Fourth and Fifth Amendment law doesn't, to the best of my knowledge, and in case I've ever read,
02:07:22
acknowledge that kind of distinction. [Audience follow-up, partially inaudible: the questioner, who is not a lawyer, asks whether other laws or amendments could address discrimination.]
02:07:41
Well, I mean, I guess that when you raised your question, the first thing that came to my mind is, frankly, again, thinking about communities that are impacted more by law enforcement actions than others,
02:08:00
just thinking back about law enforcement policies and better kinds of policing. That is not a new problem. That is not a problem just raised by IOB, but frankly, something we've been struggling with for a while. I am not a fan,
02:08:22
but stingrays have long been an interest of mine. I've written on them with a technologist, and I was asked a question the other day about the use of stingrays, and I am almost certain that everyone in this room knows that a stingray is a device that
02:08:44
is a fake cell tower. It can trick your phone into believing it's a real cell tower, so your phone will give it information as if the phone was talking to a real cell tower. Phones are better at resisting these kinds of devices
02:09:01
when they are operating in 4G, but people who don't have the resources might for a long time have phones that are backward compatible to 2G, and because of that, may be more susceptible
02:09:21
to law enforcement use of stingrays. So I take your point very much. I think it is a problem that goes far beyond just IOB that we've been grappling with for a long time. This is a bit of a sci-fi ethics question.
02:09:48
So there's certain things that we can do to give, let's say, a child an advantage, like giving them access to computers and technology to be able to learn and master them,
02:10:01
yet, you know, a computer I can step away from as a kid, I can live my life, I'm 18, great, I can never look at a computer again if I want to, but what happens when you start to, you know, implant a system within us? So where is the parent and child relationship,
02:10:20
like how do we determine, okay, it's okay for me to implant this in my child or make that choice for my child, because now they're stuck with the device. And then a second part would be, you know, because this would give us some sort of intended advantage, what if there are some countries that then say, we're going to allow this to some level,
02:10:40
do we do it just to keep up, or do we not? Just curious to hear your thoughts. So on the first point, about children and the choices of parents to, say, give their kids a little extra storage for school,
02:11:01
that is something that we might get a little window of sort of presage from the cochlear implant cases. So there have been cases that have been litigated out over whether children should be, in some cases, against their wishes or against the birth parent's wishes
02:11:22
implanted with a cochlear implant. And courts have struggled with these cases. So in general, the wishes of the parents are respected, but the child's opinion, depending on the age of the child, is also recognized. So it's not going to be clean.
02:11:40
And this goes to the question I was raising about technological baselines slipping. So if you think about it, 150 years ago, not everyone had glasses who had vision issues. And today, it's a precondition of getting a driver's license, that if we need glasses to reach a certain level of vision clarity,
02:12:00
we are obligated to wear those glasses or we don't have the legal right to drive a car. So there's a codification of these tech baselines that are just socially constructed, like so many other things. Different cultures will probably reach different technological baselines, and that's kind of why I left
02:12:20
those big ethical questions hanging out there about what is the desirable human of the next generation that we should strive for as a society, and where does mechanical doping come into the picture? So I guess I have three questions. First is, what's a cochlear implant? Second,
02:12:41
do any of you have body modifications or implants and things along those lines? And thirdly, you mentioned that some of you have responsibilities along the lines of regulating people's claims. Are there any claims of, like, drugs or implants that make you smarter or do anything, that you're not taking actions
02:13:01
against because they are in fact true? So, Santa, do you want to explain a cochlear implant, or should a lawyer stumble through it? Yeah. Thank you.
02:13:21
The ideal solution appears. A cochlear implant is for anyone who has an issue with hearing. It's a technological device that's implanted
02:13:40
in either children or adults, into the cochlea of the ear. And many parents are deciding that for their children without their children's input, when they're too young to even decide. We don't know the whole story behind any of those decisions, but they are parallel to glasses, if you will.
02:14:05
So parents think that this kind of implant is fine. It's a benefit to the child. But the issue in the deaf community as well is some people want to make it illegal
02:14:20
for parents to just automatically implant their child, because many of those children are growing up without sign language, without a deaf identity, which leads to its own problems. So that's kind of the gist of the cochlear implant.
02:14:44
Are there any FDA... Well, unless one of the panelists wants to answer the second one, I'm not. But the third one, of whether there have been, or to what extent there have been, FDA approvals
02:15:01
of IoB devices, and what is kind of the next generation of devices? Have there been any malfunctions that we know of, and recalls, and things along those lines? So, the examples that Andrea provided in the original slide, under, I guess, the phase-two implantable devices:
02:15:23
certainly FDA has approved devices of that kind, including implanted insulin pumps, pacemakers, and neuromodulators. So there's a good number of devices that do reside within the body.
02:15:43
And they go through a rather rigorous process in terms of, again, looking at the device performance, looking at the kinds of testing around those devices. And now of course, also incorporating security
02:16:01
as part of that review process, just like any other medical device. So I was really asking about augmentation for healthy people, as opposed to devices that are for disability or medical problems. All right. So the question is whether there are FTC investigations that we have open
02:16:34
or closed, or whether we have any at all. So the way I will answer your question is just by making the logical point
02:16:42
that if you think about the number of connected devices, including Internet of Bodies devices, that are out there,
02:17:00
then necessarily, you could surmise whether or not there is a case to be made. I'll leave it at that.
02:17:22
The gap between the FDA's scope and the FTC's scope. So the question is, if we say there's an implant in your eye that is effective at doing night vision, but there's a 20% chance of infection because of the techniques used to do the implantation,
02:17:43
what's the responsibility of the FDA in that space, and what is the responsibility of the FTC if it's functional, that is, if your night vision works as claimed but a medical safety problem could still exist? And are there regulations or definitions of medical device that need to be updated, or something like that?
02:18:04
So I'll come back to that: if it's deemed to be a medical device, it's regulated by the FDA. And if it's not a medical device, we will work with the FTC in areas where there are some blurred lines
02:18:21
or some hazy lines. And that's been going on for several years already. But with respect to an implant or any kind of a device that is considered to be by regulatory definition a medical device, that is under the FDA's jurisdiction.
02:18:40
And so safety becomes a prime area of our review, investigation, and follow-up over the course of the lifetime of that device being on the market. So I have a question about, on the one hand, what you're describing,
02:19:01
and how frequently we see, for example, healthy people seeking Adderall prescriptions, and how the FDA would handle something that has a benefit for a specific population but could also be used as a boost for healthy people. Can you repeat the question again?
02:19:22
I missed the beginning of it. Talking about over-prescribing of medications and off-label use, for example, healthy people seeking a prescription for Adderall and how the FDA would deal with a device that has a benefit for a specific population
02:19:41
but that could also be used as a boost for healthy people. So we consider off-label use in the hands of the clinicians, the physicians who prescribe those particular treatments and that happens a lot. And it doesn't have to be the kind of case
02:20:02
that you're talking about, but physicians will recognize the use of a drug or a device in an area where it was not necessarily cleared for marketing or approved, but where it may have benefit in their hands, and that physician can,
02:20:22
through what's considered to be the practice of medicine, prescribe that particular treatment intervention. And FDA has no authority in that particular area.
02:20:42
Thanks for the great discussion here. A question: I think we're focusing a lot on protecting individual rights. We've seen IoT devices go bad, go rogue. If you're that cyborg who has medical devices that are connected, and those devices are in fact the ones going rogue
02:21:02
unbeknownst to the person they're part of, but the cyborg is the one doing the distributed denial of service or something to that effect, how do you stop that without stopping the individual whose life they're part of? That is an excellent question that we hope the security community
02:21:21
will come up with brilliant solutions for. And I think that is a fantastic point on which to end. Thank you all very much for joining us and thank you to the wonderful panelists. Thanks.