Open Source is Insufficient to Solve Trust Problems in Hardware
Formal Metadata
Number of parts: 254
License: CC Attribution 4.0 International. You may use and modify the work or its content, and reproduce, distribute, and make it publicly available in unchanged or modified form for any legal purpose, provided that you credit the author/rights holder in the manner they specify.
Identifier: 10.5446/53182 (DOI)
Transcript: English (auto-generated)
00:20
talk on the first day of congress. The talk is Open Source is Insufficient to Solve Trust Problems in Hardware, and although there is a lot to be said for free and open software, it is unfortunately not always inherently more secure than proprietary or closed software, and the same goes for hardware as well. And this talk will take us into the nitty-gritty
00:42
bits of how to build trustable hardware and how it has to be implemented and brought together with the software in order to be secure. We have one speaker here today, it's bunnie. He's a hardware and firmware hacker, but actually the talk was worked on by three people, so
01:00
it's not just bunnie, but also Sean 'xobs' Cross and Tom Marble. The other two are not present today, but I would like you to welcome our speaker, bunnie, with a big warm round of applause and have a lot of fun. Good morning, everybody. Thanks for braving the crowds and making it into the congress,
01:22
and thank you again to the congress for giving me the privilege to address the congress again this year. Very exciting being the first talk of the day. Had font problems. I'm running from a PDF backup, so we'll see how this all goes. Good thing I make backups.
01:41
So the topic of today's talk is Open Source is Insufficient to Solve Trust Problems in Hardware, and sort of some things we can do about this. So my background is I'm a big proponent of Open Source hardware. I love it, and I've built a lot of things in Open Source using Open Source hardware principles, but there's been sort of a nagging question to me about, like, you know, some people would say things like, oh, well,
02:03
you know, you build Open Source hardware because you can trust it more, and there's been sort of this gap in my head, and this talk tries to distill out that gap in my head between trust and Open Source and hardware. So I'm sure people have opinions on which browsers you would think is more secure or
02:21
more trustable than the others, but the question is why might you think one is more trustable than the others? You have everything on here from, like, Firefox and Iceweasel down to, like, the Samsung custom browser or the Xiaomi custom browser. Which one would you rather use for your browsing if you had to trust something? So I'm sure people have their biases, and they might say that Open is more trustable,
02:42
but why do we say Open is more trustable? Is it because we actually read the source thoroughly and check it every single release for this browser? Is it because we compile our source, our browsers from source before we use them? No, actually, we don't have the time to do that. So let's take a closer look as to why we like to think that Open Source software is more secure.
03:02
So this is a kind of a diagram of a life cycle of, say, a software project. You have a bunch of developers on the left. They'll commit code into some source management program like Git. It goes to a build, and then ideally, some person who carefully manages a key signs that build, goes into an untrusted cloud, then gets downloaded onto users' disks,
03:23
pulled into RAM, run by the user at the end of the day, right? So the reason why actually we find that we might be able to trust things more is because in the case of Open Source, anyone can pull down that source code, like someone doing reproducible builds or an auditor of some type, build it, confirm that the hashes match and that the keys are all set up correctly,
03:43
and then the users also have the ability to know developers and sort of enforce community norms and standards upon them to make sure that they're acting in the favor of the community. So in the case that we have bad actors who want to go ahead and tamper with builds and clouds and all the things in the middle, it's much more difficult. So Open is more trustable because we have tools to transfer trust in software,
04:05
things like hashing, things like public keys, things like Merkle trees, right? And also in the case of Open versus Closed, we have social networks that we can use to reinforce our community standards for trust and security. Now, it's worth looking a little bit more into the hashing mechanism
04:22
because this is a very important part about the software trust chain. So I'm sure a lot of people know what hashing is. For people who don't know, it basically takes a big pile of bits and turns them into a short sequence of symbols so that a tiny change in the big pile of bits makes a big change in the output symbols.
04:40
And also knowing those symbols doesn't reveal anything about the original file. So in this case here, the file on the left is hashed to sort of cat, mouse, panda, bear. And the file on the right hashes to peach, snake, pizza, cookie. And the thing is you may not even have noticed necessarily that there was
05:02
that one bit changed up there, but it's very easy to see that a short string of symbols had changed. So you don't actually have to go through that whole file and look for that needle in the haystack. You have this hash function that tells you something has changed very quickly. Then once you've computed the hashes, we have a process called signing where a secret key is used to encrypt the hash.
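To make the hashing step concrete, here is a minimal Python sketch of the property just described. The four-word rendering of the digest is purely illustrative, in the spirit of the "cat, mouse, panda, bear" example on the slide.

```python
# A minimal sketch of the avalanche property described above: flipping one
# bit in a big pile of bits changes the short digest beyond recognition.
import hashlib

WORDS = ["cat", "mouse", "panda", "bear", "peach", "snake", "pizza", "cookie"]

def word_digest(data: bytes) -> str:
    """Render the first four bytes of a SHA-256 digest as short words."""
    digest = hashlib.sha256(data).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:4])

pile = bytearray(b"a big pile of bits" * 1000)
print(word_digest(pile))    # four words, a short fingerprint of ~18 KB of data

pile[1234] ^= 0x01          # flip a single bit deep inside the pile
print(word_digest(pile))    # a completely different four-word string
```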
05:21
Users decrypt that using the public key to compare against a locally computed hash. In other words, we're not trusting the server to compute the hash. We reproduce it on our side, and then we can say that it's now difficult to modify that file or the signature without detection. Now the problem is that there's a time-of-check, time-of-use issue with the system. Even though we have this mechanism,
05:42
if we decouple the point of check from the point of use, it creates a man-in-the-middle opportunity, or a person-in-the-middle if you want. The thing is, you know, it's a class of attacks that allows someone to tamper with data as it is in transit. And I'm kind of symbolizing this with the evil guy here, I guess, because hackers all wear hoodies and,
06:01
you know, they also keep us warm as well in very cold places. So now an example of a time of check, time of use issue is that if, say, a user downloads a copy of the program onto their disk, and they just check it after they download onto the disk, and they say, okay, great, that's fine. Later on, an adversary can then modify the file on the disk
06:22
before it's copied to RAM, and now actually the user, even though they downloaded the correct version of the file, they're getting the wrong version into the RAM. So the key point is the reason why in software we feel it's more trustable is we have a tool to transfer trust, and ideally we place that point of check as close to the user as possible, right?
06:41
So ideally we're sort of putting keys into the CPU or some secure enclave that, you know, just before you run it, you've checked that that software is perfect and has not been modified, right? Now, an important clarification is that it's actually more about the place of check versus the place of use. Whether you checked one second prior or a minute prior doesn't actually matter.
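As a concrete sketch of pushing the point of check up against the point of use: the snippet below signs a release and then verifies the exact bytes that are about to run, using the Ed25519 primitives from the Python cryptography package. Key handling is heavily simplified and the program bytes are stand-ins.

```python
# A minimal sketch of checking at the point of use: verify the signature on
# the exact bytes you are about to run, not on an earlier on-disk copy that
# an adversary may have swapped out since. Key handling is simplified.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: a release-signing key signs the build (done once, offline).
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()
release = b"...program bytes..."
signature = signing_key.sign(release)

# User side: take the copy that is actually about to execute...
candidate = release  # in reality, the bytes just loaded into RAM
try:
    verify_key.verify(signature, candidate)  # raises if even one bit differs
except InvalidSignature:
    raise SystemExit("tampered: refusing to run")
# ...and only now hand 'candidate' to the runtime.
```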
07:02
It's more about checking the copy that's closest to the thing that's running it, right? We don't call it POC-POU because it just doesn't have quite the same ring to it. But now this is important. The reason why I emphasize place of check versus place of use is that this is why hardware is not the same as software in terms of trust. The place of check is not the place of use.
07:22
Or in other words, trust in hardware is a TOCTOU (time-of-check, time-of-use) problem all the way down the supply chain, right? So the hard problem is how do you trust your computers, right? So we have problems where we have firmware, pervasive hidden bits of code that are inside every single part of your system, that can break abstractions. And there's also the issue of hardware implants,
07:41
so it's tampering with or adding components that can bypass security in ways that are not according to the specification that you're building around. So from the firmware standpoint, it's mostly here to acknowledge it as an issue. The problem is this is actually a software problem. The good news is we have things like openness and runtime verification that go a long way toward addressing these questions. If you're a big enough player or you have enough influence or something,
08:03
you can coax out all the firmware blobs and eventually sort of solve that problem. The bad news is that you're still relying on the hardware to obediently run the verification. So if your hardware isn't running the verification correctly, it doesn't matter that you have all the source code for the firmware, which brings us to the world of hardware implants.
08:21
So very briefly, it's worth thinking about how bad can this get? What are we worried about? What is the field? If we really want to be worried about trust and security, how bad can it be? So I've spent many years trying to deal with supply chains. They're not friendly territory. There's a lot of reasons people want to screw with the chips in the supply chain.
08:41
For example, here, this is a small ST microcontroller. It claims to be a secure microcontroller. Someone was like, ah, this is not secure, you know, it's not behaving correctly. We decapped it, digesting off the top of the package. On the inside, it's an LCX244 buffer, right? So, you know, this was not done because someone wanted to tamper with the secure microcontroller. It's because someone wanted to make a quick buck, right?
09:01
But the point is that that marking on the outside is convincing, right? It could have been any chip on the inside in that situation. Another problem that I've had personally is I was building a robot controller board that had an FPGA on the inside. We manufactured 1,000 of these. And about 3% of them weren't passing tests. We set them aside.
09:20
Later on, I pulled these units that weren't passing tests and looked at them very carefully. And I noticed that all of the FPGA units that weren't passing tests had that white rectangle on them, shown here in a bigger, more zoomed-in version. It turned out that underneath that white rectangle were the letters ES, for engineering sample. So someone had gone in and laser-blasted off the letters,
09:43
which say that it's an engineering sample, which means they're not qualified for regular production, blended them into the supply chain at a 3% rate, and managed to essentially double their profit at the end of the day. The reason why this works is because distributors make only a small amount of money on each part, so even a few percent extra actually makes them a lot more profit at the end of the day. But the key takeaway of this is just because 97% of your hardware is OK,
10:03
it does not mean that you're safe. So it doesn't help to take one sample out of your entire set of hardware and say, oh, this is good. This is constructed correctly. Therefore, all of them should be good. That's a TOCTOU problem. 100% hardware verification is mandatory if you're worried about trust and verification.
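To put numbers on why spot checks fail, here is a minimal sketch; the rates are illustrative, taken from the 3% blending story above.

```python
# A minimal sketch of why random sampling is weak assurance. The 3% rate
# matches the engineering-sample story above; the 0.1% rate models a
# targeted implant on one board in a thousand.

def miss_probability(tamper_rate: float, sample_size: int) -> float:
    """Probability that a random sample contains zero tampered units."""
    return (1.0 - tamper_rate) ** sample_size

print(miss_probability(0.03, 30))    # ~0.40: 30 samples still miss a 3% blend
print(miss_probability(0.001, 30))   # ~0.97: a 1-in-1,000 implant slips through
```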
10:22
So let's go a bit further down the rabbit hole. This is a diagram, sort of like an ontology of supply chain attacks, and I've kind of divided it into two axes. On the vertical axis is how easy is it to detect or how hard, right? So on the bottom, you might need a SEM, a scanning electron microscope to do it. In the middle is an X-ray, a little specialized.
10:41
And the top is just visual or JTAG, like anyone can do it at home, right? And then from left to right is execution difficulty, right? Things that are going to take millions of dollars and months; things that are going to take $10 and weeks; or $1 and seconds, right? There's sort of several broad classes I've kind of outlined here. Adding components is very easy. Substituting components is very easy.
11:01
We don't have enough time to really go into those. But instead, we're going to talk about kind of the two more scary ones, which are sort of adding a chip inside a package and IC modification. So let's talk about adding a chip in a package. This one has sort of grabbed a bunch of headlines. So there's sort of these in the Snowden files, we found these NSA implants where they had put chips literally inside
11:22
of connectors and other chips to modify the computer's behavior. Now, it turns out that actually adding a chip in a package is quite easy. It happens every day. This is a routine thing, right? If you open up any SD card, micro SD card that you have, you're going to find that it has two chips on the inside at the very least.
11:40
One is a controller chip, one is a memory chip. In fact, they can stick 16, 17 chips inside of these packages today very handily, right? And so if you want to go ahead and find these chips, is the solution to go ahead and x-ray all the things? Should we just take every single circuit board and throw it inside of an x-ray machine? Well, this is what a circuit board looks like in an x-ray machine. Some things are very obvious.
12:00
So on the left, we have our Ethernet magnetic jacks. And there's a bunch of stuff on the inside. It turns out those are all OK, right? Don't worry about those. And on the right, we have our chips. And this one here, you may be sort of tempted to look at it and say, oh, I see this big sort of square thing on the bottom there. That must be the chip. It actually turns out that's not the chip at all. That's the solder pad that holds the chip in place.
12:23
You can't actually see the chip because the solder is masking it inside the x-ray. So when we're looking at a chip inside an x-ray, I've kind of given you a little key guide here. On the left is what it looks like sort of in 3D. On the right is what it looks like in an x-ray. So looking from the top down, you're looking at ghostly outlines with very thin spidery wires coming out of it.
12:41
So if you were to look at chip-on-chip in an x-ray: this is actually an image of a chip. In the cross section, you can see there's several pieces of silicon that are stacked on top of each other. And if you could actually do an edge-on x-ray of it, this is what you would see. Unfortunately, you'd have to take the chip off the board to do the edge-on x-ray. So what you do is you have to look at it from the top down.
13:00
And when you look at it from the top down, all you see are basically some straight wires. Like, it's not obvious from that top-down x-ray whether you're looking at multiple chips, eight chips, one chip, how many chips are on the inside of that, because the wire bonds all stitch perfectly and overlap over the chip. So this is what the chip-on-chip scenario might look like. You have a chip that's sitting on top of a chip,
13:21
and wire bonds just sort of going a little bit further on from the edge. And so in the x-ray, the only kind of difference you see is a slightly longer wire bond in some cases. So you can find these, but it's not obvious whether you've found an implant or not. So looking for silicon is hard; silicon is relatively transparent to x-rays.
13:42
A lot of things mask it: copper traces and solder mask the presence of silicon. This is another example of a wire-bonded chip under an x-ray. There are some mitigations if you have a lot of money. You can do computerized tomography. It'll build up a 3D image of the chip. You can do x-ray diffraction and spectroscopy,
14:01
but it's not a foolproof method. And so basically the threat here is that wire bond packaging is a very well understood commodity technology. It's actually quite cheap. I was actually doing some wire bonding in China the other day. This is a wire bonding machine. I looked up the price: $7,000 for a used one. And you basically just walk in to the guy
14:21
with a picture of where you want the bonds to go. He sort of picks them out, programs the machine's motion once, and he just plays it back over and over again. So if you wanna go ahead and modify a chip and add a wire bond, it's not as crazy as it sounds. The mitigation is that this is a bit detectable inside x-rays. So let's go down the rabbit hole a little further. So there's another concept I wanna throw at you.
14:40
It's called the through silicon via. So this here is a cross section of a chip. On the bottom is the base chip. On the top is a chip that's only 0.1 to 0.2 millimeters thick, almost the width of a human hair. And they actually have drilled vias through the chip so you have circuits on the top and circuits on the bottom. So this is kind of used as sort of putting an interposer
15:00
in between different chips, also used to stack DRAM and HBM. So this is a commodity process. It's available today. It's not science fiction. And the second concept I wanna throw at you is a thing called a wafer-level chip-scale package, WLCSP. This is actually a very common method for packaging chips today. Basically it's solder balls directly on top of chips. They're everywhere. If you look inside of like an iPhone,
15:21
basically almost all the chips are WLCSP package types. Now, if we were to take the wafer-level chip-scale package, cross-section it, and look at it, it looks like a circuit board with some solder balls and the silicon itself with some backside passivation. If you go ahead and combine this with a through-silicon-via implant, a man-in-the-middle attack using through silicon vias,
15:41
this is what it looks like at the end of the day. You basically have a piece of silicon that's the size of the original silicon sitting on the original pads in basically all the right places with the solder balls masking the presence of that chip. So it's actually basically a nearly undetectable implant if you wanna execute it. If you go ahead and look at the edge of the chip, they already have seams on the side so you can't even just look at the side and say, oh, I see a seam on a chip,
16:01
therefore it's a problem. The seam on the edge, a lot of times it's because they have different coating as the back or passivations, these types of things. So if you really want to sort of say, okay, how well can we hide an implant? This is probably the way I would do it. It's logistically actually easier than a wire bond implant because you don't have to get the chips in wire bondable format. You literally just buy them off the internet.
16:22
You can just clean off the solder balls with a hot air gun. And then the hard part is building that through silicon via template for doing the attack, which will take some hundreds of thousands of dollars to do in probably a mid-end fab, but if you have almost no budget constraint and you have a set of chips that are common and you want to build a template for, this could be a pretty good way
16:41
to hide an implant inside of a system. So that's sort of adding chips inside packages. Let's talk a bit about chip modification itself. So how hard is it to modify the chip itself? Let's say we've managed to eliminate the possibility that someone's added a chip, but what about the chip itself? So, this sort of gets at what a lot of people
17:01
have said: hey, bunnie, why don't you just spin an open source silicon processor? This will make it trustable, right? This is not a problem. Well, let's think about sort of the attack surface of IC fabrication processes. So on the left hand side here, I've got kind of a flow chart of what IC fabrication looks like. You start with a high level chip design.
17:20
It's RTL, like Verilog or VHDL these days, now even Python. You go into some backend, and you have a decision to make: do you own your backend tooling or not? And so we'll go into this a little bit more. If you don't, you trust the fab to compile it and assemble it. If you do, you assemble the chip with some blanks for what's called hard IP. We'll get into this. And then you trust the fab to assemble that,
17:41
make masks, and go to mass production, right? And so there's three areas that I think are kind of ripe for tampering: netlist tampering, hard IP tampering, and mask tampering. We'll go into each of those. So netlist tampering: a lot of people think that of course if you wrote the RTL, you're going to make the chip. It turns out that's actually kind of a minority case.
18:01
We hear about that, that's on the right hand side called customer-owned tooling. That's when the customer does the full flow down to the mask set. The problem is it costs several million dollars and a lot of extra head count of very talented people to produce these. And we usually only do it for flagship products like CPUs and GPUs, high-end routers, these sorts of things. Most, I would say, most chips tend to go
18:22
more towards what's called an ASIC side, application-specific integrated circuit. What happens is that the customer will do some RTL and maybe a high-level floor plan, and then the silicon foundry or service will go ahead and do the place and route, the IP integration, the pad ring. This is quite popular for cheap support chips: your baseboard management controller inside your server probably went through this flow,
18:42
disk controllers will probably go through this flow, I/O controllers, all those peripheral chips that we don't like to think about but that handle our data probably go through a flow like this. And to give you an idea of how common it is, but how little you've heard of it, there's a company called Socionext. They're a billion dollar company, actually.
19:02
You've probably never heard of them. And they offer services where basically you can just throw a spec over the wall, they'll take it through logic synthesis and physical design, and then they'll go ahead and do the manufacturing and test and sample shipment for it. So then, okay, fine. Obviously, if you care about trust, you don't do an ASIC flow, you pony up the millions of dollars
19:21
and you do a COT flow, right? Well, there's a weakness in COT flows, and this is called the hard IP problem. So this here on the right hand side is an amoeba plot of standard cells alongside a piece of SRAM. I'll highlight this here. The image wasn't great for presentation, but this region here is the SRAM block.
19:43
And all those little colorful blocks are standard cells representing your AND gates and NAND gates and that sort of stuff, right? What happens is that the foundry will actually ask you just to leave an open spot on your mask design, and they'll go ahead and merge the RAM into that spot just before production.
20:01
The reason why they do this is because stuff like RAM is a carefully guarded trade secret. If you can increase the RAM density of your foundry process, you can get a lot more customers. There's a lot of know-how in it and so foundries tend not to want to share the RAM. You can compile your own RAM. There are open RAM projects, but their performance and their density is not as good as the foundry-specific ones.
20:22
So in terms of hard IP, what are the blocks that tend to be hard IP? Stuff like RF and analog, so your phase lock loops, your ADCs, your DACs, your band gaps. RAM tends to be hard IP. ROM tends to be hard IP. eFuse that stores your keys is gonna give into you as an opaque block. The pad ring around your chip, the thing that protects your chip from ESD,
20:41
that's going to be an opaque block. Basically, all the points you need to backdoor your RTL are going to be trusted to the foundry in a modern process. So okay, let's say fine, we're gonna go ahead and build all of our own IP blocks as well. We're gonna compile our RAMs, do our own IO, everything, right? So we're safe, right? Well, turns out that masks
21:01
can be tampered with in post-processing. So if you're gonna do anything in a modern process, the mask designs change quite dramatically from what you drew to what actually ends up on the line. They get fractured into multiple masks. They have resolution correction techniques applied to them, and then they always go through an editing phase, right? So masks are not born perfect, right?
21:20
Masks have defects on the inside and so you can look up papers about how they go and they inspect the mask, every single line on the inside. When they find an error, they'll go ahead and patch over it. They'll go ahead and add bits of metal and then take away bits of glass to go ahead and make that mask perfect or better in some way if you have access to the editing capability, right? So what can you do with mask editing?
21:40
Well, there's a lot of papers that have been written on this. You can look up ones on, for example, dopant tampering. This one actually has no morphological change. You can't look at it under a microscope and detect dopant tampering. You have to do either some wet chemistry or some X-ray spectroscopy to figure it out. And this allows for circuit-level change
22:01
without a gross morphological change to the circuit. And so this can allow for tampering with things like RNGs, some logic paths. There are oftentimes spare cells inside of your ASIC because everyone makes mistakes, including chip designers. And so you wanna patch over that. That can be done at the mask level,
22:20
signal bypassing, these types of things. So there are some attacks that can still happen at the mask level, right? So that's a very quick sort of idea of how bad it can get when you talk about the time-of-check, time-of-use trust problem inside the supply chain. The short summary of implants is that there's a lot of places to hide them.
22:42
Not all of them are expensive or hard. I talked about some of the more expensive or hard ones, but remember wire bonding is actually a pretty easy process. It's not hard to do, and it's hard to detect. And there's really no essential correlation between detection difficulty and difficulty of attack if you're very careful in planning the attack.
23:01
So, okay, implants are possible. Let's just, let's agree on that maybe. So now the solution is we should just have trustable factories. Let's go ahead and bring the fabs to the EU. Let's have a fab in my backyard or whatever it is, these types of things. Let's make sure all the workers are logged and registered, that sort of thing. Well, let's talk about that. So if you think about hardware, there's you, right?
23:24
And then we can talk about evil maids, but let's not actually talk about those because that's actually kind of a minority case to worry about. But let's think about how stuff gets to you. There's a distributor who goes to a courier who gets to you. All right, so we've gone and done all this stuff for the trustable factory,
23:41
but it's actually documented that couriers have been intercepted and implants loaded by, for example, the NSA on Cisco products. Now, you don't even have to have access to couriers now thanks to the way modern commerce works. Other customers can go ahead and just buy a product,
24:00
tamper with it, seal it back in the box, send it back to your distributor, and then maybe you get one, right? That can be good enough, particularly if you know a corporation's in a particular area and you're targeting them: you buy a bunch of hard drives from the area, seal them up, send them back, and eventually one of them ends up in the right place and you've got your implant, right?
24:21
sort of removing tamper stickers and the possibility that some crypto wallets were sent back in the supply chain and then tampered with. Okay, and then let's take that back. We have to now worry about the wonderful people in customs, we have to worry about the wonderful people in the factory who have access to your hardware, and so if you cut to the chase, it's a huge attack surface in terms of the supply chain.
24:43
From you, to the courier, to the distributor, customs, box build; the box build factory itself oftentimes will use gray market resources to help make themselves a little more profitable, right? You have distributors who go to them who you don't even know who those guys are, PCB assembly, components, boards, chip fab, packaging, the whole thing, right?
25:01
Every single point is a place where someone can go ahead and touch a piece of hardware along the chain. So can open source save us in this scenario? Does open hardware solve this problem, right? Let's think about it. Let's go ahead and throw some developers with Git on the left hand side. How far does it get, right? Well we can have some continuous integration checks that make sure that the hardware's correct, we can have some open PCB designs,
25:21
we can have some open PDKs, but then from that point it goes into a rather opaque machine. And then, okay, maybe we can put some tests on the very edge before it exits the factory to try and catch some potential issues, right? But you can see all the other places where sort of a time-of-check to time-of-use problem can happen. And this is why I'm saying that open hardware
25:41
on its own is not sufficient to solve this trust problem, right? And the big problem at the end of the day is that you can't hash hardware, right? There is no hash function for hardware. This is why I wanted to go through that earlier today. There's no convenient, easy way to basically confirm the correctness of your hardware before you use it. Some people will say, well, bunnie,
26:01
just use a bigger microscope, right? You know, I do some security reverse engineering stuff, and this is true, right? So there's a wonderful technique called ptychographic x-ray imaging. There's a great paper in Nature about it where they take a modern i7 CPU and they get down to the gate level non-destructively with it, right? It's great for reverse engineering and for design verification.
26:21
Problem number one is it literally needs a building-sized microscope. It was done at the Swiss Light Source. That donut-shaped thing is the size of the light source for doing that type of verification, right? So you're not gonna have one at your point of use, right? You're gonna check it there and then probably courier it to yourself. Again, time of check is not time of use.
26:41
Problem number two, it's expensive to do, so verifying one chip only verifies one chip. And as I said earlier, just because 99.9% of your hardware is okay doesn't mean you're safe. Sometimes all it takes is one server out of 1,000 to break some fundamental assumptions that you have about your cloud. And random sampling just isn't good enough, right? I mean, would you random sample signature checks
27:02
on software that you install or download? No, you insist on a 100% check of everything. If you want that same standard of reliability, you have to do that for hardware. So then, is there any role for open source in trustable hardware? Absolutely yes. Some of you guys may be familiar with that little guy on the right, the Spectre logo. So correctness is very, very hard.
27:21
Peer review can help fix correctness bugs. Micro architectural transparency can enable the fixes in Spectre-like situations. So for example, we would love to be able to say, we're entering a critical region, let's turn off all the micro architectural optimizations, sacrifice performance, and then run the code securely, and then go back into who cares what mode and just get done fast, right?
27:41
That would be a switch I would love to have, but without that sort of transparency or without the ability to review it, we can't do that. Also, community-driven features and community-owned designs are very empowering and make sure that we're sort of building the right hardware for the job and that it's upholding our standards. So there is a role; it's necessary, but it's not sufficient for trustable hardware.
28:01
So now the question is, okay, can we solve the point of use hardware verification problem? Is it all gloom and doom from here on? Well, I didn't bring you guys here to tell you it's just gloom and doom. I've thought about this and I've kind of boiled it into three principles for building verifiable hardware. Three principles are that complexity is the enemy of verification.
28:21
We should verify entire systems, not just components, and we need to empower end users to verify and seal their hardware. We'll go into this in the remainder of the talk. So the first one is that complexity is complicated. So without a hashing function, verification rolls back to bit by bit or atom by atom verification.
28:41
So those modern phones have so many components. Even if I gave you the full source code for the SoC inside of a phone down to the mask level, what are you gonna do with it? How are you gonna know that this mask actually matches this chip and those two haven't been modified? So more complexity is more difficult. So okay, the solution is let's go to simplicity.
29:01
Let's just build things from discrete transistors. Someone's done this, the Monster 6502 is great. I love the project. Very easy to verify. Runs at 50 kilohertz, right? So you're not gonna do a lot with that. Okay, well let's build processors at a visually inspectable process. So go to 500 nanometers. You can see that with light. Okay, well 100 megahertz clock rate and a very high power consumption
29:21
and a couple kilobytes of RAM probably is not going to really do it either. So point-of-use verification is a trade-off between ease of verification and features and usability. So these two products up here largely do the same thing: AirPods, and headphones on your head. AirPods have something on the order
29:40
of tens of millions of transistors for you to verify. For the headphones that go on your head, I can actually go to Maxwell's equations and tell you how the magnets work from very first principles, and there's probably one transistor on the inside of the microphone to go ahead and amplify the membrane, and that's it, right? So with this one, you do sacrifice some features and usability when you go to the headphones.
30:01
Like, you can't say, hey Siri, and have it listen to you and know what you're doing. But it's very easy to verify and know what's going on. So in order to start a dialogue on user verification, we have to sort of set a context. So I started a project called betrusted, because the right answer depends on the context. I want to establish what might be
30:21
a minimum viable, verifiable product. And it's sort of meant to be verifiable by design, and we think of it as a hardware/software distro. So it's meant to be modified and changed and customized based upon the right context at the end of the day. This is a picture of what it looks like. I actually have a little prototype here.
30:41
A very, very early prototype, here at the Congress if you wanna look at it. It's a mobile device that is meant for communication, sort of text-based communication and maybe voice, and for authentication, so authenticator tokens or a crypto wallet if you want. And the people we're thinking about who might be users are either high-value targets, politically or financially.
31:01
So you don't have to have a lot of money to be a high-value target. You could also be very politically risky for some people. And also, of course, we're looking at developers and enthusiasts. And ideally we're thinking about a global demographic, not just English-speaking users, which, when you think about it from the complexity standpoint, is where we really have to champ at the bit and figure out how to solve
31:20
a lot of hard problems like getting Unicode and right-to-left rendering and pictographic fonts to work inside a very small attack-surface device. So this leads me to the second point, which is we need to verify entire systems, not just components. People say, well, why don't you just build a chip? Why are you thinking about a whole device?
31:40
The problem is that private keys are not your private matters. Screens can be scraped and keyboards can be logged. So there's some efforts now to build wonderful security enclaves like Keystone and OpenTitan, which will build wonderful secure chips. The problem is that even if you manage to keep your keys secret, you still have to get that information through an insecure CPU from the screen
32:02
to the keyboard and so forth. And so people who have used these on-screen touch keyboards have probably seen a message like this saying that, by the way, this keyboard can see everything you're typing, including your passwords. And people probably click through and say, oh yeah, sure, whatever, I trust that. Okay, well, this little enclave on the side here isn't really doing a lot of good when you go ahead and you say,
32:21
sure, I'll run this input method that can go ahead and modify all my data, or intercept all my data. So in terms of making a device verifiable, let's talk about putting these principles into practice. How do I take these three principles and turn them into something? So this is the idea of taking these three requirements and turning them into a set of five features:
32:41
a physical keyboard, a black and white LCD, an FPGA-based RISC-V SoC, user-sealable keys, and something that's easy to verify and physically protect. So let's talk about these features one by one. The first one is a physical keyboard. Why am I using a physical keyboard and not a virtual keyboard? People love their virtual keyboards. The problem is that capacitive touchscreens, which are necessary to do a good virtual keyboard,
33:01
have a firmware blob. They have a microcontroller to do the touch sensing. It's actually really hard to build these things. If you can do a good job of it and build an open source one, that'd be great, but that's a project in and of itself. So in order to take an easy win where we can, let's just go with a physical keyboard. So this is what the device looks like with the cover off. We have a physical keyboard PCB
33:21
with a little overlay, so you can do multilingual inserts and just go ahead and change that out. And it's just a two-layer daughter card, right? You just hold it up to the light and you're like, okay, switches, wires, right? Not a lot of places to hide things. So I'll take that as an easy win for an input surface that's verifiable, right? The output surface is a little more subtle, so we're doing a black and white LCD.
33:41
If you say, okay, why not use a color LCD? If you ever take apart a liquid crystal display, look for a tiny little thin rectangle sort of located near the display area, that's actually a silicon chip that's bonded to the glass. That's what it looks like at the end of the day. That contains a frame buffer and a command interface.
34:03
It has millions of transistors on the inside and you don't know what it does. So if you're already assuming your adversary may be tampering with your CPU, this is also a viable place you have to worry about. So I found a screen, it's called a memory LCD by Sharp Electronics. It turns out they do all the drive electronics on glass.
34:21
So this is a picture of the drive electronics on the screen through like a 50X microscope with a bright light behind it, right? You can actually see the transistors that are used to drive everything on the display. It's a non-destructive method of verification. But actually more important to the point is that there are so few places to hide things
34:41
you probably don't need to check it, right? If you want to add an implant to this, you would need to grow the glass area substantially or add a silicon chip, which is a thing that you'll notice, right? So at the end of the day, fewer places to hide things means less need to check things. And so I feel like this is a screen I can write data to, and it'll show what I want to show.
35:02
The good news is that display has a 200 PPI pixel density. So even though it's black and white, it's kind of closer to e-paper (EPD) in terms of resolution. So now we come to sort of the hard part, right? The CPU, the silicon problem, right? Any chip built in the last two decades is not going to be fully inspectable with an optical microscope, right?
35:21
Thorough analysis requires removing layers and layers of metal and dielectric. This is sort of a cross section of a modernish chip and you can see sort of the huge stack of things to look at on this. This process is destructive and you can think of it as hashing but it's a little bit too literal, right? We want something where we can check the thing that we're going to use and then not destroy it.
35:42
So I spent quite a bit of time thinking about options for non-destructive silicon verification. The best I could come up with is maybe using optical fault induction somehow, combined with some chip design techniques, to go ahead and scan a laser across, look at fault syndromes, and figure out: do the gates that we put down
36:01
correspond to the thing that I built? The problem is I couldn't think of a strategy to do it that wouldn't take years and tens of millions of dollars to develop, which puts it a little bit far out there and probably in the realm of venture-funded activities, which is not really going to be very empowering of everyday people. So I want something a little more short-term than that, than this sort of,
than this platonic ideal of verifiability. So the compromise I arrived at is the FPGA. Field programmable gate arrays, which is what FPGA stands for, are large arrays of logic and wires that are user-configured to implement hardware designs. This here is an image inside an FPGA design tool. On the top right is an example of one logic sub-cell; it's got a few flip-flops and look-up tables in it, and it's embedded in this huge mass of wires that let you wire it up at runtime. And one thing this diagram shows is that I'm able to correlate the design with the device: I can see, okay, the decode-to-XU instruction register bit 26 corresponds to this net. So now we're bringing that time of check a little closer to the time of use. And the idea is to narrow that time-of-check/time-of-use gap by compiling your own CPU. We can give you the CPU as source, you can compile it yourself, and you can confirm the bitstream. So now we're enabling a bit more of that trust transfer, like software, right?
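As a sketch of what that trust transfer looks like in practice, the check can be as simple as hashing your locally built bitstream and comparing it against the hash published alongside the source. The file name and reference hash below are hypothetical placeholders; this assumes the `sha2` and `hex` crates.

```rust
use sha2::{Digest, Sha256};
use std::fs;

fn main() -> std::io::Result<()> {
    // Hash the bitstream we just compiled from source...
    let bitstream = fs::read("betrusted_soc.bit")?; // hypothetical path
    let digest = hex::encode(Sha256::digest(&bitstream));
    // ...and compare against the hash published with the source.
    let reference = "0123abcd..."; // placeholder, not a real hash
    if digest == reference {
        println!("bitstream matches the published build");
    } else {
        println!("MISMATCH: do not provision keys on this build");
    }
    Ok(())
}
```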
But there's a subtlety in that the toolchains are not necessarily always open. There are some FOSS flows, like SymbiFlow, that have a 100% open flow for the iCE40 and ECP5. For the Xilinx 7-series they have a "coming soon" status, but they currently require some closed vendor tools. So picking an FPGA is a difficult choice. There's a usability-versus-verification trade-off here. The big usability issue is battery life: if we're going for a mobile device, you want to use it all day long, not have it be dead by noon, and it turns out the best chip in terms of battery life is a Spartan-7. It gives you roughly three to four times the battery life. But the tool flow is still semi-closed.
But I am optimistic that SymbiFlow will get there, and we can also fork and make an ECP5 version if that's a problem at the end of the day. So let's talk a little more about FPGA features. One thing I like to say about FPGAs is that they offer a sort of ASLR, address space layout randomization, but for hardware. Essentially, a design has a kind of pseudo-random mapping to the device. This is a screenshot of two compilation runs of the same source code with a very small modification to it, basically a version number stored in a GPR. And you can see that the locations of a lot of the registers have shifted around. The reason this is important is that it hinders a significant class of silicon attacks: all those small mask-level changes I talked about, the ones where we just change a few wires or move a couple of logic cells around, become less likely to capture a critical bit.
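Here's a sketch of how one might force that re-randomization on every build, under the assumption of an open flow: nextpnr exposes a `--seed` option, so a wrapper can feed it a fresh random seed per compile (the Xilinx flow the talk targets has its own random compile parameter, as noted below). Paths and file names are hypothetical; this uses the `rand` crate.

```rust
use std::process::Command;

fn main() {
    // Fresh placement seed per build: same logic, new layout.
    let seed: u32 = rand::random();
    println!("building with placement seed {seed}");
    let status = Command::new("nextpnr-ecp5") // hypothetical flow
        .args(["--json", "soc.json", "--textcfg", "soc.config"])
        .args(["--seed", &seed.to_string()])
        .status()
        .expect("could not launch place-and-route");
    assert!(status.success(), "place-and-route failed");
}
```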
So if you want to backdoor a full FPGA, you're going to have to change the die size; you're going to have to make it substantially larger to be able to swap out the function in those cases. So now the verification bar goes from looking for a needle in a haystack to measuring the size of the haystack, which is a bit easier to do on the user's side of things. And it turns out, at least in Xilinx land, just a change in a random compile parameter does the trick.
So, some potential attack vectors against FPGAs: okay, well, it's closed silicon, so what are the backdoors there? Notably, inside the 7-series FPGAs they actually document introspection features. You can pull out anything inside the chip by instantiating a certain special block.
And then we still also have to worry about the whole class of man-in-the-middle IO and JTAG implants that I talked about earlier. The known blocks are really easy to mitigate: basically lock them down, tie them off, check for them in the bitstream. In terms of the IO man-in-the-middle stuff, where someone puts a chip in the path of your FPGA, there are a few tricks we can do. We can do bus encryption on the RAM and the ROM at the design level, which frustrates these implants. At the implementation level, we can use the fact that data pins and address pins can be permuted without affecting the device's function, so every design can permute those data and address pin mappings uniquely. Any particular implant that goes in would have to compensate for all those combinations, making the implant more difficult to build. And of course, we can always fall back on careful inspection of the device.
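The pin permutation idea is easy to see in miniature. The sketch below is a toy model, not the real mechanism (in the actual design the permutation is baked into the FPGA's IO routing), but it shows why a fixed implant sitting on the bus fails: every device scrambles its address lines differently.

```rust
/// Remap address bits through a per-device secret permutation.
/// perm[i] names which original bit drives output bit i.
fn permute_address(addr: u32, perm: &[u8; 24]) -> u32 {
    let mut out = 0;
    for (new_bit, &old_bit) in perm.iter().enumerate() {
        out |= ((addr >> old_bit) & 1) << new_bit;
    }
    out
}

fn main() {
    // A hypothetical permutation of 24 external-RAM address bits,
    // derived once per device at provisioning time.
    let perm: [u8; 24] = [
        5, 17, 0, 9, 21, 3, 12, 8, 23, 1, 14, 6,
        19, 2, 11, 22, 4, 16, 7, 13, 20, 10, 15, 18,
    ];
    let addr = 0x00ab_cdef;
    println!("{:06x} -> {:06x}", addr, permute_address(addr, &perm));
}
```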
40:40
so in terms of closed source system, the thing that we have to worry about is that, for example, now that Xilinx knows that we're doing these trustable devices using the toolchain, they push a patch that compiles backdoors into your bitstream, right? So not even necessarily a silicon-level implant, but maybe the toolchain itself has a backdoor that recognizes that we're doing this.
41:01
So the cool thing is, and I'm very, this is a cool project, so there's a project called PRJ X-Ray, Project X-Ray. It's part of the Symbolful effort, and they're actually documenting the full bitstream of the 7-series device. It turns out that we don't yet know what all the bit functions are, but the bitmappings are deterministic. So if someone were to try and activate a backdoor in the bitstream through compilation,
41:22
we can see it in a diff. We'd be like, well, we've never seen this bit flip before. What does this do? We can look into it and figure out if it's malicious or not, right? So there's actually sort of a hope that essentially, at the end of the day, we can build sort of a bitstream checker. We can build a thing that says, here's a bitstream that came out. Does it correlate to the design source? Do all the bits check out? Do they make sense?
41:40
And so ideally, we would come up with a one-click tool, and now we're at the point where the point of check is very close to the point of use. The users are now confirming that their CPUs are correctly constructed and mapped to the FPGA correctly. So the summary of FPGA custom silicon is sort of like the pros of custom silicon is that they have great performance, right?
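A first cut at such a checker doesn't even need to know what the bits mean; it only needs to flag bits that differ from a reference build, so a human can ask "what does this bit do?". A minimal sketch, with hypothetical file names (a real tool would decode frame addresses using Project X-Ray's database):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    let local = fs::read("build_local.bit")?;          // your compile
    let reference = fs::read("build_reference.bit")?;  // known-good
    assert_eq!(local.len(), reference.len(), "length mismatch");
    // XOR the images and report every differing bit position.
    let mut flips = Vec::new();
    for (i, (&a, &b)) in local.iter().zip(reference.iter()).enumerate() {
        let diff = a ^ b;
        for bit in 0..8 {
            if diff & (1 << bit) != 0 {
                flips.push(i * 8 + bit);
            }
        }
    }
    println!("{} differing bits: {:?}", flips.len(), flips);
    Ok(())
}
```

Because the bit mappings are deterministic, any flip that can't be explained by your own source change is exactly the kind of anomaly you'd want to investigate.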
So, summarizing FPGAs versus custom silicon: the pro of custom silicon is great performance. We could do a true single-chip enclave with hundreds of megahertz of speed and tiny power consumption. The con of custom silicon is that it's really hard to verify; open source doesn't help that verification, and the hard IP blocks are the tough problem we talked about earlier. FPGAs, on the other side, offer some immediate mitigation paths. We don't have to wait until we solve the silicon verification problem: we can inspect the bitstreams, we can randomize the logic mapping, and we can do per-device unique pin mappings. It's not perfect, but it's better, I think, than any other solution I can offer right now. The cons are that FPGAs are just barely good enough to do this today: you need a little bit of external RAM, which has to be encrypted; you get about 100 megahertz of performance; and about five to ten times the power consumption of a custom silicon solution, which in a mobile device is a lot. In fact, the main thing that drives the thickness of this device is the battery, and most of that battery is for the FPGA. If we didn't have to go with an FPGA, it could be much, much thinner.
So now let's talk a little about the last two points: user-sealable keys, and easy verification and protection. This is the third principle, empowering end users to verify and seal their hardware. It's great that we can verify something, but can it keep a secret? Transparency is good up to a point, but you want to be able to keep secrets so that people can't just walk up and say, oh, there are your keys. So for sealing a key in the FPGA, ideally we want user-generated keys that are hard to extract, we don't want to rely on a central keying authority, and any attack to remove those keys should be noticeable. Even a high-level attacker, someone with essentially infinite funding, should need about a day to extract a key, and that effort should be trivially evident. The solution is basically self-provisioning: sealing the cryptographic keys in the bitstream, plus a bit of epoxy.
So let's talk about provisioning those keys. If we look at 7-series FPGA security, they offer encrypted bitstreams with AES-256 and SHA-256 HMAC. There's a paper that discloses a known weakness in it. The attack takes about a day and 1.6 million chosen-ciphertext traces; the reason it takes a day is that that's how long it takes to load that many chosen ciphertexts through the interfaces. The good news is there are some easy mitigations: you can glue shut the JTAG port or improve the power filtering, and that should significantly complicate the attack.
But the point is that it will take a fixed amount of time, and you need direct access to the hardware. It's not the sort of thing that someone at customs or an evil maid could easily pull off. And to put that in perspective: even if we dramatically improved the DPA resistance of the hardware, if we knew the region of the chip we wanted to inspect, a skilled technician with a SEM could probably pull the keys out in a day or two anyway. It takes only an hour to decap the silicon, a few hours in a FIB to de-layer the chip, and an afternoon in the SEM, and you can find the keys. But the key point is that this is roughly the level we've collectively agreed is okay for a lot of silicon enclaves, and it's not going to happen at a customs checkpoint or by an evil maid, so I'm okay with that for now. We can do better, but I think it's a good starting point, particularly for something this cheap and accessible.
So then, how do we get those keys into the FPGA, and how do we keep them from getting out? The keys should be user-generated, never leave the device, not be accessible by the CPU after provisioning, and be unique per device. And it should be easy for the user to get this right; you shouldn't have to know all this stuff and type a bunch of commands to do it correctly. So if you look inside Betrusted, there are two rectangles there: one is the ROM that contains the bitstream, and the other is the FPGA, so I'll draw those in schematic form. You start the day with an unencrypted bitstream in ROM, which loads into the FPGA, and you have this little crypto engine with no keys on the inside. There are no keys anywhere, so you can check everything, you can build your own bitstream, you can do what you want to do. The crypto engine then generates keys from a TRNG located on-chip, probably with some help from off-chip randomness as well, because I don't necessarily trust everything inside the FPGA. Then, as the crypto engine encrypts the external bitstream, it injects those keys back into the bitstream: because we know where that block RAM is, we can inject the keys into that specific RAM block as we encrypt it. So now we have a sealed, encrypted image on the ROM, which the FPGA could load if it had the key. After you've provisioned the ROM, and hopefully you don't lose power at this point, you burn the key into the FPGA's keying engine, which sets it to boot only from that encrypted bitstream; the readback-disable bit is blown, and the AES-only bit is blown. So at this point there's basically no way to load a bitstream that says "tell me your keys"; you'd have to use one of those hard techniques to pull out the key. You could maybe enable an upgrade path if you want, by having the crypto engine retain a copy of the master key and re-encrypt the new bitstream, but that becomes a vulnerability, because a user can be coerced into loading a bitstream that leaks the keys. So if you're really paranoid, at some point you seal this thing and it's done: you'd have to do that full key-extraction routine to pull anything out if you forget your passwords. So that's user-sealable keys.
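Pulling the whole sequence together, here's a high-level sketch of the provisioning flow just described. Every function is a deliberately empty stub standing in for a hardware-specific step; this shows the order of operations, not Betrusted's actual code.

```rust
// Stubs for hardware-specific steps (all hypothetical):
fn trng_bytes(n: usize) -> Vec<u8> { vec![0; n] }   // on-chip TRNG
fn host_entropy(n: usize) -> Vec<u8> { vec![0; n] } // off-chip mix-in
fn inject_key_into_bram(_key: &[u8], _image: &mut Vec<u8>) {}
fn encrypt_bitstream(_key: &[u8], _image: &mut Vec<u8>) {}
fn burn_key_and_lock_fuses(_key: &[u8]) {} // AES-only + readback-disable

fn provision(mut bitstream: Vec<u8>) -> Vec<u8> {
    // 1. Generate the key on-device, mixing in external entropy
    //    because we don't fully trust the FPGA's TRNG alone.
    let key: Vec<u8> = trng_bytes(32)
        .iter()
        .zip(host_entropy(32))
        .map(|(a, b)| a ^ b)
        .collect();
    // 2. Inject the key into the known block-RAM location in the
    //    image, then encrypt the whole image under that same key.
    inject_key_into_bram(&key, &mut bitstream);
    encrypt_bitstream(&key, &mut bitstream);
    // 3. Burn the key into the FPGA's keying engine and blow the
    //    lockout fuses so only this encrypted image will boot.
    burn_key_and_lock_fuses(&key);
    bitstream // sealed image, written back to the ROM
}

fn main() {
    let sealed = provision(vec![0; 1024]);
    println!("sealed image: {} bytes", sealed.len());
}
```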
I think we can do that with an FPGA. Finally, easy to verify and easy to protect. Very quickly on this: if you want to make an inspectable tamper barrier, a lot of people have talked about glitter seals, and those are pretty cool. The problem is I find glitter seals too hard to verify. I have tried glitter seals before, and I stare at the thing and I'm like, damn it, I have no idea if this is the seal I put down. So then you say, okay, take a picture or write an app or something, and now I'm relying on an untrusted device to tell me whether the seal is verified. So I have a suggestion for a DIY watermark that relies not on an app, but on the very well-tuned neural networks inside our heads, to verify things. The idea is basically this: there's a nice epoxy I found that comes in a bi-pack. It's a two-part epoxy; you put it on the edge of a table, pull it through, and that mixes the epoxy, so it's very easy for users to apply. And then you just draw a watermark on a piece of tissue paper. It turns out humans are really good at identifying our own handwriting, our own signatures, these kinds of things. Someone can try to forge it, and there are people skilled at doing that, but this is way easier to check than a glitter seal. You put the tissue paper down on your device, you swab on the epoxy, and at the end of the day you end up with a very easily recognizable seal. If someone tries to take it off or tamper with it, I can look at it and easily say, yes, this is different from what I had yesterday. I don't have to open an app, I don't have to stare at glitter patterns, and I can swab it onto all the IO ports I need to. So it's a bit of a hack, but I think it's a little closer to not having to rely on third-party apps to verify a tamper-evidence seal.
So I've talked about this implementation and how it maps to the three principles for building trustable hardware. The idea is to build a system that is not too complex, so that we can verify most or all of the parts at the end-user point: look at the keyboard, look at the display, and compile the FPGA from source. We're focusing on verifying the entire system end to end, keyboard and display included, and we're not forgetting the user: the secrets start with the user and end with the user, not at the edge of the silicon. And finally, we're empowering end users to verify and seal their own hardware, so you don't have to go through a central keying authority to make sure your secrets are inside your hardware.
So at the end of the day, the idea behind Betrusted is to close that hardware time-of-check/time-of-use gap by moving the verification point closer to the point of use. In this huge, complicated landscape of problems, we want, as much as possible, to teach users to verify their own stuff. By design, it's meant to be a thing that hopefully anyone can be taught to verify and use, and we can provide tools that enable them to do that. But if that ends up being too high a bar, I would like it to be that within one or two hops in your immediate social network, anywhere in the world, you can find someone who can do this. The reason I set this bar is that I want to define the maximum level of technical competence required, because it's really easy, particularly sitting in an audience like this of really brilliant technical people, to say, oh yeah, of course everyone can hash things and compile things and look at things under microscopes and solder. And then you get out into real life and it's like, oh wait, I have completely forgotten what real people are like. So this tries to keep me grounded and make sure I'm not drinking my own Kool-Aid about how useful open hardware is as a mechanism to verify anything. Because if I hand a bunch of people a schematic and say, check this, the answer is usually, I have no idea, right?
So the current development status: the hardware is an initial EVT-stage prototype, subject to significant change; part of the reason we're here talking about this is to collect more ideas and feedback and make sure we're doing it right. The software is just starting: we're writing our own OS, called Xous, being done by Sean Cross, and we're exploring the UX and applications with Tom Marble, shown here. And I want to give a big shout-out to NLnet for partially funding us. We have a couple of grants under their privacy- and trust-enhancing technologies fund, and this is really significant, because now we can actually think about the hard problems and not be constantly asking, when do we crowdfund, when do we fundraise? A lot of people are just like, oh, this looks like a product, can we sell it now? It's not ready yet, and I want to be able to take the time to talk about it, listen to people, incorporate changes, and make sure we're doing the right thing. So with that, I'd like to open up the floor for Q&A. Thanks to everyone for coming to my talk.
Thank you so much, bunnie, for the great talk. We have about five minutes left for Q&A. For those who are leaving early: you're only supposed to use the two doors on the left, not the tunnel you came in through, so the very left door and the door in the middle. Now, for Q&A, you can line up at the microphones. Do we have a question from the internet? Not yet. If someone in the stream wants to ask a question, or a person in the room who prefers not to come up, you can use the hashtag #clarke; Twitter, Mastodon, and IRC are being monitored.
So let's start with microphone number one. Your question, please.
Hey, bunnie. Hey. So you mentioned that with the foundry process, the hard IP blocks, the proprietary IP blocks, were a place where attacks could be made. Do you have the same concern about the hard IP blocks in the FPGA, either the embedded block RAM or any of the other special features you might be using?
Yeah, I think we do have to be concerned about implants that existed inside the FPGA prior to this project, and there is a risk, for example, that there's a JTAG path we didn't know about. The compensating side, I guess, is that the US military uses a lot of these in their devices, so they have a self-interest in not having backdoors inside these things as well. We'll see. I think the answer is: it's possible. The upside is that because the FPGA is a very regular structure, doing a SEM-level analysis of its initial construction, at least, is not insane. We can identify these blocks, look at them, and make sure they have the right number of bits. That doesn't mean the one you have today is the same one. But if they were to modify one of those blocks to implement an implant, my argument is that because of the randomness of the wiring and the number of configurations they'd have to handle, they would have to grow the silicon area substantially, and that's a proxy for detecting these types of problems. So that's my kind of half answer to that problem. It's a good question, though. Thank you.
Yeah, thanks for the question. The next one from microphone number three, please.
Hi. Yeah, move closer to the microphone, thanks. My question is: in your proposed solution, how do you get around the fact that the attacker, whether it's an implant or something else, will just attack it before the user's self-provisioning? So it compromises the self-provisioning process itself.
Right. So the idea of the self-provisioning process is: we send the device to you, you can look at the circuit boards and the devices, and then you compile your own FPGA bitstream, which includes the self-provisioning code, from source. And if you don't want to compile, you can confirm that the signatures match what's on the internet. So if someone wanted to compromise that process and stash away some keys somewhere else, that modification would either be evident in the bitstream, or it would be evident as a modification of the hash of the code running on the device at that point in time. Someone would then have to add a hardware implant, for example, to the ROM, but that doesn't help, because the bitstream is already encrypted by the time it hits the ROM. So it would really have to be an implant inside the FPGA, and Trammell's question just covered that situation. So I think the attack surface is at least limited there.
So you talked about how the courier might be the hacker, right? Sure, yeah. So in this case, the courier puts a hardware implant not in the hard IP, but just in the piece of hardware inside the FPGA that provisions the bitstream.
Right, so the idea is that you would get that FPGA and blow your own FPGA bitstream yourself. You don't trust my factory to give you a bitstream. You get the device...
But how do you trust that the bitstream is really being blown? You just get an indicator on your computer saying this bitstream is being blown, right?
Ah, I see, I see. So how do you trust that the ROM doesn't have a backdoor in itself that substitutes another secret bitstream, not the one you burned. Yeah, possible, I guess, but I think there are things you can do to defeat that. The way we do the semi-randomness in the compilation is that there's a 64-bit random number we compile into the bitstream. If you're compiling your own bitstream, you can read out that number and see if it matches. If someone had pre-burned a different bitstream that actually loads instead of yours, it's not going to have that random number on the inside. So I think there are ways to tell if, for example, the ROM has been backdoored and holds two copies of the bitstream, the evil one and yours, where the evil one gets used during provisioning. I think that's a thing that can be mitigated.
All right, thank you very much. We'll take the very last question from microphone number five.
Hi, bunnie. Hi. So one of the options you touched on in the talk,
but then didn't pursue, was this idea of doing some custom silicon in a very low-resolution process that could be optically inspected directly. Is that completely out of the question as a usable route in the future? Or did you look into that in any detail?
So, I thought about that one. There are a couple of issues. One is that if we rely on optical verification, users now need optical verification equipment, so we'd have to somehow move those optical verification tools to the edge, towards the time of use. The nice thing about the FPGAs is that everything I talked about, building your own bitstream, inspecting the bitstream, checking the hashes, doesn't require special user equipment. But yes, if we were to build an enclave out of 500-nanometer silicon, it would probably run at around 100 megahertz, and you'd have a few kilobytes of RAM on the inside, not a lot. So you have a limitation in how much capability you get from it, and it would consume a lot of power. But then, every single one of those chips: we put them in a black piece of epoxy. What keeps someone from swapping that out with another chip?
Yeah, I was thinking of old-school transparent-top packages, like on a lot of chips.
Okay, so yeah, you could wire-bond the die onto the board, put some clear epoxy on top, and then people have to take a microscope to look at it. That's a possibility. But that's the sort of thing where, if I imagine, for example, my mom using this and having to do that kind of verification, I just don't envision her knowing anyone who has an optical microscope who could do this, except for me. And I don't think that's a fair assessment of what's verifiable by the end user at the end of the day. So maybe for some scenarios it's okay, but making full optical verification of a chip the only thing between you and an implant worries me. And that's the problem with the hard chip: even if it's in a clear package, if someone just swaps the chip out for another chip, you still need a piece of equipment to check that. Whereas when I talked about the display, the argument is actually not that you have to check the display; it's that because it's so simple, you don't need to check it. You don't need a microscope, because there's no place to hide anything.
All right folks, we've run out of time. Thank you very much to everyone who asked a question, and please give another big round of applause for our great speaker, bunnie. Thank you so much for the great talk. Thanks, everyone.