A Password is Not Enough: Why disk encryption is broken and how we might fix it
Formal Metadata
Title | A Password is Not Enough: Why disk encryption is broken and how we might fix it
Number of parts | 112
License | CC Attribution 3.0 Unported: You may use, change, and reproduce the work or its content in unchanged or changed form for any legal purpose, distribute it, and make it publicly accessible, provided you credit the author/rights holder in the manner they have specified.
Identifiers | 10.5446/38914 (DOI)
Language | English
DEF CON 21 | 90 / 112
Transcript: English (auto-generated)
00:01
We're here to talk about full disk encryption, why you're not as secure as you might think you are. Oh, what just happened?
00:20
I'm missing a slide. Okay. I'll just say what it was then. So how many of you encrypt the hard drives in your computer, just like a show of hands? Oh, wow. Yeah, welcome to DEF CON. So I guess it's like, what, 90% of you at least? So how many of you use open source full disk encryption software, something that you could
00:42
potentially audit and, okay, not as many of you, like TrueCrypt or, you know. How many of you always fully shut down your computer whenever you're leaving it unattended? More of you, I'd say about 20%. How many of you have ever left your computer unattended for more than a few hours?
01:03
A lot of hands should be up. Either. Either on or off. I mean, I'd be surprised if you're not, because I'd have to ask, are you like zombies that don't sleep or something? Okay. And then the other answer, of course, is anyone who leaves their computer unattended
01:22
for more than a few minutes, also pretty much everyone. So why do we encrypt our computers? And it's surprisingly hard to find anyone actually talking about this, which is really weird. And I think it's really important to articulate our motivations, why we are doing something,
01:41
a particular security practice, and if we don't do that, we don't have a sensible goal post to see how we're doing. There's plenty of details in the documentation of full disk encryption software of what they do, what algorithms they use, what, you know, how their passion passwords and
02:01
so forth, but almost nobody is talking about why. And I argue that we want to ‑‑ we encrypt our computer because we want some control of our data, some assurances about the confidentiality and integrity of our data, that nobody is stealing our data or modifying our data without us knowing about it.
02:20
And it's basically ‑‑ we want determination over our data. We want to be able to control what happens to it. And there's also situations where you have liabilities for not maintaining the secrecy of your data. Lawyers have to have attorney‑client privilege, doctors have patient confidentiality, people
02:41
who are in finance and accounting have all sorts of regulatory rules that they need to comply with. And so if you're leaking data, you know, there's companies which have to notify their customers that, oh, we've ‑‑ someone left a laptop unencrypted in a van and it got broken into and stolen, so your data might be out there on the Internet.
03:01
But it also speaks to ‑‑ it's really all about physical access to our computers that we want to protect them because really full disk encryption doesn't do anything if someone just owns your machine. But it also gets to a greater point of if we want to build secure networks, if you want to have a secure Internet, we can't do that unless we have end points that are
03:24
secure. You can't build a secure network without the foundations of the secure end points. But by and large, we figured out the disk encryption theory aspects of this stuff. We know how to generate random numbers reasonably securely on a computer.
03:41
We know all the block cipher modes of operation that we should use for full disk encryption to get these sorts of nice security properties. We know how to derive keys from passwords securely. So mission accomplished, right? We can all stand on an aircraft carrier and, you know ‑‑ the answer is no, it's
04:01
not the whole story. There's still a hell of a lot of cleanup that you need to do. Even if you have absolutely perfect cryptography, even if you know it can't be broken in any way, you still have to implement it on a real computer where you don't have these nice black box academic properties of your system. And so you don't attack the crypto when you're trying to break someone's full disk encryption.
04:22
You either attack the computer and trick the user somehow or you attack the user and convince them to either give you the password or get it from them in some other means by like a key logger or whatever. And de facto use doesn't really match up with the security models of the full disk
04:42
encryption software. If you're looking at full disk encryption software, they're very much focused on the disk theoretic aspects of full disk encryption. And here's a quote from the TrueCrypt web page, their actual documentation, that they do not secure data on your computer if someone has ever manipulated it or is manipulating
05:03
it while it's running. I wish I was making this up. Basically their entire security model is like, oh, if it encrypts the disk correctly, if it decrypts the disk correctly, we've done our job, woot. And I apologize for the text that you probably would not be able to read very well.
05:23
So I'll read it here, a little bit of it here. So this is an exchange between the TrueCrypt developers and another security researcher by the name of Joanna, where she brought up this attack and tried to talk to them and see what their reaction was to feasibility. And so this is what they said. We never consider the feasibility of hardware attacks.
05:42
We have to assume the worst. And she asks, do you carry your laptop with you all the time? They say how the user ensures physical security is not our problem. And she asks very correctly, why in the world do I need encryption then? So we live in the ‑‑ ignoring feasibility of an attack is just ‑‑ it's specious.
06:03
You can't do that. We live in the real world where we have these systems that we have to deal with. We have to implement them. We have to use them. And there's no way that you can compare a ten‑minute attack that you can conduct with just software like a flash drive to something where you need to pull apart the hardware and manipulate
06:21
the system that way. And regardless of what they say, physical security and resistance to physical attack is in the scope of full disk encryption. It doesn't matter what you disclaim in your security model. At the very least, if they don't want to claim responsibility for that, they need to be very clear and unequivocal about how easily the stuff can be broken.
06:43
So this is a diagram of sort of an abstract system diagram of what is mostly in a modern CPU or a modern computer. And sort of what the boot process is, just so everyone is on the same page of what actually happens here. So as we know, the boot loader gets loaded from the secondary storage on the computer
07:03
by the BIOS and it gets copied into main memory through a, you know, data transfer. The boot loader then asks the user for some sort of authentication credential like a password or a key, smart card, or something like that. That password is then transformed by some process into a key which is then stored in
07:25
memory for the duration of the computer being active. And then the boot loader of course transfers control over to the operating system and then both the operating system and the key remain in memory for the transparent encryption and decryption of the computer. This is a very idealized view.
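The password-to-key step in this boot sequence can be sketched with a standard key derivation function. This is an illustrative example using PBKDF2 from Python's standard library; the salt and iteration count are made-up values, not the scheme of any particular FDE product:

```python
import hashlib

# Derive a 256-bit disk encryption key from a passphrase with PBKDF2.
# Salt and iteration count are illustrative, not any real product's values.
def derive_key(passphrase: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations, dklen=32)

key = derive_key(b"correct horse battery staple", b"per-volume-salt")
assert len(key) == 32  # this 256-bit key then sits in RAM for the whole session
```

The point the talk goes on to make is exactly that last comment: however good the derivation, the resulting key lives in main memory for the duration of use.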
07:40
This assumes that nobody is trying to screw with this process in any way. And I think we can all think of a few different ways where this can be broken. So let's enumerate a few of the things that might go wrong if someone is trying to attack you. So I break attacks into three fundamental tiers. Non‑invasive, which is something that you might be able to execute with just a flash drive.
08:00
You don't even need to take the system apart. Or some other hardware component that you could attach to it like a PCI card, express card or thunderbolt, the new adapter that gives you basically naked access to the PCI bus. Secondly, we will consider attacks where a screwdriver might be required where you might need to remove some system component temporarily to deal with it in your own little environment.
08:23
And also soldering iron attacks, which is the most complicated, where you are physically either adding or modifying system components like chips in the system in order to try to break these things. And so one of the first types of attacks, a compromised boot loader, or this is also sometimes known as an evil maid attack, where the boot loader itself, since you need
08:44
to start executing some unencrypted code as part of the system boot process, something which you can bootstrap yourself with and prompt the user for credentials and then get access to the rest of the data that's encrypted on the hard drive. There's a few different ways that you could do this. You could physically alter the boot loader on the storage system.
09:05
You could compromise the BIOS, you could load a malicious BIOS that hooks the keyboard adapter or hooks the disk reading routines and modify it that way in a way that's resistant to removing the hard drive. But in any case, you can modify your system so when the user enters their password it
09:22
gets written to disk unencrypted or something like that. In some way the attacker can get it. You can do something similar at the operating system level. This is especially true if you are not using full disk encryption, if you are using container encryption. There's the whole operating system that someone could manipulate.
09:42
This could also happen from an attack on the system like an exploit. So someone gets root on your box and now they can read the key out of main memory. It's a perfectly legitimate attack. And then that key could be either stored on the hard drive in plain text for later acquisition by the attacker or sent over the network to their command and control systems.
10:08
Another possibility of course is capturing the user input via key logger, be it software hardware, something exotic like a pinhole camera or maybe a microphone that records them typing in sounds and trying to figure out what keys they pressed.
10:23
This is kind of a hard attack to stop because it potentially includes components that are outside of the system. I also want to talk about data remnants attacks more colloquially known as a cold boot attack. So if you asked five years ago, even people who are very security savvy, what are the data
10:44
properties, what are the security properties of main memory? They would tell you when it powers down, you lose the data very, very quickly. And then an excellent paper from Princeton in 2008 discovered that actually at room temperature, you're looking at several seconds of perfectly good, very, very little
11:03
data degradation in RAM. And if you cool it down to cryogenic temperatures by, say, using an inverted can of gas duster, you can get several minutes where you're getting very, very little bit degradation in main memory. And so if your key is in main memory and someone
11:22
pulls the modules out of your computer, they can attack your key by finding where it is in main memory in the clear. And there's like some attempts for resolving this in hardware like, oh, the memory modules need to be scrubbed when we're booting up. But it's not going to help you if someone takes the module out and puts it in another computer or a dedicated piece of hardware for extracting memory module contents.
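The decay behavior the Princeton paper measured can be illustrated with a toy simulation. The flip probability below is invented for illustration; real decay rates depend on temperature, elapsed time, and the DRAM itself:

```python
import random

random.seed(1)  # deterministic run, purely for illustration

# Simulate RAM decay: each bit flips independently with probability p.
def decay(data: bytes, p: float) -> bytes:
    out = bytearray(data)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < p:
                out[i] ^= 1 << bit
    return bytes(out)

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

key = bytes(range(32))    # a 256-bit key image sitting in RAM
warm = decay(key, 0.001)  # cooled modules, captured within seconds
# At a 0.1% flip rate, nearly all 256 key bits survive, which is why
# key reconstruction from a cold boot capture is practical.
```

With error rates this low, an attacker can brute-force the handful of flipped bits or exploit key-schedule redundancy to recover the exact key.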
11:47
And finally, there's direct memory access. Any PCI device on your computer has the ability in ordinary operation to read and write the contents of any region of main memory. They can basically do anything.
12:01
And I mean this was designed back when computers were much slower where we didn't want to have the CPU babysitting every transfer from devices to and from main memory. So devices gain this direct memory access capability to just ‑‑ they could be issued a command by the CPU and then they could just finish it and the data would be in memory whenever you needed it.
12:21
And this is a problem because PCI devices can be reprogrammed. A lot of these things have writeable firmware that you can just reflash to something hostile and this could compromise the operating system or execute any other form of attack of either modifying the OS or pulling out the key directly. There's forensic capture hardware that is designed to do this in criminal investigations.
12:46
They plug something into your computer and pull out the contents of memory. You can do this with FireWire, you can do this with ExpressCard, you can do this over Thunderbolt, now the new Apple adapter. So these are basically external buses to your ‑‑ these are external ports to your
13:04
internal system bus, which is very, very powerful. So wouldn't it be nice if we could keep our keys somewhere else in RAM? Because we've sort of demonstrated that RAM is not terribly trustworthy from the security perspective. Is there any dedicated key storage or cryptographic hardware?
13:22
And I mean there is. You can find things like cryptographic accelerators, you use them in web servers so you can handle more SSL transactions per second. And they're tamper‑resistant or certificate authorities have these things that hold their top secret keys. But they're not really designed for high throughput operations like
13:43
using disk encryption. And so are there any other options? Can we use the CPU as sort of a pseudo hardware crypto module? So can we compute something like AES in the CPU using only something like CPU registers?
14:02
Intel and AMD added these rather excellent new CPU instructions which actually take all the hard work of doing AES out of your hands. You can just do the block cipher primitive operations with just a single assembly instruction. The question is then can we store our key in memory and can we actually perform this
14:21
process without relying on main memory? We have a fairly large register set on X86 processors. I don't know if any of you have actually tried adding up all the bits that you have in registers but it's something like four kilobytes almost on modern CPUs. So some of it we can actually dedicate to key storage and scratch space for our encryption operations. One possibility is using the hardware break point debugging registers.
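As the talk goes on to note, x86-64 has four of these debug registers (DR0-DR3), 64 bits each, which is exactly one 256-bit AES key. Here is a sketch of the packing in pure Python, standing in for the privileged register writes a kernel patch like Tresor actually performs:

```python
# Pack a 256-bit key into four 64-bit words, the size of the x86-64
# hardware breakpoint registers DR0-DR3 (4 x 64 = 256 bits).
def key_to_regs(key: bytes) -> list[int]:
    assert len(key) == 32
    return [int.from_bytes(key[i:i + 8], "little") for i in range(0, 32, 8)]

def regs_to_key(regs: list[int]) -> bytes:
    return b"".join(r.to_bytes(8, "little") for r in regs)

key = bytes(range(32))
regs = key_to_regs(key)  # what would be written into DR0-DR3 in ring zero
assert len(regs) == 4 and regs_to_key(regs) == key
```

In the real implementation the words never exist in RAM like this; they are moved into the registers with privileged instructions and cleared from memory.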
14:48
There's four of these in your typical Intel CPU and in 64‑bit mode these are each going to hold 64‑bit pointer. So that's 256 bits of potential storage space that most people will never actually use. The advantage of course to using debug registers
15:05
is one they're privileged registers so only the operating system can access them, ring zero. And you get other nice benefits like when the CPU is powered down either by shutting off the system or putting it in sleep mode you actually lose all register contents so you can't cold boot these. And a guy in Germany actually implemented
15:25
this thing as Tresor for Linux in 2011 and he did performance testing on it and it's actually not any slower than doing a regular AES computation and software. How about instead of storing a single key we can store 228‑bit keys. This gets us into
15:45
more of the crypto module space. We can store a single master key which never leaves the CPU on boot up and then load and unload wrapped versions of keys as we need them for additional task operations. The problem is this ‑‑ we can have our
16:07
code and our keys stored outside of main memory but the CPU is ultimately still going to be executing the contents of memory. So a DMA transfer or some other manipulation can still alter the operating system and get it to dump out the registers whether
16:22
they be in main memory or if they're somewhere more exotic like debug registers. Can we do anything about the DMA attack angle? And as it turns out, yes, we can. Recently, as part of new technologies for enhancing server virtualization for
16:41
performance reasons, people liked being able to attach, say, a network adapter to a virtual server so it wouldn't need to go through the hypervisor. So IOMMU technology was developed so you can actually sandbox a PCI device into its own little section of memory where it can't arbitrarily read and write anywhere on the system. So this is perfect. We can set up IOMMU permissions to protect our operating system
17:02
or whatever we're using to handle keys and protect it from arbitrary access. And again our friend from Germany has implemented a version of Tresor on a micro-hypervisor called BitVisor which basically does this. It lets you run a single operating system and it transparently does this disk access encryption. The guest
17:23
doesn't even have to care or know anything about it which is great. Disk access is totally transparent to the OS. Debug registers cannot be accessed by the OS and IOMMU is set up so that the hypervisor itself is secure from manipulation. But
17:42
as it turns out there's kind of other things in memory that we might care about other than disk encryption keys. There's the problem that I hinted at earlier where we do container ‑‑ we used to do container encryption and now we all do full disk encryption for the most part. We do full disk encryption because it's very, very
18:03
difficult to make sure you don't get accidental writes of your sensitive data to temporary files or caching in a container encryption system. Now that we're reevaluating main memory as a not secure, not trustworthy place for storing data, we need to treat it much the same way. We have to encrypt everything we
18:22
do not want to leak. So things that are really important like SSH keys or private keys or PGP keys or password manager files or any top secret documents that you're working on. So I had a very, very silly notion. Can we encrypt main memory? Or at least most of the main memory where we're likely to
18:44
keep secrets so we can at least minimize how much we're going to leak. And surprisingly the answer is yes. A proof of concept in 2010 by a guy named Peter Peterson actually tried implementing a RAM encryption solution. So it wouldn't
19:03
encrypt all of RAM. It would basically split main memory into two components. A small fixed size clear which would be unencrypted and then a larger sort of pseudo‑swap device where all the data was encrypted prior to being kept in main memory. It ended up being obviously quite a bit slower in synthetic benchmarks with
19:24
read performance affected more than write performance. But you know what? In the real world when you ran like a web browser benchmark it actually did pretty well. 10 percent slower. I think we can live with that. The problem with this proof of concept implementation is it stored the key to the crypt in main memory, because where
19:43
else would we put it. The author considered using things like the TPM for encryption operations but those things are even slower than dedicated hardware crypto systems so it would just be totally unusable. But you know what? If we have the capability to use the CPU as a sort of pseudo‑hardware crypto module it's
20:02
right in the center of things so it should be fast enough to do these things. Maybe we can actually use something like this. So let's say we have this sort of system set up. We've gotten ‑‑ our keys are not in main memory. Our code responsible for manipulating the keys is protected from arbitrary read and write
20:22
access by malicious hardware components. Main memory is encrypted so most of our secrets are not going to leak even if someone tries to execute a cold boot attack. But how do we actually get a system booted up to this state? Because we need to start from a turned off system, authenticate ourselves to it and get the system up
20:42
and running. How do we do this in a trustworthy way because after all someone could still modify the system software to trick us into thinking that we're running this great new system but in reality we're just not doing anything. So one of the very important topics is being able to verify the integrity of our computers.
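The measurement the TPM performs, described next, boils down to hash chaining into platform configuration registers: each boot stage can only extend the running value, never rewrite it. A sketch of a TPM 1.2-style SHA-1 extend operation (the stage names are illustrative):

```python
import hashlib

# TPM 1.2-style PCR extend: the register can only be updated as
# PCR' = SHA1(PCR || SHA1(measurement)), so a later boot stage cannot
# fake an earlier measurement, only extend on top of it.
def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = bytes(20)  # PCRs reset to a known value at power-on
for stage in (b"bios", b"bootloader", b"kernel"):
    pcr = extend(pcr, stage)
# Data sealed to this PCR value is only released if the same stages are
# measured again, in the same order, on the next boot.
```

Any tampered stage yields a different chain value, so the sealed secret (a disk key, an OTP seed, a unique image) simply never comes out.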
21:05
The user needs to be able to verify that their computer has not been tampered with before they authenticate themselves to it. And there's a tool that we can use for this. The trusted platform module. It's kind of got a bad rap but we'll talk
21:20
about that a little bit more. But it has the capability to measure your booting sequence in a couple of different ways, to let you control what data will be revealed to the system from the TPM when you're in particular system configuration states. So you can basically seal data to a particular software configuration that you're running on your system. And there's a couple of
21:45
different implementation approaches to do this and there's fancy cryptography to make it really hard to get around it. So maybe we can do this. And so what is a TPM anyway? It was originally sort of like hailed as the grand
22:02
solution to digital rights management by media companies. Media companies would be able to remotely verify that your system is running in some approved configuration before they would let you run the software and unlock the key to your video files. It ended up being really impractical in practice and so nobody
22:22
is actually even trying to use it for this purpose anymore. I think a better way to think about it is really just a smart card that's fixed on your motherboard. It can perform some cryptographic operations, RSA, Shaw, has a random number generator and it has physical attack counter measures to prevent someone from very easily getting access to the data that's stored in it.
22:44
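To make the measure-and-seal idea concrete, here's a toy simulation of my own (not TPM code): PCRs start at zero at power-on, each boot component is hashed into them with the extend operation, and a secret sealed to a PCR value only comes back if the same measurements are replayed. The seal/unseal functions here are deliberately simplified stand-ins for the real TPM commands, which bind the blob with an internal storage key.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new = H(old || H(data))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def measure_boot(components: list) -> bytes:
    pcr = b"\x00" * 20                 # PCRs reset to zero at power-on
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

def seal(secret: bytes, expected_pcr: bytes):
    # Toy stand-in for TPM_Seal: bind the secret to a PCR value by
    # XOR-ing with a pad derived from it (a real TPM uses internal keys).
    pad = hashlib.sha256(expected_pcr).digest()[: len(secret)]
    return bytes(a ^ b for a, b in zip(secret, pad)), expected_pcr

def unseal(blob: bytes, expected_pcr: bytes, current_pcr: bytes):
    if current_pcr != expected_pcr:
        return None                    # TPM refuses: platform state differs
    pad = hashlib.sha256(current_pcr).digest()[: len(blob)]
    return bytes(a ^ b for a, b in zip(blob, pad))

good_boot = [b"bios", b"bootloader", b"kernel"]
evil_boot = [b"bios", b"evil bootloader", b"kernel"]

blob, ref = seal(b"disk-key-share", measure_boot(good_boot))
assert unseal(blob, ref, measure_boot(good_boot)) == b"disk-key-share"
assert unseal(blob, ref, measure_boot(evil_boot)) is None
```

Because extend is a one-way chain, a tampered bootloader can't compute its way back to the trusted PCR value; it can only produce a different one, and the sealed secret stays locked.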
The only real difference between it and a smart card is this ability to measure the system boot state into platform configuration registers. It's usually a separate chip on the motherboard, and there are some security implications of that. There are also some fun bits, like monotonic counters: numbers that you can only ask the TPM to increase, and then check the value of. There's a small non-volatile memory range you can use for whatever you want; it's not very big, about a kilobyte, but it can be useful. There's a tick counter that lets you determine how long the system has been running since the last startup. And there are commands you can issue to the TPM to have it do things on your behalf, including clearing itself if you feel the need to.

So we want a protocol that a user can run against the computer to verify that it has not been tampered with before they authenticate to it and begin using it. What sorts of things could we seal to platform configuration registers that would be useful for such a protocol? A couple of suggestions: seeds for one-time password tokens, either the time-based or the event-based variety. Maybe some unique image or animation, like a photograph of you, something that someone can't easily find elsewhere, and then disable the video out on your computer while you're in this challenge-response authentication mode. You also want to seal part of the disk key, and there are a couple of reasons for that. Within certain security assumptions, it assures that the system will only boot into an approved software configuration that you, as the owner, control. Ultimately that means anyone who wants to attack your system has to do it either by breaking the TPM or from inside the sandbox you've created for them. This is not cryptographically strong, of course; you're not going to get a protocol that lets a user authenticate the computer with the same confidence you have in, say, AES. But unless you can do something like RSA encryption in your head, it's never going to be perfect.

I mentioned that there's a self-erase TPM command you can issue from software. And since you're also requiring the system to be in a particular configuration before it releases secrets, you can do something interesting: self-destruct. You can write your software and set up your protocol to limit, say, the number of times the computer has been started up unsuccessfully, to time out once it's been sitting at the password screen for some period, to limit the number of password attempts that can be entered, or to bound the time since the computer was last started up, say if it's been in cold storage for a week or two. You could also restrict access to the computer for set periods of time: you know you're going to be traveling to a foreign country and want to lock down your computer for the duration of the trip, so that when you get to your hotel on the other end you can unlock it, but not before. You can do fun things like leaving little canaries on the disk which appear to contain the critical values for your policy but are really just tripwires, while you're actually using internal TPM values. You can also create a self-destruct password, a duress code, that automatically issues this reset command. And since the two options an attacker has are to break the TPM or to run your software, you can make them play by these rules, and you can do an effective self-destruct. The TPM is intentionally designed to be very, very hard to copy; you basically can't clone it easily. So you can use things like monotonic counters to detect write blockers, disk restores, and replay attacks. And once the TPM clear command has been issued, it's game over for an attacker who wants access to your data.

There are some similarities to a system Jacob Appelbaum discussed at the Chaos Communication Congress back in 2005. He proposed using a remote network server for many of these functions, but admitted it would be brittle and potentially difficult to use. Since the TPM is an integrated system component, you get a lot of the same advantages by using the TPM instead of a remote server. And a hybrid approach is possible: an IT department could temporarily lock down a system so it only becomes available again once you plug it into the network, call your IT administrator, and have them unlock it. I'm hesitant to expose a network stack this early in the boot process because it massively increases your attack surface, but it's still a possibility. So I've qualified all my statements, "an attacker can only do this," under the assumption that they cannot break the TPM easily.
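The lockout and self-destruct policies described above boil down to a few comparisons the early-boot code makes before it agrees to unseal anything. Here's a toy sketch of that decision logic; the names and thresholds are my own illustration, not the actual protocol:

```python
import time

MAX_FAILED_BOOTS = 5
MAX_PASSWORD_TRIES = 3
MAX_COLD_STORAGE_SECS = 14 * 24 * 3600   # two weeks untouched

def should_self_destruct(tpm_counter: int, disk_counter: int,
                         failed_boots: int, password_tries: int,
                         last_boot_time: float, duress_entered: bool) -> bool:
    # Disk restored from an image or behind a write blocker: the TPM's
    # monotonic counter only goes up, so a replayed disk copy lags behind it.
    if disk_counter != tpm_counter:
        return True
    if failed_boots > MAX_FAILED_BOOTS:
        return True
    if password_tries > MAX_PASSWORD_TRIES:
        return True
    # Machine sat in "cold storage" longer than the policy allows.
    if time.time() - last_boot_time > MAX_COLD_STORAGE_SECS:
        return True
    # Duress code: a special password that wipes instead of unlocking.
    if duress_entered:
        return True
    return False

now = time.time()
assert not should_self_destruct(7, 7, 0, 1, now, False)
assert should_self_destruct(7, 6, 0, 1, now, False)   # replayed disk image
assert should_self_destruct(7, 7, 0, 1, now, True)    # duress code entered
```

If any check trips, the software issues the TPM clear command, and the sealed key share is gone for good.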
So this is an optical microscope scan of a TPM-style smart card chip done by Christopher Tarnovsky, who spoke here at DEF CON last year, and at Black Hat a few years ago, on the security of these TPMs. He's done some really great work figuring out how hard these things are to break: he's enumerated the countermeasures, worked out what it would actually take to defeat them, and then gone and done it and tested it. There are light detectors, active meshes, all sorts of really crazy circuit implementations designed to throw you off the track of what the chip is actually doing. But if you spend enough time, have enough resources, and are careful enough, you can get around most of these. You can decapsulate the chip, put it on an electron microscope workstation, and go wild: find the unencrypted data bus, glitch it, and get the thing to spill all of its secret data. Nonetheless, even once all the R&D is done, this sort of attack takes hours with an expensive microscope, and you'd still need months of R&D up front to work out the chip's countermeasures so you can break it without frying the one chip that is your attack target.

There are also more recent attacks. I mentioned that the TPM is a separate chip on the motherboard in almost all cases; it sits very low in the system hierarchy, not up in the CPU like DRM enforcement in video game consoles. If you manage to reset it, you're really not going to adversely affect the rest of the system. It usually hangs off the LPC (low pin count) bus, itself a legacy bus off the southbridge or platform controller hub, and on modern systems about the only things left on that bus are the TPM and legacy BIOS devices like PS/2 keyboards; we used to have floppy controllers there, but not really anymore. So if you find a way to reset the LPC bus, you'll reset the TPM into a fresh-boot state. You'll lose your PS/2 keyboard, which is not a big deal, and you'll be able to replay the measurements of a trusted boot sequence that the TPM has data sealed to, without actually executing that boot sequence, and then use this to extract the data. A couple of attacks have tried to exploit this. If the system uses the older TPM startup mode, the static root of trust for measurement (SRTM), you can do this pretty easily. I have not seen any research on a successful attack against the newer Intel Trusted Execution Technology (TXT) way of activating the TPM; it's likely still possible, and intercepting the LPC bus and what it communicates to the CPU is an area that probably needs more research. So that might be another way to attack the TPM.

Now let's look at a blueprint for getting the system from a cold boot up into our running trustworthy configuration. There are a lot of really vulnerable legacy components in the PC architecture. In the BIOS you can do all sorts of things: hook the interrupt vector table and modify disk reads and writes, capture keyboard input, mask out CPU feature registers, and mess with the system in all sorts of fun ways. There are plenty of options if you want to attack people. So in my opinion you want to get out of BIOS-controlled real mode and into protected mode as soon as possible, and only then do your measurement work. Once you're in this pre-boot environment, which is really just your operating system's early userspace, like a Linux initial RAM disk, you start executing your protocol. Because once you're using operating system resources, whatever someone did at the BIOS level with interrupt tables doesn't affect you anymore. And you can sanity-check your registers: if you know you're running on a Core i5, you know it supports things like the no-execute bit and debug registers and other features that someone might try to mask out of the capability registers.
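That kind of sanity check is easy to script. Here's a rough sketch of my own that parses the flags line of Linux's /proc/cpuinfo and reports any features we know the CPU family has, such as NX and AES-NI, that have apparently been masked out:

```python
def cpu_flags(cpuinfo_text: str) -> set:
    """Extract the feature-flag set from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def missing_features(cpuinfo_text: str, expected: set) -> set:
    # Anything we expect on this CPU family but don't see is suspicious:
    # a malicious BIOS may have masked it out of the capability registers.
    return expected - cpu_flags(cpuinfo_text)

EXPECTED = {"nx", "aes", "vmx"}   # no-execute, AES-NI, hardware virtualization

sample = "processor : 0\nflags : fpu nx vmx sse2\n"
assert missing_features(sample, EXPECTED) == {"aes"}

# On a real system you'd read the live file:
#   missing_features(open("/proc/cpuinfo").read(), EXPECTED)
```

A nonempty result doesn't prove tampering, but on a known CPU model it's a cheap red flag worth refusing to boot over.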
So here's the runtime blueprint, what we actually want the system to look like once we're in the running configuration. There was a previous project that implemented many of the security aspects of disk encryption using CPU registers, with IOMMU protections on main memory, but it was built on BitVisor, which is a specialty hypervisor that isn't very commonly used. Xen is the canonical open source hypervisor: there's a lot of security research going on around it, and people are making sure it's not broken. So in my opinion we should use something like Xen as the bare-metal hardware interface, with a Linux dom0 administrative domain on top of it to do the hardware initialization. In Xen, all of your paravirtualized domains actually run in nonprivileged mode, in ring 3.
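Since guests can't touch the debug registers where the master key lives, one way a hypervisor could hand each VM its own key material without ever exposing the master is to derive per-VM subkeys from it. This is my own illustration of the idea, not what the released patch does:

```python
import hmac, hashlib

def derive_vm_key(master_key: bytes, vm_id: int) -> bytes:
    """Derive a 128-bit per-VM key from the master key held in CPU registers.

    HMAC is one-way, so recovering a VM's subkey never reveals the master:
    compromising one guest doesn't endanger the others.
    """
    label = b"vm-key-%d" % vm_id
    return hmac.new(master_key, label, hashlib.sha256).digest()[:16]

master = bytes(16)                    # stand-in for the 128-bit AES master key
k1 = derive_vm_key(master, 1)
k2 = derive_vm_key(master, 2)
assert k1 != k2 and len(k1) == 16
assert derive_vm_key(master, 1) == k1  # deterministic per VM
```

The derivation is deterministic, so the hypervisor never needs to store per-VM keys anywhere; it can recompute them from the register-resident master on each context switch.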
So they don't have direct access to things like the debug registers; that's one thing that's already done for us. Xen exposes hypercalls that give guests access to that sort of thing, but it's something you can disable in software. And the approach I'm taking is that master-key approach in the debug registers: we dedicate the first two debug registers to storing a 128-bit AES key, our master key. It never leaves the CPU registers once it's been entered by the process that takes the user's credentials. The second two registers are then per-virtual-machine: they can be used either as ordinary debug registers or, in this case, to encrypt main memory.

In this design we still need a few devices directly connected to the administrative domain: the graphics processing unit, which is a PCI device, the keyboard, the TPM. All of that needs to be directly accessible, so you can't really apply IOMMU protections to it. But things like the network controller, the storage controller, and arbitrary devices on the PCI bus can have IOMMU protections set up so they have absolutely zero access to your administrative domain or hypervisor memory spaces. You can go further by putting things like your network controller into dedicated virtual machines: the device is mapped into that VM, and the IOMMU is configured so the device can only access that virtual machine's memory space. You do the same with the storage controller, and then you run all of your applications in virtual machines that have absolutely zero direct hardware access. So even if someone owns your web browser or sends you a malicious PDF file, they don't get anything that would let them seriously compromise your disk encryption.

I can't take credit for that architecture design; it was the design basis for a really excellent project called Qubes OS. They describe themselves as a pragmatic formulation of Xen, Linux, and a few custom tools that does a lot of what I just talked about: nonprivileged guests in a nice unified system environment, so it feels like you're running one system, but it's actually a bunch of different virtual machines under the hood. I use Qubes as the basis of my code; all the crypto is stuff I've added on top of it. The tool I'm releasing, and this is still proof-of-concept, experimental code, I call Phalanx. It's a patch to Xen that implements the disk encryption scheme as I've described: master key in the first two debug registers, second two debug registers totally unrestricted. For security reasons, the XMM registers used as scratch space are encrypted, as is the key, during some VM context switches. I've also implemented a very simple version of the Cryptkeeper paper's encrypted memory, using zram. zram has been mainlined, and it does pretty much everything except the crypto, so adding encryption on top of it is just a tiny bit of code, which is great. The most secure code is the code you don't have to write, right?
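The counter-mode construction this enables can be sketched like so. I'm standing in for the AES block cipher with a hash so the example is self-contained and runnable anywhere; the real implementation would run AES-NI against the key held in the debug registers:

```python
import hashlib

def keystream(key: bytes, page_no: int, nonce: bytes, length: int) -> bytes:
    """CTR-style keystream: E(key, nonce || page || counter), concatenated.

    SHA-256 stands in for the AES block cipher in this sketch.
    """
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + page_no.to_bytes(8, "big") + counter.to_bytes(8, "big")
        out += hashlib.sha256(key + block).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad))

key, nonce = bytes(16), b"boot-nonce-00001"
page = b"swapped-out page contents".ljust(4096, b"\x00")

ct = xor(page, keystream(key, page_no=7, nonce=nonce, length=len(page)))
assert ct != page
assert xor(ct, keystream(key, 7, nonce, len(ct))) == page  # decrypt = re-XOR
```

Counter mode fits here because zram already tracks per-page metadata, which is what you need to keep the (nonce, page, counter) inputs unique; reusing a pad would let an attacker XOR two ciphertext pages together.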
And the nice thing about zram is that it gives you most of the pieces you need to securely implement something like AES counter mode, which is really great. Hardware-wise, you do have a few system requirements. You need a system that supports the AES-NI instructions; they're reasonably common, but not every system has them. Chances are that if you have an Intel i5 or i7, it's supported, but there are some oddballs, so check Intel ARK to make sure your part supports all the features you need. Ditto for the hardware virtualization extensions, though those have been very common since around 2006.
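Checking for the rest of the requirements list is similarly scriptable. A rough sketch of my own; the paths are the usual Linux procfs/sysfs locations, but treat the details as illustrative and expect variation across kernels:

```python
import os

def probe(root: str = "/") -> dict:
    """Best-effort detection of the hardware features this design needs.

    `root` is parameterized so the probe can be pointed at a fake tree
    for testing; on a live system, call probe() with the default.
    """
    flags = set()
    try:
        with open(os.path.join(root, "proc/cpuinfo")) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    break
    except OSError:
        pass

    iommu_dir = os.path.join(root, "sys/class/iommu")
    return {
        "aes_ni": "aes" in flags,                  # AES-NI instructions
        "virt": "vmx" in flags or "svm" in flags,  # Intel VT-x / AMD-V
        "iommu": os.path.isdir(iommu_dir) and bool(os.listdir(iommu_dir)),
        "tpm": os.path.isdir(os.path.join(root, "sys/class/tpm/tpm0")),
    }
```

This only confirms what the running kernel exposes; for purchasing decisions you still need the spec sheets, since boards routinely ship the CPU feature without wiring up the IOMMU or the TPM.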
An IOMMU is a little more complicated to find if you're shopping for a computer. It's not listed as a sticker specification; you need to dig for it, and there are a lot of people who should know better but don't about the difference between VT-x and VT-d and so forth. So you may need to hunt for a system that supports this. And of course you want a system with a TPM, because otherwise you can't implement measured boot at all. Usually that means looking at business-class machines, where you can verify this sort of thing exists; if you look for Intel TXT support, that implies almost everything you need. The Qubes team keeps a really great hardware compatibility list on their wiki with details for a lot of systems.

Now, security assumptions. For the system to be secure, we make assumptions about a few of the system components. The TPM is a very critical component for assuring the integrity of the boot: we need to assume there's no back door capable of dumping its NVRAM or manipulating the monotonic counters, and that it can't be put into a state where it's not actually trusted but claims to be, by resetting the PCRs. Based on remarks by Tarnovsky, who has reverse engineered these chips, I'm setting a bound of roughly 12 hours of exclusive access to the computer being required for a TPM attack that pulls out secrets. There are assumptions about the CPU, memory controller, and IOMMU, mainly that they're correctly implemented and not back-doored; these might not be very strong assumptions, since Intel could easily back-door some of these things and we'd have no way of finding out. And there are assumptions about Xen. As a piece of software it has a very good security record, but nothing is perfect, and occasionally there are security vulnerabilities; given Xen's privileged position in the system, that's kind of a big deal, and you really want to make sure it's secure.

Under those security assumptions, let's put up a framework for a threat model. We want a realistic threat assessment, one that recognizes that not every system is unbreakable, especially with so many legacy components designed without any consideration of security, but also that not all theoretical attacks are practical: you can't lump very simple attacks together with difficult, complex hardware attacks. A good analogy is safe security. We all know every safe can eventually be broken; it's a question of how much time you have to reverse engineer it and how much time you have to break it. So I think we need to start thinking about our systems as having physical security defenses measured in hours, rather than the minutes we have right now. And as always, if I've screwed up, if I've made an assumption you don't think holds, prove it: verify my claims, make sure I'm right or wrong.
So, expected security: this is what you'll actually get. A cold boot attack is not going to be effective against keys, period, and what it recovers from main memory is restricted to whatever you had in the clear. Hardware-based RAM acquisition is not going to be effective, because those devices are IOMMU-sandboxed down to nothing: they can't get at your application state or your system state. And even if you manage to extract the secrets out of the TPM, all that does is put you back where we are right now: still easily broken at that point, but not all the way down to zero. I'm assuming a sensible security-habit policy here, say treating 12 hours of no contact with your computer as the threshold; as long as you're reasonably vigilant, without being excessively vigilant, you should be okay.

A couple of attack methods remain, and these are the main ones I'd use if I were trying to break into a system like this. Keyloggers and friends are still very much not defended against; you can mitigate with one-time tokens, but it's still imperfect. TPM attacks, as mentioned before: either NVRAM extraction or LPC bus reset and intercept hardware, some way of tricking the TPM into a configuration it thinks is trustworthy but actually isn't. RAM manipulation: if you have something that looks like RAM, quacks like RAM, and acts like RAM, but isn't actually RAM, something that behaves like RAM most of the time but can be manipulated externally, then there's really nothing you can do, because an attacker could manipulate the contents of the system at will. You could also try things like transient pulse injection, which is how George Hotz broke the hypervisor security on the PS3.

A quick bit of legal notes. I'm not a lawyer, obviously not your lawyer, and this is not legal advice. As far as I know, a self-destruct mechanism is not illegal yet, but there has been no legal test case; it might be interesting to find out, though I'm not sure I'd want to be that test case. You also need to be aware that TPMs and strong encryption are illegal in certain jurisdictions: you can't use a TPM in, say, China, and you can't use a TPM in Russia. And some countries, like the United Kingdom with RIPA, have mandatory key disclosure: you will go to prison if you do not hand over a key.

Future work and improvements: a production version, a stable version. Right now it is not stable.
If you put your computer to sleep, it will eat your data, among a couple of other problems; I'm working on it. There are other things that might be fun to do in the future. OpenSSL keys are really important, so some API that lets you swap the contents out of memory very quickly, so your exposure time is small, would be valuable. Something easily installable that you could all run, and maybe upstreaming the patches to Linux. And the goons are getting ready to kick me off the stage, so, conclusions, I'm almost done. The best security in the world goes unused if it's unusable. The model needs to account for realistic use patterns. And it's not just disk encryption: you really need to think about it holistically, from the perspective of the whole system. It's challenging to do this, but I think it's possible, and we should try. Thank you.