A systematic evaluation of OpenBSD's mitigations
Formal metadata
Title: A systematic evaluation of OpenBSD's mitigations
Series: 36C3: Resource Exhaustion (29 / 254)
License: CC Attribution 4.0 International: you may use, modify, copy, distribute, and make the work or its content publicly available for any legal purpose, in original or modified form, provided you credit the author/rights holder in the manner specified.
Identifier: 10.5446/53220 (DOI)
Transcript: English (automatically generated)
00:21
born some 24 years ago as a fork of NetBSD. And we're currently running OpenBSD 6.6. And the operating system's hallmarks are easy installation, quick installation, a rich set of documentation, and very, very good security features.
00:40
What about security mitigations? Our next speaker, Stein, is going to have a very close and systematic look at them. Stein? So good morning, everyone.
01:02
I'm Stein. And today, we're going to talk about evaluating OpenBSD mitigations. So I'm going to explain to you why I'm here on stage. And then together, we're going to go through an arbitrary subset of OpenBSD mitigations. And at the end, as usual with talks, there is a conclusion. So why am I even here?
01:20
Because like every good computer-related story, it starts on IRC. We were discussing with a couple of friends the exploitation of a particular vulnerability. And at some point, someone said, whenever I read ROP chain, I'm reminded why I run OpenBSD. I didn't know much about OpenBSD. I was like, why? Because OpenBSD is taking security seriously.
01:41
Wow, that's a short statement. So I did what everybody else would have done. I spent dozens of hours reading OpenBSD source code, mailing lists, design documents, PDFs, and everything. And then I ranted back on IRC. And another friend said, hey, you should do a talk at the CCC about this. It's a good idea. It's never going to be accepted anyway, so why not?
02:02
So here I am. Yeah. So OpenBSD, it's an operating system, for example, like Windows or Linux kind of things, except it's based on NetBSD. It was forked by Theo de Raadt in October 1995. The goals, taken from the goals.html web page
02:20
of his website, are to pay attention to security problems, fix them before anybody else does, and to be the number one most secure operating system. That's really cool. Also, be as politics-free as possible. Solutions should be decided on the basis of technical merit. I really like this. That's nice. People had low expectations for my talk,
02:41
things like misinformation, a low-quality talk, false assumptions, international embarrassment, apparently. So we'll try to disappoint these people. More seriously, when the abstract of the talk was published on the 36C3 website, there were a lot of heated responses,
03:02
like just look at the innovations web page, just look at the events web page, there are a lot of mitigations. How dare you say that OpenBSD is not secure? That's the point of my talk, actually. OpenBSD has a lot of mitigations, but I want to know if they are working or not. Having a list is not enough. Also, there are almost no exploits for OpenBSD. Well, there are no exploits for TempleOS
03:20
or Haiku or MenuetOS, but I'm not sure they are super secure. Also, OpenSSH and OpenSMTPD are really great. I know, I'm using them. This talk is not about OpenBSD software, it's about OpenBSD security mitigations. Someone else said that all the mitigations are complementary, why are you nitpicking on small mitigations? That's just a hand-wavy statement,
03:41
like everything is complementary, don't you dare criticize OpenBSD. Someone else said, just read undeadly.org, which is a website dedicated to news about OpenBSD, but on undeadly.org, every time there is a new mitigation, everybody is cheering for it, but I haven't seen anyone criticizing the mitigations, saying, there is a weakness here, or maybe this or maybe that.
04:00
And also, when I was discussing with friends who write exploits, they were complaining about people fuzzing OpenBSD, not about new mitigations. Also, someone said that the title is clickbait, but apparently it's not clickbait enough because the room isn't full. So, security mitigations: how do we measure security?
04:21
It's really hard. Halvar Flake came up with the mitigation gator, an alligator always working to make exploitation harder, like, yes, it killed this vulnerability, but it's not really helping. So, how do you measure good mitigations? How do you design good mitigations? Halvar also wrote on Twitter,
04:41
because apparently he dislikes writing blog posts for some reason. He said that to have a good mitigation, you should avoid hand-wavy statements, like, it makes it harder for an attacker to do that. You should have stuff like, what class of bugs does it kill? Or what CVE, like, this is killing CVE-1234, for example.
05:00
Or by how many hours does it delay the publication of a working exploit, for example. Because it makes it harder for an attacker is not something that would be acceptable, for example, when designing a cryptographic protocol or a security protocol, like, yes, I added a for loop here because it makes it harder for an attacker to get the cleartext. That doesn't work. Also, for a good mitigation,
05:20
you should ask your friendly neighborhood exploit writers about this, like, hey, I've written this mitigation, it's killing this class of bugs, here are old exploits, can you bypass it, please? Can you try? And also, code review. Where is the mitigation coming from? Did you read some papers? Or did you come up with the idea yourself? Are other people using it, maybe? What is the code complexity?
05:42
So this doesn't guarantee a good mitigation, but at least I think that good mitigations stem from these good practices. Also, threat modeling, I think this is really important. Someone called Ryan Mallon said, as a threat modeling rule of thumb, you should explain exactly what you are securing against and how you are securing against it; otherwise the answer can be assumed to be bears
06:00
and not very well. So for example, there was a mitigation added to OpenBSD and the commit said, I quote, thereby forcing the attacker to deal with the hopefully more complex effort of something, something. This is not something you want to read in a changelog for adding a new mitigation. So here we go. In this talk, as I said, I'm going to go with you
06:22
through an arbitrary subset of OpenBSD's mitigations. Where are they coming from? Were they invented by OpenBSD or improved by OpenBSD, maybe? What are they defending against? Are they effective? Are they killing exploits? And how is the outside world doing compared to OpenBSD? For example, Linux has not really been improving,
06:40
but Windows has been investing a lot of money and effort and time into making it more secure. Why an arbitrary subset? Because I only got 45 minutes and a bunch of questions, so that's not enough time to go through every single one of them. There are not a lot of sources in my slides because I don't have much space. Well, it's a big screen, but still.
07:01
But there will be a website at the end with all my research material published there. I also put small pufferfishes at the bottom of the slides as an annotation mechanism to express my opinions about the mitigations. So yeah, here we go. Attack surface reduction. Ivan Fratric, who is someone working
07:22
for Google's Project Zero, said in 2019 that empirical evidence suggests that attack surface reduction is one of the most impactful things that can be done for product security. So this is a class of mitigations that should be really effective. Privilege separation, privilege drop. In 1997, a long time ago,
07:43
Daniel Bernstein wrote qmail, and qmail was composed of several processes with only one running as root. So as an attacker, if you managed to compromise, for example, a process talking to the internet, maybe it's running with low privileges, so it wouldn't automatically yield you a root shell. That's really good. Postfix did the same the same year,
08:02
and five years later, OpenSSH got privilege separation. So if you have a remote code execution, OpenSSH maybe doesn't give you a root shell automatically. That's pretty cool. Almost all OpenBSD-written programs nowadays are using privilege separation and privilege drop. Privilege separation is to have different processes
08:21
running with different privileges, and privilege drop is the idea of dropping your privileges as soon as you don't need them anymore. For example, when you're issuing a ping command on Linux, maybe on OpenBSD too, I don't know, the ping binary is setuid because it needs high privileges to open a raw socket, and then it immediately drops privileges before sending its payload.
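To make the pattern concrete, here is a rough sketch, not from the talk, of what a ping-like tool does; details vary per system and error handling is trimmed:

```c
/* Sketch of privilege drop: open the privileged resource first,
 * then permanently drop back to the invoking user. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <stdlib.h>

int main(void)
{
    /* Needs root (or a setuid-root binary) to open a raw ICMP socket. */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (s == -1)
        exit(1);

    /* Drop privileges immediately and permanently. */
    if (setgid(getgid()) == -1 || setuid(getuid()) == -1)
        exit(1);

    /* From here on, a bug in the packet-handling code no longer
     * hands the attacker a root shell. */
    close(s);
    return 0;
}
```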
08:42
So that's the idea of privilege drop. OpenBSD is using it almost everywhere. That's why I put five pufferfishes there; it's really good, I think. For example, they've got rootless Xorg since 2014. That's really amazing, but they kept it setuid, so this resulted in a trivial local root on OpenBSD.
09:03
Okay, that's just me being mean. Pledge, I really like this one. In the Linux world, there's something called seccomp that was created in 2002 and merged in 2005. The idea is that the process could enter a mode of secure computation in which only exit, sigreturn, and read and write on already-open file descriptors are allowed.
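For reference, and not from the talk, this is roughly what that original strict mode looks like from a program's point of view:

```c
/* Linux seccomp strict mode: after the prctl(), only read(), write(),
 * exit() and sigreturn() are allowed; anything else kills the process. */
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    ssize_t n;

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) == -1)
        return 1;

    /* Allowed: read and write on file descriptors that are already open. */
    n = read(0, buf, sizeof(buf));
    if (n > 0)
        write(1, buf, (size_t)n);

    /* The libc exit()/_exit() wrappers call exit_group(2), which strict
     * mode forbids, so leave via the raw exit(2) syscall instead. */
    syscall(SYS_exit, 0);
    return 0;
}
```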
09:22
as well as read and write on already open file descriptor. That's not really convenient for real-world programs. So seccomp BPF was created in 2012, and with this, you can restrict what syscallr your program can make, for example. OpenVSD created TAME, which was renamed as Pledge,
09:42
in 2015, and it's a really amazing mitigation. I really like it. It's really simple to use, because contrary to seccomp, it's not based on syscalls, where you have to dig through your operating system's source code, saying, hmm, what is this syscall doing? Do I really need it for this Java program?
10:01
Maybe not, I don't know. Here, it's capability-based. So for example, you can say, hey, this program is only allowed to use standard input and standard output, or this program is allowed to do DNS resolution, and that's it. For example, I think the NTP client of OpenBSD is running different processes with different pledge policies, like one is allowed to resolve the domain,
10:22
the other one, more privileged, is allowed to change the time of the system, for example. That's really neat. Also, it's more used than seccomp. Seccomp is mostly used in Docker, for example, or Tor, APT at some point as well, a couple of other programs, Chrome, maybe.
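As a small illustration, not from the talk, a pledged program can be as simple as this (the promise strings are the ones documented in pledge(2)):

```c
/* pledge(2) on OpenBSD: declare what the program intends to do. */
#include <unistd.h>
#include <stdio.h>
#include <err.h>

int main(void)
{
    /* Only stdio-style operations and DNS resolution from now on;
     * any other kind of syscall aborts the process. */
    if (pledge("stdio dns", NULL) == -1)
        err(1, "pledge");

    printf("resolving names and printing is all I can do now\n");
    return 0;
}
```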
10:42
But in OpenBSD, there are 850 calls to pledge in OpenBSD's src tree, so it's used a lot in OpenBSD. It's code-based, and I think it's really impressive engineering work, and it's working very well. Super effective. They've got unveil, which is kind of like pledge, but for files, sort of.
11:02
The idea is that unveil allows you to restrict the view of the file system for a specific program. For example, if you've got a web browser, the web browser needs to be aware only of, for example, a folder for the cache and the cookies, and another one for downloads, and that's it; the web browser shouldn't have access to SSH keys, for example.
11:22
It doesn't abort on violation, so if your program is behaving weirdly, like trying to access your SSH key, maybe you will get a log message, but the program won't be aborted automatically. It's used by 77 userland programs in OpenBSD. That's kind of a decent number, because OpenBSD, with its default install, doesn't come with a lot of programs.
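Again as an illustration rather than code from the talk, a hypothetical downloader could limit its view of the filesystem like this:

```c
/* unveil(2) on OpenBSD: only the listed paths remain visible. */
#include <unistd.h>
#include <err.h>

int main(void)
{
    if (unveil("/etc/ssl", "r") == -1)          /* read-only certificates */
        err(1, "unveil");
    if (unveil("/tmp/out.png", "rwc") == -1)    /* read, write, create the output */
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)               /* lock the list, no further unveils */
        err(1, "unveil");

    /* Everything else (SSH keys, $HOME, ...) now looks nonexistent
     * or inaccessible to this process. */
    return 0;
}
```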
11:43
I think that this one is also really good. It's like AppArmor or SELinux, running on Debian, Ubuntu, Android, for example, but I think it's much better, because the policy resides inside the program. Let's say you're using wget to upload some file to the internet. You can make the whole file system read-only,
12:02
because wget only needs to read some file on your disk and then upload it to the internet. Or if you're downloading a picture, the only thing that needs to be writable on your disk is the destination file for the picture you're downloading. You cannot have that with AppArmor, which is more like, yeah, wget can only access this,
12:20
and that's it. So being able to reduce the attack surface, depending on what your program is doing, I think is really cool. Hardware vulnerabilities: we got a lot of them in the last three years. Apparently, you cannot trust your CPU anymore. That's a shame. Here, I think the most interesting thing
12:41
that I'm going to talk about is the reaction time, because it's usually faster to update your operating system than it is to update your CPU. And for some vulnerabilities, when they were published, researchers managed to write proofs of concept in a matter of hours, so I'm quite sure that serious players are able to have production-grade exploits in a couple of weeks, maybe months.
13:04
Hyper, hyper, hyper, what? Hyper-threading. So OpenBSD disabled hyper-threading support by default, which is a bold move, and a lot of people called them names because of this, like, OpenBSD doesn't care about performance, blah, blah, blah. But they did some benchmarks, and the performance impact is pretty low,
13:21
except for some specific workloads. And this allowed them to dodge a couple of vulnerabilities, for example, L1TF in userland, or MDS and its variants, like ZombieLoad or RIDL, in userland as well. So this maybe should have been in the attack surface reduction part instead of here. I think it's really cool, it's a really bold move,
13:42
and I think it's a good indicator that there are some people at OpenBSD that care about security. Spectre v1, v2, v3. The idea of Spectre is that the branch prediction and speculative execution of your CPU have observable side effects, like your CPU tries to be smart and infer some things,
14:01
and an attacker can observe the CPU doing this and extract some data from it. So Spectre v1, which is the first variant of the Spectre attack. The mitigation on Windows is compiler-based. On Linux, it's manually removing some gadgets using a magic grep. And on OpenBSD, there is nothing, so you need to update your CPU if you're worried about this.
14:22
Spectre v2, also compiler-based, with retpolines. Day zero for everyone, that's really impressive. Three months for amd64 on OpenBSD. That's not that long, that's all right. KPTI, for Spectre v3: kernel page table isolation. Day zero for everyone, one month for amd64.
14:42
That's pretty fast. Interestingly, and because I'm a mean person, OpenBSD got KPTI after DragonFly BSD, NetBSD, and FreeBSD. That's just me being mean. There are other ones: L1TF, MDS, SWAPGS. Everybody was using the same mitigations,
15:01
except that OpenBSD was able to dodge a couple of them for userland because they disabled hyper-threading. That's really cool. So everybody was pretty much within the first week: day zero, day three, day nine. Yeah, that's really good. And for L1TF, interestingly, nine days after the embargo was lifted, Theo de Raadt said that there won't be any mitigations
15:21
for OpenBSD 6.2 and 6.3, despite them still being supported at the time, which is an interesting statement. He sent this on the mailing list. I'm not sure if people know about it. Randomization: OpenBSD has a really strong focus on randomizing everything to make the life of an attacker harder.
15:43
ASLR, so the idea of ASLR is to map areas of the address space at random locations. For example, your stack is at a random location every time your program starts. So are your heap, your libraries, and everything. It was invented by the PaX project, which is a patch for Linux, in 2001.
16:01
And the same year, OpenBSD added a random offset for the stack. That's pretty fast, pretty neat. 2003, OpenBSD added a random offset as well for libraries and mmap, and it took Linux two more years to join the bandwagon. Technically, it's ASR and not ASLR for OpenBSD,
16:23
because the delta between the different mappings is constant between launches. For example, when you're running your binary the first time, you've got a delta between your library and your stack, for example. And when you relaunch it, they are mapped at different offsets, but the delta is still the same. It doesn't matter that much,
16:42
at least it's still better than per-boot randomization like Android, iOS, and Windows are doing. Also, OpenBSD claimed to be the first widely used operating system to have ASLR, but there was Gentoo Hardened before, and Adamantix. I don't know if Gentoo Hardened was more popular than OpenBSD because it didn't publish numbers,
17:01
but I'm not sure the statement is true for OpenBSD. Position-independent code: so here it's not only the stack, the heap, and the libraries that are mapped at a random offset, but every time you're running your program, the binary itself will be mapped at a random offset, removing fixed points for the attacker to see where things are, or what to overwrite, or where to jump, those kinds of things.
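A quick way to see this for yourself, not something shown in the talk, is to run a tiny address-printing program twice; with ASLR the values move between runs, with PIE even the binary's own symbols move, and per the talk the deltas stay constant on OpenBSD:

```c
/* Print a few addresses; run it twice and compare. */
#include <stdio.h>
#include <stdlib.h>

static int in_binary;   /* lives in the executable image itself */

int main(void)
{
    int on_stack;
    void *on_heap = malloc(16);

    printf("stack : %p\n", (void *)&on_stack);
    printf("heap  : %p\n", on_heap);
    printf("binary: %p\n", (void *)&in_binary);  /* moves only if built as PIE */
    printf("delta stack-heap: %ld\n",
           (long)((char *)&on_stack - (char *)on_heap));

    free(on_heap);
    return 0;
}
```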
17:23
Also invented by PaX in 2001. Gentoo Hardened enabled this for the whole userland in 2003, and Fedora and Red Hat Enterprise Linux used this for security- and network-facing binaries, because there were some performance concerns about enabling this mitigation.
17:41
OpenBSD got support for PIE five years afterwards. 2011, PIE by default on iOS and OS X by Apple. There is a lot of text on this slide because I think that here the timeline matters. Also, 2012, Android, that's really cool, and 2012, PIE enabled by default on OpenBSD.
18:04
That's pretty nice. Except that on the OpenBSD website it's written that OpenBSD 5.3 was the first widely used operating system to enable it globally by default, on seven hardware platforms. Android was first for six different architectures, Fedora was first for eight different architectures,
18:22
and also there were Gentoo Hardened and Adamantix, maybe they had fewer users than OpenBSD at the time, but also Apple enabled it for OS X and iOS, and I'm quite sure these are more mainstream operating systems than OpenBSD is. Oh, it's still an amazing mitigation. KARL, I really like this one as well.
18:42
July 2017, OpenBSD relinks the kernel objects in a random order after every boot. The kernel is relinked: if your kernel were a giant puzzle, the pieces would be shuffled and assembled in a different order after every boot, so when you reboot, your kernel doesn't look the same anymore,
19:01
and for example, on Ubuntu, every time you're rebooting, the kernel is the same, so as an attacker, I only have to have the same version of Ubuntu as yours, write my exploit for the kernel, and it will usually work on your machine. So it's pretty nice, it kills single pointer leaks and relative overwrites, because if I can leak a pointer
19:21
to your kernel, I know where the pointer is, but since the kernel changes upon every boot, it doesn't give me much information; and also relative overwrites: if I'm able to write whatever I want, but in a relative manner, I don't know what I'm going to overwrite.
19:40
Now, it's really only useful against attackers that don't have an arbitrary read or a CPU side channel, so yeah. Also, the debuggability of this is really horrible, because everybody has a different kernel, so if my OpenBSD crashes and I want to send you
20:00
the stack trace, I will have a different one than you do, for example, or maybe I'm not a power user, I don't know much about this, so I'm just sending you a screenshot, and there is no way for you to know what my kernel layout looks like. Also, it doesn't work very well with trusted boot, because you've got a different kernel after every reboot, and you would have to sign it every time,
20:21
it would kind of defeat the purpose of trusted boot, but OpenBSD, I think, doesn't really care about trusted boot, so it's all right. This one is interesting as well. They are doing the same relinking as for the kernel, but for libc and libcrypto, at boot time. That's pretty nice, 2016.
20:41
It also kills single pointer leaks and relative overwrites. If an attacker has an arbitrary read, this mitigation is moot, and also, this one is vulnerable to blind ROP, but OpenBSD has some measures in place to make blind ROP a bit more difficult. It's useful against remote attackers, but since it's per-boot, it's entirely,
21:02
usually useless against local attackers. Library order randomization. So here it's not randomization inside of libraries, but the libraries are mapped in a different order every time. 2003; this was also done by Android at some point, by default. A smallish improvement over ASLR,
21:22
but when you've got a single leak into a library large enough, for example, libc or libcrypto, I don't know, there are usually enough gadgets there that you don't need to look at other libraries. Also, the entropy is pretty terrible, because as an attacker, I don't care about figuring out the particular order of all the libraries.
21:41
I only need my libc to be the first one, for example. So if I've got N libraries mapped, I've got one chance out of N to hit exactly this library on the first try. So it doesn't hurt to have this, but it's not very effective. W^X: the idea of this mitigation is to have memory sections
22:00
either writable or executable, but never both at the same time. It's a pretty old mitigation. It was, I think, first made public by Casper Dik for Solaris in 1990-something. Solar Designer wrote a patch for the Linux kernel for this as well. It prevents the introduction of new code,
22:21
because an attacker cannot put his code into a writable section and directly jump onto it. This used to be the case in the 90s, but nobody's doing this anymore because of these mitigations. OpenBSD was pretty late to the party. It took them a couple of years: 2002 for userland, 2015 for kernel land. Amazing mitigation, except that it's lacking things
22:44
like PaX MPROTECT from PaX, and NetBSD nowadays, I think, or ACG on Windows, or the hardware equivalent in the Apple world. The idea of this is that the operating system will keep track of memory allocations.
23:03
For example, if you are allocating a page as writable, it can never, ever be mapped as executable. Even if you map it as PROT_NONE, for example, and then map it as executable. And this really prevents the introduction of new arbitrary code, because otherwise an attacker, if I've got, I don't know, some ROP gadgets and I've got code execution,
23:22
what I can do is that I can allocate a section of memory, mark it as writable, put my payload there, map it as PROT_NONE, and then map it as executable and jump on it.
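Spelled out, and only as a sketch of the generic pattern rather than anything OpenBSD-specific, the trick looks like this; a PaX-MPROTECT-style policy that remembers the page was once writable would refuse step 3:

```c
/* Write-then-flip-to-executable: never writable and executable at the
 * same instant, so plain W^X allows it. */
#include <sys/mman.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
    unsigned char payload[] = { 0xc3 };   /* x86-64 "ret", a stand-in payload */
    size_t len = 4096;

    /* 1. allocate writable, non-executable memory */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED)
        exit(1);

    /* 2. copy the payload in */
    memcpy(p, payload, sizeof(payload));

    /* 3. flip the permissions to read+execute */
    if (mprotect(p, len, PROT_READ | PROT_EXEC) == -1)
        exit(1);

    /* 4. jump to it */
    ((void (*)(void))p)();
    return 0;
}
```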
23:40
you're not allowed to do that. So you have to write your whole payload in ROP or use that on the attacks, but you cannot bring your own shellcode anymore. Yeah, WX refinement in 2019. So Tio said that he wanted to block direct syscalls from TOSARIA, forcing the attacker to deal with the hopefully more complex efforts
24:02
of using JIT or, probably even harder, discovering the syscall stubs directly inside the randomly relinked libc. What is the point of this? That's a subset of the PaX MPROTECT idea that we mentioned previously. They are blocking syscalls from such executable memory. So if an attacker has executable memory,
24:22
he cannot issue syscalls from there. And the further refinement was to block syscalls from memory that doesn't have the msyscall flag. So the operating system will map a particular section of your address space when your binary is running, and you can only issue syscalls from this region of the binary.
24:43
A couple of days ago, Samuel Groß did a talk about iMessage exploitation, and his exploit would have entirely bypassed this mitigation, and it's not even present on Android, because when, as an attacker, you've got enough control to map an area as writable,
25:02
put your code there, map it as executable, and then jump on it and then do a syscall, usually you've got enough control to just ROP your way to the syscall stubs wherever they are, because you usually have an arbitrary read anyway. This mitigation is pretty useless. I think this is the juicy part of the talk. It's about other memory corruption mitigations.
25:25
Userland heap management, July 2008: otto-malloc, by Otto Moerbeek, I think, sorry for butchering your name, and Damien Miller. It's an amazing piece of software. Out-of-band metadata:
25:41
so when your allocator is allocating some stuff, the metadata about the data that was allocated is kept separately. So as an attacker, if you've got an overflow, for example, you're not able to mess with the metadata.
26:00
and arbitrary write, for example, there are some structures that I cannot mess with. Quarantine with delayed free. The idea is that once your program doesn't need the memory anymore, maybe you want to free it, but the free doesn't happen immediately. The section will be put into quarantine. At some point, we'll be free. This helps to mitigate use-after-free because as an attacker, it's a bit harder
26:21
to know when the memory will be freed. Junking: when some memory is allocated, junk data is put there. So as an attacker, if I try to immediately look into this memory, I'm not able to leak things. Canaries, to detect linear overflows or linear underflows. There is a secret value that's put behind
26:42
or before the buffer, and when, as an attacker, I overflow it, the program will notice at some point that the canary value has changed. Page alignment: the idea is to align your allocations to pages. So as an attacker, when I've got an overflow, odds are that I will fall into a page that is not mapped.
27:01
Guard pages, like canaries, but instead of putting secret values, you're putting entire pages before and after, mapped as PROT_NONE, for example, so when the attacker touches them, everything will explode. As usual with OpenBSD, everything is randomized everywhere. It's a really cool piece of software.
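The guard-page idea is easy to reproduce by hand; this sketch, not from the talk, shows how an out-of-bounds write on a page-aligned allocation immediately faults instead of silently corrupting a neighbor:

```c
/* One usable page surrounded by PROT_NONE guard pages. */
#include <sys/mman.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *base = mmap(NULL, 3 * pagesz, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
    if (base == MAP_FAILED)
        exit(1);

    mprotect(base, pagesz, PROT_NONE);               /* guard below */
    mprotect(base + 2 * pagesz, pagesz, PROT_NONE);  /* guard above */

    char *buf = base + pagesz;    /* the actual allocation */
    memset(buf, 'A', pagesz);     /* fine */
    buf[pagesz] = 'B';            /* linear overflow: SIGSEGV on the guard page */
    return 0;
}
```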
27:21
Unfortunately, it's a bit slow compared to, for example, Scudo, which is the Google hardened allocator that they plan to use for Android. Some benchmarks show that otto-malloc is 12 times slower, but apparently the OpenBSD people care more about security than about performance,
27:41
and that's entirely fine. Read-only relocations. So it's a bit trickier to explain, but basically the idea is that when your program needs a function from another library, like, let's say you want to display some text, you use printf,
28:01
and your binary doesn't implement printf. So it will ask, hey, where is the function printf again? And there is a small helper that will say, oh, let me look, here it is, there you go, and the program will take the function's address and put it in a small cache, so next time it needs to call printf, it can just look in the cache and call printf.
28:21
The idea of read-only relocation, created by Red Hat, is to make this cache read-only, because as an attacker, if the cache is not made read-only, what I can do is swap the pointers there. For example, next time you're going to call printf, since I messed with the cache, you're going to call system and give me a shell.
28:42
So the idea is to have this as read-only, but the caveat is that you need, when you're starting your program, to resolve everything, because you cannot dynamically change the cache. But there is a plot twist. OpenBSD still has lazy bindings, which means that they're resolving things at runtime,
29:03
but still having a read-only zone. So it's a bit weird. So the way they are doing this is by adding a new syscall called kbind, and the idea of kbind is that it allows the program to have an arbitrary write into any memory
29:21
that is mapped in the address space. So even if it's read-only, the program can still write there. So to prevent an attacker from using it, there is a call-site verification: the first time it's called, the operating system will remember where it was called from, and there is also a magical cookie, to make sure that the caller knows the magical value to be able to use it.
29:41
Unfortunately, you can just ROP your way to bypass the call-site verification, and also the cookie value, when you've got an arbitrary read, is pretty moot. So I think this is a dangerous syscall, and the right way would have been to have immediate binding, instead of still supporting lazy binding, which is a thing from the past.
30:00
instead of still supporting lazy bindings which are things from the past. Trap sled, so this is hilarious. So Todd Mortimer sent a patch to replace the padding between function that used to be knobs by traps. And the idea is to remove knobs sleds from program libraries and makes it harder
30:22
for attacker code to hit any ROP gadget or other instructions after a NOP sled. Nobody is using NOP sleds with ROP. NOP sleds were used back in the day when the stack was executable. People are jumping precisely to the gadgets nowadays. You can look at every exploit out in the wild.
30:40
Nobody is using NOP sleds. Also, Microsoft Visual Studio has had this feature since the 2010 edition and never branded it as a security feature. Also, OpenBSD has an obsession with removing ROP gadgets. They're doing this by changing the register selection algorithm,
31:00
like instead of using eax, for example, they will favor ebx instead, why not? They're also replacing instructions, so instead of moving A to B, they exchange A and B, then move, then exchange again. They are forcing alignment with trapsleds. There is a jump over a trapsled and a ret instruction to prevent an attacker
31:21
from jumping into the middle of an instruction and hitting the ret afterwards. Also, they've got RETGUARD to protect against aligned ret usage, but I'm going to discuss RETGUARD a bit more later. ROP gadget removal, why? Why would you do this? Because they are using a script called ROPgadget.py, which was written by Jonathan Salwan.
31:42
It was written as a proof of concept, for fun. Nobody is using it except maybe during CTFs, as a first try, and it usually doesn't work because the heuristics it uses are pretty simple. And the way they are measuring success is that they are running ROPgadget.py on the kernel binary to generate a userland
32:03
execve ROP chain. And the ROPgadget.py script managed to generate a full chain before their mitigations, and when they applied the mitigations, ROPgadget.py didn't manage to generate the complete chain anymore. That's a weird metric.
32:20
Also, apparently everything they've done to remove gadgets reduces the number of gadgets by 11% on amd64. That's not a lot. When you're writing a ROP chain, usually you need, like, I don't know, a dozen gadgets, maybe 20, but not that many. And 11%, this doesn't make any difference.
32:42
Like there are still dozens or hundreds of gadgets lying around. They claim that there are no more ROP gadgets on amd64. That's amazing. So I've run ROPgadget.py on the kernel binary and removed all the ret-based gadgets, and there are still 12,891 JOP, COP,
33:04
or COOP gadgets everywhere. Also, it doesn't kill, as I mentioned, JOP, COP, COOP, and all the return-to-whatever: return-to-csu, return-to-libc, return to anything. And Theo said that once they address the ret problems, everything else will be easy to address.
33:22
And also, in any case, substantial reduction of gadgets is powerful, except it's not. There was a paper published in 2019 by Michael Brown and Santosh Pande called Is Less Really More? that explains why removing ROP gadgets doesn't usually improve security and sometimes even worsens it. Amusingly, GCC used to have an option called
33:42
-mmitigate-rop, which is now removed because, I quote, this option is fairly ineffective, nobody seems interested in improving it, deprecate the option so you won't lure developers into the land of false security. RETGUARD. So Crispin Cowan and his friends wrote StackGuard in 1997.
34:04
The idea, as mentioned previously with cookies, is to have a secret value somewhere, and when the attacker tries to overwrite things in memory, the attacker will also overwrite the cookie value, thus allowing the program to detect that something is wrong. Here, it's usually on the stack,
34:20
like when you're calling a function, the return address: because while you're executing your program, at some point you might want to call a function, for example, but you need to know where to come back. So the idea is that when you're calling a function, usually, at least on amd64, you're putting the return address on the stack, the function does its things, and then it takes the address back and returns to the call site.
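Written out by hand, and only as a rough rendering of the idea rather than what any compiler actually emits, a stack cookie works like this; real implementations control the stack layout so the cookie always sits between the buffers and the saved return address:

```c
/* Hand-rolled stack cookie: detect a linear overflow before returning. */
#include <stdlib.h>
#include <string.h>

static unsigned long stack_guard = 0x5dc0ffee1badc0deUL; /* random at startup in reality */

static void copy_name(const char *input)
{
    unsigned long cookie = stack_guard;   /* prologue: place the cookie */
    char buf[32];

    strcpy(buf, input);                   /* an overflow tramples cookie on its way
                                             toward the return address */

    if (cookie != stack_guard)            /* epilogue: check before returning */
        abort();                          /* smashing detected, don't trust the
                                             saved return address */
}

int main(int argc, char **argv)
{
    copy_name(argc > 1 ? argv[1] : "ok");
    return 0;
}
```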
34:41
So as an attacker, if I can overwrite this address, I can make the control flow of the program point wherever I want. OpenBSD added stack cookies in userland and in kernel land in 2003, six years after their invention. And amusingly, they were using a segment filled with random data for this.
35:04
And they marked it as static const, so the compiler was smart enough to say, huh, this is a static const segment, so it must be zero, so it simplified the comparison for cookies. So the cookies in OpenBSD were ineffective between 2016 and 2017.
35:20
So RETGUARD, 2017 edition: the idea was to XOR, do an exclusive OR on, the return address at the top of the stack with the stack pointer value itself. That's an interesting move, except it doesn't protect against partial writes, like if you can partially overwrite a pointer, you don't care about this.
35:40
Also, if you've got a read primitive on the heap, because on the heap there are usually stack pointers, you defeat the cookies and the ASLR. In kernel land, if you can leak some part of the kernel stack and the kernel text segment, you get the cookie for free. So this was not the smartest move ever, and that's why they improved it with the RETGUARD 2018 edition. So here are some assemblies, assembly, singular.
36:04
The idea is to move the RETGUARD value, which comes from the segment filled with random data, and XOR it with the return address at RSP at the beginning of the function, and at the end of the function there is a verification that the value is still the same, and then a big jump over a trapsled and the return.
36:24
Nice, that's nice. R11 is spilled on the stack when you're calling different functions, so if you've got an arbitrary read, you can just leak the cookie values from all the functions above you. There is one cookie per function, which is a small improvement. Also, cookies are stored in a dedicated segment,
36:42
so you cannot overwrite them, which you can do in other operating systems. And I think this is really interesting: the integrity is on the return address itself, it's not just a cookie anymore that is shielding the return address below, but the integrity is on the return address itself,
37:01
so even if you've got an arbitrary write, you cannot really mess with this. Well, it's still only a small improvement, sorry, over regular stack cookies, because when you've got an arbitrary read, it's still game over, because you can leak everything.
37:23
Null deref in kernel land. So the idea is that when you've got a null pointer dereference, like a pointer pointing to zero in kernel land, like you forgot to initialize a function pointer, let's say, since the address zero is in userland, what the kernel will do is that it will jump to userland, so as an attacker,
37:41
you just have to map your shellcode there, trigger a null pointer dereference, the kernel will jump there, and you've got code execution. The PaX project killed this in 2004 with KERNEXEC, apparently twice in 2004, and a null deref in 2006, and the details here do not really matter.
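The bug class itself is mundane; here is a userland-flavored sketch, not from the talk, of the kind of forgotten initialization that, inside a kernel without the mitigations below, hands control to whatever is mapped at address zero:

```c
/* NULL function-pointer dereference: calling through an unset pointer
 * means jumping to address 0. */
#include <string.h>

struct ops {
    int (*do_ioctl)(int cmd);   /* nobody remembered to set this */
};

static int dispatch(struct ops *o, int cmd)
{
    /* Missing NULL check: in a kernel, if userland may map page zero,
     * the attacker's code at address 0 runs with kernel privileges. */
    return o->do_ioctl(cmd);
}

int main(void)
{
    struct ops o;
    memset(&o, 0, sizeof(o));   /* do_ioctl is now NULL */
    return dispatch(&o, 42);    /* crashes here; in a kernel it could be worse */
}
```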
38:01
Ilja van Sprundel gave a talk at 23C3, that's pretty old, 2006, called Unusual bugs, where he demonstrated some null pointer dereference exploits. This was mitigated by Linux the year after with mmap_min_addr: they are preventing userland from mapping things at the very beginning of the address space, to prevent an attacker
38:22
from putting the shellcode there. From 2007 to 2008, everybody was copy-pasting exploits for OpenBSD, it was a really fun time to be alive. In 2008, OpenBSD prevented mapping the first page as well. Theo de Raadt, who is the person leading the project,
38:40
said he's not super proud of the solution. It seems best, faced with a stupid Intel architecture, and it seems that everyone else is slowly coming around to the same solution. It took them two years to implement this. I think it's the other way around: they were slowly coming to the solution. SMEP, SMAP, and all their friends. The idea of this is that you can enforce
39:02
at the CPU level that things running in supervisor mode cannot access userland. For example, your kernel: maybe you don't want your kernel to access things that are residing in userland, to prevent people from writing the payload in userland, triggering a bug somewhere in the kernel, and then jumping to userland,
39:22
forcing an attacker to put the payload directly into the kernel instead. PaX UDEREF was kind of an emulation of this, implemented in software. Intel and then AMD released support for SMEP and SMAP. SMEP is about execution, so the kernel cannot execute stuff coming from userland,
39:42
and SMAP is about access, so the kernel cannot access things coming from userland. It was added in 2012. Everybody had support for it. Amazing. And then someone burned a cool OpenBSD SMAP bypass, because they forgot to clear a magical flag on interrupt entry into the kernel. So they were vulnerable for five years.
40:02
It was a really fun bug. MAP_STACK. It's present on Linux. Almost no practical use, besides when you're cat-ing /proc you will see that this part of memory is the stack. Windows used to have nt!PsValidateUserStack, because it's Windows.
40:22
They removed it in 2012 because it was useless. The idea was to check that the stack pointer was pointing at something that was mapped as stack upon every single syscall. There were generic bypasses for this, mostly published by Ivan Fratric. Idea number one was to write a stub that calls mmap with the MAP_STACK flag,
40:42
put your payload there, and jump on it. Or, before every syscall, you can just make the stack pointer point to the stack, do your syscall, and then make it point somewhere else. OpenBSD improved on this. They didn't cite the Windows mitigation or any paper. And they're checking the stack pointer upon every syscall, but also upon every page fault.
41:01
I think this is a really cool improvement, except that there are some OpenBSD-specific bypasses that are left as an exercise to the crowd. Other mitigations that didn't fit into the previous categories, but I think they are worth mentioning. SYN cookies. Daniel Bernstein, again, 1996, with Eric Schenk.
41:22
The idea is to have stateless handling of SYN handshakes. For example, when you are establishing a TCP connection, you send a SYN, then the server replies SYN-ACK, and then you reply ACK, and then you can exchange data. It's amazing. But if a client sends SYN, SYN, SYN, SYN, SYN, SYN,
41:40
the server needs to keep track of everything and at some point will just blow up. So the idea of SYN cookies was to be able to handle all the SYNs in a stateless way. Anyway, it landed in Linux the same year, enabled by default, everything's super great. And OpenBSD implemented it last year.
42:00
Nowadays, it's kind of useless, because everybody on the internet can trivially DoS you with terabytes of data for just a couple of bucks by renting a botnet. Maybe it's useful on your LAN, but if someone is DoSing you on your LAN, you've usually got bigger problems. MAP_CONCEAL. From FreeBSD: the idea is to be able to mark
42:21
some section of the memory in your binary as "please never touch the disk". So for example, when you've got a crash, a crash dump, maybe you want to give the core dump to the developer, but you don't feel comfortable leaking your secret key, for example. So you map it as MAP_NOCORE, or MADV_DONTDUMP, or MAP_CONCEAL, and it will never be put on the disk, so you can safely give the core dump away.
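As a sketch, not from the talk, keeping a secret out of core dumps on OpenBSD looks like this (Linux gets the same effect with madvise(MADV_DONTDUMP), FreeBSD with MAP_NOCORE):

```c
/* MAP_CONCEAL on OpenBSD: this page never reaches a core dump. */
#include <sys/mman.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
    size_t len = 4096;
    unsigned char *key = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANON | MAP_CONCEAL, -1, 0);
    if (key == MAP_FAILED)
        exit(1);

    memset(key, 0x42, 32);   /* pretend this is secret material */

    /* If the process crashes now, the concealed page is left out of the
     * core dump, so the dump can be shared without leaking the key. */
    return 0;
}
```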
42:42
and there will never be put on the disk, so you can safely give the core dump away. 2012 for Linux, 2019 for OpenBSD, they bragged about it. Ted Unaix, which is a core OpenBSD developer, said the name conceal was chosen to allow some flexibility, like prohibiting ptrace, the idea to keep secret from escaping into other programs.
43:02
It seems that there is a threat model issue here, because if you've got ptrace control of a program, you can just rewrite the code of the program to access the things that you store in MAP_CONCEAL memory, or mount some data-only attack, I don't know. Development practices. I think this is important. They don't have any bug tracker.
43:20
Everything is done by email, so you don't know if somebody is assigned to your bug or this kind of thing. You have to go through the mailing list. There is no public code review when they are pushing code. They say, oh, Theo said okay, or Bob said okay, or Ted said it's okay. It's literally ok theo at the end of the commit. There is no justification, context,
43:41
or threat model for mitigations. Like, hey, I added a mitigation to make the life of an attacker harder. There is no paper. There is no threat model. There are just hand-wavy statements. Also, security issues: when a security issue is fixed through a patch, there is an errata web page with, here are the patches, here is the signature, so I can verify that it's trustworthy and apply the patch.
44:02
But there is nothing about: is it a remote vulnerability? Are there exploits in the wild? Can I have a write-up? Well, what is the context here? Do I need to reboot? Is it in kernel land? Is it in userland? No, just apply the patch and reboot. This doesn't scale very well when you've got hundreds or thousands of machines running OpenBSD. You cannot reboot all of them instantly. They've got no continuous integration.
44:21
They've got stable releases, and they've got -current. -current is broken from time to time. My VM stopped booting at some point, every month or two, something like this. Apparently, it's accepted there. Also, they're using CVS as a version control system instead of other things, so they have almost no branches.
44:41
50% of the commit messages are less than 10 characters long. "Hello World" is 11 characters long. Three-quarters of the messages are less than 20 characters. So if you write "Hello World, Hello World" as a commit message, it would be longer than three-quarters of the commit messages of OpenBSD.
45:00
Conclusion, right on time. OpenBSD has invented some really cool stuff. I really like otto-malloc. I really like what Damien Miller is doing with hardening OpenSSH, for example, and other things. They've also got an entropy-gathering syscall that I didn't mention.
45:20
Yeah, they've got some good ideas. They improved some ideas of others, sometimes without giving credit. For example, they've got tame. They've got password hashing; they invented bcrypt, that's amazing. They've got some useless mitigations that are adding either complexity or are just hilarious: trapsleds, for example, the whole W^X refinement,
45:42
the weird ROP gadget removal ideas, kbind, and everything. And I think that this could likely be improved with systematic security engineering, like doing more tests, maybe writing threat models and everything, because the SMAP bypass, for example, shouldn't have lived this long. Also, nobody would create cryptographic primitives today
46:02
the way that OpenBSD is doing security development. This wouldn't be acceptable. Why is it acceptable to develop mitigations this way? Proper mitigations, I think, can stem from proper design and threat modeling, strong reality-based statements, like it kills this class of vulnerability, or it kills this CVE, or it delays the production of an exploit by one week, and also thorough testing by seasoned exploit writers.
46:20
or tskills TCV delays the production of an exploit by one week, and also thorough testing by seasoned exploit writers. Anything else is relying on pure luck, superstition, and wishful thinking. Thank you very much.
46:43
Also, since I didn't put a lot of sources there, I did a fancy website with a crazy domain name. It doesn't address the question, is OpenBSD secure or not? I didn't address this in my talk either, because I think it's important to empower and help people to answer this question by themselves.
47:04
Thank you, Stein, for this definitely systematic review. So let's go to the question and answers. Do we have any questions from the internet? I see a no. So are there any questions here in the hall? I don't see any people at the microphone.
47:22
Nobody? Some more time to think about some questions? Guys? No questions? Ah, microphone number two is our starter. Thanks. When you showed the response time
47:40
regarding some of the mitigations, do you know if the OpenBSD people had access to the information ahead of time, like the others? Because Linux and Windows, I would assume they would have access to the information to be able to write a mitigation in time
48:00
to deliver a day-zero mitigation, whereas OpenBSD, I'm not sure. It's an interesting question. I didn't mention embargo handling on purpose, because apparently it's a sensitive topic. Theo says vehemently that OpenBSD never broke any embargoes, but they are known for not playing nice with embargoes,
48:23
so nowadays they are usually excluded from embargoes, so they weren't included in the disclosure process. They just had to deal with it in a rush. Okay, we got a question from the internet.
48:42
Yes, thank you very much. And there's one question. Do you have a response to the statement of OpenBSD developer Bryan Steele that MAP_STACK is something very different from similar implementations in Linux and Windows?
49:00
MAP_STACK, this is the mapping, maybe I can show the slide again. Oh, this is confusing. Yes, MAP_STACK, as I said, on Linux is just used for cosmetic purposes. On Windows, it was removed, but MAP_STACK is the same idea of verifying the stack pointer upon every syscall.
49:21
OpenBSD improved it by doing it on page faults as well. It's an improvement, but it's still not a tremendous mitigation. Okay, we have a person standing at microphone number four. So how do you compare pledge
49:41
with the capability system on Linux, because there is such a thing on Linux, and how is this different? On Linux, what do you mean by capabilities? Like, for example, there is this CAP_NET_BIND
50:03
that grants the capability for ping, for example, to create a raw socket or something, I can't remember the exact name. Yeah, I see what you're talking about. They're really confusing. There are a lot of them. The documentation is scarce.
50:22
I think that spender from grsecurity wrote a blog post detailing all the capabilities and how much of a mess it is. Maybe it's sufficient, I don't know, but since it's not really usable by normal human beings, I think it's not a good mitigation. What I really like about pledge is that you can just say input, output, and that's it,
50:41
or network, and that's it. You don't have to mess around with a lot of documentation everywhere. Do I need this particular type of exotic socket in my Java program? I don't know. Thank you. All right, another question from microphone number five, please. There used to be this developer channel, the ICB.
51:02
Is this something which is still active in OpenBSD, or did they switch to, like, IRC now? I don't know much about the OpenBSD ecosystem and everything besides the mitigations and didn't interact with their community at all, so no idea.
51:22
Any more questions? So, there is one more question from the internet. Sorry. Yes, and how does OpenBSD compare to FreeBSD in the context of your talk?
51:45
I don't know much about FreeBSD. Maybe I will do a talk next year. No, more seriously, there is the HardenedBSD project, which is a soft fork of FreeBSD, trying to improve the security of FreeBSD,
52:01
but I don't know much about it. Okay, any more questions in here? We still have time. Internet, no questions anymore.
52:20
Well, I'm then gonna close this session here and thank Stein again with a nice applause, please.