BCOS Monero Village - BCOS Keynote
Formal metadata
Number of parts | 322
License | CC Attribution 3.0 Unported: You may use, modify, copy, distribute, and make the work or its content publicly available in unchanged or modified form for any legal purpose, provided you credit the author/rights holder in the manner specified by them.
Identifier | 10.5446/39801 (DOI)
Transcript: English (automatically generated)
00:00
Welcome everybody, welcome Philip, I hope you enjoy the talk. After the talk, if you have any questions, that's when everyone is kind of available to answer anything you have, Monero related, or Coinbase related, or cryptocurrency related I guess as well. And blockchain related, right? They're all kind of related in some way.
00:21
Okay, so enjoy the talk and hopefully talk to you soon. Look at that, computer worked. I'm going to apologize in advance for my voice. It's not keeping up super well this weekend.
00:41
But with the amplification, hopefully it'll go okay along with some water as well. Good morning, got a good group of hecklers here today, it's going to be a good time. Thanks for coming out at 10 o'clock in the morning.
01:00
I'm sure some of you just stayed awake until 10 o'clock in the morning, so more power to you. You can go to bed soon. As I said, my name's Philip, for the last two and a half years I've led security at Coinbase. Normally this is the point where I ask the question, who's heard of Coinbase? But I'm pretty sure that's an obvious question here.
01:22
How many of you guys are customers? Nice, about half-ish. Really cool, thank you. So I'll skip the sort of what is Coinbase. The other way I describe what we do is we run the world's largest CTF with the highest-stakes sort of outcome.
01:45
So we store somewhere on the order of 10 to 15 billion dollars of cryptocurrency. Which is not something that anyone wants to lose. Before we get into all that stuff, I just want to talk about the elephant in the room for a second.
02:02
I'm the first talk of the day, half of you are asleep, half of you are drunk from last night. So I appreciate you came out, and I think what I want to give you back for your attention here is a bit of insight into what it takes to protect a modern cryptocurrency exchange.
02:20
We'll get into why that's hard, in my point of view. As well as some sort of viewpoints into where I think we as an industry need to get better. And need to get better in order to push crypto, actually push crypto to the masses. Not just a very small room in the middle of DEF CON.
02:41
So I've personally been doing this thing for a while. I try to pick where I work with a very simple lens. It's where can I find the most interesting attackers. And that's led me to a bunch of different places over my career. But I will say cryptocurrency I think takes the cake so far in terms of interesting challenges.
03:00
And really in terms of doing things that no one's ever done before in this context. Which has been pretty cool. I would be remiss if I didn't say Coinbase is hiring. Coinbase is always hiring for security specifically, but really across the board. If what I'm talking about is interesting to you, you want to learn more, come grab me. I'm happy to chat.
03:22
So bottom line, what are we going to talk about today? Number one, we're going to talk about a little bit the industry at large. Give some context on why cryptocurrency is actually a hard, or being an exchange is actually a pretty hard problem. We're going to talk a bit about how Coinbase looks at the problem. I'm not going to go through Coinbase's entire security program, because we'd be here way longer than an hour.
03:42
But I am going to hit on a few areas that I think are interesting. And how we execute security in this environment. And then we're going to talk a little bit about the industry as a whole. And where I think we can improve and do better and actually push ourselves out there. And become the kind of industry that my mom is okay investing in.
04:01
It's really my bar when I think about this stuff, because I think about my mom. If she could interact with whatever we're building in the same way that she would interact with Citibank or Wells Fargo or anything else that has to do with money, then we're headed in the right direction. So that's a big number. That number is the total losses in cryptocurrency from 2011 to 2018 across the industry.
04:27
It's compiled by a nonprofit called CryptoAware. They're actually a really cool little nonprofit. They're focused on user security and advocacy in cryptocurrency. Actually, my opinion is they're underreporting that number.
04:43
That number doesn't really include the sort of retail scams, social media stuff, tech support scams. That number is probably actually significantly larger than we can track in terms of exchange compromises and major scams.
05:01
I'll give you a slightly scarier number. Of that almost 3 billion that's been lost between 2011 and 2018, 1.7 was lost year to date in 2018. That is the wrong slope for that curve. We want that slope going the other direction. We want that going down. We want fewer losses over time.
05:23
And so that begs the question, and this is really what sort of starts our approach to a security program at Coinbase. Why is this hard? What is so hard about protecting crypto? Another way we break down this industry is in terms of causes of loss.
05:43
So this is a graph from the Blockchain Graveyard. If you guys are not familiar with it, it's an outstanding resource maintained by a guy named Ryan McGeehan, aka Magoo. He basically crawls whatever public information there is out there about a breach at a given organization,
06:00
tries to find a root cause or tries to sort the wheat from the chaff, and puts it on his GitHub repo. And then over time he's charted that. I think he's tracking breaches from 2011 to 2018. Really great summary of each one. If you're interested, you have time, I highly recommend, if you're in the industry, you go read it.
06:22
Because it is a list of lessons learned in our industry. Or put another way, a list of things to avoid in our industry that's really, really important. Just Google blockchain graveyard, it's the first result.
06:40
Kind of a unique term. So in total he's tracking 59 breaches since 2011. That's an average, by the way, to save you guys math, of eight breaches a year in this space. Incredibly high number, higher than basically any other industry I can really think of.
07:01
Besides maybe the payment card industry, but that's such a huge industry that the numbers don't really quite compare. So when we ask the question again, why is this so hard? This starts to give us some answers, give us some insight into how are we losing. And the interesting thing to me is, by and large, we are not losing because of esoteric cryptocurrency vulnerabilities as an industry.
07:24
We're losing because of bread-and-butter server vulns, app vulns, not enough customer auth. We're losing because of scams. We're losing because of security problems that we as an industry have been working on for decades at this point.
07:43
What does it say down here? Protocol vuln? I think there are four breaches of the 59 that he can actually point at a protocol-level vuln or something really, really deep in the cryptocurrency world.
08:02
So we go back again to why is this so hard. If server breaches are the cause, why haven't we solved this problem yet? Can anyone tell me what movie that's from? Die Hard. Yes! Outstanding! Give that man a cookie.
08:22
Which Die Hard? First one. There you go. Not going to matter. So those are bearer bonds. Those are $100,000 bearer bonds. And when I think about the security model analog for cryptocurrency, what I think of is digital bearer bonds.
08:40
A lot of people say cash. I don't like the cash analogy because if you've ever tried to move $10 million of cash, it's actually not really easy. It's very heavy and bulky and not easy to move around. Bearer bonds are super easy. That stack right there is, if it was real, which obviously it's not, but is easily hundreds of millions of dollars.
09:01
Yes, sir? Yeah! Give him something. That was awesome. I expected no one to get it. Whoo! Good job. I may throw more of these out there now. Should have gotten more screenshots. Fail. Sorry.
09:22
So why is it like a bearer bond? I'll quote you from Wikipedia. A bearer bond differs from the more common types of investment securities in that it's unregistered, and no records are kept of the owner, the transactions, or ownership. Whoever physically holds the paper owns the bond. Does that sound familiar to anybody?
09:41
That sounds a lot like cryptocurrency to me in terms of the threat model for theft. So what we're trying to do is protect an asset that's globally valuable, and a Bitcoin's a Bitcoin no matter where you are in the world, that's digitally transferable, which is actually fairly unique, and it's irrevocable. If we set out to design an asset that people would want to steal, I'm not quite sure we could have made a better one.
10:07
I don't know what we'd add to that to make it more attractive to attackers. So that's why it's hard. We're trying to do and protect a new asset, a new thing.
10:24
I'm going to leave it to Simon at the end, if you don't mind. What slide? Like I said, we're protecting really what is a new class of asset that has fundamentally different risks from previous assets.
10:49
In a really interesting way, we're doing that in the context of an online service. It's not like this thing is in a vault somewhere necessarily. Most of us in the industry who are building these systems are building them in a way
11:03
that users are interacting with them on a website, on an exchange, on a wallet, on whatever. Because we're protecting a brand new asset class, we as an industry are learning as we go to some extent here. What works, what doesn't work, and how it works, especially around the areas of
11:22
protecting these assets that don't quite fit the mold of anything else in the world. So we're sort of inheriting the threat model as an organization in this space of part maybe social media company, part bank, and part something else that's not yet defined. So we're trying to fit solutions and controls into a framework that is new.
11:48
And we're naturally having teething pains doing that as an industry. So no wonder this is such a hard place to exist.
12:02
So great, it's hard. Shock. I'm sure all of you are shocked to find out that cryptocurrency, defending cryptocurrency is hard. So what does it look like for us to defend this stuff? So like I said at the beginning, I'm not going to go over Coinbase's entire security program.
12:22
We'd be here for a week. But I think there's some stuff to talk about that is interesting and unique and how we approach this problem. And the first thing, and actually I'll talk through a few of these interesting things. We actually open sourced a fair bit of this stuff already.
12:43
The stuff that's not open sourced, most of it is moving that direction over the next six or nine months. I'll highlight what bits are open sourced, what bits are coming, and what bits you can learn more about in other talks. One of our foundational ideas here is that trust should be created through transparency, not through blind faith.
13:05
So if I'm asking you, hey, trust me with your money, I should be backing that up with a, and here's why. Here's what we're going to do to protect it and make it safe and keep it safe. So we spend a lot of time talking at conferences and events about the tools, the techniques, about what and why and how and where we do it.
13:23
And our intent is to continue and do it even more. So the first and most important thing to think about when you think about Coinbase's security program is the people. So today Coinbase is, call it 500 people. Security at Coinbase is 30 people.
13:43
That's 6% of the company is focused on security, which is an insane ratio for most organizations, most industries. And I think especially when we're talking about an asset like cryptocurrency, where there is a ton of innovation, we build a lot of our own tools, we're really forward looking.
14:02
You have to start with the people because they're the ones that are going to innovate, that are going to find the new ways of thinking about securing this stuff, that are going to actually be the ones solving the problems. I can't look at a vendor for this. There are no vendors that look at, there are probably some, but there are no vendors that say, you know what,
14:22
I'm going to protect your cryptocurrency and you put it here and we're going to make it safe and make it easy. So it goes back to the people. And I'll say, I'll just say once again, I'll sit out there, we're hiring. Just saying. Some people over there we can talk to if that's interesting.
14:43
So this is a pretty picture of what we're not actually going to talk about. But it's very pretty. It's a high-level architecture of Coinbase. Very, very high level. So high level it's not actually useful. But it's pretty, so I have it. So if we go back to that blockchain graveyard slide for a second.
15:03
Number one leading cause of these tracked breaches, or these tracked losses, is server breach. So when we think about our security program, we walk through why is this hard. One of the answers is, you know what, attackers are walking through the front door. Let's talk a little bit about how we close and lock the front door in this kind of environment.
15:31
Coinbase and cryptocurrency in general, I think we're actually super lucky. Because we don't have a legacy to deal with, for the most part. We get to build this stuff from the ground up.
15:41
And we get to build it, and I see this as lucky, others will disagree. We get to build it under constant pressure from attackers. My philosophy here, this is one of the reasons that when I look for places to go, I look for places that have great attackers. No one teaches you like an attacker. You never innovate as well as you do when you have a clear and present threat or danger to innovate against.
16:07
It motivates you, it focuses you, it helps you do your best. So we get to build this ground up, new technology, under pressure, under focused attack. What better place could we go to build something.
16:22
A lot of people look at this and say, wow, super stressful. And it is, but it's also amazing. So what have we built here? Coinbase is fully containerized, every single service in Coinbase is deployed in a container. Virtualized, we're in AWS. Continuously and immutably deployed services.
16:42
So let me break that down for a little bit. First of all, this is all based on a custom orchestrator we built internally called Codeflow, which we're open sourcing piece by piece. We open sourced the actual deployer itself, called Odin, I don't know, three months ago. It's a thing that takes a description of a deployment, it's a JSON file, and actually makes it happen in AWS.
17:06
And like the rest of Coinbase, it's service-oriented architecture, so we're open sourcing Codeflow piece by piece. As, quite honestly, the development teams are happy with it and want to actually get it out in the world.
17:22
So Codeflow handles code from PR to deploy in prod. It handles the entire path and manages everything from consensus requirements on code submission, which we'll talk about more in depth, to security scanning, to CI/CD, to builds, to secrets management, to deployment.
17:47
It's all in one long CI/CD path. What that means is that we can make a lot of this stuff transparent to our developers, build a really, overall, very, very safe and secure CI/CD pipeline, which I'll get into a little bit deeper a little bit later.
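As a rough illustration of the "description of a deployment" idea, a declarative descriptor in the spirit of Odin might look something like the sketch below. The field names are invented for illustration; they are not Odin's actual schema.

```python
import json

# Hypothetical Odin-style deployment descriptor. Field names are
# invented for illustration, not Odin's real schema.
descriptor = {
    "project": "example-service",
    "environment": "production",
    "release": {"image": "registry.internal/example-service:abc123"},
    "autoscaling": {"min": 2, "max": 10},
}

REQUIRED = {"project", "environment", "release"}

def validate(desc):
    """Return the sorted list of missing required top-level fields."""
    return sorted(REQUIRED - desc.keys())

# The descriptor is plain JSON, so it round-trips through serialization
# on its way to whatever actually drives the AWS APIs.
assert validate(json.loads(json.dumps(descriptor))) == []
```

The appeal of a declarative descriptor like this is that the deployer, not the individual developer, owns how the change actually lands in AWS.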
18:05
So let's dive into the rest of that, so containerized. Like I said, every single service in Coinbase is in a container. Some of you were probably shuddering when I said containers because you have some sort of container fear. Containers can be the best thing in the world or the worst thing in the world when it comes to software management and deployment.
18:25
It can be the worst thing in the world when you back into it, as you discover six months after the fact that your dev team is using containers. It can be the best thing in the world if you walk into it eyes wide open and say, you know what, we're going to be able to use containers. But, you know what, we're going to make it easy for you guys to use containers. We're going to manage a bunch of it for you.
18:43
So at Coinbase, the infrastructure team manages a base container that all containers in Coinbase must descend from. We're not pulling stuff off Docker Hub, it just won't work in our environment. Developers, development teams then base their services from that set of base containers so that we control the underlying layers,
19:00
we control the patching, we control what tools are installed, the whole nine yards. The developers are defining service-specific Dockerfiles. As those of you who have some container exposure know, you can do a lot of damage in a Dockerfile. The most obvious and consistent example is a beautiful base layer that then curls an OpenSSL 0.9 library directly onto the filesystem.
19:34
That's not great. And it's very, very difficult to detect that in a lot of cases.
19:41
So what do we do? We actually run our Dockerfiles through a linter and say, hey, development team, you can't actually do that. You can't use RUN and wget in the same line. We're not going to let you do that. We're not going to let you shoot yourself in the foot. But instead, here, go through this paved road, this other way of doing this. We'll get you the packages you need that are up to date, that we can track and update and make better.
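A Dockerfile lint rule of the kind described, flagging any RUN line that fetches straight off the network, can be sketched in a few lines. The rule and the sample Dockerfile are made up for illustration; this is not Coinbase's actual linter.

```python
import re

# Illustrative lint rule: reject RUN instructions that pull content
# straight off the internet instead of using the managed package path.
FORBIDDEN = re.compile(r"^\s*RUN\b.*\b(wget|curl)\b", re.IGNORECASE)

def lint(dockerfile_text):
    """Return (line_number, line) pairs that violate the rule."""
    findings = []
    for n, line in enumerate(dockerfile_text.splitlines(), start=1):
        if FORBIDDEN.search(line):
            findings.append((n, line.strip()))
    return findings

sample = """\
FROM internal-base:latest
RUN wget https://example.com/openssl-0.9.tar.gz
RUN apt-get install -y --no-install-recommends jq
"""
for n, line in lint(sample):
    print(f"line {n}: forbidden network fetch in RUN: {line}")
```

A real linter would carry many more rules, but the shape is the same: fail the build early with a message that points at the exact line and suggests the paved road instead.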
20:02
And that not only takes load off of our developers, but it means that we control our environment in a way that's really hard to do outside of that kind of setup. We control every single package, every deployment, every line of code, every version. We know where it is, when it rolled out, where it rolled out, how it rolled out.
20:23
That's awesome, from my perspective. So you might ask, okay, containerized and virtualized, like, what? Why do you do both? What's going on? Sounds like a train is driving outside here.
20:41
So there's two reasons for that. One is, when we think through the threat model on cryptocurrencies, one of the things you quickly come up with with this, again, this digitally transferable, non-revocable, globally valuable currency, is that this is one of the relatively few, in my opinion, industries
21:01
where dropping and burning an 0-day actually might make sense. Most of the time, it probably economically doesn't work out in terms of risk of loss versus risk of gain. Here it might. So one of the foundational things we looked at is, we should always, this is sort of a common mantra, we should always layer our security.
21:22
we should always layer our security. So how we do this is, this is also one of the reasons we love CodeFlow, because nothing else could do this. When we deploy, we get containers on verts that are mutually trusting, where if you popped one, you're probably going to get to those anyway, because there are credentials sitting on that one, or otherwise, it's highly likely you're going to be able to hit it.
21:46
We then ensure that containers that don't have a mutual trust relationship never exist on the same virtual machine. So that means, in order to hop from a front-end web service to a back-end payment service,
22:03
you're not going to do that through a side channel on the VM or through popping an 0-day privesc in the Linux kernel. You're actually going to have to do the hard work of moving through my environment and pivoting where I can see you, not trying to do it in memory on a Linux system. The other way I think about, and we think about defense a lot, is there's the common saying that defenders have to always be right,
22:26
attackers have to be right once, which is true as far as it goes. The other way of thinking about this setup is that attackers have to play on my playground. They have to come to me and exist in my environment.
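That co-location rule can be sketched as a simple placement check. The service names and trust-group labels below are hypothetical, purely to show the shape of the constraint:

```python
# Sketch of the co-location rule: two containers may share a VM only if
# they belong to the same mutual-trust group. Names are hypothetical.
TRUST_GROUP = {
    "web-frontend": "edge",
    "web-frontend-cache": "edge",
    "payments-api": "payments",
}

def may_colocate(service_a, service_b):
    """True only when both services are in the same trust group."""
    return TRUST_GROUP[service_a] == TRUST_GROUP[service_b]

assert may_colocate("web-frontend", "web-frontend-cache")
assert not may_colocate("web-frontend", "payments-api")
```

The scheduler applies this at placement time, so a cross-trust hop can never be a same-host hop: crossing a boundary forces the attacker onto the network, where they can be observed.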
22:41
So then my mission then is to make my environment as inhospitable as possible to anyone who's going to come and try to take my crypto. And if I do my job well, I can make that actually quite frustrating. That was the wrong thing.
23:02
At the end, let's talk about that continuously deployed immutable bit. We deploy, on average, we've published some blog posts about this so you can actually look at the data, something like 20 times a day. We deploy a lot. Every time we deploy, we are rebuilding that service from the ground up with no overlap.
23:23
New VMs, new containers, new security groups, new ASGs, the whole nine yards. There's not even any network connectivity between the two services. If we're deploying 20 times a day with that kind of rebuild structure built in, that means, I think, our average lifetime is something like 1.5 hours for a service running in prod.
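That lifetime figure is easy to sanity-check with back-of-the-envelope math, assuming for the sketch that deploys are spread evenly across the day:

```python
# If every deploy fully rebuilds the service, a running instance lives
# roughly one deploy interval. With ~20 deploys a day spread evenly:
deploys_per_day = 20
mean_lifetime_hours = 24 / deploys_per_day  # 1.2 hours
# Same ballpark as the ~1.5 hours quoted; real deploys cluster in
# working hours, which stretches the average a bit.
```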
23:49
That's a fairly frustrating environment for an attacker to live in. And really what it does, and this is why I particularly like it, is it makes the attacker re-exploit every single time.
24:00
Exploits are inherently unstable, especially remote exploits. You're trying to land on some random AWS virtual machine in the cloud. You're maybe not sure what actual operating system is underneath there. You're going to crash stuff if you're re-exploiting at that frequency. And when you crash stuff, even if the rest of your exploit was completely stealth, I'm going to know because it's going to crash, and we track that.
24:23
So then I can come back and figure out what crashed, what was going on, and respond. This goes back to: the attacker has to live in my playground, and that means I get to set the rules.
24:41
Let's talk about AppSec for a second. We do a bunch here; this is, I think, one of the focuses in our program. Again, walking back to this fun graph. So right after server breach, ignoring unknown, because unknown is unknown, we have application vulnerability.
25:04
So we spend a lot of effort on making sure that the systems and services we deploy are safe, are defensible, are tested, are documented, are threat modeled. And that fundamentally we understand what we are running, not just so that we can defend it,
25:23
but so that we can act to prevent anything from happening in the organization. So one of the things, I'll walk down this overall thing. The first one I'll start with is Salus. So this is on track to open source, I would guess before the end of the year.
25:42
So Salus is our static analysis framework. So what we did early on was looking around, we had the realization of course that, hey, you know what, setting aside the server breach thing, attackers are going to come in through the app, by and large. So how do we make sure that as we're shipping,
26:02
and we're shipping very fast, 20 times a day, as we're shipping these updates, how do we make sure it's safe? So one of the answers to that is through automated analysis of code before it hits prod in a way that gives engineers immediate feedback
26:21
as to why, assuming we flagged something, why we flagged it, what's wrong with it, what they can do to fix it, and hopefully not do it again, and in a way that aggregates those stats over our entire base of developers so that we can look for hotspots, we can look for issues, we can look for teams that need a little bit more engagement.
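The flag, explain, and aggregate loop described here might be sketched roughly like this. The record shape, analyzer names, and rule IDs are illustrative, not the tool's real output format:

```python
from collections import Counter

# Illustrative normalized-finding shape: every analyzer's raw output is
# mapped into one common record type so it can be commented on the PR
# and aggregated across teams. Not a real schema.
def normalize(analyzer, raw):
    return {
        "analyzer": analyzer,
        "rule": raw["rule"],
        "file": raw["file"],
        "line": raw["line"],
        "message": raw["message"],
        "blocking": True,  # block the merge until resolved
    }

findings = [
    normalize("brakeman", {"rule": "SQLInjection", "file": "app.rb",
                           "line": 15, "message": "possible SQL injection"}),
    normalize("bandit", {"rule": "B602", "file": "deploy.py",
                         "line": 7, "message": "subprocess with shell=True"}),
]

# Aggregate per rule so hotspots stand out across the whole codebase.
hotspots = Counter(f["rule"] for f in findings)
print(hotspots.most_common(1))
```

Because every finding lands in the same shape regardless of which analyzer produced it, the downstream steps, PR comments, merge blocking, and fleet-wide statistics, only ever need to understand one format.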
26:42
So that's Salus. Now one method we could have done here is, let's build a static analyzer. But that doesn't make sense to me because there are a lot of great static analyzers out there already. What's missing is the ability to point them at what they're best at and normalize their results in a way that we can then use downstream
27:04
to say, hey, the problem exists here, this is what the problem is. So that's what Salus is. Salus is essentially a framework for interacting with static analysis tools that can pick and route based on the language it detects for a given project,
27:21
pick the analyzers that are relevant for that language in that project, take the results back in whatever format we get from the random analysis framework we chose to use for that language or that problem, put it into a common format across the board, and then use that to interact with the original pull request and say, hey, line 15, this tripped this rule from this analyzer,
27:44
hey, developer, please resolve this before you're allowed to deploy. And then Salus will say, and by the way, you can't merge this change until this is resolved. No humans, no humans in the loop, no human interaction, but our development teams get security scanning, they get instant feedback, they get feedback in line
28:02
in a way they would have gotten it in a code review, and we protect Coinbase itself from whatever that change may have been. It's a really, really cool little tool. I'll be really happy when we get it out in the world. The second thing we focused on, again, was this idea of the CI/CD pipeline.
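Circling back to SALIS for a moment, the routing-and-normalization flow just described can be sketched roughly like this. This is a hypothetical sketch, not Coinbase's actual code; the analyzer names, the raw result fields, and the `Finding` shape are all illustrative:

```python
from dataclasses import dataclass

# Hypothetical registry mapping a detected language to the analyzers
# that are best suited for it.
ANALYZERS = {
    "ruby": ["brakeman", "bundler-audit"],
    "javascript": ["eslint-plugin-security"],
    "go": ["gosec"],
}

@dataclass
class Finding:
    """The common format every tool's output gets normalized into."""
    file: str
    line: int
    rule: str
    analyzer: str
    message: str

def route(language):
    """Pick the analyzers relevant for a project's language."""
    return ANALYZERS.get(language, [])

def normalize(analyzer, raw):
    """Map one raw result (whose shape varies per tool) into a Finding."""
    return Finding(
        file=raw.get("path") or raw.get("file", "?"),
        line=int(raw.get("line", 0)),
        rule=raw.get("check_name") or raw.get("rule_id", "unknown"),
        analyzer=analyzer,
        message=raw.get("message", ""),
    )

# A normalized finding can then be reported back on the pull request.
f = normalize("brakeman", {"path": "app/models/user.rb", "line": 15,
                           "check_name": "SQL", "message": "Possible SQL injection"})
print(f"{f.file}:{f.line} tripped {f.rule} from {f.analyzer}: {f.message}")
```

The point is not the specific tools but the split: route by language, normalize into one schema, and everything downstream, from PR comments to merge blocking to aggregate stats, only has to understand `Finding`.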
28:24
And if we're integrating security into an organization moving as fast as Coinbase, we have to be as close to zero touch as we can possibly be. So this mirrors a little bit of what I talked about for the infrastructure side of things. We want to integrate with development practices seamlessly.
28:44
We want to be there as part of the CI pipeline. We want developers to find it easy to write security unit tests. Part of following up on any incident is how to make sure this doesn't happen again. So we want to be there to help them say,
29:01
you know what, let's put in a security unit test in this CI pipeline. We want that deployment pipeline to be easy and straightforward and have developers not have to worry about managing their systems, because we can manage that for them. Consensus requirements on code merges, this is actually, I think, a pretty nifty thing that we do.
29:22
So to push any code change out at Coinbase requires sign off from multiple engineers. So you want to push a change to Coinbase, you, yourself as the submitter, are going to have to go get, call it, three other people to say, you know what, this is a good idea.
29:42
Those three people can't have added code to the actual PR. They have to be totally independent people. And when you do that plus one, it's just a plus one in line with the pull request, it's backed by two-factor auth. So you get that nice push to your phone that says,
30:01
hey, you just said we should deploy for service X, commit hash Y, was that your change? Did you mean to do that? And assuming the answer is yes, then great. Everything merges, it all gets deployed. Assuming the answer is no, that kicks off a sort of a minus one process, where we can actually say, you know what,
30:22
you actually need more eyes on this. This is super terrible. You should require four or five or six reviewers, depending on how many minuses the pull request is getting. This lets us ensure, and this is another sort of core concept we shoot for, that no one individual, no single person,
30:43
can do a thing that impacts Coinbase or the PII, the fiat, or the cryptocurrency that we have stored. It should always require a conspiracy, because conspiracies are fragile, and they're scary, and they're really high risk for the conspirator.
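A minimal sketch of that escalating-consensus rule, assuming the base of three independent approvers described above. The function names, and the detail of exactly one extra reviewer per minus, are my own framing:

```python
def required_approvals(minus_ones, base=3):
    """Every -1 on a pull request raises the number of independent
    approvals needed before the change may merge."""
    return base + minus_ones

def can_merge(approvers, authors, minus_ones=0):
    """Approvals only count from people who added no code to the PR,
    and there must be enough of them given the current -1 count."""
    independent = {a for a in approvers if a not in authors}
    return len(independent) >= required_approvals(minus_ones)

# Three independent +1s, no -1s: the merge is allowed.
print(can_merge({"ana", "bo", "cy"}, authors={"dev"}))        # True
# One approver also wrote code in the PR, so their +1 doesn't count.
print(can_merge({"ana", "bo", "dev"}, authors={"dev"}))       # False
# A -1 escalates the requirement to four independent approvers.
print(can_merge({"ana", "bo", "cy"}, authors={"dev"}, minus_ones=1))  # False
```

On top of this, as described above, each approval is confirmed out of band with a two-factor push, so a stolen review account alone can't supply a counting +1.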
31:02
Again, the attacker, in this case, has to play on our particular playground. We want to make that playground as hard as possible for that attacker. The Concierge AppSec program. This is one of the reasons the AppSec team is probably my biggest team at Coinbase: we want AppSec not to be the guys
31:23
that sort of parachute in at the end and say, no, everything's terrible. Go home. This is crap. You can't deploy it. That's not productive for anybody. Instead, we want our AppSec team to be in the standups of these developer teams, to be joined at the hip,
31:40
to have all the same incentives, to get what they're trying to do, so that as that team is writing code, is making changes, that AppSec team member knows exactly what's going on across the board and can render that independent third-party judgment: oh, hey, that's actually pretty risky. You may not know, and let's engage, and let's help out, and let's make this change. Again, the app-level vulnerability,
32:04
and for an exchange especially, is one of the most worrying things that we deal with. We want to make sure that we're providing that expertise, and we're providing it in a way that developers want to use, that makes them want to get engaged. The last thing I'll highlight here
32:21
that I think is quite cool, and that we're getting much, much more into recently, is blockchain monitoring. If you imagine a world where there's, well, you don't have to imagine it, it's today. What are there? 1,800 or so assets, somewhere in that vein,
32:40
in that vicinity? 1,800 or so crypto assets in the world today. As we think about looking at assets to add, you guys saw we just added ETC. We've talked very publicly about the desire to add more. One of the things we want to be sure of is,
33:01
as we add those assets, that we're proactively looking at those blockchains to say, is this asset still safe? Do we know what's going on with this asset? Is there a 51% attack happening? Is there a contract invocation of a function on a token that should never be invoked, so that we can take immediate protective action
33:20
on our systems so that we're protecting, to the extent we can with whatever the vulnerability is, our assets or our customers' assets more appropriately. This is also an area that we're pushing pretty hard in and that I hope we'll open source some tools in as we get that stuff built and rolled out.
33:41
Because like I said, at the end of the day, trust should be based on transparency. You guys shouldn't have to wonder how I'm protecting your crypto. You should know. Let's talk about detection and response. Because at the end of the day, you can't win them all.
34:02
You need to have the ability to detect when things go wrong, and you need to have the ability to respond safely. So that goes to the heart of this first one. My favorite name of all of our projects, Dexter. He's our friendly forensics assistant. A few Dexter fans in the audience I can see, outstanding.
34:21
The rest of you can Google it later, and you'll laugh when you do. What's Dexter? Why did we build it? To go back to what I said earlier, right? No one person should have the ability to impact or steal Coinbase's crypto, PII, fiat, et cetera.
34:41
But when we think about incident response, that's one of those areas where the response is frequently, oh, we'll just throw all that shit out the window, and we'll just go ahead and respond as an individual. That, to us, is extremely dangerous and worrying. We don't want that. But at the same time, we want to be able to respond to incidents quickly and very, very agilely.
35:02
So how do we square that circle? What Dexter does is two core things. Number one, it provides a consensus-based approach to executing forensic commands. So something happens, some instance is doing a weird thing in production,
35:21
and maybe it's an instance that's actually dealing with crypto. I don't want an incident responder hopping on there. But the incident responder can spin up Dexter to start an investigation and say, this is weird, give me a process listing, give me basic live response, processes, lsof, maybe some stuff from /proc,
35:41
depending on what the actual problem was, and that investigation spins up, and then it sits there and waits for a plus one from another incident response engineer. Depending on the command you're executing via Dexter, the number of required reviewers changes. So maybe a process listing, lsof, not a big deal, it's just a plus one.
36:01
Maybe he then says, you know what, I actually need a memory dump because I don't know what's going on here, and I need to really get deep into this. That's gonna require a lot more consensus. Maybe it requires a plus two, plus three, maybe it requires sign off from me. Maybe it requires sign off from legal. But it lets us very flexibly define
36:21
who can execute sensitive commands, and how, even in a fast-moving, highly critical incident response situation. We never want to be back to that place where one person can do a thing. There are a bunch of other cool things about how it's built, too.
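That tiered consensus can be sketched as a small policy table. The command names and threshold numbers below are illustrative, not Dexter's real configuration:

```python
# Hypothetical approval thresholds per class of forensic command.
THRESHOLDS = {
    "process_listing": 1,  # ps, lsof: low impact, a single +1 suffices
    "lsof": 1,
    "memory_dump": 3,      # deep access: needs much broader consensus
}
DEFAULT_THRESHOLD = 2      # unknown commands land in a middle tier

def approved(command, plus_ones):
    """A queued forensic command executes only once enough independent
    responders have signed off on it."""
    return len(set(plus_ones)) >= THRESHOLDS.get(command, DEFAULT_THRESHOLD)

print(approved("lsof", {"responder_b"}))                        # True
print(approved("memory_dump", {"responder_b", "responder_c"}))  # False
```

In the real system the plus-ones are backed by GPG signatures rather than a trusted software counter, so a threshold check like this would sit on top of signature verification.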
36:42
In our AWS environment, it's highly segmented. So hosts normally can't talk to each other. There's no place on our network where you can go talk to everybody to do incident response. So it's S3-based, so we're pushing into a queue. The hosts are monitoring, they're pulling down and saying, is this for me, is this for me, yes, no, maybe. It's all backed by GPG signatures,
37:00
so we're actually basing this stuff on crypto, not on a software counter that says, oh yeah, you plus-one'd me enough, I'm good to go. The guy who wrote this is open-sourcing it at DerbyCon this year. He just got his talk accepted, I don't know, two, three weeks ago. If you're at DerbyCon, you should really go to the talk
37:22
because it'll be awesome. So the second thing I'll talk about here is, again, another derivation of that container-based stuff. So I do another talk, which you can go find from ShakaCon, I think, was the last place I did it.
37:40
It's incident response in Dockerized and containerized environments, where I talk through sort of what's unique and special about containerization when it comes to incident response and detection. It's a really cool and interesting environment. But the thing I'll highlight here is that because we've Dockerized everything, everything's in a Docker container,
38:01
everything is running isolated, that means we can very easily pull specific behavioral information about a given service. So the reason that this stuff doesn't work well across the board normally is because there are just so many signals to deal with. There's system level signals, if you try to do this on laptops, God help you,
38:23
because that's even worse. But even on server environment, it's really, really hard. In our environment, we can constrain it down to a Docker container and say, how is this Docker container behaving? How is this single process acting? And is it acting different than its peers? And then based on that, we can do things
38:40
like using core basic tools, auditd, eBPF, look at that and say, how is this acting at a system level different than its peers? Is this behaving in a way that's indicative of an attacker being in this environment? And if so, we go back to Dexter and we can actually respond very, very flexibly to that, figure out what's going on, and figure out what we need to do next.
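The peer-comparison idea, asking whether one container behaves differently from the other replicas of the same service, can be sketched as a simple distance over per-container event counts. In practice the counts would come from auditd or eBPF tracing; the syscall names and the data here are illustrative:

```python
def profile_distance(container, peers):
    """Compare one container's event counts (e.g. syscalls observed via
    auditd/eBPF) against the average profile of its peer replicas."""
    keys = set(container) | {k for p in peers for k in p}
    score = 0.0
    for k in keys:
        avg = sum(p.get(k, 0) for p in peers) / len(peers)
        score += abs(container.get(k, 0) - avg)
    return score

peers = [{"read": 100, "write": 40}, {"read": 110, "write": 38}]
normal = {"read": 105, "write": 39}
# Spawning processes its peers never spawn is exactly the kind of deviation
# that suggests an attacker on the box.
weird = {"read": 105, "write": 39, "execve": 25}
print(profile_distance(normal, peers) < profile_distance(weird, peers))  # True
```

This only works because the container constrains the signal to a single process: the same comparison across a whole server, let alone a laptop, drowns in unrelated activity.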
39:05
The last thing I'll talk about that we do that's cool and special is we log everything. When I say everything, I mean everything. We do this for a couple of reasons. We want to go back to that deployment cycle. If we're deploying 20 times a day,
39:20
average lifetime is like one and a half hours. What happens if there was an incident and we discover it eight hours later? The container's gone. What are we going to do with it? So the answer to that is we log and, most importantly, enrich everything immediately. So when that container's gone, I can still walk back
39:42
and pull all the logs that were issued by that container, tagged with that container's name and version number, tagged with the processes, the result process names, the whole nine yards. Most of the stuff I would get from a live response, I can regenerate from logs that we maintain of these instances, and I can get it quickly, I can get it searchably.
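A minimal sketch of that enrich-at-ingest idea: tag every log line with the container's identity and metadata before the short-lived container disappears. The field names here are illustrative, not Coinbase's actual schema:

```python
import json
import time

def enrich(event, container):
    """Attach container identity and any metadata that might matter later,
    so the log record outlives the container that emitted it."""
    event.update({
        "container_name": container["name"],
        "container_version": container["version"],
        "process": container.get("process"),
        "uptime_s": container.get("uptime_s"),
        "ingested_at": int(time.time()),  # when we saw it, not when it happened
    })
    return json.dumps(event, sort_keys=True)

line = enrich({"msg": "connection refused"},
              {"name": "billing", "version": "v1.4.2",
               "process": "ruby", "uptime_s": 5400})
print(line)
```

Because every record is already tagged and structured, an investigation eight hours later can reconstruct most of a live response just by searching the log store.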
40:03
I think the most important and different thing we do here is metadata, metadata, and more metadata, just all the metadata. Anything that might possibly be useful in the future, we'll tack onto a log. Uptime? We tack it onto logs. So that as we roll into any future investigation,
40:22
there's never any, to the extent we can, any ambiguity. There's never any question that we have the data we need to make a determination as to what happened. So we've got about, I don't know, seven or so minutes left before I open it up for questions.
40:42
Before we do that, I want to talk a little bit about what we can do better as a community. I think there's three specific points I want to drive home. One is today, there we go. There's not enough threat intelligence sharing in the crypto community. There just isn't.
41:01
We're fragmented ideologically, we're fragmented commercially, and we do not talk to each other enough, even though we share the same opponents. The same attacker coming after me is coming after Gemini, is coming after anybody else here who's running an exchange or a currency. We share opponents.
41:20
Why wouldn't we share threat intelligence about those opponents to make us all stronger? So the one concrete thing here, I'll flag really quick, is we recently, in working with some other exchanges and some other traditional financial folks, started an FSI SAC working group.
41:40
If you don't know what FS-ISAC is, it's the Financial Services Information Sharing and Analysis Center. It's been sort of a cornerstone of how traditional finance has done this for decades. It's a nonpartisan way, not owned by any one company, to interact and share with peer organizations. So we started this working group,
42:00
and we're reaching out now to anyone and everybody who is involved in the crypto space to say, come join, help us establish this sort of basic level of security trust and cooperation so that we can all get better. Number two, and this is actually also related to FSI SAC,
42:25
we're not really good at standards in the community. There are a few out there, right? The CCSS is one that's been out for a while and gotten some traction. But when you go back and you look at
42:40
sort of how traditional finance evolved over time, I'll give an analogy that I like. I'll see if you guys like it too. If you were in the 1800s, 1850s, right, walking around looking for a bank to deposit your money, you're walking down, walking to the bank, this is nice, this is a place, this is great. One of the questions you're probably going to ask is, tell me about your vault, right?
43:02
How are you going to secure my money? Because bank robbery was much more common, there are a lot more risks, everything was more fragmented, and as an informed consumer, you needed to ask that. Today, let me ask, let me just poll the room. Has anyone ever walked into a bank and said, before I deposit my money, I need to see your vault? Does anyone ever ask that?
43:21
Now, you know why? Because that industry grew up, it built standards, it built, and through those standards, it built trust with people, right? It's a more complicated story, there's insurance involved, there's regulation, right? I'm simplifying it. But through the use of standards, we can build trust, trust with customers,
43:41
trust with regulators, right? So that the question shifts from where it is today for crypto, I'm going to deposit my crypto, tell me about your security, to where it needs to be, tell me about your services, tell me about how you deliver better service than whoever else, tell me about why I should put my money here because I'm going to get these other benefits.
44:03
So then the ask is, either under the auspices of the FS-ISAC working group, or via an organization like the CCSS, or whatever, we as an industry need to invest in standards. We need to invest in not just creating standards,
44:20
but coming together around standards, to driving ourselves as a community, holding ourselves to that standard, so that we can be worthy of that trust. Last, and this is obviously a self-interested point,
44:41
really not enough folks in the security community are looking at crypto and saying, you know what, that's a great place to do security. We see some of it, it's starting to change, certainly over the past year I've seen a change, but I don't think we as a community are doing enough to talk to the rest of the security community,
45:02
not just about how cool crypto is, because it is really cool, but about what an interesting problem set this is. How interesting is it to secure a thing that's never been secured before, to write the book on protecting an asset in this adversarial environment?
45:23
People should be knocking down the doors of cryptocurrency companies to say, wow, that's a great challenge. Why wouldn't I want to engage there? And I think they're not, because we as a community, again, we're not doing the outreach. We're not talking to the rest of the community about the interesting security implications of this space.
45:43
We're talking about how cool crypto is. This is also the place to say, if you want to work for Coinbase, come talk to me. But you know. And with that, what are your questions?
46:16
I'll let you pick who gets first. You are making that comparison
46:31
because anyone who has access to a private key can immediately gain access to the funds. So the question is how does that differ from possession of just a password to a bank account?
46:45
Because there are other controls there, right? So a bank would have the ability to do fraud detection on transactions. The bank would have the ability to do geo-determination. You're in the US and there's a login from Ukraine. Maybe we should do something about that, right?
47:01
There are other steps you can take in the middle there, as opposed to possession of a private key, possession of a bearer bond. That's it. Game over. Done. But it's like a debit card, for example. Anyone who extracts the money can just run away with it. Sure, but depending on how they do it and where they do it, right? If you have a debit card, that debit card is connected to a bank account
47:20
and there are controls on the interaction between that debit card and that bank account. The debit card doesn't necessarily mean I get all the money in the bank account, right? Because there's a control environment between the two. There are daily limits. There's the ability of the bank to say this is an odd transaction, I'm going to freeze this debit card, right? The core of the comparison, I think, is
47:40
once you have a bearer bond, once you have a private key, there is nothing anyone else can do to limit or protect from the loss of that value. Thank you. Hi. I have a couple of questions about custody.
48:01
To have a compliant custody solution, number one, is it necessary to provision identity over the account so that you know exactly who they are? Secondly, for custody, is centralization essential to secure custody? And thirdly, when it comes to the regulatory regimes
48:22
that make, I guess, create the most work for you, is it going to be the OCC or is it going to be the NYDFS? Where does most of your, as far as complying with regulations, who's the toughest regulator? The first question is really a question about
48:42
BSA and AML practices. And in general, yes. For U.S. consumers, we have to follow the BSA requirements. We have to do KYC on customers before we're dealing with their assets. The second question, on centralization and security,
49:06
it's a hard question to answer in general in the abstract because security can be defined relatively. So certainly, an individual can invest in a level of security
49:21
that's good for their needs and their threat model without ever having to put assets anywhere. And a lot of folks do, especially in the crypto land. I think the other side of that coin is people choose to use a centralized service because we can invest in centralized security or they don't want to spend the time and effort
49:43
to build what to them is an acceptable sort of security arrangement for their crypto. I mean, we're a centralized service. And then the third one about regulators.
50:02
You know, really, I see the space as the regulation is evolving and working with regulators, especially coming from a company like Coinbase who has always been very sort of regulation forward. What we always want to do is work with them and educate them around how this space should be regulated
50:23
in our opinions. I think calling out any one regulator as hard, besides being unwise, just generally, also doesn't make sense to me because all the regulators I've worked with
50:40
have come to the table with an open mind about crypto and wanting not just to make us go through a check the box exercise, but actually wanting to go back and forth about a new asset and how the old framework should apply to this new kind of asset. Well, I just read the NYDFS,
51:01
particularly the cybersecurity rule and the new AML transaction monitoring and filtering rule. I think to the extent that it impacts everyone else, obviously it impacts us, because we're operating like a financial institution.
51:26
You talked about, yeah, I think it works, you talked about this model of security and detection, like you mentioned, containerized. Versus what? Segmented networks.
51:41
So the difference is in the layer of granularity. So containerized workloads, the security boundary is at the process, the workload layer, as opposed to network segmentation where you're segmenting at the server or at the group of hosts layer.
52:01
The level of visibility is different, as well as the actions you can take. To me, I personally prefer having both, but if I had to pick one, I would pick host, because you get a much richer data set out of a host-based detection
52:21
than you get out of a network-based detection. No, we're just pretty much all just, yes.
52:43
Yes. So user trade data is actually one of the most important things that we look at. We think the last thing we want to do is expose user trading activities, both in terms of individuals, we take individual privacy extremely seriously in that sense,
53:02
as well as folks that are active traders with specific strategies they're executing. The last thing we want to do is be in a position where we're leaking data on those strategies in a way that can be taken advantage of by others. So thinking about that environment a little bit,
53:25
control of any sensitive data like that is really predicated on making sure you know who is accessing it, for what reason, when, and why, and having the ability to flag deviations from any sort of normal patterns there, which is at a high level the approach we take. Very restricted access,
53:41
lots of monitoring on interactions with those kinds of data sources, and sort of active oversight by the security team on that kind of activity. High level. Sorry, I'll just start calling on people now since we no longer have mics. You, and then the gentleman in front of you.
54:12
Yeah, so we do use dedicated instances in areas where we think the workload sensitivity merits it. The second piece there,
54:22
and I've had discussions with people about this in both directions, but because we move so rapidly in the environment, it's sort of a side effect of our deployment strategy, it's actually fairly hard to end up on a host with one of our systems reliably.
54:42
So the setup for that attack is pretty significant. The gentleman in the white shirt, and then black. Someone talked about custody and regulation earlier, and one thing I wanted to ask you is that many centralized exchanges are in the process
55:00
or have launched 0x-like exchanges, like with on-chain custody. What's the house you... Yeah, so we acquired a company called Paradex, what was it, three months ago, that's operating as a 0x-based relay. Right, so, yeah.
55:20
And from a regulatory standpoint, do you have any remark on that, or not really? I'll say that it's coming back outside of the U.S. first. Yep, gentleman here.
55:43
Yeah, so that's an interesting question. We actually offered a multisig wallet for a long time. Had an extremely low uptake rate from customers.
56:00
So low that it was... When we did the evaluation on should we keep this feature or not, the code we could simplify by taking it out was a better trade-off than keeping a feature that almost nobody used.
56:20
So I think if the customer demand is there, sure. But we just don't see it. Thanks for coming and giving the talk. What do you think... What needs to be done for Coinbase to be able to accept Monero and other similar strong privacy coins that are out there?
56:43
Fair enough. I actually am surprised that question took that long. Look, I think we've seen some really promising moves from regulators in the privacy coin space. Gemini, in particular, has made great progress with Zcash
57:01
and getting regulators comfortable with privacy coins. I think for us, the primary thing that we want is to make sure that customers on our platforms are getting access to the assets that they want and that they want to use. So I think that we look at this primarily in terms of
57:20
A, in terms of our digital asset framework that we've published and said, hey, here's how we look at digital assets, but also in terms of what's actually going to be most useful for our customers and what do they want the most. And then we figure out and take a look at how can we do that. Yes? Yeah, just a follow-up comment on that.
57:40
The reason that bearer bonds were valuable was that they're fungible. Also, I had a question on your earlier slide, which makes it look like the hot wallets, you guys named that cluster Knox. Is that actually true? It seems a bit obvious. Yeah. Yeah, okay.
58:03
Is that a comment on our poor naming schema? We like easy, we like simple names, short names. Dexter, Knox, in the corner there.
58:22
So one of the things you talked about was the various things that you've built internally and are now in the process of open sourcing. So sort of in the build versus buy open source world. It seems like the period immediately after open sourcing an internal product is sort of the most dangerous period.
58:41
You've yet to get people fixing things externally, but the code's out there now. So what are you doing to defend yourselves from the potential zero days in your own software kind of a scenario? Your own internal vulnerability scanning
59:00
for those types of systems. Yeah, that's one of those very broad questions we could be here a week answering, but we don't treat internal software any differently than external software in terms of our overall AppSec program. So CodeFlow, which is an example I'll use, is going through the same scanning tools, the same AppSec process, the same testing,
59:21
the same everything as a public facing service that's part of our overall Coinbase services. Probably the second part of your question is interesting, is like the risk trade off between exposing source code and not exposing source code. I think it's a really interesting
59:43
and sort of nuanced trade off to make between the sort of principle that trust should be transparent and risks we have around releasing code that might have issues in it. We, in particular,
01:00:00
for open source components like Odin, which we did several months ago, spend a lot of time and effort looking at that before the release. We tend to release small pieces of code, as opposed to here's a 100,000-line behemoth we want to release; here's a 5,000-line tool, right, because
01:00:20
we can vet that much more effectively in terms of AppSec. Yeah. So it can connect to S3, and then the endpoints poll S3 for information, right? So it's not,
01:00:41
it can't connect to anything, it can just talk to an S3 bucket. I think we might have time for one more question if there's another one. There's a bit more time if you have time yourself, and we have nothing scheduled for 11 o'clock, so if you want to ask. I'm not going to stay up here for an hour, just to be clear. Yeah, yeah, totally, but if you want to do a couple more that's okay, there seems to be a lot, but if you don't that's fine too. Okay, I'll do a couple more and then get
01:01:03
out of here. You mentioned intelligence sharing and the FS-ISAC working group. I'm curious about how valuable that data is for you, as probably your tech stack is completely different from a Wells Fargo. Sure, and that's why I think this is, so there's two pieces, right? One is the financial sector sharing and one is
01:01:23
sharing within crypto. So two answers to that question. One is yes, my tech stack is probably totally different. Attacker behavior is probably not that different, right? The kinds of things that they target, how they move internally, how they act, things like that. I don't really care about IPs and hashes, right? That's good, but it's not what I really want. What I really want are attacker
01:01:45
behaviors, because that then feeds into my roadmap, right? Attackers want to do this. How can I make that hard, frustrating, annoying, prone to failure? Yep, there's