
Little Services, Big Risks


Formal Metadata

Title
Little Services, Big Risks
Subtitle
Extending capability-based security models to achieve micro-segmentation for grids of services
Title of Series
Number of Parts
50
Author
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
As we isolate functionality into services distributed across networks, we increasingly strain the concept of trust boundaries. Hosts are no longer simply trusted or untrusted, and each host comes with a new foothold for attackers. This risk is called the Confused Deputy Problem, and it's part of a growing number of attacks, including the one on Equifax. We need to stop assuming trusted services -- especially huge ones like those in traditional web stacks -- remain trustworthy. Capability-based models offer hope, but we need a few more patterns to use them with modern microservices.

The Confused Deputy Problem has a history going back to early Unix. Attacks on passwd are a classic case. The architecture of passwd is simple: allow unprivileged users to invoke a setuid executable that performs a high-privilege operation under defined preconditions. This means passwd has been deputized to alter the passwords for all users. However, if you can confuse passwd into misusing its ambient authority, you can gain privileges. As a rule, a compromised service can do anything the service itself is allowed to do.

One of the answers has been Mandatory Access Control (MAC) like SELinux. Firewalls are analogous for networked services. These systems operate by confining which principals can access which files or services. This helps somewhat; passwd has no business updating /boot (for example). However, in cases like Equifax, principals were only talking to the expected systems; the attacker used the web server to exfiltrate data from the database backing the web application, even accessing tables the web application would be expected to access. Just as with passwd, Equifax's problem was trusting an application to both perform its functionality and enforce authorization.
Capability-based models are one answer: rather than trusting a complex service to enforce authorization against deeper systems, we provide the client with proof that it's allowed to access its own records and have the service forward this proof to deeper systems in order to read or manipulate corresponding records. It's now harder to turn a compromised service into a system-wide attack.

Systems using capabilities are in widespread use: Kerberos, Google's Firebase, and concert tickets all use the concept of a token providing proof-of-authorization in a way decoupled from principals. However, things get weird when interaction is no longer directly between the primary principal (say, a web client) and nested services. Because possessing a capability is sufficient to exercise it, how do we handle cases where deeper services should have access to fewer (or more, or different) objects than the ones in front of them?

In this presentation, we'll look at state-of-the-art methods for delegating capabilities, including "sealing" (reducing the authorization of shallow services) and original techniques to forward only a subset of the caller's capabilities (reducing the authorization of deep services), all designed for use with distributed services across networks.

Note: The capabilities here are unrelated to Linux kernel capabilities. However, Linux does have some capabilities like the ones mentioned, including the idea of handing off a file descriptor from one application to another. This allows one app to open the FD and hand it to another that cannot (but can use the FD once in possession). Socket activating a web server on port 80 despite starting the web server as an unprivileged user is one example of handing off a capability via FD.
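The file-descriptor hand-off mentioned in the note above can be sketched in a few lines of Python (3.9+, Unix only). This is an illustrative sketch, not part of the talk's own code; the temporary file merely stands in for something only a privileged process could open:

```python
import os
import socket
import tempfile

# Sketch of passing a capability as a file descriptor. The sender opens a
# file and hands the open descriptor across a Unix socket; the receiver can
# use it even if it could never have opened the file itself -- possession
# alone carries the authority.
sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

fd, path = tempfile.mkstemp()              # stands in for a privileged open()
os.write(fd, b"secret")

socket.send_fds(sender, [b"fd"], [fd])     # hand off the capability

msg, fds, flags, addr = socket.recv_fds(receiver, 16, 1)
os.lseek(fds[0], 0, os.SEEK_SET)
data = os.read(fds[0], 6)                  # receiver exercises the capability
print(data)                                # b'secret'
```

This is the same mechanism systemd uses for socket activation: the privileged side binds port 80 and passes the listening descriptor to an unprivileged service.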
Transcript: English (auto-generated)
So, my talk today is about the way we think about services, especially as we stack them on back ends, have different services access each other, and the different methods of compromise that these sorts of services face.
But to start off, we're going to have to look at how we define security; the different types of attacks that we've seen in the wild against these sorts of infrastructures, especially as things have moved more towards stacked services and microservices; the sorts of solutions that people have today; and other approaches to security that exist in both practice and in academic work. We'll also look at how we can achieve a more modern approach to least-privilege architecture in terms of how services communicate with each other, one that builds on all this
progress and all this work that's occurred so far. I'm David Strauss. I am one of the cofounders of Pantheon, which uses systemd extensively to run a container grid as well as a bunch of other services. I've also done work with the systemd project, and I do work with the Drupal project on the security team there as well.
So the way that I like to define security is the CIA triad, and there are a lot of things that are pretty uncontroversial. One of the parts is confidentiality. The idea that someone can't see information that they shouldn't be privy to.
Then there's also integrity, which is the question of security and whether someone can actually alter data, tamper with systems, and actually damage the configuration or operations of the system. These two parts, pretty much everyone includes in security, but I think the important thing
to also add in is the concept of availability, which is that we deploy systems because we want them to actually do work. We deploy them out on the web because we want them to be accessible to the public. And this means that we can't do what is the trivially most secure thing, which is
just to stuff a server in the closet, disconnect it from the Internet, and that's a great way to keep it from getting hacked, but it's also a really bad way to get anything done. So some of these models will also focus on how they can be deployed in a distributed fashion that doesn't have things like single points of failure and doesn't require ongoing engineering
and implementation and maintenance effort to be able to continue to use these systems. A lot of this comes down to the attack versus defense. The ways that attackers are confronting systems now have become much more based on not attacking directly the system they want to compromise, but on achieving some sort of foothold in
the target infrastructure, often for an extended period of time. Or possibly they might exploit one system and use that as a foot in the door to exploit a different system. Some of the most damaging compromises in the past decade have occurred through methods
like this. And what this means is that we can't think about defending systems purely on the basis of a system defending its own resources. We also have to think about systems in the sense of what sort of foothold they provide for compromising other systems. So just a few kind of breaches over the past five to ten years that have involved
this sort of foothold architecture. In the case of Sony Pictures, for example, attackers used phishing to convince a bunch of users to give up their credentials. They sat on the system for approximately a year before actually doing the real payload
damage and extracting data and threatening the company in various ways. So it's often about these initial steps that allow the compromise to fester. The Panama Papers breach had an instance of Drupal that was unpatched for three years.
And then they used that as a step forward to be able to compromise all the documents behind that infrastructure. And then one of the most recent ones was Equifax, where they used an Apache Struts vulnerability in order to get remote execution capability on the web servers and then be able
to use that to exfiltrate all the data from the database. But there's one thing that all three of these have in common, which is that the thing that they attacked is not actually what the payload was on. It was just a step forward toward that.
So I have a fundamental problem with the way that a lot of these breaches have been treated in the media and by security professionals, because in many cases, what comes up is they should have kept up with patches. They should have been managing their systems better, and that's true. I'm not saying that that's not actually the case.
But what I think is more important when we think about security for these sorts of systems is that one vulnerability shouldn't allow such a large breach. We should be building systems where even if they remain unpatched, they're able to be resilient against attack becoming severe.
Because patching can only go so far with these systems. When you have something like a zero day attack and a vulnerability gets discovered or announced well before any fixes are available, you have to rely on things like defense in depth and sophisticated methods to contain attacks that don't rely on every
system being perfect. So I think they should have kept up with the patches, but they also should have designed all these systems where it wouldn't have been so catastrophic. And we're also facing new threats against each of these sorts of controls. We have in the traditional case, we've had things like a trusted LAN and VPN where someone
has something like an internal office network, and then they make a system whitelisted for say IP access from their office, and then they require people to VPN in to be able to get access to those systems. This is where phishing and malware attacks come in. Basically by compromising the systems on trusted network, they're able to get a foothold
to be able to access the systems that were supposed to be protected behind it. The same thing occurs on data center networks as well. A data center network segmentation is a really common way to basically break up the networks between different systems. But if you can compromise something on the target segment of the network that's within that trusted firewall or behind that firewall, then you have that foothold, and that external
barrier is no longer sufficient to protect all the systems within it. And then we have a lot of issues with the way that systems like Equifax, for example, manage things like encryption and access credentials. Because encryption is almost unbreakable if it's implemented properly.
The problem with most encryption is that people can actually get the keys to it. They can decrypt the documents, and in many cases, the security with encryption is most weak by the way that the keys are managed for it.
And cases like Equifax, for example, some of their data was encrypted, but the web application had basically the keys and the necessary things to be able to get to some of the data. So these are traditional defenses. They're still useful. But as attackers have gotten better and better at achieving a foothold behind the defenses and then working from there, they are no longer the strong barriers that they
used to be, because attackers are getting very good at this. And I think that microservices and the way that we're handling things like container environments, stacked services, cloud deployments are actually making the stakes even worse
for these sorts of systems. And I'm saying this out of love for these things, because I want to kind of save the security of some of microservices and service-oriented architecture, but I see a lot of problems with it as well. One of the ways that I often see people deploying these services is where they'll
have an edge that's sort of a firewall, and they'll have something like a virtual private cloud or some sort of other network partition, and then all the services get thrown into there. In this case, what happens here is we have something like an edge proxy.
It's performing any kind of filtering, validation, et cetera, before forwarding things onto these services. But this has an enormous amount of attack surface. It has much more attack surface in my mind than an equivalent monolithic application, because each of these services may be running on a different framework, a different programming language. They may have different vulnerabilities.
For example, if one of these services behind here has an unserialization vulnerability that gives something like remote code execution, and I can get the edge to hand it to that service, then I only need one of these services behind this firewall to be vulnerable for me to have a launching off point for talking to the other. And in this sort of scenario, it's often the case that there's not a lot of authentication
or security once you're within that trusted boundary. And that means that one leaping off point can quickly become a complete compromise to the system. So I don't think that this approach is really where it's at. Even though it's the easiest way to basically convert a monolithic service into these sort
of service-oriented architectures and microservices. This is more the approach that we are using at Pantheon, and it is becoming increasingly popular in a lot of container orchestration systems. The name that seems to have been picked up for this is micro-segmentation. This is the idea that rather than trusting that you have one big boundary around everything,
you actually have a lot of flexible boundaries and tunnels between these systems. This is also something that can be enforced by more advanced virtual private cloud infrastructures as well as some of these sorts of network mesh underlying layers that can then create
a sort of virtualized and dynamic network that allows services to talk to each other only when they're allowed to. In the case of what we're doing at Pantheon, we're deploying certificates into every one of our containers and then whitelisting certain attributes of these certificates to basically
create a set where we get rid of the concept of there being a matrix where any service can talk to any service. And this means that we've somewhat reduced the amount of compromise that can occur to the system, because if you compromise, say, that rightmost service in here that only receives requests from these other things, and nothing else is actually permitted to make requests to it,
at least you don't have every single one of these be a launching off point for a complete compromise of the whole grid of services. And there's almost an orthogonal approach that I've seen. This is what my understanding of the approach from things like GraphQL is.
And it's the idea that the edge itself is sort of untrusted, which means that you have a fairly permeable boundary. You're not really relying on the edge to do much filtering. And then you can have each of the individual services basically defend themselves, manage all their own authorization, manage all their own validation of requests coming in.
The edge tends to forward it to each of those services. And the edge is basically unprivileged. And these two approaches have almost opposite benefits. And one of my goals here is to try and bring these things together and find solutions
that actually provide both types of segmentation without compromising the usability of the system. And oh, that was... So the empty box in there, I'll fix that before I export the deck. But that is like a neutral face.
So I tried to basically visualize what happens with each of these segmentation architectures in terms of what happens when something gets compromised. In this case, with the trusted edge, being able to get access to any system, the edge system or an internal system provides a great leaping off point for almost generalized
compromise of the system. Like half the stuff that I've seen on WikiLeaks is usually from this sort of approach, whether it's through microservices or just the act of basically putting something like an email server on the same network as the website. And then once they get one foothold, it's end of the story.
The sort of micro-segmentation approach that we have for the kind of Pantheon and the very container-oriented setups, it basically trusts the edge to defend a lot of the internal infrastructure and then de-privileges a lot of the deeper services.
So this means that if you compromise a deeper service, you're not actually able to do that as a launching off attack to everything else. But if you can attack the edge, then you actually get an enormous amount of access because the edge is kind of fundamentally trusted to have validated and approved requests
before they go back to deeper services. And the edge uses its own authority and reputation as a way for the deeper services to get the job done. And basically the deeper services will do anything that the edge asks for. And then going back to that kind of opposite approach where you have the more forwarded
credentials where the edge is de-privileged but the deeper services perform all of the heavy lifting. On one hand, compromising the edge or proxy at the boundary of the infrastructure is less of a problem. It still has a little sweat mark there because no compromise is completely without consequence.
And this one would allow you to access the credentials of whatever users are active at the time but not necessarily to compromise the infrastructure in general. But the downside of this case where basically you forward things like user session data to these backend services is that if an attacker gains control of any of these backend
services, you're basically forwarding user session data that provides full control of a user account or a full control of whatever that API token does for every one of those backend services. So if I can get remote code execution on any of the backend services that are getting the credentials forwarded, I can start harvesting those credentials and making use of them.
And one would think you might be able to combine these two things directly, but they actually have completely opposite philosophies. Which is really why I started examining this problem. That if you use micro segmentation, you're leaning on the edge so that you can have
unprivileged deep services in the system. And if you forward your credentials, you're relying on your deep services so that you can weaken the trust in the edge. So there's not a trivial way to just combine these two models because you end up with some combination of the worst of both.
In the sense that both rely on trusting different parts of the infrastructure to distrust the other part of the infrastructure. So how can we combine this stuff? I started looking for answers. I started looking for answers on this in some new and actually pretty old places.
The first place that I looked was one of the oldest sources of design around things like capabilities. How many people in this room are familiar with capabilities not in the Linux kernel sense? Okay. I see a few hands.
So Kerberos has a really interesting model that took the better part of 10 or 15 years to kind of iron out. And this is just the diagram off of Wikipedia if you want to kind of see this in more detail and more description. But the heart of this is that this has decoupled reliance on the services from the
actual user credentials and user session data. So what you have here is basically three phases. In the red arrows, which are on top, the user is authenticating, basically proving that they are who they say they are. And then in the yellow area, the user is saying, given that you know who I am, give
me basically a permit to access this specific service. Often in the case of these traditional Kerberos things, this could be anything like a file server or an email server, even some web applications.
And then finally, that ticket that it gets to actually have permission to access the service is then used to talk directly to the service. But that ticket is only useful for talking to that specific service. So in this case, like a traditional case of mounting a file system, it would be you'd go through the first phase to off yourself, you'd get a ticket in order to be able to
mount a specific file system, and then you would talk to the file server and say, here's my proof that I'm allowed to mount this file system. And that is what a capability is, in a sense distinct from the Linux kernel meaning: it is something that, through possession alone, proves that you have permission to
access something or do something. And this file server doesn't need to actually know anything really about the user. Because the ticket alone proves that they have control to be able to access that file system or have specific permissions on it. But this has some serious problems when you start stacking it for services.
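That "possession alone proves permission" property can be sketched in a few lines, with an HMAC standing in for the real Kerberos machinery. This is only an illustration; the key, field names, and helper functions here are invented:

```python
import base64
import hashlib
import hmac
import json

# A symmetric key shared by the ticket-granting side and the file server
# stands in for the KDC machinery. All names here are invented for the sketch.
ISSUER_KEY = b"demo-issuer-key"

def grant_ticket(resource, rights):
    """Issue a bearer ticket: a payload plus a MAC over it."""
    payload = json.dumps({"resource": resource, "rights": rights}).encode()
    mac = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + mac

def check_ticket(ticket, resource, right):
    """The server decides from the ticket alone -- no lookup of the user."""
    body, mac = ticket.split(".")
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False
    claims = json.loads(payload)
    return claims["resource"] == resource and right in claims["rights"]

ticket = grant_ticket("homes/alice", ["mount", "read"])
print(check_ticket(ticket, "homes/alice", "mount"))  # True
print(check_ticket(ticket, "homes/bob", "mount"))    # False
```

Note that `check_ticket` never consults an identity database: the valid MAC over the scoped payload is the whole authorization decision.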
What happens is that since tickets are service specific, you end up with a case where either a service then has to be privileged to be trusted for all the deeper services it might access. Let's say that file server is part of a federated system that talks to other file servers.
It could forward this ticket from the user, but that would mean that this file service basically has similar access to the user on all these deeper systems. There's not a great solution here. And in fact, a lot of the stuff around Kerberos was designed with the idea that you would access a single monolithic service at a time and not have a service necessarily
front a bunch of others. So there was some work done on this in 2017. It's actually pretty recent that a lot of this work has gotten done. It's a neat paper called CapNet, security and least authority in a capability enabled cloud. And they created this neat way of stacking services where they created this thing called
sealed capabilities, where basically you could give someone two tickets and say: this ticket is good for mounting this file system, and this other ticket can be handed in further down, and that will give permission to perform things with the deeper services. And this is quite secure in my opinion, but it's also a huge break in abstraction in
the design because if I have a nested service like this, I now have to have the client be aware of both of the things that it's connecting to because it has to get a ticket for this service and a ticket for the service that's behind it. And this means that the client is now more closely coupled with the nested set of service
implementations. It means that the system that's granting the tickets has to be aware of all these different services and how they're nested and it makes the whole system rather onerous to maintain because every part of the system has to be aware of every granular part of the capability infrastructure. Another kind of thing that I found interesting is name constraints for X.509 in terms of
another inspirational thing. I don't recommend using this because the implementations of this are wildly inconsistent across libraries and are often not enforced. So unless you really audit your libraries, I wouldn't touch this. But it's a neat concept. It's basically the idea that you can create a limited certificate authority that's able
to issue certificates within a certain scope. And in this example here, you could have a CA that then issues a certificate that allows issuing certificates for any of the subdomains of all systems go.io. And then that intermediate certificate can then actually issue and sign things for things
beneath that. And then it basically walks up the chain and says, is there actually a coherent structure of trust here with the permitted subtrees? So I started putting these pieces together of these different systems that inspired this project.
And I started coming up with this thing that basically combines these kinds of ticket-granting tickets, as they're called in Kerberos, where it's basically the ticket that is used to request access to other services, plus the sealing and the name constraints. And started working on a strategy where basically a user would authenticate itself to an authorization
service, very much like Kerberos. But instead of getting back something that's specific to a single target service, it gets something that's scoped. And in this case, it gets back a token that is designed to provide access to any part of the user profile of user A. But it's also addressed.
And this is bringing forward the sealed, the capability sealing concept, where this capability is only going to be usable by whatever is the destination. So it can actually talk to a service here and say, I want to pull my user profile. And it's able to send its request. It's able to send this token. And then service P, which is sort of the profile service, is able to say, yes, they should be
able to pull profile A. But then let's say we have something that is like marketing database. And someone has marketing preferences. And then pulling the profile needs to also pull data from that service as well.
What this starts supporting is the idea that much like the sort of nested X.509 thing with the delegation, if service P here can actually create a new token that it signs and delivers alongside the first capability token, we start getting this sort of nested
infrastructure where the privileges get dropped more and more as requests go deeper and deeper into the infrastructure. And so service M here only gets a token that permits access to the marketing services. So if you compromise service M, you wouldn't be able to capture any credentials that are useful for compromising other services.
And if you compromise service P, you would only be able to get the necessary data to compromise the actual tokens being passed to service P at that point in time. So this represents a strict subset of what's possible to compromise versus something like the kind of more GraphQL-forwarded credentials perspective or the micro-segmentation perspective.
And it allows combining both of those benefits by taking these kind of proven security model designs and then actually pulling them together, not just in a way where they're stacked with each other, but actually integrated with each other.
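The issuance and delegation flow described above could be sketched roughly like this. All names and keys here are hypothetical, and HMAC secrets stand in for the asymmetric signatures a real deployment would use (so that services could verify tokens with only a public key):

```python
import hmac
import hashlib
import json
import time

def sign(key: bytes, payload: dict) -> dict:
    # Canonicalize the payload and attach a signature over it.
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}

def mint(key: bytes, issuer: str, destination: str, scope: str, ttl: int = 300) -> dict:
    # A five-minute lifetime stands in for revocation infrastructure.
    return sign(key, {"iss": issuer, "dst": destination,
                      "scope": scope, "exp": time.time() + ttl})

# The authorization service issues A_P: sealed to service P, scoped to
# all of user A's profile.
token_a_p = mint(b"auth-service-key", "auth", "service_p", "profile/a/")

# Service P narrows the scope and re-seals the result for service M.
# It may only delegate a subset of what A_P grants (the prefix check
# here is a simplification of a real scope-subset rule).
narrowed = "profile/a/marketing"
assert narrowed.startswith(token_a_p["payload"]["scope"])
token_a_m = mint(b"service-p-key", "service_p", "service_m", narrowed)

print(token_a_m["payload"]["scope"])  # only the marketing slice reaches M
```

Compromising service M then yields only tokens scoped to the marketing data, which is the privilege-dropping property described above.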
And that actually reduces the attack surface quite a bit. And with that, I will open it up to questions. I actually have a question mic if anyone has any. Or actually, I don't know whether the question mic is the back one or not.
Test it. I think this one works. Anyone? So we need to send all these tickets among different services, and many times we have
lots of services talking to each other. How does performance fare with this kind of system? Because we have lots of tickets going back and forth. And how does that compare with, for instance, getting just a grant for one ticket on the edge? In terms of the performance?
So one of the beautiful things about the performance of these sorts of systems is that it requires very little external lookup for anything, in the sense that any of these sorts of tickets are usually implicitly validatable, because they're digitally signed, usually have an expiration, and that public key gets distributed through the infrastructure.
So each of these services is equipped to validate the ticket that rides along with the request and fully make an authorization decision without consulting any other services, with the possible exception of doing revocation on some of this. But often, the solution for revocation in this sort of infrastructure is to have very
short token lifetimes. And then by having a five-minute token, if you can tolerate a five-minute revocation time, then you don't actually have to have any revocation infrastructure, just synchronized clocks. I think there was a question from back. Oh, there. Hi. Could you talk a little bit about how the subrequest thing works?
What gets signed? How does the cryptography work? So this does assume that each service and the user basically has some sort of public key identifying that entity. And that's kind of why I have a little bit of script below this that says, like, signed by user A or signed by service P. And what's happening here, so would it be sufficient
to talk about what happens in service P when it's trying to communicate with service M? So what happens is service P receives the user's request to download their profile. And it gets that request with the capability token A sub P, which is on the left, which basically has a scope of the entire profile contents of user A. I don't know if this has a laser pointer on it. I think it does. But I don't know how to use it. Oh, there we go. So you can see that this has actually a pretty broad scope here. And then, because service P is addressed as the destination service, that means it can actually use this token to sub-sign other tokens with its own identity. So service P has its own kind of certificate or something that isn't really shown here. And it's able to take the token A sub P, which is addressed to service P, its own identity
as service P, and then use that to sign token A sub M here, which actually only has the scope of the marketing part of the profile, but has a source of service P and a destination of service M. And that basically means that it's doing a delegated handoff to deeper services, but it would not be allowed to have a scope here that exceeded the scope here. So it's only able to close down the scope at each stage. And because each of these is restricted to a certain destination, the broader-scope tokens are not usable by the deeper services either.
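The checks a receiving service performs here, that the inner token is sealed to it, that the inner token was signed by the very service the outer token was addressed to, and that the scope never widens, could be sketched like this. Again, the names are hypothetical, and shared HMAC keys stand in for public-key verification:

```python
import hmac
import hashlib
import json
import time

# Hypothetical verification keys each service would hold (in a real
# system these would be distributed public keys, not shared secrets).
KEYS = {"auth": b"auth-key", "service_p": b"p-key"}

def sign(issuer: str, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(KEYS[issuer], body, hashlib.sha256).hexdigest()}

def verify(token: dict) -> bool:
    # Recompute the signature under the claimed issuer's key, and
    # reject expired tokens (short lifetimes stand in for revocation).
    p = token["payload"]
    body = json.dumps(p, sort_keys=True).encode()
    expected = hmac.new(KEYS[p["iss"]], body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["sig"], expected) and p["exp"] > time.time()

now = time.time()
a_p = sign("auth", {"iss": "auth", "dst": "service_p",
                    "scope": "profile/a/", "exp": now + 300})
a_m = sign("service_p", {"iss": "service_p", "dst": "service_m",
                         "scope": "profile/a/marketing", "exp": now + 300})

def accept_at_service_m(outer: dict, inner: dict) -> bool:
    return (verify(outer) and verify(inner)
            and inner["payload"]["dst"] == "service_m"               # sealed to M
            and outer["payload"]["dst"] == inner["payload"]["iss"]   # P held A_P
            and inner["payload"]["scope"].startswith(outer["payload"]["scope"]))

assert accept_at_service_m(a_p, a_m)   # the delegation chain checks out
```

Note that A_P on its own is useless to service M under these rules: its destination is service P, so the sealing check fails.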
Because even though service M will be in possession of token A sub P, it won't actually have the authority to use it, because token A sub P is not actually addressed to service M. It's not sealed for service M. But by using this chain, we can actually validate that because service P had this authority, and service P used its identity to delegate this token to M, it allows it to tighten the scope and do the delegation in a way that doesn't allow service M here to have any authority beyond the marketing scope. Does that answer the question? Okay. Let me give you a mic. Oh, does that mic not work? I think one of these mics has an issue. It's good? Okay. I think it's recording. But it's pretty bad if this gets hacked.
The saving grace of the authorization service is usually that you can break out any identity services from it, in the sense that you can usually break out the parts that the user mostly interacts with, how they provide their credentials, how those credentials get checked. Those can be separate from the actual thing that creates the signed tokens. And really the other saving grace of this is that it can be extremely small. And this design relies largely on the fact that this, while a fairly central point of trust, is a very small point of trust. There's not much application logic
in here. There's just whatever lookup there is between user identity and authorization. Oh, did you know that you can defeat the login timeout on Fedora by pressing escape twice if you get the wrong password? It's something I use when I mistype it all the time. So are there any further questions, or are we out of time? Oh, there's one
in the back. Oh, I should also say you generally have to have something like this regardless of the infrastructure you have. Either you have to embed that in all the services
or you have to break it out and have something make authorization decisions. Yeah, I think my question is about that, actually. So it sounds like the drawback of this approach is that you have to duplicate your token verification logic to all your services, which may be written in different languages and so on. How do you deal with that in practice? The nice thing about a lot of the tokens and token verification is you don't usually have to combine much business logic with it, because the tokens actually contain the authorized scope implicitly. You basically have to validate that the token is legitimate and that the specified scope matches whatever resource it's accessing on that service. And that's, in practice, in my mind, a lot simpler than having services call out to some sort of RBAC system to be able to figure out whether a particular user is authorized to do X or Y or Z on a service. And getting even more practical with it, if you start looking at libraries like things for
JWT, there are JWT libraries for almost every major language and framework. Choosing a standard like that allows you to embed validation across the board in almost all the frameworks that you might use for microservices, and that builds in
all the validation for checking against the public key, checking for expiration, checking certain scope constraints. You can actually put the scope constraints into the validation that's performed by most JWT libraries. You can do similar things with most other kind of public key infrastructure setups, but I'm quite fond of some of the JWT stuff
because it has so much consistent validation across languages and frameworks that you can just include back there. Hi. You've already spoken about the size of the services and not having to call out to a centralized RBAC. I don't know if you've explored some of the claims that Istio makes to do similar sorts of things. Can you give a comparison? I'm not actually familiar with the claims of Istio. Would you mind summarizing them? In this case, delegated authentication for interservice requests. But the structure is
slightly different in that it runs its own PKI. It would end up being a comment if I carried on further. Okay. Maybe a comment is the right answer to this, but the real question for me is do the credentials that it forwards contain
sufficient material for the backend services to be able to get the same privileges as the client that made the request? It can do, yeah. But the architecture has more central points of failure, I guess, than this does. The main thing I'd like to avoid with the forwarded credentials is the idea that every
one of those backend services becomes privy to whatever API or session token the request came in from the edge with. At least that's what I've seen with the GraphQL stuff. I haven't looked at Istio for it. I think we're probably out of time. Yeah. Okay. Thanks, folks.