
The CSRF Resurrections


Formal Metadata

Title
The CSRF Resurrections
Subtitle
Starring the Unholy Trinity: Service Worker of PWA, SameSite of HTTP Cookie, and Fetch
Series Title
Number of Parts
85
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt, and copy, distribute, and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
CSRF is (really) dead. SameSite killed it. Browsers protect us. Lax by default! Sounds a bit too good to be true, doesn't it? We live in a world where browsers get constantly updated with brand new web features and new specifications. The complexity abyss is getting wider and deeper. How do we know web technologies always play perfectly nice with each other? What happens when something slips? In this talk, I focus on three intertwined web features: HTTP Cookie's SameSite attribute, PWA's Service Worker, and Fetch. I will start by taking a look at how each feature works in detail. Then, I will present how the three combined allow CSRF to be resurrected, bypassing SameSite's defense. Also, I will demonstrate how a web developer can easily introduce the vulnerability to their web apps when utilizing popular libraries. I will end the talk by sharing the complex disclosure timeline and the difficulty of patching the vulnerability due to the interconnected nature of web specifications.
Transcript: English (auto-generated)
Welcome to the last talk of Saturday today here at DEF CON in track 3. So this is The CSRF Resurrections, starring the unholy trinity: Service Worker of PWA, SameSite of HTTP cookie, and Fetch. Please welcome Dongsung Kim.
Thank you. Right. Sorry for making you wait. Thank you. Well, this is the last time, so I can use whatever time I have.
Thank you. Good evening. Thank you all for coming to my talk. It's so great to see you all in person. So I'm excited to present my research, The CSRF Resurrections, starring the unholy trinity: Service Worker of PWA, SameSite of HTTP cookie, and Fetch. That's a long name. Before we begin, let me introduce myself.
My name is Dongsung Kim. I'm an IT security expert at the Security Office. We are based in Sweden and South Korea and are part of the TRPSEC group. If you have any questions or comments, please feel free to reach out. So firstly, let's touch on the current landscape of the web
and what we are going to discuss. The web platform is evolving faster than ever before. Every day, new features and APIs are constantly being proposed, developed, tested, shipped, and standardized. They not only include aesthetic improvements, but also open powerful operations to web developers,
such as web USB, web GPU, NFC, serial socket sensor, et cetera. Then, what does it mean for security? Well, new technologies bring new security problems to consider. They are intertwined with each other, which by nature,
it is highly complex to maintain. Previously, researchers like Franken et al. took a look at browser features, such as PDF.js, AppCache, prerender, WebSocket, et cetera, and they discovered that for all browsers, at least one cookie protection policy can be bypassed.
Some of the findings that had been fixed were reintroduced. Meltdown and Spectre changed our way of thinking about the traditional web threat model. Powerful web APIs, such as Performance or SharedArrayBuffer, had to be disabled or reduced.
This showed the complexity of keeping the web secure. Although every step of the way gets thoroughly reviewed, errors slip through the cracks. In this talk, I will put my focus on that complexity. Especially, we'll discuss HTTP cookie's SameSite,
PWA's Service Worker, and fetch, then how the three can bring back CSRF from the dead. So let's dive in. So first, SameSite cookie, the conclusive antidote to CSRF. But before we even get to SameSite cookies,
let's start with what actually is HTTP cookie. So HTTP cookie is a state management mechanism built on top of the stateless HTTP protocol. It defines two headers: Cookie for requests, Set-Cookie for responses, typically used to store a server session identifier, often a random secret, in the client storage.
Then user agents, or clients, or browsers always send the appropriate cookie value, but only with the matching requests. What we are gonna focus on is the matching part. What does that matching mean? If we dissect the below example into parts,
session is the cookie value, no, cookie name, sorry. The secret nonce is the cookie value. Domain and Expires are the attribute names. And example.com and the written date are the attribute values. So matching is mainly about the attributes.
The Domain attribute states that this cookie is for example.com, and the Expires attribute states that it should be discarded after that specific date. Assume the user agent accepted the Set-Cookie header when the user agent makes a request, like image.gif.
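That attribute matching might be sketched like this in JavaScript. This is a hypothetical helper, not browser code; real user agents implement many more rules, such as Path, Secure, and HttpOnly.

```javascript
// Sketch of cookie matching. The cookie is a plain object; real browsers
// track far more state than this.
function shouldAttachCookie(cookie, requestHost, now) {
  // Domain attribute: the cookie is sent to the domain and its subdomains.
  const domainMatch =
    requestHost === cookie.domain || requestHost.endsWith('.' + cookie.domain);
  // Expires attribute: the cookie is discarded after the given date.
  const notExpired = now < cookie.expires;
  return domainMatch && notExpired;
}

const cookie = {
  name: 'SESSION',
  value: 'secret-nonce',
  domain: 'example.com',
  expires: new Date('2030-01-01T00:00:00Z'),
};

shouldAttachCookie(cookie, 'example.com', new Date('2025-01-01')); // true
shouldAttachCookie(cookie, 'evil.test', new Date('2025-01-01'));   // false
```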
Since the domain is a match with the server and the expiration date is a match with the clock, it will include the cookie value in the Cookie header. That's kind of about it, and that is a problem. The cookie standard, to put it bluntly, quote,
for historical reasons, cookies contain a number of security and privacy infelicities. Just to name a few, third party cookies, super cookie, eavesdropping, cross-site scripting, cross-site request forgery. What we are going to focus on is the cross-site request forgery.
Then first, what is cross-site anyways? What is cross-site and cross-site request anyways? Let's consult the specifications. First, we need to define what an origin is. An origin is defined as a tuple of three elements, a scheme, host, and port.
Below on the left side are URLs, and the right side are the origins. A URL like https://example.com/page becomes a tuple of (https, example.com, 443) as its origin. If the host is different, like sub.example.com, or if the port is different, like a custom port 8443, the tuple changes, hence the origin is different. Then we need to define what a registrable domain is. It is a domain name the internet users can or historically could directly register.
A registrable domain is defined as a public suffix of host plus one additional label. A public suffix list is a database of effective top-level domains. It contains names like com, co.uk, github.io, ngrok.io, et cetera. If the host is example.com, then its public suffix becomes just com, and plus one additional label becomes example.com, which is its registrable domain.
For sub.example.com, its public suffix is com, so registrable domain is example.com. The same as the last one. The registrable domain of a subdomain is the same as its parent domain. Next, we define what a site is.
A site is defined as a tuple of two elements. Scheme, registrable domain. We are using the same examples again, and the schemes are the same, but the host and ports are slightly different. But since their registrable domains are all example.com,
the site, a tuple of https, example.com, are all the same. Hence, the origins are all same site with each other. Then we need to choose which one should be the site for cookies for a given request.
This varies under different circumstances, but for the usual browsing cases, it is the document's top-level origin, also known as the URI displayed in the address bar. Simple as that. We need to define one more thing,
the client object of a request. This is an object that holds, among other things, contexts for JavaScript and browsing, and an origin used in security checks. We'll denote it with curly brackets, like so.
With this information, we can finally define what a same-site request is. A request is same-site when it satisfies: A, the request URL's origin is same site with its client's site for cookies. B, the request URL's origin is same site with every URL in its URL list. C, the request is not triggered by a user pressing the reload button. Moving forward, we will only focus on the first criterion. Then, a cross-site request is defined to be a request that is not same-site.
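The definitions so far can be condensed into a JavaScript sketch. This uses a toy three-entry public suffix list; the real list at publicsuffix.org has thousands of entries, and the lookup algorithm has more edge cases than shown here.

```javascript
// Toy public suffix list; the real one is much larger.
const PUBLIC_SUFFIXES = ['com', 'co.uk', 'github.io'];

// Origin: a tuple of (scheme, host, port).
function origin(url) {
  const u = new URL(url);
  const port = u.port || (u.protocol === 'https:' ? '443' : '80');
  return { scheme: u.protocol.replace(':', ''), host: u.hostname, port };
}

// Registrable domain: the longest matching public suffix plus one label.
function registrableDomain(host) {
  let suffix = '';
  for (const ps of PUBLIC_SUFFIXES) {
    if ((host === ps || host.endsWith('.' + ps)) && ps.length > suffix.length) {
      suffix = ps;
    }
  }
  if (host === suffix) return host; // the host itself is a public suffix
  const prefix = host.slice(0, host.length - suffix.length - 1); // drop ".suffix"
  const labels = prefix.split('.');
  return labels[labels.length - 1] + '.' + suffix;
}

// Site: a tuple of (scheme, registrable domain).
function site(url) {
  const o = origin(url);
  return { scheme: o.scheme, registrableDomain: registrableDomain(o.host) };
}

function sameSite(urlA, urlB) {
  const a = site(urlA), b = site(urlB);
  return a.scheme === b.scheme && a.registrableDomain === b.registrableDomain;
}

sameSite('https://example.com/page', 'https://sub.example.com:8443/x'); // true
sameSite('https://example.com/page', 'https://victim.com/transfer');   // false
```

Note how different hosts and ports still end up same-site, exactly as in the talk's examples, because only the scheme and registrable domain enter the comparison.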
That was quite formal, but let me give you a basic example to make it more digestible. As we discussed, a site has a tuple of scheme and registrable domain, and the typical site for cookies is the user agent's address bar.
Below is a document hosted at https://example.com, and this is our site for cookies for the rest of the example. Its site is a tuple of (https, example.com). On this page, there's a link to https://example.com, whose site is a tuple of (https, example.com). So, clicking the link will spawn a request. Since the request URL's origin's site is the same as that of the site for cookies, the address bar, it is a same-site request. Also, clicking a link will make the browser
navigate to the URL. It's a navigation request, and also top-level, since the document does not have a parent document. It is also an HTTP safe request, since as per the HTTP semantics specification,
the GET method is considered safe. Embedding an image will also spawn a request. The URL is https://example.com/image.gif. Like the previous link, it is a same-site request. If a request is for an image, audio, script, style, et cetera,
we call it a sub-resource request. It is an HTTP GET safe request, just like clicking a link. Now, there is a different link to https://victim.com. Its site becomes a tuple of (https, victim.com),
which is different from that of site for cookies, the address bar. It is a cross-site request. Just like the previous link, it is a top-level navigation request and an HTTP safe request.
Finally, we have a form for https://victim.com. Like the previous link, it is a cross-site request. Submitting the form will make the browser navigate to the URL, so it is a top-level navigation request. But since the form's method attribute is POST, it is now an HTTP unsafe request.
Now that we know what a cross-site request is, we can define what a CSRF is. It is when an attacker forces a victim to execute forged cross-site requests. It is possible because, as we discussed, user agents send the cookies with matching requests.
Below is an example. Let's say a user is visiting example.com and it has a forged form, which has malicious inputs like "delete all". Submitting the form to victim.com
by making the user click on it or using JavaScript will spawn a cross-site request, also a top-level navigation request, and an HTTP unsafe request. Crucially, the user's cookie will be sent. The server will recognize and authorize it as the user
to run the malicious commands. It is indeed possible to perform a CSRF with the HTTP GET method, like clicking a link or embedding an image, as long as the server is susceptible. There are many known methods to mitigate CSRF.
These are the mitigations, and one of them is to adopt the same-site policy. The SameSite attribute introduces an additional attribute to the matching process. If the attribute is set to Strict,
then the browser sends the cookie only for same-site requests. The browser does not send the cookie for clicking a cross-site link, nor embedding a cross-site image, nor submitting a cross-site form. But SameSite=Lax is like Strict with an exception: it allows cross-site top-level navigation safe requests.
What it means is that the browser will send the cookie when clicking a cross-site link, with our previous examples, but not when embedding a cross-site image, nor when submitting a cross-site form.
If it is set to None, the browser will behave just like before. The SameSite attribute is ignored and the cookie is sent. Let me show you how it's all supposed to work. Right, we start with the victim page here,
where it echoes the cookie header of the browser's request. It shows three cookies: none-cookie one, lax-cookie two, strict-cookie three. And you can see the SameSite attributes are set respectively.
Let's navigate to the attacker page and then navigate back to the victim page by simply clicking a link using the GET method. And now the page echoes only two cookies, None and Lax, meaning the browser did not send the Strict cookie
since it was a cross-site request, even though they are actually properly stored. We try the same by submitting a form using a POST request. Now the page shows only one cookie, None, meaning the browser did not send the Lax cookie either,
since the request was HTTP unsafe. So, let's go back to the victim page. Since the request was HTTP unsafe. That is a quick look at how it's all supposed to work with same-site attribute.
The SameSite attribute has been widely supported by browsers, but the adoption has been kind of slow. Then in 2020, Chrome started treating cookies without a SameSite attribute as Lax by default. The cookie draft standard has been also revised,
so there has been some sentiment that CSRF is really dead and SameSite cookies alone are sufficient to protect against CSRF. We're gonna leave it at that and move on. Our next topic is Service Worker, PWA's network layer.
So, Service Worker is one of the core technologies of Progressive Web App. Progressive Web App in a nutshell is a type of web application that is native-like, installable, and offline. Service Worker runs in a worker context,
meaning it is running in a different thread from the main JavaScript execution context. Service Worker can act as a proxy server between web application, browser, and network. It can intercept network requests then respond with custom responses. Together with cache storage API,
it makes offline fallback pages possible in a web application. Service Worker is mainly used for custom caching strategies by a front-end web developer, just like Twitter on the right example. So, this is how caching works with Service Worker.
There are roughly four layers to consider, the HTML, CSS, JavaScript layer on the top, in the middle, the Service Worker, interacting with cache storage API, the browser with its HTTP cache, and the server.
So, let's say the HTML layer requests an asset. It could be an image, a video, a CSS file, a JS file, et cetera. Then, a fetch event is dispatched to the Service Worker, which intercepts the request. Here, a web developer can implement
their custom caching strategy using JavaScript and cache storage API. In case of a cache miss or an expiration or else, the Service Worker can call the fetch API using the event's request. Then, the browser makes use of its HTTP cache strategy
based on the headers, Cache-Control, Expires, et cetera. If that also encounters a cache miss or else, excuse me, the browser makes an HTTP request to the server over the network and receives an HTTP response.
The HTTP response is handled, then passed to the Service Worker in the form of a response class object. The Service Worker then receives the object, stores it in cache storage, then passes it into the event.
So, let's implement this flow with a code example from the perspective of a web developer. Before anything, we register the Service Worker. This does not require any user interaction and the developer will typically just want it to happen as soon as possible. The Service Worker looks like this.
It adds an event listener on fetch, then starts intercepting all requests. If there is a request, in this example an image, it triggers and dispatches a fetch event to the listener.
Now, the callback function is called to handle the request. The function checks if there is an existing match in the cache storage. If there is a match, it is returned to the event.respondWith method. We call this the cache flow.
If there is no match, we call the fetch API simply using the event's request as is. We call this the pass-through flow. The browser returns a response object which also gets returned untouched. Then finally, the image receives the response object.
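Putting the pieces together, the Service Worker just described might look like this. This is a sketch, not the talk's exact code; the cache name IMAGE_CACHE, the chooseFlow helper, and the environment guard around self are illustrative additions.

```javascript
// sw.js — a minimal Service Worker with a cache flow for images
// and a pass-through flow for everything else.
const IMAGE_CACHE = 'images-v1'; // hypothetical cache name

// Routing decision kept as a pure function so it is easy to reason about.
function chooseFlow(request) {
  return request.destination === 'image' ? 'cache' : 'network';
}

async function handleFetch(event) {
  if (chooseFlow(event.request) === 'cache') {
    // Cache flow: return an existing match from cache storage if present.
    const cached = await caches.match(event.request);
    if (cached) return cached;
    // Cache miss: fall back to the network, then store a copy.
    const response = await fetch(event.request);
    const cache = await caches.open(IMAGE_CACHE);
    await cache.put(event.request, response.clone());
    return response;
  }
  // Pass-through flow: forward the event's request untouched.
  return fetch(event.request);
}

// Only attach the listener inside a real Service Worker context.
if (typeof self !== 'undefined' && 'addEventListener' in self) {
  self.addEventListener('fetch', (event) => event.respondWith(handleFetch(event)));
}
```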
The cached flow isn't required. The Service Worker can simply pass through every request. We will revisit this example later. For more complex caching strategies,
most web developers rely on an open source library to make the job easier. One of the most popular ones is Workbox. Created and maintained by Google Chrome, it is a collection of JavaScript libraries for PWAs. Importantly, it includes Service Worker routing and caching modules.
We simply import the workbox-sw.js. Then the developer can specify whatever custom cache strategy they want. In this Workbox example number one, a route is registered such that if the request is for an image, it uses a strategy
that looks up the cache first, effectively caching all the image assets. Then a default handler is registered so that all the other unspecified requests will only go to the network. Something like this is also possible. Workbox example number two,
a route is registered such that if the request URL has a path name of slash transfer, it should only go to the network but will time out after 600 seconds for post requests. The intention is to make the request wait a little bit more because the server is slow somehow.
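The two Workbox examples might look roughly like this. This is a sketch reconstructed from the description above, not the talk's exact code; the CDN URL and version are illustrative, and the networkTimeoutSeconds option comes from Workbox's documented strategy options, which vary between versions.

```javascript
// sw.js — sketch of the two Workbox examples described above.
// Route matchers are kept as named pure functions so they are easy to test.
const isImageRequest = ({ request }) => request.destination === 'image';
const isTransferPath = ({ url }) => url.pathname === '/transfer';

// The Workbox wiring only runs inside a real Service Worker context.
if (typeof importScripts === 'function') {
  importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.5.4/workbox-sw.js');

  // Example 1: cache-first for images, network-only for everything else.
  workbox.routing.registerRoute(isImageRequest, new workbox.strategies.CacheFirst());
  workbox.routing.setDefaultHandler(new workbox.strategies.NetworkOnly());

  // Example 2: POST /transfer goes network-only, with a long timeout
  // because the server is assumed to be slow.
  workbox.routing.registerRoute(
    isTransferPath,
    new workbox.strategies.NetworkOnly({ networkTimeoutSeconds: 600 }),
    'POST'
  );
}
```

Note that, as the talk later shows, the innocuous-looking setDefaultHandler and the POST route are exactly what turn this caching configuration into a CSRF vector.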
For React development, Create React App provides built-in support for Service Worker and caching. Inside index.js, there is a line that can be changed. Changing the method to register
registers service-worker.js as soon as the page is loaded. It is also using Workbox under the hood and customizable. We'll also revisit these examples later. Now that we learned SameSite cookie and Service Worker,
let's get into the main part, the collaboration that leads to CSRF. We will take a deep dive into the specifications to spot the smells radiating from them. This is going to be a bit crude and simplified read, so bear that in mind. The scene is familiar.
It's the same example. We have a Service Worker registered, intercepting and passing the request through. Then we have an image element waiting to be loaded by the browser. As per the HTML specification, when updating the image data,
the browser should simply fetch the image request. But how does that actually happen? As per the fetch specification, the browser should perform fetching, main fetch, scheme fetch, and HTTP fetch. In the middle of the process, since we have an active Service Worker, 5.4,
set response to the result of invoking handleFetch for request for Service Worker. Then what is handleFetch? We look at the Service Worker specification. Appendix A, handleFetch.
Let E be the result of creating an event with fetch event. Dispatch E at activeWorker's global object. Now the event is created and is passed on to the Service Worker. The Service Worker code is the same.
It is a pass-through. Simply fetch the event's request, then return the response as is. But then what does that fetch method do? As per the fetch spec, the fetch(input, init) method steps are:
let requestObject be the result of invoking the initial value of Request as constructor with input and init as arguments. That is one sentence. Then let's see what the constructor does. For the Request class,
6.2, set request to input's request. Then 12, set request to a new request with the following properties. What's happening here is that the browser creates a new request object out of the input request object, copying one property at a time.
But then there's something peculiar. The origin of the new request is set to client, a placeholder string, meaning it will follow its client object. Then what's the client object? It's this object's context and origin. At this point, this points to the service worker rather than the requested image or the HTML. This is a red flag.
Then how is the cookie header computed? Let's continue fetching. Let cookies be the result of running the cookie string algorithm. See section 5.4 of cookies. Cookies refers to the IETF spec, so let's look into that. In that spec, there is a separate section for service workers.
It reads, quote, requests which are initiated by the service worker itself, via a direct call to fetch, for instance, will have a client which is a ServiceWorkerGlobalScope. Its site for cookies will be the registrable domain of the service worker's URI, unquote. Some familiar words here,
but the point is that site for cookies is always the service worker's URI rather than that of the image request. Same situation, same red flag. What happens if attacker.com comes into play, which embeds a cross-site request to victim.com?
The service worker is hosted at victim.com. Site for cookies is always the service worker's URI. The request URL, victim.com/image.gif, is same site with its site for cookies, victim.com/sw.js.
Suddenly, it is a same site request by definition. Looks like CSRF is back, so let's put it to the test. We are going to revisit the previous code examples, but this time we are adding an attacker. The attacker has two elements, a link and a form.
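The flawed computation just described can be condensed into a sketch. Here registrableDomain is a naive two-label stand-in for the real public-suffix-based algorithm, isSameSiteRequest is a hypothetical helper, and schemes are ignored for brevity.

```javascript
// Sketch of the flawed same-site computation for Service-Worker-initiated
// fetches. Naive registrable-domain helper; real browsers use the PSL.
const registrableDomain = (host) => host.split('.').slice(-2).join('.');

function isSameSiteRequest(requestUrl, siteForCookies) {
  return registrableDomain(new URL(requestUrl).hostname) === siteForCookies;
}

// Without a Service Worker, the site for cookies is the address bar
// (attacker.com), so the request is correctly cross-site:
isSameSiteRequest('https://victim.com/image.gif',
                  registrableDomain('attacker.com')); // false → cross-site

// With victim.com's Service Worker intercepting, the site for cookies
// becomes the worker's own URI (victim.com/sw.js):
isSameSiteRequest('https://victim.com/image.gif',
                  registrableDomain('victim.com'));   // true → "same-site"!
```

The attacker's page never changes; only the presence of the victim's own Service Worker flips the request from cross-site to same-site.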
Activating either will trigger a fetch event, then pass through to the server. Let me show you. Okay.
We have the same setup, a victim page which echoes the cookie header. None, Lax, and Strict cookies are set in the browser. Let's register the service worker. Now, it is listening for fetch events and the code itself is a simple pass-through.
Let's navigate to the attacker page, then navigate back to the victim page with a link. As per the SameSite policy, the Strict cookie should not be sent. It is a cross-site request. Only None and Lax should be sent.
But now it shows all three cookies. The browser simply disregarded the SameSite policy. Let's try the same with the form. Now, as per the SameSite policy, only the None cookie should be sent.
But all three cookies again. Thank you. Now, let's try that again without the service worker. Then it's back to the None cookie alone. So.
Then let's look at the Workbox example number one again. The developer's intention was simply
to cache all image assets and then for all the other requests to go over the network only. Then we will add an attacker to the mix. The attacker has two elements, a link and a form to the endpoint /transfer.
We have the victim page here with login and logout functionality, and a session cookie with SameSite=Strict. You can see the page shows we are authorized. Let's navigate to the attacker page and navigate back to the victim with the link. As per the SameSite policy,
the Strict cookie is not sent, so we are unauthorized. Trying again with the form, still unauthorized. So, let's go back and let's register the service worker. It's listening for fetch events
and the code looks as we discussed. Let's try the same and click on the cross-site link, and we are now authorized. However, submitting the form does not get us authorized.
So, what happens is that setting the default handler itself makes the service worker pass through all the requests.
Doing so only applies to HTTP GET requests, hence only affecting the link, not the form. However, Workbox example number two can make things worse. The intention was to simply change the timeout setting. Let's add the same attacker to the mix.
We have the same victim here, but there is an additional logic in the service worker: a prolonged network timeout for POST requests to /transfer. Let's try the form on the attacker page. And now we are authorized.
Thank you. Let's revisit the Create React App example. The attacker page has one element: a link to the victim's secret image.
We navigate to the victim, like so. Immediately, a service worker is registered in the browser, listening for fetch events. Clicking the login button sets a Strict session cookie.
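Conceptually, any worker that intercepts fetches and re-issues them behaves like this minimal pass-through (a sketch; the stub object only stands in for the real `ServiceWorkerGlobalScope` so the snippet can run outside a browser):

```javascript
// Minimal fetch pass-through. In a browser, `self` is the
// ServiceWorkerGlobalScope; the stub below is for illustration only.
const scope = (typeof self !== 'undefined')
  ? self
  : { handlers: {}, addEventListener(type, fn) { this.handlers[type] = fn; } };

scope.addEventListener('fetch', (event) => {
  // Re-issuing the request from inside the worker is what causes the
  // response to come back with cookies that the SameSite policy would
  // otherwise have withheld.
  event.respondWith(fetch(event.request));
});
```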
You can see we are logged in. Let's navigate to the attacker page and navigate back to the victim page. You can see the image here, despite the SameSite policy. Just to be sure, let's unregister the service worker and try again.
Now it returns forbidden, as intended. And there is one more thing here. We are using the first example again. We are doing the same demo one more time, but this time with the Firefox developer tools opened
on the network tab on the right. So we register a service worker listening for fetch events and navigate to the attacker page. Then the client wrongly sends all three cookies, right? So it's the same. But when we examine that specific request
in the developer tools, it says only the None cookie was sent. This is what was supposed to happen, but what actually happened is quite the opposite.
Okay, let's talk about patches and updates; yet, issues still remain. So I reported this problem in August of 2020. As it impacted both Firefox and Chrome, two separate reports were sent.
And then in October, an issue on the cookie specification was independently opened. It was about the "site for cookies" problem that we previously discussed. Then, in August of 2021, a security bug on the Origin header was independently reported.
It was related to the origin problem we previously discussed. In September, other researchers started noticing the same problem, so duplicate bugs were independently reported. In October, a service worker spec meeting
on the CSRF issue was held. It spawned about 14 separate issues in different specs, including service worker, fetch, and storage partitioning. Three weeks later, Chromium patched both bugs. And in December, the fetch spec was amended.
As we discussed, the problem was that the origin of the new request was "client", and the client was the service worker itself. Now the amended spec properly propagates
the original request origin information. In January of 2022, finally, Chrome 97 was released to the stable channel with the fix. And the next day, Google assigned a CVE.
On July 27th, the cookie spec was amended so that the "site for cookies" logic is handled by the service worker spec. However, no update for Firefox is available as of now. The affected versions are Google Chrome from 51 up to 97,
and Mozilla Firefox from 60; the developer tools bug affects Firefox from 74. There still is one remaining issue, and that is nested frames. Consider victim one, let's say news.com, which embeds an attacker, ad.com, which in turn embeds victim two, also news.com. Since the nested victim frame and its parent, the attacker frame, are cross-site,
you cannot set nor send a Strict cookie. But if we register a pass-through service worker, listening for fetch events, access now becomes possible. This particular problem is to be addressed
with the client-side storage partitioning spec. So then, conclusions. To recap, in this talk we discussed SameSite of HTTP Cookie, centered around cross-site requests and CSRF. We discussed the SameSite attribute and Lax by default as a defense against CSRF.
Next, we discussed Service Worker of PWA. We discussed the caching flow with a service worker and code examples. Then, we moved on to the CSRF resurrections. We took a deep dive into the related web specs
and how the SameSite defense gets bypassed when a service worker is involved. Using the code examples, I showed how a developer can easily introduce this problem. Finally, we talked about the aftermath. The disclosure timeline was mentioned, and we discussed the remaining issue of nested frames.
Let's finish up with takeaways. Firstly, do not use SameSite cookies as the sole defense mechanism against CSRF. They should be considered an additional layer of defense. As plainly written in the cookie specification,
quote, developers are strongly encouraged to deploy the usual server-side defenses to mitigate the risk more fully, unquote. Also, double-check network interactions with other tools whenever possible. The browser's developer tools might not actually depict reality in some cases.
Since this is still an ongoing problem with Firefox and nested frames, it remains a viable attack vector until the issues are fully resolved. Finally, is there any other subtle, complex, specification-level problem that has been overlooked,
and how do we find it? That said, I'd like to thank Oscar and Louis for their feedback, and Betty for proofreading. Thank you all very much for listening. Thank you.