
How to cache and load data without even trying


Formal Metadata

Title
How to cache and load data without even trying
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Transcript (English, auto-generated)
And, you know, why do we love open source? It really, really makes our life easier. I've been an Android developer for about seven years now, and this is really true. I remember earlier on it really was just HttpURLConnection, and that's really all we had to use. Oh, I'm so sorry.
There we go. There we go. So sorry. And nowadays we have a lot of really good libraries, and networking is a lot easier because of it. And the same goes with storage. No one likes to do raw SQL, and shared preferences is a bad idea.
And nowadays we have Firebase, Realm, things on top of SQLite like SQLDelight and SqlBrite, and they really make our lives easier. And the same with parsing. I have hand-parsed JSON objects myself; it's horrible. We have to do it sometimes, but nowadays there are a lot of modern parsers and we really
have a lot of options; it's very uncommon nowadays to be doing this by hand. So fetching, persisting, and parsing have become really easy for us. There are still some gaps, though, and what's not easy is data loading.
Everyone does it differently. Everyone you talk to has a different method, a different flow, how they handle offline, how they handle their caching, and, you know, what exactly is data loading? It's the act of getting data from an external system, your source, whatever that may be, to the actual user's screen.
The whole flow. Now, anyone, everyone in here, raise your hand if you think the person next to you does this process, this flow, the same way you do. Yeah, yeah, it's really uncommon. But we all use Jackson or Moshi or Gson, we all use OkHttp, we all use Volley or Retrofit, we all use these libraries, but how we connect them together is very, very different.
Everyone does it differently. And you know, it's complicated, and it's made even more complicated by rotation, which is its own little special snowflake. Things are getting a little bit better nowadays with the introduction of LiveData and some other constructs, but still, this is a big issue, this is a big pain point. You know, do we serialize our data on rotation, which is a bad idea?
You know, how do we handle this? And you know, we face these challenges, and we decided to build a store at the Times. And you'll see this again, you know, a little bit later on, our link. And we wanted to simplify this process, and we wanted to make it easy.
So let's talk about our goals. We set out to do this. What are our goals? And that's one of the things I just mentioned: data should survive configuration change. We should be agnostic of where it comes from. You're in an activity, you're in a fragment, you're in a view, you just want the data to show. You don't really care where it comes from, whether we're coming from a rotation
or not, we just want the data. And activities and presenters should stop retaining megabytes of data. Activities should be doing activity things, presenters should be doing presenter things. A store should really hold the data, and we shouldn't have to worry about things with lifecycles attached to them retaining large amounts
of data. And offline should be a configuration, caching a standard, not an exception. You know, one of the good things about the Times is we pride ourselves on being able to let our readers read the news offline. And we really feel that offline should be done first and not as an afterthought.
And the API should be really simple, simple enough for an intern to use, yet really robust enough to meet all of our needs. We don't want anything really complicated. And so, how do we work with data at the Times? And this is really gonna help drive our solution. And we looked at it and we thought about it, and after a good bit of time, we realized
that 80% of the time, we just want data. We don't care if it's fresh or cached on the disk. We just want to get it and show it. And that's your use case pretty much most of the time. The other use case is when you want fresh data, either through a background update, through an alarm, when we want to fetch something at a certain time.
The New York Times app has two alarms, in the morning and in the evening, where we refresh our content, but we also want to support the user's pull-to-refresh. Maybe the user wants fresh data. Well, we want to handle that case as well. And the requests, our requests to this thing, they have to be asynchronous and reactive,
for obvious reasons. You don't want to make a blocking call, and we use RxJava, so it should be reactive. And data is dependent on each other, and data loading should be too. And performance is also important. We don't want to slam the disk or slam the network with multiple requests for the same piece of data.
It should be smart. If we make multiple requests, it should handle the thundering herd: one request gets through, is satisfied, and then fulfills the others. So we shouldn't be doing multiple things inefficiently. And the same goes with parsing. We should parse something once. Once we hydrate the object, we should cache it.
We shouldn't be rehydrating the same data over and over and over again. It's really inefficient. And so we decided to use the repository pattern. And we did this by creating reactive and persistent data stores. I'm not sure how many of you have heard about the repository pattern, but it's a pattern that someone at Microsoft, I want to say 10 or 15 years ago, thought
it up and wrote it down. Maybe the idea is a little older. But basically it separates the logic that retrieves your data and maps it from the logic that acts on your data. So it basically separates some of your business logic and the fetching of it. And it basically mediates between the data layer and your view layer.
And well, why the repository? Well, it maximizes the amount of code that can be tested by isolating the data layer and your transformation stuff. If we're able to abstract the fetching and the retrieving and all that, then we can test that separately from the actual logic that acts on any sort of translation that we do.
And also, it allows us to pull data from different sources with consistent access rules and logic. You know, someone talking to the store may not care what the source of the data is. They may not care about the caching policy, and all of that can be defined at a much lower level.
And so this is our implementation. As I mentioned, you can see it again and check it out. And so what is a store? I keep saying store over and over. So what is a store? It's a class that manages the fetching, parsing, and storage of a specific piece of data. Your data. And so basically, all we really want to do is tell a store what to fetch, where
to cache, and how to parse. And it should be that simple, and the store should kind of take care of everything else for us. And yeah, as I mentioned, the store should handle the flow. And it should be observable. And we want to kind of implement these interfaces.
So a get would maybe be, we don't care about the data, just give it to us. A fetch, we want fresh data. And of course, we want the ability to stream. We want to say, I want to listen for updates on a particular data set and get notified if I get new data. And also clear.
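A rough sketch of that surface, in Java with RxJava types, just to make the shape concrete; the real library's interface has more on it, and signatures may differ between versions:

```java
import io.reactivex.Observable;
import io.reactivex.Single;

// The core calls described above (sketch only; not the library's exact interface).
public interface Store<T, K> {
    Single<T> get(K key);     // give me the data, cached or fresh, I don't care
    Single<T> fetch(K key);   // skip the caches, go to the network, then cache the result
    Observable<T> stream();   // notify me whenever new data lands in the store
    void clear(K key);        // drop what we have cached for this key
}
```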
So how did stores help us achieve our goals? And we're going to check it out by loading a currywurst. So this covers the 80% case. And again, we want to do something really simple. So we have a store here. We get a currywurst out of it. And we have a string. We specify, you know, the key.
And maybe in this case, it's a topping, ketchup. So we say store.get("ketchup"), we subscribe, and then we get a currywurst out of it. And then of course, we can pass it to our view layer to show, or whatever you want to do with your currywurst. And so on configuration change, all you really have to do is store your key.
You don't have to actually serialize the data set or anything like that. And here you can see in this example, yeah, we're doing this. And so what do we gain? Well, the fragments, presenters, and activities, they don't have to retain this data.
They don't have to retain state. They only really need to retain the key. You don't have to serialize your data. Stores should be doing the store things, and the activities and the fragments can be doing their thing. And also, efficiency is important. As I mentioned, we should fold multiple concurrent requests for one key into one, as I mentioned,
the thundering herd. So if you were to do something like this, where you have a tight loop that makes the same get call 20 times, you would certainly expect that, if you don't have the data, you don't make 20 concurrent network calls; you only make one network call that comes back and satisfies all of these requests.
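As a hypothetical illustration (store and key names invented), twenty subscriptions to the same key should still produce only one network call:

```java
// All twenty subscribers are satisfied by a single in-flight request.
for (int i = 0; i < 20; i++) {
    currywurstStore.get("ketchup")
            .subscribe(currywurst -> show(currywurst), error -> showError(error));
}
```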
And so as I mentioned previously, that was the 80% of the case. What about the times where you really, really want fresh data? The user initiates a pull to refresh, you're in a background update, you want new data. Well, we do that with a fetch command here. And you see it's very similar to exactly what you saw before, except instead of a get, you have a fetch.
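A minimal sketch of that, assuming the same invented store and an RxJava 2 setup; fetch has the same shape as get but always goes to the network:

```java
// Called from a pull-to-refresh handler or a background alarm.
currywurstStore.fetch("ketchup")
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(this::showCurrywurst, this::showError);
```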
And I apologize, you'll see a couple more of these getting more and more complex. But this is kind of the flow right here, and it's unidirectional. You start at the top of the store. Is it in the memory cache? No. We fetch it from the network, return the data to the store, we put it in the memory cache,
and we return it. And okay. Streams. As I mentioned previously, we maybe also want to listen for updates. You could have this in a view, for example, a view that wants to display a piece of data, and if that piece of data changed, it can be notified about it. So we can listen for things, we can have disconnected pieces or entities listening
for updates on this data and react to it and handle it appropriately. So how do we build a store? So I'm gonna start by walking you through a couple of basic examples. And we have interfaces. So I mentioned we want to be able to tell the store how to fetch, how to cache it,
and then how to parse or transform our data. And this is a very basic one right here. The fetcher defines how a store will get new data, and in this example we've got a Retrofit endpoint, a currywurst API, and it's that simple. You declare your fetcher, you override the fetch method, and that's it.
And if your client is not reactive or doesn't implement any reactive interfaces, you can easily wrap it in an observable using Observable.fromCallable. It's super easy if anyone's not using RxJava and is nervous about trying it out; you can do something like this.
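Roughly, with an invented CurrywurstApi Retrofit interface and, for the non-reactive case, a blocking client wrapped in fromCallable (depending on the library version the fetcher returns a Single or an Observable; treat the exact types as assumptions):

```java
// Retrofit endpoint that already returns a reactive type.
Fetcher<Currywurst, String> fetcher = topping -> currywurstApi.fetch(topping);

// A blocking, non-reactive client wrapped so the store can still use it.
Fetcher<Currywurst, String> blockingFetcher =
        topping -> Single.fromCallable(() -> blockingClient.getCurrywurst(topping));
```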
And so, parsers help with fetchers that don't return view models. A lot of the time, the data we get from our endpoints is exactly the data we're going to display. Sometimes it's not, and we have to do some sort of data manipulation or transformation
on that. And again, we'd really love our store to be able to handle that for us. We don't want to have to do any additional work. And so here's an example of a parser transform. We want to box the currywurst. We're given a currywurst, and we wrap it in a box.
And this parser is really simple. You can see it just encapsulates the currywurst in a box, but yours might be more complicated. And other parsers read from streams. So the base case of getting the currywurst to start with: it's a stream. And this is kind of what's done. It's really simple. We have the input stream, we read it, and then we transform it using Gson.
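Two illustrative parsers along those lines, one transforming an already-parsed object and one hydrating from the raw stream with Gson (Currywurst and BoxedCurrywurst are invented types):

```java
// Transform: wrap a parsed Currywurst in a box.
Parser<Currywurst, BoxedCurrywurst> boxParser = currywurst -> new BoxedCurrywurst(currywurst);

// Hydrate: read the raw stream from the fetcher and let Gson build the object.
Parser<BufferedSource, Currywurst> streamParser = source ->
        gson.fromJson(new InputStreamReader(source.inputStream(), StandardCharsets.UTF_8),
                Currywurst.class);
```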
We've all seen this type of thing. We've all written it multiple times. And we provide some of these parsing mechanisms as a convenience. We have these included libraries. Out of the gate, we have Gson, but if you use Jackson or Moshi, you can also have those as well. And like I said, it's a convenience. If you include this middleware, you get some nicety methods, or you can write your own. And you can see right here, we have this GsonParserFactory, and it does some of the work for you.
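With the Gson middleware included, the hand-written stream parser above can be replaced by a one-liner; the factory name is taken from the library as I understand it, so double-check it against the version you use:

```java
// Convenience parser from the Gson middleware instead of a hand-written one.
Parser<BufferedSource, Currywurst> streamParser =
        GsonParserFactory.createSourceParser(gson, Currywurst.class);
```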
And again, another horrible slide, my apologies. So this is what it looks like with a parser in the mix. And the only difference is it sits on top. You fetch the data from the network. Does it have a parser? Yes. We parse the data using the specified parser. We save it in memory, and of course, we return it on up to the user. And again, unidirectional flow.
Everything is flowing this way. So, as I mentioned previously, we want our store to have offline capabilities. A lot of times, we don't know if our user is going to be in the subway, in an elevator. It should just work. If we have data, if we have articles, we should show them to the user, regardless
of their connectivity, regardless of where they are. We can achieve this by adding persisters. As I mentioned before, the persisters are kind of agnostic of where you do it. In this example, we are going to use a FileSystemRecordPersister. And it's pretty straightforward. We...
Oh. There we go. So sorry. And this is something else we provide. Really simple, but you can have your own persister. We say FileSystemRecordPersister.create. We give it a key, which is kind of like a prefix, and then any sort of key. So before, this would have been a topping, and this just kind of helps denote where
it's stored in the file system. And you see the caching policy right here: one day. With this data, we say it's only going to live on the disk for a day. And here, we can see some of the basic methods that we implement. We have a write and a delete.
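A sketch of creating that persister, with an invented path scheme and a one-day expiration; the factory and create signature are from my reading of the library and may differ in your version:

```java
// File-backed persister: the path resolver turns a key (a topping) into a file path,
// and records older than one day are considered stale.
FileSystem fileSystem = FileSystemFactory.create(context.getFilesDir());
Persister<BufferedSource, String> persister = FileSystemRecordPersister.create(
        fileSystem,
        topping -> "currywurst/" + topping,
        1, TimeUnit.DAYS);
```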
And you guys are welcome to use it. It's a KISS storage: keep it simple, stupid. And again, the file system persister may not be what you want, but it's something we use, and we figured it might be a convenience to others. So like I said, if you don't like our persisters, no problem.
You can implement your own. It doesn't matter if it's Room, Realm, SQLite, SqlBrite, raw SQL, the file system, shared prefs, which is a horrible idea, but anything, wherever you want it. And these are the interfaces that you would want to implement if you wanted to write your own persister.
We have a read, a write, a clear, and a getRecordState. So it's pretty straightforward.
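The shape of a custom persister, sketched against a hypothetical Room DAO; the interface and method names reflect my reading of the read/write/clear/record-state description above and may not match your version exactly:

```java
// A custom persister backed by an invented DAO.
public class RoomCurrywurstPersister implements Persister<Currywurst, String>,
        Clearable<String>, RecordProvider<String> {
    private final CurrywurstDao dao;  // hypothetical Room DAO

    RoomCurrywurstPersister(CurrywurstDao dao) { this.dao = dao; }

    @Override public Maybe<Currywurst> read(String key) { return dao.read(key); }

    @Override public Single<Boolean> write(String key, Currywurst value) {
        return Single.fromCallable(() -> { dao.write(key, value); return true; });
    }

    @Override public void clear(String key) { dao.delete(key); }

    @Override public RecordState getRecordState(String key) {
        return dao.isStale(key) ? RecordState.STALE : RecordState.FRESH;
    }
}
```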
And again, I apologize, another really long graph, but you can see this is how it is with a persister thrown in the mix. So we say get. Is it in the memory cache? No. Do we have it on disk? No. We fetch it from the network. We save it. We fetch the data back from disk. Parser? Yes or no. We parse it. We put it in memory, and we return it. And it's just an additional component. So we have our components. Now let's see a real-world example of putting together a quick store.
So let's start with the parsing. So as I mentioned earlier, maybe we want to have a couple of steps to this. We get the JSON. It's a currywurst, but we really want to box the currywurst. So we want to have two steps. The first part of the parsing is we hydrate the JSON, the stream from the server, into
our currywurst, and then we want to add that box parser that we showed you earlier. So now we have a list of parsers. So let's declare our store, and you can see it here. This is our key. This is what we're going to be getting from our fetcher, which is OkHttp, and this is going to be the output of the whole process.
We want a boxed currywurst. And so we declare our fetcher, .fetcher, and here we go from our Retrofit endpoint, .fetch, passing our topping. This is our persister. Again, we're using our baked-in FileSystemRecordPersister, but you can use whatever persister you want.
And then we add our parsers in, our list of parsers. And the parsers, I should say, are ordered. The order is specific: we do the first one first, the second one second, and so on; you get the idea. And we say .open. We declare a variable, currywurstStore, .open, and it's that easy.
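Put together, the declaration looks roughly like this (builder method names are from my reading of the library; the key is the topping string, the raw type is the stream from the network, and the output is the boxed currywurst):

```java
// Fetch with Retrofit, persist to the file system, parse in two ordered steps.
Store<BoxedCurrywurst, String> currywurstStore =
        StoreBuilder.<String, BufferedSource, BoxedCurrywurst>parsedWithKey()
                .fetcher(topping -> currywurstApi.fetch(topping))
                .persister(persister)
                .parsers(Arrays.asList(streamParser, boxParser))  // order matters
                .open();
```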
So let's do some configuration. Now that we've seen that we can declare it, we have something basic, let's look at the configuration. We can configure some of the memory policies. And this is the actual memory cache. We can say, have a size, expire after, and specify a unit. In this case, it's 24 hours.
And it uses a Guava cache under the hood, which is something else you guys are welcome to use. We spent time ripping it out of Guava and isolating it into a jar, and we actually provide that, I want to say, as a separate repo. There we go. And also, what about stale data?
Well, there's a couple of things we can do here. We can say refreshOnStale, and if our data is stale, it'll give you the stale data while backfilling the cache automatically. But let's say you don't want stale data. Well, you can also specify networkBeforeStale, and it'll attempt the fetcher before returning the data to you. And of course, if the network fails, then you'll still get the stale data.
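Configuration might look something like this; the MemoryPolicy builder and the two staleness switches are named as I recall them, so treat the exact methods as assumptions:

```java
Store<BoxedCurrywurst, String> currywurstStore =
        StoreBuilder.<String, BufferedSource, BoxedCurrywurst>parsedWithKey()
                .fetcher(fetcher)
                .persister(persister)
                .parsers(parsers)
                .memoryPolicy(MemoryPolicy.builder()
                        .setExpireAfterWrite(24)
                        .setExpireAfterTimeUnit(TimeUnit.HOURS)  // in-memory entries live for a day
                        .build())
                .refreshOnStale()          // hand back stale data, backfill in the background
                // .networkBeforeStale()   // or: try the network first, fall back to stale data
                .open();
```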
So let's check out some stores in the wild. And what I mean by that is, at the Times, we typically have a good number of interns, two to three interns every summer. And so our team grows by 30% to 50% for those 10 weeks in summer.
And a couple summers ago, we tasked our interns with doing a project, a bestsellers list. The New York Times is famous for our bestsellers list. And there wasn't a native feature in the application that really showcased our bestsellers and allowed users to check them out. So they started with a store.
Oh, I didn't want to say that. The implementation was through a store. And this is how they started. And of course, here's the Retrofit endpoint. We want to get books. We have a path, a category, nonfiction, fiction, that sort of thing. And a GET, we've all seen this before. And this was the store that they made.
And so you can see right here, Books. That's going to be what we get out of it. And the barcode is what we pass in. And the barcode is just a key. It's just a data object where we can encapsulate that category and make it really easy for us. It's almost like a pair, if you will.
And you can see right here in the fetcher, we call our endpoint that we just showed you. The persister: here we create another file system persister. And our parser: a source parser created with Gson for Books. We're not doing anything fancy. We just want books. And so in the activity, all the interns had to do was something like this.
They didn't have to overthink where the data was coming from. They didn't have to think about any caching policies. They didn't have to think about network retries. All they had to do was just type this: bookStore.get, their category, and subscribe.
And well, how do books get updated? As I mentioned, we have background updates, which happen a couple of times a day. This would be hooked in as well. Oh, bookStore.fetch. It's not the get; fetch is a fresh call. We want fresh data. And you can see that's happening in our background updater.
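Reconstructed from the description (not the Times' actual code), the best-sellers store and its call sites might look like this, using a BarCode key to carry the category:

```java
// Store declaration: Retrofit fetcher, file-system persister, Gson parser.
Store<Books, BarCode> bookStore =
        StoreBuilder.<BarCode, BufferedSource, Books>parsedWithKey()
                .fetcher(barCode -> api.getBooks(barCode.getKey()))
                .persister(FileSystemRecordPersister.create(
                        fileSystem, barCode -> "books/" + barCode.getKey(), 1, TimeUnit.DAYS))
                .parser(GsonParserFactory.createSourceParser(gson, Books.class))
                .open();

// In the activity: just get and show.
bookStore.get(new BarCode("books", "fiction")).subscribe(this::showBooks, this::showError);

// In the background updater: fetch fresh data.
bookStore.fetch(new BarCode("books", "fiction")).subscribe();
```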
So data is available when the screen needs it. The UI gets the data, and the background services call fetch. And this may not conform 100% to your needs all the time. This is what we use. And someone I was speaking to yesterday wanted fresh data when the app started.
And maybe in that situation, you can call fetch in Application.onCreate, or something like that. Just whatever you need. Also, with LiveData, that's no problem either. I was checking this out the other day, and I'm going to show you a couple of small examples with LiveData. I hope some people have seen some of the LiveData examples already. This might be bad otherwise.
So one of the things with LiveData: we know we have two methods. If we wanted to extend a LiveData object ourselves, the two big methods in this guy that you're supposed to override are onActive and, I want to say, onInactive. And in onActive, you would kind of do the same thing here.
You call your store.get, passing it whatever key you want. You subscribe, and then when you actually get the result, you set the value. You know, LiveData works that way: you set the value. And of course, in the real world, using RxJava, you would want to hook this up to a CompositeDisposable.
And in the onInactive piece, you would want to clear the disposable. And the same goes if you wanted to have this outside of a LiveData object. You could implement it something similar to here, where you have your store, your .get. Oh, I'm sorry. Let me back up. You have a method which returns your LiveData of your currywurst, and you pass in your topping.
And it creates the MutableLiveData structure. It makes a call to your store, and then on a successful return of your data, you set the value on the LiveData. So any caller of this method would get the data set on the LiveData object asynchronously.
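A minimal sketch of the LiveData wiring described here, assuming RxJava 2 and an invented currywurst store; subscribe in onActive, clear the disposables in onInactive:

```java
public class CurrywurstLiveData extends LiveData<Currywurst> {
    private final CompositeDisposable disposables = new CompositeDisposable();
    private final Store<Currywurst, String> store;
    private final String topping;

    CurrywurstLiveData(Store<Currywurst, String> store, String topping) {
        this.store = store;
        this.topping = topping;
    }

    @Override protected void onActive() {
        // Ask the store for the data and push it into the LiveData when it arrives.
        disposables.add(store.get(topping).subscribe(this::postValue, error -> { /* log it */ }));
    }

    @Override protected void onInactive() {
        disposables.clear();  // stop listening when nothing observes this LiveData
    }
}
```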
Oh, that was a mouthful. So what about dependent calls? We all know sometimes our data can be dependent on each other, and we want to chain some of these data sets. Well, we can totally do that. We can map one store to another. They can be inside the store or outside the store.
So for example, here we go: feedStore.get. A feed store, maybe it's a configuration we want to know. We want to get our configuration piece of data. And we can map that to some more data. The result of that goes into the... So sorry.
So we get the feed, we get information about the feed, and then we take some of that information and map it to the result of another store. So you can chain these stores together in a reactive stream if that's what is needed.
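A short sketch of chaining from the outside, with invented store and field names: load the feed configuration first, then key the second get off something inside it:

```java
// One reactive chain across two stores.
feedStore.get(feedKey)
        .flatMap(feed -> articleStore.get(feed.getTopStoryId()))
        .subscribe(this::showArticle, this::showError);
```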
Also, as I mentioned before, you can do it from the inside as well and override these stores. You can extend a store, encapsulate some of your business logic inside, and override the get and fetch methods yourself. I really don't recommend this. It's encapsulating the business logic within a store, and it's a little strange, but sometimes we all need to do things for a certain reason.
So here's an example. A video store returns single videos, and a playlist store returns a playlist. But we really want a playlist with all of the videos inside of it, and not two separate pieces of data. So one of the ways we can do this, we start with the video store, and here it is right here.
Pretty basic. We have our fetcher, persister, and again, we specify the ID of the video, and we get a video out. And then the next part is we have the playlist store. And this guy extends RealStore; we get a playlist out, and we pass a playlist ID in.
And we basically, here's our video store that we showed in the previous slide. We create our store, passing in all the usual stuff that we want in addition to our video store.
And then later, in that playlist store, we can override the get method. And as you can see here, we do some data transformation. When get is called, we're given a playlist ID. We fetch the playlist.
I'm so sorry. We get playlist.videos, and we do some data transformation here. We flatMap the playlist to its videos, we get each video from the video store, we map them to a list, and we return a playlist that contains the videos.
And so the person calling this get call will get essentially a hydrated playlist, a playlist with all the videos inside of it, instead of two separate pieces of data. And, you know, if you were to do this, and you see we're overriding our get method here, you would probably want to override the other methods too, the fetch and some of the others. The fetch would want to implement the same sort of data transformation as well.
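The talk does this by subclassing the store and overriding get; since the exact base-class constructor varies by version, here is the same idea as a version-agnostic wrapper that delegates to the two stores (Playlist.withVideos and getVideoIds are invented):

```java
public class HydratedPlaylistStore {
    private final Store<Playlist, String> playlistStore;
    private final Store<Video, String> videoStore;

    HydratedPlaylistStore(Store<Playlist, String> playlistStore, Store<Video, String> videoStore) {
        this.playlistStore = playlistStore;
        this.videoStore = videoStore;
    }

    public Single<Playlist> get(String playlistId) {
        return playlistStore.get(playlistId)
                .flatMap(playlist -> Observable.fromIterable(playlist.getVideoIds())
                        .flatMapSingle(videoStore::get)   // look up each video in the video store
                        .toList()
                        .map(playlist::withVideos));      // return a playlist containing its videos
    }
}
```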
What about listening for changes? I mentioned streaming earlier, and maybe you want to have different listeners, different clients, subscribe to the store listening for changes. Well, we support that with a .stream.
And here we can say, so step one, we subscribe to the store and we filter what you need. So here you can see where we're subscribing with a dot stream, and we're saying filter. Is this the section we care about? The stream will give us any updates on the store, but we only really want to care about the data that we're interested in.
So maybe it's the video store, like I mentioned, but we're only showing one particular video. You have a view that's showing information about one of those videos. You only want to update that view if that particular video has changed.
And we can do that with a filter here. And as you can see, it'll percolate down to the subscribe if that's what we want. And here we go. So this is how we handle it.
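A short sketch of that, with an invented video store and IDs; stream gives us every update, and the filter keeps only the video this view is showing:

```java
disposables.add(
        videoStore.stream()
                .filter(video -> video.getId().equals(currentVideoId))  // only the one we display
                .observeOn(AndroidSchedulers.mainThread())
                .subscribe(this::bindVideo));
```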
We also have some newer features with the store, store version three I want to say, in addition to some of the things I've already mentioned. One of them is getRefreshing with a key. So for example, when you just do a get, it gives you the data once. But let's say we want to get the data and listen for any updates to it.
getRefreshing will stay subscribed, and any time you call store clear, anyone subscribed to this will resubscribe and get the new network response. So this is a way to get the data and keep listening for changes, kind of wrapped into one call. The other one is getWithResult. Let's say you get your data, but you also want to know where it came from: did it come from memory, or did it come from disk? Maybe that's important to you. This call encapsulates the result state with the data.
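As I understand these two calls (the exact result accessors are assumptions), they look roughly like this:

```java
// Stays subscribed; re-fetches and re-emits whenever clear() is called for the key.
currywurstStore.getRefreshing("ketchup")
        .subscribe(this::showCurrywurst);

// One-shot get that also tells you whether the value came from cache or network.
currywurstStore.getWithResult("ketchup")
        .subscribe(result -> showCurrywurst(result.value()));
```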
And that's it. We love contributions and feedback. If you use it and you hate it, I'd love to know why.
If you use it and like it, would like us to add features, we would love to hear from you. And thank you very much. Thank you. We are now open for questions.
So if anyone has a question, come over here. Usually with this type of news reader app, the biggest problem is syncing data. Because, for example, some news has been removed from the back end, but then you have to remove it from your local cache.
How does it happen with the store? I didn't see anything. Oh, that was part of the fetch call. So with us, we typically initiate a fetch, a blowing through of the cache, when the user does a pull to refresh, or on our twice-daily update. We have an alarm which goes off in the morning and in the evening, and that's when we refresh our news.
So you remove all the cache and then replace it somehow? Yes, we use the fetch call. When the alarm goes off in the morning, we say, you know, sectionFrontStore.fetch, and it fetches the section front. Or if the user initiates a pull to refresh, we'll do the same thing.
We tell the store, go to the network, give us fresh data. Because the other 80% of the time, we don't care, we just want to get the data and show it. Yeah, I have a question. In the HTTP specifications, there's already a lot about caching.
So basically, there's a server-side cache and a client-side cache. And the server can control how long the client-side cache will cache the data, for example. And is your library leveraging this? Because OkHttp already uses all of this standard and is caching requests.
And also, if there are multiple requests, multiplexing them into one, and all this. So why did you build the library with a completely own caching mechanism? Is it because you found some issues where the standard mechanism isn't enough?
Or what are your experiences? Well, OkHttp, that's a really, really great question. You're absolutely right, it does do some of that for us. But what it doesn't really do is some of the data translation, as I mentioned before. So maybe what you get back from the network is not the data in the form that's going to be displayed in the UI. There are going to be multiple steps to parsing it as well.
That takes place outside of OkHttp. And we want to be smart about that as well. But unfortunately, we can't leverage a lot of the stuff that's built into OkHttp for that hydration step. Let's see. And also, maybe we want to store it in a different way.
We don't want to rely on OkHttp's internal cache, whatever that may be. Maybe we want to keep our data in a Room database. Maybe we want to keep our data locally in a Realm database. And this layer sits on top of that and really facilitates it. I mean, if you can get away with just using OkHttp for all your caching needs and your offline, then that's perfect.
It's reduced complexity. It's whatever works for you. Thank you. Thank you. So, first of all, thank you for the great talk. You didn't look so nervous, as you said. Thank you. Just a simple question. Do you know how big the library is without customizations and so on?
So, we try to keep it small. I don't know it off the top of my head, but it's really, really tiny. And we've tried to break it up and make it as granular as possible. So, if you just want the store and the in-memory cache, then that's one thing. And I think the persister and some of the other pieces are broken out. And as you saw, the middleware was broken out.
So, if you don't use Jackson or don't use Moshi, you don't have to include it. And even the cache itself is broken out into a separate repo. So, it's really, really tiny and we've tried to keep it tiny. Because, you know, that affects all of us. No one likes hitting that limit. I'm sorry. I wish I knew off the top of my head. I'll be prepared next time.
And what about saving the data? Is it somehow possible to do it through the store, or should I, let's say, kick the request through other means and then just call fetch to get the updated data from the server? That's a really great question. And you're talking about specifically mutating the data locally
and then how do you do that, right? So, there's a couple of ways you can do that. You can either do your post request and invalidate locally and call fetch. Or you also have access to your persisters as well. So, you can also push up to the network, persist using your persister locally,
and invalidate your memory cache. You know, one example of that is our comment API. I hate to say this, but it's not a really great API. And there's a time lag in between. So, if someone likes or recommends a comment and we post it to the server and then do a get request immediately afterwards,
that data is still not going to be updated. So, what we have to do is we take that comment, if someone liked it, and we have the persister, we have a handle on that. We do our post request. We get a successful OkHttp response. We mark our data, persist it, and we clear the memory. And so, any subsequent gets will get the updated object from the disk.
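A rough sketch of that comment workflow with invented names: post the change, write the updated object through the persister, then clear the store so the next get reads the updated copy (depending on your persister, clear may touch disk as well, so check that against your setup):

```java
commentsApi.recommend(commentId)                                       // POST the like/recommend
        .flatMap(ok -> persister.write(commentKey, updatedComments))   // persist the updated data locally
        .doOnSuccess(written -> commentStore.clear(commentKey))        // drop the cached copy
        .subscribe();
```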
Okay, thanks. Thank you. I'm just trying to wrap my head around the concept. The concept is basically to provide a caching solution, right? Still, it's called store.
So, some might confuse it and actually use it as a model layer, which is okay. But that's not the model, right? The focus is on caching. And the emphasis is on providing data to you, the data that you want, and removing your need to care about where it comes from
or how it has gotten to you. And it really isolates the fetching, the persisting, and the parsing, and allows you to implement that however you want. And then the layer around all of that, gluing it together in the flow, you really become unburdened by it.
As far as the name goes, store, maybe it is a bad name. It could be named something else. I'm horrible at naming things. But yeah, I hope I answered your question. You wanted to... Yeah, yeah, sure. Thank you. Okay, sorry. We can talk more later if you want to grab me. I'm a chatter.
And again, we love, love feedback. So, if you use it, you hate it, we'd love to know why. If there's something you would like, please let us know. We love feedback. So, thank you.