
Cooking up a new search system: Recipe search at Cookpad


Formal Metadata

Title: Cooking up a new search system: Recipe search at Cookpad
Number of Parts: 60
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Abstract
Cookpad is the largest recipe sharing platform in the world. Our mission is to make everyday cooking fun, and central to that is our search product. Our search engine helps cooks everywhere find tasty dishes to cook within our ever-growing catalogue of five million recipes created by everyday cooks. As a global recipe search – available in 70+ countries, 30+ languages, and to over 40 million monthly users – delivering this is no small feat. In order to prepare for a substantial new iteration of our search product, we realised that our existing legacy search system was not suited to our goals. We embarked on a transition to a new system, along with new team structures and team composition. Over two years we delivered a new system, without halting product development along the way, and without disruption to the user experience. Our starting point was a team and system with capacity limited to legacy system maintenance and bug fixes, where relevance enhancement was delivered through incremental knowledge base tuning by SMEs (non-engineer subject matter experts). Our end point was multiple search teams who have direct ownership over the search experience and relevance enhancement, supported by SMEs, and following rigorously tested data-driven experimentation. This change involved a transition to a new event-driven architecture, along with technologies that were new to Cookpad search, such as Kubernetes, Kafka, Python, Elasticsearch, and machine learning. In addition – and just as importantly – it also involved a transformation in team structures and team composition, for which we borrowed many concepts and practices learned from the search community and the Team Topologies movement. This talk will cover our journey, why we did it, as well as the trials, tribulations, and successes along the way. Hopefully, it will give others ideas on how to reinvent their own search system and search function, while minimising disruption to product delivery, in order to move faster.
Transcript: English (auto-generated)
Thank you. Thank you very much. Hello everyone. Good afternoon. This is my first time attending Berlin Buzzwords in person, so it's really nice to be here. And even more so that I'm also getting the chance to talk here as well. To introduce myself: I'm Matt. I work on search and discovery at Cookpad Global, and I'm going to talk about our recent journey of rebuilding our recipe search system. It's probably a good thing that we just had lunch, as search at Cookpad is pretty hungry work.
We spend a lot of our time looking at recipes. And I'll apologise in advance that there are some accidental, and not so accidental, bad food puns and jokes in this talk. So don't be afraid to groan, or cheer, as they come along.
So to begin, I want to start with a seemingly simple statement: a search team should be responsible for optimising search relevance and improving the search experience. Which seems obvious. But our original motivation back when we started this journey was not really anything to do with this statement;
it was more about technology and hiring, as we were preparing for a new phase of product focus in search at Cookpad. We'd made the decision to consolidate on a Python-centric stack, so that we could reduce the integration gap for techniques in modern information retrieval, machine learning, NLP and so on,
and also so that we could access the global community and talent in that area. But when we began the transition, we soon realised that this statement was probably the most important thing to tackle, because it wasn't actually true for us back in 2020. So the transition was not just technological; it was an engineering and product culture shift as well.
We began, like I said, at the start of 2020, by spinning up an experimental search team that iterated quickly to prove out a new tech stack targeting some of the previous system's limitations. We then began rolling that system out through 2020 into 2021,
and did lots of experimentation in that period with team structures and our approach to relevance improvement. I'm going to try and touch on a few of these topics throughout this talk. It's a little bit high-level and might be quite specific to Cookpad, but it might connect with other search teams, or companies doing search, who are, or have been, in a similar position.
And I'm also eager to hear from other people at the conference about their own journeys and how they're trying to adapt the way they do search. So first, I haven't actually talked about Cookpad that much, so let me give you some context. Cookpad is the largest online community for food lovers; well, specifically lovers of home cooking.
Our mission is to make everyday cooking fun. Why do we care about that? Well, we believe the act of eating has a major impact on everyone's physical and mental health.
And the choices we make when we cook have a big impact on our planet. With those two things in mind, we believe there is a distinction between creators and consumers. When you're creating, for example through cooking, suddenly your awareness starts to grow: you start to care more about where your ingredients come from, or how the taste changes depending on whether those ingredients are in season or not.
And when people start caring, they tend to make informed decisions that impact not only their health but also the environment. In the app, you can browse recipes from ingredients that are in season to get inspired, or you can follow amazing authors like Kate here who does great vegetarian recipes.
But importantly for this talk, you can search by ingredient, search by dish, search by all kinds of things to find recipes. We're a global online community, available in more than 70 countries, supporting more than 30 languages, with an ever-growing catalogue of over 6 million recipes all created by everyday cooks.
Anyone can author a recipe on Cookpad; maybe some of you here have. We're visited by over 50 million users each month, and that scale is important for understanding the search problem that we have.
We serve over a million searches per day, see over 200,000 unique queries, and handle a non-trivial amount of HTTP load. So our end goal with this transition was to scale up to two or three cross-functional search product teams.
This is roughly the mission statement for how we wanted those teams to work: search teams and search engineers will identify impactful opportunities to optimise search, build solutions to solve those problems, and rigorously evaluate and validate the impact. There are two key ingredients in making this happen, and I'm going to split this talk between them.
The first was understanding and assigning appropriate responsibility and ownership over the search experience, including relevance, the really important bit. The second was that teams should have autonomy and empowerment to build solutions,
while navigating and managing the right amount of cognitive load to take on within the team. So, starting with responsibility and ownership over the search experience. Before 2020, our situation looked a little like this.
We had a search tools team that was reactively supporting our search managers: people whose day-to-day work is to tune search using internal UIs and tools built by the tools team. This was manageable in the early days, but after rapidly expanding the number of countries and languages that we supported,
we had a scaling problem. We reached 30 managers, all needing support and input from the search tools team, and the tools team themselves were unable to make deeper improvements to the core search engine. The destination team structure we wanted to get to looked a bit like this:
two cross-functional search teams, directly engaged with user problems, capable of driving fundamental improvements to the search experience and search relevance, and empowered to build the technologies and solutions they needed on top of a search platform.
I'll talk a bit about the platform transition later. But for now, back to the search managers. This is a summary of how they interact with Cookpad. They are effectively the relevance brain for recipe search: local cooking experts and community managers who work for Cookpad.
They're essentially doing knowledge-graph tuning for search, and the tools team supports them through that UI and does some tuning of the core search engine itself: things like bug fixes, tweaking text analysers, that kind of thing. The managers are responsible for the search metrics, and they are the direct interface with the users and the queries.
Their workflow is a per-query, greedy, worst-first one. They look at a queue of frequently searched queries from the last week, specifically the poorly performing ones. They investigate why the results might be poor, using their local knowledge of the cooking domain,
their understanding of what users expect, and their understanding of the search logic. They make some changes to optimise the query, updating the knowledge graph, then see how the results have changed, and immediately go back and tweak again if needed.
At some point they stop and move on to the next query, perhaps making a note to check on that query again in a few days to see how it's performing. Over many years, this enabled rapid improvement in search quality. But by 2020, a lot of our key regions were quite mature with that graph.
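To make that knowledge-graph tuning slightly more concrete, here's a minimal, hypothetical sketch of the kind of query-time expansion such a graph can drive; the graph entries and the logic are my own illustration, not our actual implementation:

```python
# Hypothetical knowledge graph mapping a term to related terms.
# The entries below are invented for illustration.
knowledge_graph = {
    "aubergine": {"eggplant"},
    "coriander": {"cilantro"},
}

def expand_query(query: str, graph: dict[str, set[str]]) -> set[str]:
    """Expand query terms with their related terms at query time."""
    terms = query.lower().split()
    expanded = set(terms)
    for term in terms:
        expanded |= graph.get(term, set())
    return expanded

print(expand_query("Aubergine curry", knowledge_graph))
# e.g. {'aubergine', 'eggplant', 'curry'}
```

A manager tuning the graph is, in effect, adding and adjusting entries like these per query, then checking how the results change.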
And we'd reached some common pitfalls with this approach. First, if you're doing consecutive testing, comparing online metrics from one period to the next, you're just comparing apples and oranges. There are all kinds of reasons why CTR might have changed from one week to the next, and the poor search manager doesn't have all that information available.
So, no A/B testing; and in fact, on the old system, A/B testing was really hard. Also, like I said a bit earlier, we'd reached diminishing returns with a lot of those countries and languages. We had very mature dictionaries and knowledge graphs, so the per-change value was quite small.
And because the managers operate with limited information, they don't know about the side effects they might be causing: optimising one query can impact other queries. And of course, that approach prioritises the head of the volume, because they're always picking the most frequent, poorly performing queries,
and it's hard for them to find enough time to get into the middle or the tail of the distribution. Like most search products, ours is very heavy-tailed: over 50% of our searches are outside the top tens of thousands of queries, so it's hard for a manager to get to those in a typical week.
And there are lots of infrequent variations, like dropped spaces or semantic queries, that they or the team could be tackling but are unable to get to. So our first step in this transition was to move responsibility to the new search teams.
That responsibility was around the metrics: owning the search metrics, and trying to move them, became the responsibility of the search product team. Search managers are still working, but the ownership moved over here. That enabled a change in perspective as well, and greater alignment with what we wanted to do with the product.
So we could redefine the focus and shift, for example, to conversions rather than clicks. A lot of the old approach was about clicks. Clicks are still important, but we wanted to move towards mission-aligned, higher-level metrics. As a cooking company running a recipe search, we want our users to be cooking recipes, not just clicking them.
So we started thinking about cook-through rate rather than click-through rate, and about session-level metrics as well: moving people along the whole search session to find a recipe to cook. So yes, cooking is quite important.
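As a rough illustration of the distinction, here's a toy sketch of session-level click-through versus cook-through, assuming a hypothetical event log where a "cooked" event records that the user actually went on to cook a recipe (the event names are placeholders, not our real schema):

```python
from collections import defaultdict

# Hypothetical session event log: (session_id, event_type).
events = [
    ("s1", "search"), ("s1", "click"),
    ("s2", "search"), ("s2", "click"), ("s2", "cooked"),
    ("s3", "search"),
]

def session_rates(events):
    """Compute session-level CTR and cook-through rate."""
    sessions = defaultdict(set)
    for session_id, event_type in events:
        sessions[session_id].add(event_type)
    searched = [s for s in sessions.values() if "search" in s]
    ctr = sum("click" in s for s in searched) / len(searched)
    cook_through = sum("cooked" in s for s in searched) / len(searched)
    return ctr, cook_through

ctr, cook = session_rates(events)
print(f"session CTR: {ctr:.2f}, session cook-through rate: {cook:.2f}")
# session CTR: 0.67, session cook-through rate: 0.33
```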
We also wanted to get the best of both worlds. To start, on the search product team side, we wanted to introduce opportunity analysis, so that we could size potential opportunities, and A/B testing for any experiments being released; that's something product teams are great at.
They can also explore hypotheses through data and evaluate things offline. But we also wanted to maintain contact with the search managers. They have their own strengths and ways they can help us.
They have deep knowledge of local culture and cooking, they're fluent in local languages, and they have frequent contact with the users; even though we managed to get to two search teams, those teams are still vastly outnumbered by the number of countries and languages. But one of the challenges was: how do we coordinate between these two?
We still wanted search managers to continue their optimisation workflows, but we also wanted to give space for the search teams to make more sweeping improvements to the core engine. And we were beginning to experiment with alternative retrieval and ranking strategies, which could potentially conflict with the expectations of search managers.
Segments. We found over time that one effective strategy was to use query segments as a driver for change. Just like how oranges are best enjoyed segmented, so is relevance improvement. Sometimes these are more concretely called cohorts or categories, but I wanted to have orange segments on a screen.
To make this practice more tangible: a segment in our domain is, for example, single-ingredient queries. We bucketed up all our queries like this and targeted those buckets for improvement efforts.
There are some nice benefits to this approach. You can divide and conquer with segments, potentially assigning them to different teams, and stop people treading on each other's toes. We can also get into the head, torso and tail of the distribution. And most importantly, or most pleasingly, segments were also a touch point for communication.
We could say to the search managers, or the rest of the organisation: hey, we're going to do something here, stand back; if anything goes wrong, please come to us, but we're experimenting. And these are very domain-aligned concepts, like dishes or ingredients, so everyone could understand them.
To make it even more concrete, here's an example of summarising it all as a segment description card. A good segment description starts with a short name, like "single-ingredient queries", and a grounded understanding of the intent:
in this case, the searcher has an ingredient and wants to find something to cook with it. You need a way to reliably, and ideally automatically, classify queries as belonging to the segment or not, for example through NER. And you should have an idea of which countries and languages you're targeting.
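Here's a minimal sketch of what such a card could look like in code. The fields mirror the card just described, but the dataclass shape, the example country codes and the toy classifier are my own assumptions:

```python
from dataclasses import dataclass
from typing import Callable

# A sketch of a segment description card as a dataclass.
@dataclass
class SegmentCard:
    name: str                        # short name for the segment
    intent: str                      # grounded description of searcher intent
    countries: list[str]             # targeted markets (illustrative values below)
    languages: list[str]
    classifier: Callable[..., bool]  # decides segment membership

def is_single_ingredient(query: str, ingredient_lexicon: set[str]) -> bool:
    """Toy membership check; in practice this might be NER-based."""
    return query.strip().lower() in ingredient_lexicon

card = SegmentCard(
    name="single-ingredient queries",
    intent="The searcher has one ingredient and wants something to cook with it.",
    countries=["ID", "ES", "JP"],
    languages=["id", "es", "ja"],
    classifier=is_single_ingredient,
)
print(card.classifier("Tofu", {"tofu", "egg", "aubergine"}))  # True
```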
Like I said, we also wanted to build opportunity analysis and impact analysis into this process. Before building anything, we would look at how big the opportunity might be; in this example, over 15% of the search volume,
along with things like the number of queries and the number of users. We'd measure the current quality, to have an idea of where we are right now, and then, through prototyping, best effort, or maybe just a hunch, decide how much of an uplift we might get on the metric; in this case, CTR.
That's quite useful for sample-size planning as well. The idealised workflow for product teams and search engineers became something like this. Nothing groundbreaking, of course, but a lot of it was new to us. The key part of this routine was the step of having a hunch and then trying to capture it as a segment,
or, if we already had an existing segment, working with that. That drives the opportunity and impact analysis, and also the decision about whether or not to progress to online experimentation. And at the end of that, we have confidence about whether we should adopt, abandon or iterate on that experiment or feature.
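To make the sample-size planning step concrete, here's a back-of-the-envelope power calculation for a CTR experiment, using the standard two-proportion approximation. The baseline and uplift numbers here are purely illustrative, not our real figures:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_baseline, p_target, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_baseline + p_target) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p_baseline * (1 - p_baseline)
                                   + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_target - p_baseline) ** 2)

# Illustrative only: a 30% baseline CTR and a hoped-for 5% relative uplift.
print(sample_size_per_arm(0.30, 0.315))  # roughly 15,000 searches per arm
```

Combined with a segment's share of search volume, a number like this tells you roughly how long an experiment needs to run.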
One question we had to deal with was: where do those hunches come from? How do we know what's worth considering? Sometimes there's a critique of a process like this, that maybe it swings the pendulum too far towards abstract metrics
and becomes disconnected from user problems. I really believe that's avoidable. The good hunches come from teams and engineers knowing their users, facilitated, of course, by product managers and user research.
But to be effective at this, we wanted an ongoing connection between engineers and the domain, and the more team members who had that, the better. So ultimately it was about a quantitative approach combined with a qualitative connection. You have the models, the methods and the metrics:
the opportunity sizing, impact analysis and experiment design. We also set ourselves a target of launching a new online experiment every week. And to keep the qualitative connection, we got into the routine of doing things like weekly query triage with the search managers, and checking in with them regularly.
We ran user interviews and did dogfooding: us being the user and coming up with our own ideas of what we'd like. We also built a qualitative feedback form on the search results page, so we could get all kinds of crazy, wild, amazing feedback from users about why our results are good; or, quite often, complaints about lots of things.
And we really wanted each member of our search teams to see real queries every week. Looking at recipe search queries can be a real joy.
Look at this wonderful one from Indonesia. It's a query from someone looking for the easiest method to make a cheese stick that is proven and has the dominant taste of cheese. It's beautiful. It's got everything: a dish, a difficulty, a taste. So in this domain it's actually quite exciting to be trawling through queries.
So that's a little tour of responsibility and ownership. On the other side of the coin, I'm going to talk a bit about autonomy and empowerment to build solutions to the problems we were identifying. Before 2020, we had that team structure I showed earlier, with an SRE team down here supporting the tools team.
The SRE team owned most of the infrastructure components, and any changes to those were done by request to that team. This was manageable at first, but it didn't scale: those SREs were eventually supporting eight product teams.
They became very stretched, with a very long backlog and limited time to maintain all the infrastructure for a global application, including an aging Elasticsearch cluster. There was also a gap for search engineers: they weren't able to add and manage new components themselves to help them build solutions,
and they were less familiar with the operational and performance considerations of those components too. And finally, the SRE team were also owning the SLOs, which is not a great place for them: really busy and also responsible for a lot of alerts.
At the time, there were also a lot of technological shifts at Cookpad. We started changing a lot of the deployment layer, and for search, we started transitioning the application layer, like I said, towards a Python-centric stack. Kickstarting that change on the search side, we assembled an experimental search team:
a few people with capabilities or an interest in the infrastructure side as well as the application side, who moved quickly to try out a few things, derisk options, and autonomously build what they needed.
It was a very productive period in terms of trying out lots of stuff, but it did add some premature complexity, which we retreated from on a few occasions. One example: we briefly adopted Apache Spark for some compute workflows. It turned out to be definitely premature for what we needed. When you're in one of these teams, busy building a new stack,
with some people trying Python for the first time, the last thing you want to see is all your errors turning into JVM stack-trace mysteries. So we retreated from that. On a more serious note, we also took on a lot of infrastructure complexity within that team. Moving to Kubernetes meant a lot more responsibility for the infrastructure,
and we realised that not everyone wanted to become a full-time YAML engineer. It was great for growing our engineers in DevOps and infrastructure, which has been very useful for them since then, through the transition and beyond.
But it took up too much cognitive load, and most of our engineers wanted to be focused on solving user problems. There was also a lot of overlap between what people were doing. So basically we'd reached the point where we had too many cooks in the kitchen and needed to find a new way of working. Has anyone come across the Team Topologies book?
I'm sure lots of you have. Awesome. So we borrowed a load of ideas from there, particularly the insights around team structures and the relationship with platform teams. One of the compelling ideas was to maximise the amount of time that product teams focus on the user value stream,
or whatever the product value stream is for them, and minimise the load from all the other sources. So with that as a reference, we transitioned to team structures like this, and set up goals around teams and ownership boundaries.
Product teams obviously need to focus on the user problems. They have the autonomy to introduce new technologies and solutions on top of the internal development platform, which was just emerging at the time, and they take on the responsibility of building it, running it and owning it: that kind of philosophy. We had them define and own product-relevant service-level objectives
and participate in an on-call rotation to support those out of hours. On the platform team side, the mission is to continuously help reduce the extraneous cognitive load on those product teams and support ownership up there,
doing that through the internal development platform in the middle. At the beginning, this was quite specific to the search stack, but over time we had a few other platform teams and brought things together to align on one IDP. Getting there was quite another transition. In order to make that happen and mature those team structures,
we realised we needed help for a while from an SRE. So we lured one away from the SRE team into the experimental search team, mostly with the promise of frequent lunches cooked in the office. They shared their expertise in site reliability engineering and in building platforms,
so that both product and platform engineers were ready to support the systems they owned when we split them out. And we used this transition to divide the stack and establish appropriate responsibilities. Then the final step: once that was set up with one of the product search teams, and we'd got to the point where the old system
no longer needed to be maintained and wasn't live any more, we realigned things a bit and made the search tools team into these two search product teams. And that's the final arrangement. After that transition, we've got the user problems coming in,
the recipe search team and the search experience team focus on those, supported by the search platform team, and build things on top of the development platform. An example of one of the projects in that period was building the replacement search admin UI.
One of the teams managed that. They ran sessions with the search managers to understand their requirements, developed their own system design, settling on a Django app with a Postgres backend, and deployed the relevant components themselves on the developer platform. If they ever needed assistance from the platform team,
say niche things about AWS they didn't understand, or doing something in Terraform, or with Kubernetes and customised manifests, they got the support they needed. And they defined the interfaces with the rest of the system, along with appropriate monitoring and SLOs, again using tools from the platform layer.
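To give a flavour of what a product-relevant SLO can look like when the team owns it, here's a toy error-budget calculation for an availability target. The numbers are invented, and in practice this would live in platform monitoring tooling rather than application code:

```python
# Minimal sketch of an availability SLO check over a rolling window.
def error_budget_remaining(total_requests, failed_requests, slo_target=0.999):
    """Fraction of the error budget left in the window (can go negative)."""
    allowed_failures = total_requests * (1 - slo_target)
    return 1 - failed_requests / allowed_failures

remaining = error_budget_remaining(total_requests=5_000_000, failed_requests=2_000)
print(f"{remaining:.0%} of the error budget remaining")  # 60% remaining
```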
Those SLOs fed into alerting as part of the on-call rotations. Okay, so that's a brief outline of the whole journey we went on at Cookpad with search engineering, focusing on fostering responsibility and ownership for search relevance,
and also autonomy and empowerment in building solutions and owning them. Since then we've run loads of projects and launched many great new features, including a redesign of the search results page and the introduction of search filters for the first time. We've made significant and continuous improvements to segments
that were a priority for us, with the evidence to prove that those improvements were genuine and reliable. We've done other things like improving response times. We've got people on a PagerDuty rotation, for instance, and the good news for me and my engineers
is that pages are very, very rare, only a few a year, so we all get a lot of sleep. And we've also hired and trained almost 10 really, really talented search engineers and search scientists. Before finishing, I'll give a quick plug to our engineering blog, where you can find articles about search machine learning
and things we're doing in that area, including personalised recommendations, learning to boost, and so on. It's called Source Diving: sourcediving.com. There's a post on there about the system and traffic migration from the old search to the new one, which I haven't really talked about here.
I won't go into details now; they're in the post. But we used the Strangler Fig pattern, and we had the luxury of having two systems running, which I know is kind of rare; not everyone gets that. It was useful because it meant we could go through phases of shadow traffic, with live load on both systems, to check things.
It also meant we had a reference system for getting into the process of doing relevance optimisations: a baseline that you're trying to attack and improve. We were able to do a progressive rollout, moving live traffic bit by bit onto the new system,
and we had the lovely safety net of auto-retry onto the old system as well, where the interfaces made sense. When we were doing this transition, we wanted to avoid stopping product development, so eventually there were things on the new system that weren't on the old one.
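As a rough sketch of how the progressive rollout, shadow traffic and auto-retry safety net fit together, here's a simplified routing function; the client stubs and percentages are my own illustration, not our production code:

```python
import random

def legacy_search(query):
    return {"engine": "legacy", "query": query}  # stub for the old system

def new_search(query):
    return {"engine": "new", "query": query}     # stub for the new system

def route(query, live_fraction=0.10, shadow=True):
    """Send a fraction of live traffic to the new system, with an
    auto-retry safety net onto the old one."""
    if random.random() < live_fraction:
        try:
            return new_search(query)
        except Exception:
            return legacy_search(query)  # fall back where interfaces align
    if shadow:
        try:
            new_search(query)  # shadow call; the response is discarded
        except Exception:
            pass               # shadow failures must never reach users
    return legacy_search(query)

print(route("cheese stick")["engine"])
```

Ratcheting `live_fraction` up over time is what made the bit-by-bit migration possible without disrupting users.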
In terms of managing that relationship with the search managers, we moved the core recipe search features over country by country, which meant we could take a group of search managers and inform them that things were going to change. That was a great period for doing query triage, so we worked with them closely; I learnt the word "desastroso", which in Spanish I think means disastrous,
and which is how they regarded one of the queries on the new system. So that was a nice process for getting into the flow of doing relevance improvement. Thank you very much for listening. I'm sorry if all these photos of food have made you a bit hungrier.
I'm aware that this is maybe quite particular to Cookpad, and we're not the first company to change ways of working in teams like this; there are things I'd like to learn from other people here at Buzzwords as well, so don't hesitate to get in touch. I was going to find QR codes for the Android and iOS apps,
but I found this instead, which is a really nice QR-code cookie made by a Cookpader. I don't know if it works or where it will take you, so don't try it. Or maybe do, and let me know. There's our website as well, and there's a web app as well as the native apps.
Our blog's there. Thank you for listening. I'm eager to take any questions.