Modernizing Wikipedia

Formal Metadata

Title
Modernizing Wikipedia
Number of Parts
254
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
What does Wikimedia do to modernize Wikipedia's user experience, and why does it take so long? Editing Wikipedia feels like a blast from the past, and even just reading articles feels a bit dusty. Why is that? How can it be fixed? And how can you help?
Transcript: English (auto-generated)
…here this early on the last day. I know it can't be easy. It wasn't easy for me. I have to warn you that the way I prepared for this talk is a bit experimental: I didn't make a slide set, I just made a mind map, and I will just click through it while I talk
to you. So this talk is about modernizing Wikipedia. As you have probably noticed, visiting Wikipedia can feel a bit like visiting a website from 10 or 15 years ago. But before I talk about any problems or things to improve, I first want to acknowledge that the software, and the infrastructure we have built around it, has been running Wikipedia and its sister sites for nearly 19 years now, and it's extremely successful. We serve 17 billion page views a month.
Yes? Can you make it louder? Is this better? If I speak up, I will lose my voice in 10 minutes.
It's already a bit... No, it's fine. We have technology for this. The light doesn't help. Yeah, the contrast could be better. Is it better like this? Okay, cool. All right. So
yeah, we are serving 17 billion page views a month, which is quite a lot. Wikipedia exists in about 100 languages. If you attended the talk about the Wikimedia infrastructure yesterday,
we talked about 300 languages there. We actually support 300 languages for localization, but we have Wikipedia in about 100, if I'm not completely off. I find this picture quite fascinating: it's a visualization of all the places in the world that are described on Wikipedia and its sister projects, and I find it quite impressive, though it is also a nice display of cultural bias, of course. We, that is the Wikimedia Foundation, run about 900 to 1,000 wikis, depending on how you count, but there are many, many more MediaWiki installations out there, some of them big, and many, many of them small. Actually, we have no idea how many small instances there are. So it's a very powerful, very flexible and versatile piece of software.
It can feel like you can do a lot of things with it, right? But sometimes it also feels a bit overburdened, and maybe we should look at improving the foundations.
So one of the things that makes MediaWiki great, but also sometimes hard to use, is that kind of everything is text: everything is markup, everything is done with wikitext, which has grown in complexity over the years. So if you look at the anatomy of a wiki page, it can be a bit daunting.
You have different syntax for markup, different kinds of transclusion for templates and media, and while some things get displayed in place, others show up in a completely different place on the page. It can be rather confusing and daunting for newcomers.
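To make that concrete, here is a rough sketch of the kind of mixed syntax a typical article source contains (the template and its parameters are illustrative, not taken from the talk):

```wikitext
{{Infobox settlement        <!-- template transclusion, rendered in place as a box -->
| name       = Leipzig
| population = 600000       <!-- a number, but stored and passed as plain text -->
}}
'''Leipzig''' is a city in [[Saxony]], Germany.<ref>Chronicle of 1015.</ref>

== History ==
The city was first documented in 1015.

<references />                  <!-- the footnote defined above renders down here -->
[[Category:Cities in Saxony]]   <!-- shows up in the category bar, not at this spot -->
```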
And then there are things like having a conversation, just talking to people. A conversation thread works like this: you open the page, you look through the markup, and you indent your reply to make a thread. Then someone gets confused about the indenting, someone messes with the formatting, and it all unravels. There have been many attempts over the years to improve the situation. We have things like Echo, which notifies you, for instance, when someone mentions your name, and which is also used to welcome people and hand out achievement notifications: hey, you did your first edit, this is great, welcome, right, to make people a bit more engaged with the system. But these are really mostly improvements around the fringes.
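For illustration, a manually indented thread like the one described looks roughly like this in wikitext (usernames and timestamps invented):

```wikitext
Should we merge this section into the main article? --[[User:Alice|Alice]] 10:12, 27 December 2019 (UTC)
: I think so, the overlap is large. --[[User:Bob|Bob]] 10:40, 27 December 2019 (UTC)
:: Agreed, but who migrates the references? --[[User:Alice|Alice]] 11:02, 27 December 2019 (UTC)
: I disagree, the topics are distinct. --[[User:Carol|Carol]] 12:15, 27 December 2019 (UTC)
```

Each leading `:` adds one level of indentation; the software only sees text and has no idea these lines form a conversation.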
We have had a system called Flow for a while to improve the way conversations work, so you have more of a thread structure that the software actually knows about. But quite a few people who have been around for a while are very used to the manual system, and there are also a lot of tools supporting that manual system, which of course are incompatible with making things more modern. So we use Flow, for instance, on mediawiki.org, which is basically the self-documentation site of MediaWiki, but on most Wikipedias it is not enabled, or at least not used by default everywhere.
Yeah, the biggest attempt to move away from the text-only approach is Wikidata, which we started in 2012. The idea of Wikidata, in case you didn't attend the many great talks we had about it here over the course of the congress, is to basically model the world using structured data, using a semantic approach instead of natural language. That has its own complexities, but at least it's a way to represent the knowledge of the world in a way that machines can understand. So this would be an alternative to wikitext, but still the vast majority of things, especially on Wikipedia, are just markup. And this markup is pretty powerful, and there are lots of ways to extend it and to do things with it, so a lot of things on MediaWiki are just DIY, do it yourself.
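As an aside, to make the contrast concrete: a Wikidata statement is a machine-readable triple rather than prose. A well-known example (simplified; the real data model adds qualifiers, references and ranks):

```
Douglas Adams (Q42)  --  educated at (P69)  -->  St John's College, Cambridge
```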
Templates are a great example of this do-it-yourself approach. Infoboxes, the nice blue boxes you have on the right side of pages, are done using templates, but these templates are just for formatting. They are not data processing; there is no database or structured data backing them. It's still just markup: you have a predefined layout, but you're still feeding it text, not data. You have parameters, but the values of the parameters are, again, maybe templates or links, or they have markup in them, like HTML line breaks and stuff, so it's kind of semi-structured. And this is also used to do things like workflows. If a page on Wikipedia gets nominated for deletion, you manually put a template on the page that says why it is supposed to be deleted, and then you have to go to a different page and put a different template there, giving more explanation, which again is used for discussion. It's a lot of structure created by the community and maintained by the community, using conventions and tools built on top of what is essentially just a pile of markup.
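As a sketch of what "semi-structured" means here (the template and parameter names are simplified, real templates differ from wiki to wiki, and the deletion template shown is the English Wikipedia convention as I recall it):

```wikitext
{{Infobox person
| name       = Ada Lovelace
| birth_date = 10 December 1815
| known_for  = [[Analytical Engine]] notes<br />early [[computing]]
}}

<!-- nominating the page for deletion is just another hand-placed template: -->
{{subst:proposed deletion|concern=Fails the notability guidelines.}}
```

The parameter values freely mix text, links and HTML, so no program can reliably read them back as data.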
And because doing all this manually is kind of painful, early on we created a system that allows people to add JavaScript to the site, which is then maintained on wiki pages by the community and which can tweak and automate things. But again, this JavaScript doesn't really have much to work with, right? It basically messes with whatever it can: it directly interacts with the DOM of the page, and whenever the layout of the software changes, things break. So it is not great for compatibility, but it's used a lot, and it is very important for the community to have this power. Sorry, I wish there was a better way to show these pictures. Okay, that's just to give you an idea of what kind of thing is implemented that way and maintained by the community on their sites. One of the problems we have with that is that these scripts are bound to one wiki, and I just told you that we run over nine thousand... no, nine hundred of these, not over nine thousand. It would be great if you could share them between wikis, but we can't. We have been talking about it a lot, and it seems like it shouldn't be so hard, but you kind of need to write these tools differently if you want to share them across sites. Different sites use different conventions and different templates, so it just doesn't work; you actually have to write decent software that uses internationalization if you want to use it across wikis, while these are usually just one-off hacks with everything hard-coded. We would have to put an internationalization system in place, and it's actually a lot of effort, and there are a lot of things that are actually unclear about it. So before I dive more deeply into the different things that make it hard to improve on the current situation, and the things that we are doing to improve it: do we have any questions? Do you have any things you find particularly annoying or particularly outdated when interacting with Wikipedia? Any thoughts on that, beyond what I just said?
The strict separation, just in Wikipedia, between the mobile layout and the desktop layout. Yeah, so having a responsive layout system that would just work for mobile and desktop in the same way, allowing the designers and UX experts who work on the system to do this once instead of two or maybe even three times, because of course we also have native applications for different platforms, would be great, and it's something that we are looking into at the moment. But it's not that easy. We could build a completely new system that does this, but then you would be telling people they can no longer use the old system. They have built all these tools that rely on how the old system works, and you would have to port all of this over, so there's a lot of inertia.
Any other thoughts? Everyone is still asleep. That's excellent, so I can continue. So, another thing that makes it difficult to change or improve how MediaWiki works is that we are trying to be at least two things at once. On the one hand, we are running a top-five website, serving over 100,000 requests per second with this system. On the other hand, at least until now, we have always made sure that you can just download MediaWiki and install it on a shared hosting platform. You don't even need root on the system, right? You don't even need administrative privileges. You can just set it up and run it in your web space, and it will work. Having the same piece of software do both, run in a minimal environment and run at scale, is rather difficult. It also means there are a lot of things we can't easily do, right? All this modern microservice architecture, separate front-end and back-end systems, all of that makes things a lot more complicated to set up and requires more knowledge or more infrastructure, and so far that meant we couldn't do it, because so far there was this requirement that you should really be able to just run it on your shared hosting. We are currently considering to what extent we can continue this. I mean, container-based hosting is picking up; maybe that is an alternative. It's still unclear, but it seems like something we need to reconsider. But if we make this harder to do, then a lot of current uses of MediaWiki would maybe no longer exist, or at least would not exist as they do now, right? You have probably seen this nice MediaWiki instance, the Congress wiki, which has a completely customized skin and a lot of extensions installed that allow people to define their sessions there and make sure these sessions automatically get listed and put into a calendar. This is all done using extensions like Semantic MediaWiki, which allow you to define queries right in the wikitext markup.
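For a flavor of what such an inline query looks like with Semantic MediaWiki (the property names are invented for illustration; the Congress wiki's actual setup differs):

```wikitext
{{#ask: [[Category:Session]] [[Has start time::+]]
 | ?Has start time = Start
 | ?Has room       = Room
 | sort   = Has start time
 | format = table
}}
```

The query runs against properties that editors set in the wikitext, and the result is rendered as a table (other output formats, such as calendars, exist).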
Yeah, another thing that slows down development is that Wikimedia does engineering on a comparatively shoestring budget,
right? The annual budget of the Wikimedia Foundation is something like 100 million dollars. That sounds like a lot of money, but if you compare it to other companies running a top-5 or top-10 website, it's something like 2% of their budget, right?
It's really... I mean, 100 million is not peanuts, but compared to what other companies invest to achieve this kind of goal, it kind of is. So what this budget translates into is something between 300 and 400 staff, depending on how you count. These are the people who run all of this, including all the community outreach and all the social aspects; less than half of them are the engineers who do all the technical work.
And we have something like 1,500 bare-metal servers, which is not a lot for this kind of thing. That also means we have to design the software to be not just scalable but also quite efficient. The modern approach to scaling is usually to scale horizontally: make it so you can just spin up another virtual machine in some cloud service. But we run our own servers, so we can design to scale horizontally, but it means ordering hardware and setting it up, which takes half a year or so, and we don't actually have that many people who do this. So scalability and performance are also important factors when designing the software. Okay, before I dive into what we
are actually doing, any questions? There's one in the back. Wait for the mic, please.
Hi. So you said you don't have that many people, but how many do you actually have?
It's something like 150 engineers worldwide. It always depends on what you count. Do you count engineers who work on the native apps? Do you count engineers who work on the Wikimedia cloud services? Actually, we do have cloud services. We offer them to the community to run their own things, but we don't run our stuff on other people's cloud.
Yeah, so depending on how you count, and on whether you count the people working here in Germany for Wikimedia Germany, which technically is a separate organization, it's something like 150 engineers. Thanks. I'm interested: what are the reasons that you
don't run on other people's services, like in the cloud? I mean, then it would be easy to scale horizontally, right? Well, one reason is being independent, right? Imagine we ran all our stuff on Amazon's infrastructure, and then maybe Amazon doesn't like the way the Wikipedia article about Amazon is written. What do we do, right? Maybe they shut us down. Maybe they make things very expensive. Maybe they make things very painful for us. Maybe there is at least some kind of self-censorship mechanism happening, and we want to avoid that. There are thoughts about this: maybe we can do it at least for development infrastructure and CI, not for production, or maybe we can run stuff in cloud services from more than one vendor, so we spread out and are not reliant on a single company. We are thinking about these things, but so far the way to actually stay independent has been to run our own servers.
You've been talking about scalability and changing the architecture. That kind of seems to imply that there is a problem with the scaling at the moment, or that it's foreseeable that
things are not going to work out if you just keep doing what you're doing at the moment. Can you maybe elaborate on that? I think there are two sides to this. On the one hand, the reason I mention it is just that a lot of things that are really easy to do in a basic "works on my machine" sense are really hard to do at scale.
That's one aspect. The other aspect is that MediaWiki is pretty much a PHP monolith, and that means scaling always means copying the entire monolith. Breaking it down, so you have units that you can scale separately and can say, I don't know, "I need more instances for authentication handling", would be more efficient, because you have higher granularity: you can scale just the things that you actually need. But that of course needs re-architecting. It's not like things are going to explode if we don't do that very soon; there's no urgent problem there. The reason for us to re-architect is to gain more flexibility in development, because if you have a monolith that is pretty entangled,
code changes are risky and take a long time. How many people work on product design or user experience research, sitting down with users, trying to understand their needs, and proceeding from there? I don't have an exact number; something like five. The question was whether that is sufficient. Probably not, but that's more people than we have for
administration and that's also not sufficient. Are there further questions?
Okay, so one of the things that holds us back a bit is that there are literally thousands of extensions for MediaWiki, and the extension mechanism is heavily reliant on hooks, so basically on callbacks. I don't have a picture, I have a link here. We have a great number of these; each paragraph on this page is basically documenting one callback that you can use to modify the behavior of the software. I never counted, but it's something like a thousand. All of them are of course interfaces to software that is maintained externally, so they have to be kept stable. If you have a large chunk of software that you want to restructure, but you have a thousand fixed points that you cannot change, things become rather difficult. These hook points act like nails in the architecture, and you kind of have to wiggle around them. It's fun. We are working to change that: we want to re-architect it so that the interface exposed to these hooks becomes much more narrow, and what these hooks, these callback functions, can do is much more restricted. There is currently an RFC open for this. It has been open for a while, actually. The problem is that in order to assess whether the proposal is actually viable, you have to survey all the current uses of these hooks and make sure each use case is still covered in the new system. We have a thousand hook points and a thousand extensions, so that's quite a bit of work.
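To illustrate the mechanism, here is a schematic of classic hook registration; the hook name is a real MediaWiki-style example, but the exact signature varies between versions, so treat this as a sketch:

```php
<?php
// In an extension's setup code: register a callback that core invokes
// whenever a page has been saved. Core must keep calling it, with the
// same arguments, forever -- that is the "nail in the architecture".
$wgHooks['PageContentSaveComplete'][] = function ( $wikiPage, $user ) {
    wfDebugLog( 'myextension', 'Page saved by ' . $user->getName() );
    return true; // true lets other registered handlers run as well
};
```

The callback receives live core objects, so it can poke at almost anything, which is exactly what makes the exposed interface so hard to narrow afterwards.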
Another thing that I'm currently working on is establishing a stable interface policy. This may sound pretty obvious, and it contains a lot of pretty obvious things: if a class has a public method, then that's a stable interface, it will not just change without notice, we have a deprecation policy, and all that. But if you have worked with extensible systems that rely on the mechanisms of object-oriented programming, you may have come across the question whether a protected method is part of the stable interface of the software or not. Or maybe the constructor. If you have worked in environments that use dependency injection, the idea is basically that the constructor signature should be able to change at any time. But then you have extensions that use subclassing, and things break. This is why we are trying to establish a much more restrictive stable interface policy that makes explicit things like constructor signatures actually not being stable. That gives us a lot more wiggle room to restructure the software.
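A minimal sketch of the subclassing problem (class names invented): under dependency injection, core considers the constructor free to change, which silently breaks any extension subclass that hard-codes the old signature.

```php
<?php
// Stand-ins for injected services (invented for this sketch).
class LinkFormatter {}
class CacheLayer {}

// Core class: collaborators arrive via the constructor, and core
// treats the constructor signature as an implementation detail.
class PageRenderer {
    public function __construct(
        private LinkFormatter $linkFormatter,
        private CacheLayer $cache   // parameter added later by core
    ) {}
}

// Extension written against the old one-argument signature:
class FancyPageRenderer extends PageRenderer {
    public function __construct( LinkFormatter $linkFormatter ) {
        parent::__construct( $linkFormatter ); // fatal once $cache is required
    }
}
```

An explicit policy saying "constructor signatures are not stable" makes clear that the extension, not core, is at fault here.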
MediaWiki itself has grown as a piece of software for the last 18 years or so, at least in the beginning largely written by volunteers. In a monolithic architecture, there is a great tendency to just find and grab the thing that you want to use and just use it, which leads to structures like this one: everything depends on everything. If you change one bit of code, everything else may or may not break, and if you don't have great test coverage at the same time, any change becomes very risky: you have to do a lot of manual testing, a lot of manual digging around, and touch a lot of files. For the last year, year and a half, we have had a concerted effort to cut the worst ties, to decouple the things that have the most impact. There are a few objects in the software, for instance one that represents a user and one that represents a title, that are used everywhere, and the way they are currently implemented also means that they depend on everything. That, of course, is not a good situation.
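One common decoupling pattern for exactly this situation (a simplified sketch; MediaWiki's actual refactoring introduced narrow value interfaces along broadly similar lines, but the names below are invented) is to let most code depend on a minimal read-only interface instead of the heavyweight class:

```php
<?php
// Narrow, dependency-free interface: just identity, no permissions,
// no session state, no database access.
interface UserIdentityLike {
    public function getId(): int;
    public function getName(): string;
}

// The heavyweight class still exists and implements the interface...
class HeavyUser implements UserIdentityLike {
    public function __construct( private int $id, private string $name ) {}
    public function getId(): int { return $this->id; }
    public function getName(): string { return $this->name; }
    // ...permissions, preferences, database loading, etc. live here...
}

// ...but callers that only need "who is this?" no longer pull in the rest.
function describeEditor( UserIdentityLike $user ): string {
    return $user->getName() . ' (#' . $user->getId() . ')';
}
```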
A similar idea on a higher level is decomposition of the software. The decoupling was about the software architecture; this is about the system architecture: breaking up the monolith itself into multiple services that serve different purposes. The specifics of this diagram are not relevant to this talk; it's more to give you an impression of the complexity and of the work we are doing there. The idea is that perhaps we could split certain functionality out into its own service, into a separate application, maybe move all the search functionality into something separate and self-contained. But then the question is how you compose this into the final user interface again; at some point, these things have to get composed back together. And again, this is a very trivial issue if you only want it to work on your machine, or if you only need to serve 100 users or so. Doing this at scale, at a rate of something like 10,000 page views a second (I said 100,000 requests earlier, but that includes resources: icons, CSS and all that), you have to think pretty hard about what you can cache and how you can recombine things without having to recompute everything. This is something that we are currently looking into: coming up with an architecture that allows us to compose and recombine the output of different back-end services.
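A minimal sketch of that composition idea (service names and URLs are hypothetical, and a production setup would cache at the CDN/edge rather than in memory): render a page by stitching together independently cached fragments, so a change to one fragment does not force recomputing the others.

```php
<?php
// Hypothetical composer: each page section comes from its own service
// and is cached under a key that includes the page revision.
class FragmentComposer {
    /** @var array<string,string> naive in-memory cache, for illustration */
    private array $cache = [];

    /** @param array<string,string> $services map of fragment name => service URL */
    public function __construct( private array $services ) {}

    public function render( string $page, int $revision ): string {
        $html = '';
        foreach ( $this->services as $name => $url ) {
            $key = "$name:$page:$revision";
            if ( !isset( $this->cache[$key] ) ) {
                // only fragments whose key changed are recomputed
                $this->cache[$key] = file_get_contents( $url . '?page=' . urlencode( $page ) );
            }
            $html .= $this->cache[$key];
        }
        return $html;
    }
}

// $composer = new FragmentComposer( [ 'body'   => 'http://render.internal',
//                                     'search' => 'http://search.internal' ] );
```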
Before I started this talk, I said I would probably use roughly half of my time going through the presentation, and I guess I hit that spot on. This is all I have prepared, but I'm happy to talk with you more about the things I said, or about any other aspects of this that you may be interested in.
Any comments or questions? All three already. First of all, thanks a lot for the presentation. Such a really interesting case of a legacy system, and thanks for the honesty. It was really interesting as a software engineer to see how that works. I have a question about decoupling. Your system is enormous. How do you find the most evil parts, the ones that have to be decoupled?
Do you use some tooling, some metrics? Or do you just know? Actually, this is quite interesting, and maybe we can talk about it a bit more in depth later. Very quickly: it's a combination. On the one hand, you have the anecdotal experience of what is actually annoying when you work with the software and try to fix it. On the other hand, I try to find good tooling for this, and the existing tooling tends to die when you just run it against our code base. One of the things you are looking for is cyclic dependencies, but the number of possible cycles in a graph grows exponentially with the number of nodes, and if you have a pretty tightly knit graph, that number quickly goes into the millions. The tool just goes to 100% CPU and never returns. I spent quite a bit of time trying to find heuristics to get around that. It was a lot of fun. We can talk about that later if you like.
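One standard way around that blow-up (my sketch, not necessarily the tooling used at Wikimedia): don't enumerate cycles at all, but compute strongly connected components, which Tarjan's algorithm finds in linear time; every component with more than one node contains cyclic dependencies.

```php
<?php
// Tarjan's strongly connected components in O(V + E).
function sccs( array $graph ): array {
    $index = 0; $indices = []; $low = []; $onStack = []; $stack = []; $result = [];
    $connect = function ( $v ) use ( &$connect, $graph, &$index, &$indices,
                                     &$low, &$onStack, &$stack, &$result ) {
        $indices[$v] = $low[$v] = $index++;
        $stack[] = $v; $onStack[$v] = true;
        foreach ( $graph[$v] ?? [] as $w ) {
            if ( !isset( $indices[$w] ) ) {
                $connect( $w );
                $low[$v] = min( $low[$v], $low[$w] );
            } elseif ( $onStack[$w] ?? false ) {
                $low[$v] = min( $low[$v], $indices[$w] );
            }
        }
        if ( $low[$v] === $indices[$v] ) {   // $v is the root of a component
            $component = [];
            do {
                $w = array_pop( $stack );
                $onStack[$w] = false;
                $component[] = $w;
            } while ( $w !== $v );
            $result[] = $component;
        }
    };
    foreach ( array_keys( $graph ) as $v ) {
        if ( !isset( $indices[$v] ) ) { $connect( $v ); }
    }
    return $result;
}

// Invented class-dependency graph: User, Title and Parser form one tangle.
$deps = [
    'User'   => [ 'Title' ],
    'Title'  => [ 'Parser' ],
    'Parser' => [ 'User', 'Cache' ],
    'Cache'  => [],
];
foreach ( sccs( $deps ) as $component ) {
    if ( count( $component ) > 1 ) {
        echo 'cyclic group: ' . implode( ', ', $component ) . "\n";
    }
}
```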
What exactly is this Wikidata you mentioned before? Is it an extension, or is it a completely different project? There is an extension called Wikibase that implements this ontological modeling interface for MediaWiki. It is used to run a website called Wikidata, which has something like 30 million items modeled that describe the world and serve as a machine-readable data backend for the other Wikimedia projects. I used to work on that project for Wikimedia Germany; I have moved on to doing different things for a couple of years now. Lucas here in front is probably the person most knowledgeable about the latest and greatest in Wikidata development. You talked about test coverage. I would be interested in whether you have ramped up your testing efforts to help you modernize, and what your current situation with test coverage is. Test coverage for MediaWiki core is below 50%; in some parts it's below 10%, which is very worrying. One thing that we started to look into half a year ago is, instead of writing unit tests for all the code that we actually want to throw away, trying to improve the test coverage using integration tests on the API level before we touch anything. We are currently in the process of writing a suite of tests, not just for the API modules, but for all the functionality, all the application logic behind the API. That will hopefully cover most of the relevant code paths and give us confidence when we refactor the code.
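A hedged sketch of what such an API-level test can look like (PHPUnit-style; the endpoint URL is an assumed local test instance, and MediaWiki's real test infrastructure has its own base classes):

```php
<?php
use PHPUnit\Framework\TestCase;

// Integration test against the public HTTP API: it exercises the
// application logic end to end, so it keeps passing across internal
// refactorings as long as the API contract holds.
class QueryApiTest extends TestCase {
    private const API = 'http://localhost:8080/api.php'; // assumed test wiki

    public function testQueryReturnsRevisions(): void {
        $response = json_decode( file_get_contents(
            self::API . '?' . http_build_query( [
                'action' => 'query',
                'titles' => 'Sandbox',
                'prop'   => 'revisions',
                'format' => 'json',
            ] )
        ), true );

        // Assert on the public contract, not on internal classes.
        $this->assertArrayHasKey( 'query', $response );
    }
}
```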
Are there further questions?
So you said that you have this legacy system and eventually you have to move away from it, but are there any plans for the near future? At some point you have to cut the current infrastructure, the extensions and so on, and that's a hard cut, I'd assume. Are there plans to build it up from scratch, or what are the plans? Yeah, we are not going to rewrite from scratch; that's a pretty surefire way to just kill the system. We will have to make some tough decisions about backwards compatibility and
probably reconsider some of the requirements and constraints we have with respect to the platforms we run on and also the platforms we serve. One of the things that we have been very careful to do in the past, for instance, is to make sure that you can do pretty much everything with
no JavaScript on the client side, and that requirement is likely to drop. You will still be able to read, of course, without any JavaScript or anything, but the extent of functionality you will have without JavaScript on the client side
is likely to be greatly reduced, that kind of thing. Also, we will probably end up breaking compatibility with at least some of the user-created tools. Hopefully, we can offer good alternatives, good APIs, good libraries that people can actually port to, which are
less brittle. I hope that will motivate people and maybe repay them a bit for the pain of having their tool broken if we can give them something that is more stable, more reliable, and hopefully even nicer to use. It's small increments in bits and pieces all over the system.
There's no great master plan, no big change to point to, really. Okay, further questions?
I plan to just sit outside here at the table later, if you want to come and chat; we can also do that there. Okay, so, last call: are there any other questions? It seems not, so I'd like to ask for a huge applause for Daniel for this talk.