
Should we have a Rust 2021 edition?


Formal Metadata

Title
Should we have a Rust 2021 edition?
Number of Parts
8
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Abstract
In 2018, Rust adopted an "edition" system. This lets Rust evolve in ways that feel like breaking changes but are opt-in only, and that do not disturb the open-source ecosystem. Given that Rust 2018 happened three years after the initial 2015 release of Rust, this has everyone wondering: is 2021 the year we have our next edition? In this talk, Steve lays out his own feelings on this question, as well as talks about the history of the edition system, how it works, and what it might look like in 2021.
Transcript: English (auto-generated)
All right. Hey, everybody. I'm Steve. Thanks for coming to my talk. This talk is titled, should we have a 2021 edition? And today is the 27th.
So for those of you who don't know, I am on the core team of Rust. I previously led the documentation team, and I wrote The Rust Programming Language, which is the introductory book on Rust that comes with Rust itself. There are two other things, though, that you may or may not know, that I figured I'd mention before I get into the meat of this talk.
I recently got a new job at a company called Oxide Computer Company. We're building new server computers for people to use, and we're doing everything in Rust. So these days, I'm writing embedded Rust, and it's awesome. Rust has had an embedded working group for a while, and I appreciated their work in the abstract.
But now that I'm actually doing that kind of work as my job, it's fantastic. So that's pretty cool. And we are also hiring. So if you are interested in a job where you will write a lot of Rust, you may want to check that out. Finally, I have started working on open-source Rust streaming stuff, so I'm streaming on Twitch now. So if you want to watch me program on Tuesdays, that's a thing I'm doing.
OK, anyway, enough about me. You want to hear about editions. So today, we're going to cover three major points, in this order. The first one is: what exactly are editions? I want to make sure that we're on the same page about the details, because the details do actually matter, and matter a lot here.
Secondly, we've already done one edition release, which is Rust 2018. And so I want to take a look at that and how it went and some thoughts after some time has passed. So it's kind of like a little bit of a case study, maybe a little bit of retrospective. Let's talk about Rust 2018. And then finally, what should we actually do?
And this is phrased as, should we have Rust 2021? That's kind of like a big thing people have been talking about lately. I want to emphasize, especially with both of these second sections, that this is my personal opinion. While I am on the core team, I am only one person. And this is all phrased as, should we?
Because technically, nothing has been decided. So while I do have my own opinions on this, feel free to disagree and just like, this is me saying this. So take that as far as that goes. All right, so first up, this first section. What even are editions anyway?
If you don't know at all, thank you for coming to my talk even though you don't know the subject. This is exactly why I included this section. But even if you do, there's some details that maybe some people don't always think about. So I want to talk about a lot of those details. There's kind of two aspects to editions that I think are really important. And the first one is the sort of social aspect of Rust.
So an edition is kind of this point in time where we say, hey, Rust is significantly different now than it was in the past. And here is kind of like why. So Rust 1.0 came out in 2015. Rust 2018 was the first edition sort of release. And it came out in 2018. And we sort of said, hey, Rust has changed a lot
in the last three years. Let's talk about all the stuff we've accomplished and add in some other things. And so that's largely a social thing. So this is a way to sort of reflect on the longer-term progress of the language and the project as a whole. Because we release every six weeks, that is a really great thing for a lot of reasons.
But it's really hard to remember how much work we've done. Releases come out all the time, and they're in small little chunks. And so it can be really, really easy to forget just how far along things have come. Half the time, I forget that it's been almost five years since Rust 1.0, because when things are happening
at such a rapid pace, it's really easy to lose track of how far we've come. And so that's really important. A second point on why editions matter is that they are a way to get new users into Rust. So this is kind of similar to the first point,
but a little bit different. Basically, because we release every six weeks, there's a lot of people who don't pay attention to Rust releases, because they see them so often, and they have not that much in them. And so they're kind of like, hey, a new Rust release happens, I don't care. With languages that release once a year or once every couple of years, it's a really big deal.
And so a lot of people will hear about, oh, there's a new version of C++ out, or there's a new version of Ruby out, or the new C# has these cool new things in it. And that's a way for people who don't currently program in the language, it sort of signals to them, hey, I should check this out. Maybe I looked at C# 3.0, and I didn't really
like what was going on, so I didn't use it. Maybe C# 4 is the time for me to check in again. Maybe I want to start programming in it. And so it's kind of a nice way to signal to people outside of the Rust world as well that, hey, a lot has gone on, and maybe if you didn't use Rust in the past, you would want to use it again. And then finally, for the actual development of Rust,
sort of like a nice rallying cry, I guess, basically. Not only is it about reflecting on what we've done, but also if there's sort of big things we want to do, it's nice to be able to point in the near future and say, hey, it's 2020 now. We want to do a Rust 2021, and we
want to get people excited. So let's start working on some projects that will really generate that excitement. And there's some back and forth here, and we're going to talk about all those details in a minute. But it can be a really great way to sort of get everybody excited about the future of Rust, because we need those check-in kind of points, I think. But also, editions are not purely a social mechanism.
They're also a technical mechanism, and this really matters. So on a technical level, editions are a way to kind of make breaking changes to the Rust language without actually making breaking changes. And the way this works on a technical level is that editions are opt-in.
So you say what edition your code is in, and if you don't update that, then nothing changes for you. So if new things are added in a way that's technically breaking, it doesn't break you, because if you didn't opt in to the new changes, then your code still works forever. So this is very different than Python 2 to Python 3,
for example, where you can't really run your code together, and similar kinds of things like that. So we have a way to opt in to breaking changes, and that's really, really important for compatibility reasons. And then finally, editions are not allowed to change everything about Rust.
This is a technical problem as much as it is a social problem, and there's some interplay back and forth between the two. But basically, major, major changes are not actually allowed. So some kinds of breaking changes are fine, but there are some kinds of changes that cannot be made in the edition mechanism, and that's due to the technical details of how all of this works.
It is also kind of a social thing in the sense that it's also useful for humans. Like if tomorrow, like say Rust 2021 was going to be like garbage collected and use significant white space instead of curly braces, it would effectively be a whole different language, and that would make it a really big challenge for people to update to. But because we can only make certain kinds of breaking
changes in editions, it's significantly better for people to be able to upgrade, not only because of the opt-in nature of things, but also because Rust is still going to be Rust, even if there are some tweaks to how it works and some changes. The core idea of Rust will be the same. So let's talk a little bit about what
I mean by breaking changes in an edition. So I think one of the best examples of a breaking change was the async keyword. Rust has, as I said, two editions right now, 2015 and 2018. And in Rust 2015, async is not a keyword, but in 2018, it is.
And so what that means is, if you look at this code that I have here on the left, we have a function named async that just takes an integer and returns it. It doesn't do anything fancy. But we call it, passing in a five, and use underscore s as the name, since we never use the variable, to get rid of that kind of warning. So if I run this code on Rust 2015, as you can see,
this is the Playpen interface on play.rust-lang.org. You can choose this little dropdown with the three dots and then pick which edition you're in. So if you choose Rust 2015, this code will compile. And it runs, but doesn't do anything, because we just pass an integer around, so there's no output. But if we change that to edition 2018, we will get an error. And it will say: expected identifier, found keyword async.
So we're allowed to name a function async in Rust 2015, because it's not a keyword. But we're not allowed to name it in async in Rust 2018, because it is a keyword. So this is an example of, if we purely made this change in the language, code that existed and ran just fine before would end up breaking.
So this is a breaking change. But because you can choose whether or not to upgrade, it's not a breaking change. And this kind of duality was very controversial when coming up with the plan for editions. It was also one of the big challenges of communicating this to users. Like, "technically it's breaking, but it's also not breaking" is very interesting and kind of unusual.
Is my mic still working? All right, cool. Dropped out there for a second. So the way that you opt into this change, if you're not on the Playpen, because most serious Rust programs are not running in the Playpen, that would be ridiculous, is in the Cargo.toml. So you get to set an edition key
by putting 2018 in there. And if this does not exist at all, then it defaults to 2015. So all of the code that was generated before we had this idea of editions, it is able to stay on the default 2015 edition. But if you opt in, then you're able to do this.
And cargo new will start generating 2018. So I just typed cargo new to generate this, and it defaults me to the latest. And so that way, new projects start in the latest edition, but older projects stay on the edition they were created with, until their creators explicitly opt in. So that's the way this happens in most real projects.
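As a concrete sketch, a Cargo.toml with the edition key looks something like this (the package name here is made up for illustration):

```toml
[package]
name = "my-crate"     # hypothetical package name
version = "0.1.0"
edition = "2018"      # omit this key entirely and the crate defaults to the 2015 edition

[dependencies]
```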
And one thing that's really important, though, about editions, is that editions can interoperate just fine. So let's say that I had packaged up my useless async function into a crate, and I wanted to use it from some code that was in Rust 2018. Well, one of the features that was added in Rust 2018
was the ability to use a raw keyword. So I showed you the error message in the 2018 code before, but right below the error is a little help message that says, hey, you can escape keywords to use them as identifiers. And so here, with this r# prefix, an r followed by a pound sign, both at the call site and at the actual declaration site, you're able to still define and call a function
named async, even though it's a keyword. So this would matter if you could imagine that the async function lived in a 2015 crate, then I would still be able to call it from a 2018 crate, and vice versa, if there was some other way of doing it. So this is one example of the ways that code can interoperate, but just in general, the idea is that
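To make that concrete, here's a minimal, self-contained sketch of the raw-identifier escape. This is my own toy example, not the exact code from the talk's slides:

```rust
// In the 2015 edition you could write `fn async(...)` directly; since
// `async` became a keyword in the 2018 edition, the `r#` prefix is needed
// to keep using it as an identifier.
fn r#async(s: i32) -> i32 {
    s
}

fn main() {
    // The call site needs the escape too.
    let _s = r#async(5);
}
```

This compiles on current toolchains, and it's also how you'd call a function named `async` that lives in a 2015-edition crate from a 2018-edition crate.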
you don't have to be worried about your dependencies at all. Other code is allowed to be in any edition, and they'll compile into your project just fine, and there's no worrying about this kind of interoperability compatibility other than if some keywords are named a certain way, and then you have to know this escape mechanism. But what this means is we don't have this situation
whereby when a new edition comes out, everyone is forced to upgrade all at once, because that doesn't happen. It takes a long time, and there's always some people that are going to prefer older things, and we don't want to bifurcate the ecosystem. And so this way, these versions can live in total harmony, and your dependencies can upgrade at their leisure,
and you can upgrade as fast or as slow as you'd like, and you're not locked out of the rest of Rust world. So that's a really helpful way to make sure that all this operates smoothly. And so specifically, I had mentioned before, part of the way that this works is that editions are not allowed to change everything.
So these are the rules that were set out in the original RFC, and they're not complete, so this is not necessarily a full list of everything that can and can't change, but I wanted to point out some specifics. So an example of a thing that can change, as I've already showed you, is new keywords. We're allowed to define new keywords in new editions,
and that's totally fine. Like I said, they interop across the boundary, no problems. The second thing that editions are allowed to change is that they're allowed to repurpose existing syntax. And I say repurpose because, while you can remove existing syntax as part of repurposing it, you generally shouldn't only remove.
You should replace it with something that does something slightly different. So an example of this is the usage of a trait as a trait object. In this case, I'm just calling it Trait, and if you were using it as a trait object, originally you would just use the name of the trait. So you'd have Box<Trait> or Arc<Trait>, something like that.
And so what we did in the 2018 edition was we deprecated that usage of just Trait on its own, and we introduced dyn Trait. Unfortunately, my slides split this onto two lines. Normally, it would be just one line. But this helps you know that, hey, this is a trait object, because we're doing dynamic dispatch. And so we replaced that existing syntax
for doing a thing with a new syntax instead. And so you're able to use this new syntax, and it's more clear to people that dynamic dispatch is what's happening. So we're allowed to change some existing stuff and add some new things and tweak things. Another example of something that changed
in the 2018 edition was the module system, which is a pretty big change. I want to talk about all the changes we made a little bit later. But there's also some things that we can't do. Earlier, I mentioned that we couldn't make really big sweeping changes, but there's actually slightly more specific things. Some of them are based on practical limitations, and some of them are based on decisions that we've made.
So for example, the coherence rules can't change across editions. If you're not familiar, the coherence rules say whether a given type is allowed to implement a given trait. Like, do you get a compiler error when you try to compile a trait implementation for a type? So a common example of something these rules forbid
is that if I had, say, a type that's defined in the standard library, like String, and a trait that's defined in a third-party package that's not mine, say maybe Serde, I can't implement Serde's Serialize for String, because it's not my trait and it's not my type, so that doesn't work. If I had made my own string type, then I'd be allowed to implement Serde's trait for it.
And if I defined my own trait, I'd be allowed to implement it on the standard library's String, but I have to own either the trait or the type. The rules are a little more complicated than that, but that's the gist of it; that's the biggest thing. So that's not allowed to change. We couldn't, say, relax them in one edition but not in the other. And that's because coherence rules are kind of global,
and those rules happen across different crates. So if we had different rules for different parts of your program, that would be extremely hard to implement, very confusing, and possibly just unsound. Like it's not kind of a thing that you're able to do for all of those sort of reasons. So the coherence rules have to act globally, and so therefore they kind of have to stay the same
in every edition, no matter what, so that's tricky. Secondly, another thing that people sort of don't always appreciate is that we can't make breaking changes to the standard library. And intuitively, you're like, wait, if I can change crates, and the standard library is just a crate, why not? Well, I mean, it is just a crate, but it's also kind of not just a crate.
You get one copy of the standard library for your whole program. And so if you had a dependency that used Rust 2018, and then you use Rust 2015, say, then you would need both copies. So it doesn't like really work that way exactly. So we're able to sort of deprecate things, but we're not able to remove them.
And there's some talk of maybe having some sort of visibility situation where they exist, but there's visibility rules where this is only allowed to be seen in 2018 versus 2015. Those are proposals, and they're not real yet. So this is kind of half a true technical limitation and half a sort of social limitation.
Somebody asked a question on Twitch: how does a deprecated feature move from deprecated to removed, and can that be done between editions? So one interesting thing about this is that some of this stuff is kind of up in the air, policy-wise. Basically, things cannot actually error
on the same edition. So usually the way you think about it is something becomes a warning in the old edition, and then an error in the new edition. And the original RFC talks a little bit about code that's warning-free on the first edition should compile. Maybe it'll have new warnings,
but it won't break on the second edition. But if there's warnings in your first edition code, maybe the second edition will break. I'm gonna talk a little bit more about this at the end, because the exact policy is a thing that we're talking about. So yeah, the rules are slightly up in the air, but originally that's what the intention was, was that you introduce warnings in the first edition,
you remove them in the second edition. There's some questions about whether or not that's too fast, or maybe we should require a whole edition to go by. We require language features to wait for a whole release nightly before we're able to stabilize them, for example. There's kind of a mandatory minimum waiting period. So maybe there would be a good mandatory waiting period
for deprecations. But yeah, so I hope that answers enough of your question. Hey, Jared. But I'll talk about it a little more. Okay, so why do we have these restrictions on some level? Like I talked a little bit about some of them with like there's one crate everywhere, but let's get into some details.
And I think this matters because this also really helps you understand like why certain things are allowed or why certain things aren't allowed. And it's also just kind of fun to talk about the compiler and how it works. So this next section is gonna be about editions, I swear, but it's also kind of how the compiler works because I think that's interesting and kind of matters.
So Rust is currently, sorta kinda, there's an asterisk here, I'm gonna get to it later, what's called a multi-pass compiler. And so this is like a classic architecture for compilers. If you take a compiler class at your university, this is how they teach you compilers work. A lot of compilers in the world are implemented this way. Basically, there's this concept of passes.
So you take in source code and then you spit out something. A lot of older compilers are called one-pass compilers, because they directly turn the code into the actual binary code. So if you think about it, a lot of older languages required you to define variables at the start of functions, for example,
that was because they had one-pass compilers. And so they needed to emit the stack space for those variables first. Okay, it seems to be back, cool. I don't know why that's happening. There's probably something on my end. But anyway, so yeah, so older compilers are one-pass.
That's also why a lot of them were super fast. It's because they didn't do a lot of this stuff. But over time, we needed things to be more complicated. And so then people developed multi-pass compilers, where you would do multiple steps. So you'd iterate over the source code in multiple ways, and that's how the output is produced. So the Rust compiler has the multi-pass architecture
as it traditionally existed, and it's similar to many other compilers. Basically, you have source code as the input, and it kind of goes through each of these steps in turn. So the first one takes the source code and creates an AST out of it, which is an abstract syntax tree. Then it takes that AST and produces HIR, which is the high-level IR. Then it takes the HIR and produces MIR, which is the mid-level IR. Then it takes the MIR and produces LLVM IR. And then LLVM takes that and produces the final binary. So there's a bunch of these steps. And within these steps, there are kind of smaller steps, and all of this sort of happens in sequence. We're going to talk about this in a little more detail. So compilers traditionally work in these three different phases.
That's why you have these passes. You do a full pass and a full pass and a full pass. All three steps. First one is like a lexical or syntactic analysis. So is your code well-formed? This is like grammar rules. So, you know, does the sentence I'm saying follow proper grammar or not? Does your program follow the language's grammar or not? Secondly is semantic analysis,
which does this code make sense? So for example, you know, I could say like, you know, this sentence is false and that sentence is grammatically correct. So it passes lexical or syntactic analysis, but semantically it's very unclear what it means because if it's true, then it's saying it's false, which means it's false, which means it's not true.
So there's some self-reference there. It's just the first example I could come up with. But, you know, you could imagine gibberish, for example, that uses all real words. Maybe it's structurally correct, but it doesn't actually make any sense. So semantic analysis makes sure that the thing that you've said is sensible. And then finally, code generation, which does not really have an analog in natural language.
I guess the analogy would be the vocal cords turning it into sound. I'm stretching this a little too far, but once we verify that everything works and makes sense, we finally produce the binary out of it. This step in itself has a bunch of different passes. For example, there are optimization passes that run and make the generated code faster.
So that all happens in this kind of stage. But you know, these are the three big, giant steps. If you've ever wondered how cargo check works, for example, cargo check will run the lexical and syntactic analysis and the semantic analysis, but won't generate any code. So this is an example of how understanding this can help you practically as a Rust developer. Use cargo check to check that your code makes sense,
but you don't actually want to run it. You can save yourself a lot of time by not making the compiler do code generation. So this is one example of how this architecture lets you do less sometimes. There are also a couple of terms for these steps and the transitions between them, some compiler jargon that you may be interested in.
So one of them is called lowering. That's the word you use when you talk about going from one form to the other. So for example, MIR is lowered into LLVM IR, or the AST is lowered into HIR. And the reason it's called lowering is that at every step along the way, things get simpler. We throw away things that we've already validated,
which makes future steps easier. So for example, MIR does not have the concept of complicated fancy loops. For loops, for example, don't exist in MIR. What happens is the previous steps in the compiler take your for loop and turn it into a plain loop with a break in it.
And so MIR only has to understand a plain loop with a break. We've removed a construct from the language by the time we've gotten to the lower step, and that means each successive step is simpler. That's why it's called lowering: you're breaking down what is happening into simpler and simpler things. And then finally there's a step called a pass,
which basically means some sort of check that validates that your program is well formed. It does not necessarily do a transformation, although technically it can. So for example, type checking is a pass on your code: it runs over your code and makes sure everything makes sense. A lot of the semantic analyses are passes. And sometimes they will do transformations. This is like, okay, I'm going to take your for loop and rewrite it as a plain loop. That happens first as a pass inside HIR, I believe, and then the lowering step is what turns that simpler form of HIR into MIR. So they kind of work together. Okay, so here's some example of actual code
going through these steps. And I did this about a year ago. So the samples are a little dated because of the fact that the compiler output changes all the time, but conceptually it's the same. So I figured it being outdated is actually a little better because you shouldn't get hung up on the specifics. So for example, let's talk about taking code and producing an AST. If we have this function called plus one,
takes an i32, adds one to it and returns it, we can actually ask the compiler to print out a JSON version of the AST. Because again, these are all data structures inside of the compiler; they don't really have a text representation, but you can print them out in formats like JSON. So for example, for plus_one, a little bit of it looks like this. There's a statements field, and that's an array of nodes.
And inside of there, each node has a variant. So this one is an expression, and then, you know, it's a binary expression, and that's add. You can kind of see how these little bits all fit together. And then we're adding x, and it keeps going on to talk about adding one and stuff. So you get this data-structure view of your code, and that's what Rust's AST looks like.
So here, you know, we take our statements and the statements refers to an expression, expression refers to a binary expression. The binary expression says, hey, we have an add expression that adds X and one together. So this is kind of like why it's called a tree is because if you see there's like this root, which is the statements, and then there's the branches and leaves that happen to build a tree.
So this is like what an AST kind of looks like visually. And so, yeah, fundamentally the AST is a data structure and it's the way that our code looks written in words, but is a data structure. This means it's easier to manipulate. Like, you know, if you just have a data structure, you can just manipulate it. Like that's what they're there for. But if you had to do it on the like textual
representation of your code, it would be much harder. So the idea is that we break the text down into a data structure, and then we do all our operations on those data structures. So from the AST, we move on to HIR. HIR is short for high-level intermediate representation. This is basically where well-formedness sorts of checks happen. What I mean by well-formedness is, you know, have you imported all the stuff that you've used? Things like that. And some things are simplified. So for example, in my understanding, HIR does not have use statements; they get turned into the elaborated, full versions of all the types. So as an example of the sort of transformations
that happen in HIR, here is a very simple for loop. There's a reason I referenced for loops several times earlier today. We take a vector of five integers and we loop over it, printing them all out. So the AST would take that literal code as written and represent it that way. But when it gets lowered into HIR, it ends up being something more like this. So this is still Rust code
that you could write in theory, but you'll notice the for loop is totally gone. We now have a loop with a match statement inside, where we turn the thing that we're iterating over into an iterator and we call next on it repeatedly, running the body of the loop each time, and all these things. So you can see how it's simpler in the sense that there are fewer language constructs, while it's more complicated in the sense that there's more code. Because the reason we write the higher-level stuff in the first place is that it's easier for humans to understand, but for the computer, having fewer constructs makes the analysis much simpler. So we do that kind of stuff. Most checks in the compiler today are done on HIR, at least in my understanding. HIR was the original Rust IR that existed. So the first version of the compiler, or maybe first is a little strong, but for a very long time, the compiler turned everything into HIR and then went to LLVM IR from there. So, you know, it was kind of the OG thing. So a lot of things are written in terms of HIR.
So two things that are still done on HIR are type check, which is like, do all the types make sense, have you made any type errors? And then method lookup. In Rust that's done at compile time; in some dynamic languages, method lookup is a dynamic process. But basically it's, you know, figuring out what trait you're actually calling when you call a method, or is it an inherent method rather than a trait method, those kinds of things. These are all done on HIR. Then we move from HIR to MIR. And MIR became the subject of a lot of discussion in the Rust world, so you may have heard about it over the last couple of years. MIR is ultimately about control flow. HIR kind of represented our code in the way that we wrote it, but MIR totally rewrites it into a simpler form,
but also one that's based on what's called a control flow graph, rather than an AST, which is a tree. Going from a tree to a graph is helpful for certain kinds of analyses; specifically, the graph represents the way that control goes through your program, like which statement executes in which order. And that matters because, for example, non-lexical lifetimes need to know how execution flows within your program to work. And so it was very difficult to write that pass on HIR. So this is a practical example of why this stuff matters: we had to invent MIR to make non-lexical lifetimes feasible,
because we needed to be able to encode that control flow to make it actually happen. And I say we because I'm on the team, but to be absolutely clear, I did none of this work; a lot of other really great people made it happen. Anyway, another interesting thing about MIR,
beyond the control flow move, is that it's kind of the core of Rust. It's everything that's rusty about Rust without any superfluous extra concepts. Like I said earlier, fancy loops are gone; everything is purely loops and breaks. I think it might even have gotos, if I remember correctly. And borrow checking is done on MIR because, as I said, of non-lexical lifetimes. MIR is kind of the core of what makes Rust Rust; it's the computer's representation of what Rust is like. And as an example of what MIR looks like, here's a dump. Again, this is a little old, and this pseudo-Rust is not actually Rust code, but it kind of looks like it.
Again, we're printing a data structure that doesn't have a real text representation. So here you can see our add_one function. It declares two variables, _0 and _2, and _1 is the first parameter. And then bb0: bb stands for basic block. To pay attention to control flow, we have this idea of blocks. And so it'll say, hey, StorageLive, meaning that the second variable is live in this area, which comes from an analysis about control flow. It's too complicated for me to get into, and I'm too far in the weeds already here; you can read the docs in the compiler if you want to see everything that's happening. But you can see our call to add, which adds the constant one into our variable two, and then it declares the storage dead and returns.
This is kind of what MIR looks like. Again, not anything you ever have to worry about as a Rust programmer, but if you want to see how the compiler sees your code, looking at MIR can be really useful. And then finally, MIR gets lowered to LLVM IR. LLVM is a compiler toolkit that you can use to build stuff. It's a VM in the technical sense of VM, but not in the way that everyone uses the word. It used to stand for low-level virtual machine, but the LLVM project changed its name so that it no longer references virtual machines, because that got too confusing for many people. So basically, optimizations and code generation are done by LLVM for the most part. There are a couple of optimizations that happen in MIR, and we hope to do more of them in the future,
but Rust would not be anywhere near where it is today without LLVM, so it's very important to us. It's kind of the lowest level at which the compiler operates: LLVM takes the compiler's output and produces the binaries, so we hand things off to that library for the last couple of steps. Andrew Leverette, sorry if I mispronounced your name,
is asking, what is the timeline, if there's one, to get SIMD support in stable? So SIMD is actually already in stable Rust, but only the x86 versions, and only the low-level unsafe primitives. So I think maybe you either, one, don't know that's true,
which is totally fine, but that does exist today, or two, you're talking about higher-level SIMD, which is like what you would want to write as a regular programmer instead of the intrinsics. I don't actually know that there's a group working on the higher-level stuff right now. The low-level stuff does work though on x86 at least, and given ARM's recent interest in Rust, I'm assuming that ARM assembly stuff
will be soon to follow. I think maybe some of it already works, but I'm not actually 100% sure. So yeah, still more work to do there. Don't know the exact timeline yet. Question by Jam1Garner: what would gotos be used for in MIR? Just more complicated breaks.
Basically, structured programming is useful for the programmer. We write loops because we don't want to have to think in gotos, but gotos are conceptually simpler because they are allowed to do whatever, and so they're actually easier for the compiler to understand than the higher-level constructs. I want to emphasize, I don't work on the compiler myself, so I believe that gotos exist,
but I don't actually 100% remember, so I might be a little wrong there. So if you care about this topic, you should look at the details a little more, but I believe that that's the case, but I'm not 100% sure. Finally, there's a question about working with Rust and Fuchsia. I will talk about that towards the end
because it's not relevant to this part of the conversation. So I'll get back to you, SpaceX Jedi. Don't worry about it. Thank you for the question. OK, if you're curious about what LLVM IR looks like, this is an example of the text version of LLVM IR. So you can see that the function's name is mangled, so it's add1, but with a bunch of other shenanigans
on top of it. And we add a number to that and return it. It's very straightforward, and there's a whole bunch of attributes and other stuff. So this is kind of like what we hand off to LLVM. It does optimization passes. There shouldn't be a lot of optimizing to do in this code other than maybe inlining it into other code we've written somewhere else. So all that kind of thing.
OK, the last thing I need to mention on the compiler architecture before we go back to how this works with editions, and I hope you'll forgive my little compiler tutorial here, is that I said before we were a multi-pass compiler, and that's true, but we're also working on making Rust query-based. So a lot of the compiler is already query-based.
And what that means is, instead of these kinds of passes where you take the whole source code and turn it into one AST, take the whole AST and turn it into HIR, and so on down the line, Rust instead uses this concept called queries to create an executable. So what happens is the Rust compiler will ask something like, hey, what type is this function?
What's the body of this function? And then the compiler will operate in that fashion instead. So instead of it being like, here's the source code of your program, turn it all at once, the compiler will say, where is this function? And then internally, the compiler will say, oh, we don't have this function yet, but let's load up the source code of where we think it is,
and then we'll look at the body and do all that work, and then figure out that way. And so it'll do all the steps, but in smaller chunks instead. And the reason this is useful is, first of all, memoization. So you're able to reuse results of these queries across different invocations of things. And so that's helpful for compiler speed.
But also, more importantly, it fits incremental compilation much better. And so if we're able to just say, oh, the body of this one function changed, that maps directly to one compiler query: get me the body of this function. And so then you would only do that part of the work instead
of saying, OK, you changed the body of this function, now we have to redo the whole AST, and the whole high-level IR, and the whole mid-level IR. This is the way that production-grade compilers are written today, rather than the way they're taught in school. This was started by C# with their Roslyn compiler. So if you're more interested in how this works, you should look into Roslyn, or rustc as we continue to make it happen this way. All the MIR stuff is written like this, in my understanding; some of the older code is not yet converted. But this would be how this stuff works. Also, if you've been following rust-analyzer lately,
it has this highly incremental model rather than the traditional architecture that compilers have. So that's how all that goes. OK, so editions aren't allowed to break everything. But what does that mean? So for the compiler, basically, editions
aren't allowed to differ by the time you get to MIR. What that means is a little fuzzy from the outside; basically, it means the core of Rust, MIR, stays the same no matter what. And that really matters for a number of different reasons. The first thing is that because it becomes a common language for all the editions, it's much easier for the compiler team to manage changes that are brought on by editions. All of the differences between editions happen at earlier stages than MIR. And so what that means is we can assume, no matter what, going forward, that MIR is relatively stable. Obviously it keeps changing internally, but I mean we don't have to switch on the edition
by the time you get to that step. And so everything is sorted out earlier. And this happens to make interoperating between editions easier, because your 2015 code and my 2018 code will both compile into the same MIR at the end. So that's how we can guarantee interoperability: they're both speaking the same language at the bottom of it. And this is the primary mechanism by which
things become interoperable. It's also the way in which we can control the amount of breaking changes: if you need something that would change things very fundamentally, then we can't do it, because we need to have this interoperability layer. On the human side, not being able to break things in MIR means that things can change per edition, but not that much, because the core understanding of Rust and what it is is going to be the same, no matter what the high-level details are. And so that's really important. It also means, because we're compiling down to the same thing and we have this interoperability, we also keep the ecosystem together, and that really matters. Hey, Jared has another question. Would adding more query-based compiler features
to rustc make rust-analyzer and other analysis tools more robust? Yeah, basically, that's exactly the reason that C# undertook the Roslyn project: IDEs have very different needs for a language than traditional compilers do. And so they kind of reoriented around what happens in an IDE. That's exactly why I phrased it as, oh, I changed the body of this function.
Let's not recompile the entire world, because that's what happens when you're actually in your editor programming, is you change little bits. Most of the program stays the same, so we kind of want to reuse that work rather than throwing it all away every single time. And so, yeah, it's definitely one of the reasons why those tools are better, is because they want this kind of architecture. So, yeah.
OK. And then another thing about, like, what is editions towards the end, you kind of think of editions as sort of like a bigger release cycle. So Rust already has three different release cycles. There's stable, beta, and nightly. And there's different cadences to those releases. So nightly is every night. Stable and beta happen every six weeks, OK?
So, you know, editions kind of are like a bigger release cycle that happens broader than versions. But we don't have a cadence for editions yet. This is kind of why this talk exists. And we're working on an RFC that I'll get to here at the end. But basically, like, we didn't actually decide whether or not this would happen on a schedule
or not when we decided to do it in 2018. So there's no policy in the initial RFC about when editions should be used, as I sort of just mentioned. It didn't talk about when. It just mostly was talking about how. And that way, we could focus on shipping Rust 2018 and not worry about it until later, because, you know, there's a lot of stuff going on.
And we didn't want to set that policy immediately. We wanted some experience with editions. We wanted to get 2018 going. And we didn't want to think about it, basically. So now, it's been a couple of years. It's time to start thinking about it. So 2018 was sort of the first edition, but we kind of retconned it to be the second one. So Rust 1.0 and Rust 2015 were the very first edition, so to speak. 2018 was the first one that kind of, like, changed things.
And so what I want to talk about next is, like, a little look back about how that happened, because I think that really informs what we should do in the future. And this is also, again, kind of, like, why we didn't pick a policy back then, because we wanted to be able to see how it went and then think about the problem later, rather than trying to invent it beforehand.
So let's talk a little bit about Rust 2018 and kind of how it went. So I think, overall, Rust 2018 was a success. We achieved our goals, even though it was a ton of work. But we did ship the edition. It did happen. People did manage to understand this was different than Rust 2.0, and they didn't, like, run away, thinking that we had totally destroyed our stability
guarantees. Obviously, there were some people that were kind of not happy that we did any sort of changes. But you can't please everyone all the time. But the fact that it was a different kind of mechanism helped people understand what we were trying to accomplish. And I would say that there are not any real major issues with the edition system itself at this point. There's some tweaks and things, but, like, you know,
the actual rollout of the implementation went smoothly for the most part. It was a lot of work. It was a really big project for all the teams. We never really had undertaken such a big project before, other than maybe Rust 1.0, which is, again, sort of kind of like an edition. And so we managed to do it, and that's positive. And that, I think, is worth celebrating.
However, Rust 2018 was also not a complete success, in my opinion. So there's kind of two different ways that I think that it really struggled. The first one was the schedule, and the second one was the team. So we didn't ship everything that we wanted to ship in the 2018 edition. Some things didn't actually get finished. Some things changed significantly,
and scope had to be cut drastically in order to make it under the release line. And so while we did ship the mechanism, and it was successful, it just barely happened. Like, the release, as I'm going to talk about schedules in a second, happened in December of 2018, the last Rust release of the year. It happened literally at the last moment.
And so that also is, you know, a sign that things weren't necessarily as ideal as they could have been. Secondly, it was the human cost of the edition, the team. Tons of people put in tons of work for a really long time to make this happen. And it was extremely high stakes, because this was a really big thing, and it was the first time we'd ever done it. And so that contributed to a lot of burnout amongst contributors, I believe.
I can only truly speak for myself. I was a total freaking mess by the time the 2018 edition actually happened, and I wasn't even the one implementing a lot of this stuff. I was just trying to, like, keep the book going and do some other work. So other people worked a lot longer and harder than I did, and I felt terrible, honestly. So I can only imagine with other people, you know, how they felt about the schedule.
So we did it, but, like, at what cost? And so, yeah, the 2018 edition shipped on December 6th, 2018, with Rust 1.31. And what's kind of funny about the half-shipping thing, too, is that some actual changes didn't ship until 1.32, but the edition itself shipped in 1.31. So, yeah, we shipped a bunch of different things. This is a screenshot from the blog post. So non-lexical lifetimes happened in 2018. The module system got simpler in 2018. We did some more lifetime elision stuff. Constant functions became a thing. Things like rustfix got added, and a whole bunch of lints; Clippy was able to do its job on stable, and that was a big deal.
Documentation got updated. We had the new domain working groups, a new website. Tons of things happened in the standard library. Lots of new cargo features. There was just, like, all sorts of stuff. Like, 2018 was huge. It's a really big deal, and we shipped a lot of it, and that deserved celebrating. But as I said, it was behind schedule. So the initial RFC had sort of this schedule. There's this idea of a preview period,
which is kind of like you can think about the edition being unstable. So the idea was that, like, okay, Rust 1.23 is gonna start shipping a preview of the edition, and then in 1.27, we're gonna, like, nail everything down and actually ship it, and that'll be when it comes out. So to put some dates on that, 1.23 was January 4th in 2018,
and the final 1.27 release was gonna be June 21st, 2018, so halfway through the year. And if you're paying attention a moment earlier, that was very different than what actually happened. So what actually happened was the changes landed in the nightly compiler on the 6th of February,
so that was almost one month later than the initial release was planned, and then finally, the actual release was in December, which is, like, six months, June to December, something like that, a long time later, and we kept thinking it was gonna be in, like, October, and then that slipped, and then November, and then slipped, and then December, we finally got it out the door.
So we almost missed the date itself, which is intense. I think that this happened because we tried to do too much. Partially, that was because there was a lot of work to do. Like, Rust 1.0 was a really small release, and there was a lot of things that really needed to be tweaked, and we had this opportunity to do it, so we committed to it, and we made it happen.
I think another part of sort of some of the problems was some of these ideas, we tried to move too far in front of the community, so, like, some of the things around, for example, the anonymous lifetime were added, and I'm not even sure that people, like, know about the anonymous lifetime or use it exactly, because it kinda got lost in the shuffle a little bit, but we had some ideas about some patterns, then it got scaled back,
and then some things got sort of, like, cut or barely shipped, and we tried to, like, move too far ahead and think about, like, what things should happen, rather than taking what happened and, like, getting the good parts of it. There's always a balance to be had there. I think that added some stress. The module system was another example of this, where we, like, kinda invented what we wanted, and then there's a lot of changes and a lot of feedback,
and it was really, really difficult to get that through. Finally, the other reason why I think we bit off a little more than we could chew: we're really, really grateful that we have so many contributors to Rust, and we did at that time, but also, contributors are not employees, and it's much harder to plan a big initiative that spans a whole year when you can't be guaranteed that there are, like, these people
that have full-time amounts of work to work on it. Like, just from a project management perspective, you know, people are free to come and go as they please on the Rust project, but that also means that when you're trying to, like, get a huge thing shipped on a tight deadline, it's difficult to be able to rely on people having the ability to do stuff, and, you know, even if you are an employee, like I said, I got burned out. I didn't wanna do this anymore.
To some degree, and so, you know, it can just be really difficult with these kind of big initiatives in an open-source world, and I don't think that anyone has figured out how to really accomplish this yet. Another positive thing was we proved that the mechanism actually worked, so 2015 and 2018 interoperate just fine. The plan was good. It happened. You know, you don't have to worry about this thing.
We can have our cake and eat it too, which is the thing we always try to do in the Rust world. We didn't split the ecosystem. There's not a holdout of people that use 2015 that are sequestered from the rest of the community, and so that's really important. There are still people that program in the 2015 edition today, and they can do everything they need to do, and it works just fine, so that's, I think, a really important thing.
We've kept some coherence, and I think that it works so well because it's pretty much silent. Like, I don't think most users think about editions a whole ton, which is part of why I wanted to start this talk by putting out all the details, because it just kinda works. You don't really think about it, I think, for most people, and that's a testament to how good the thing that we shipped was.
But as a downside, we underestimated the cost of the edition to both the human element, but also to our users who needed to make these changes happen, so even though we put a lot of time and effort into making upgrading easy or simple, it wasn't actually so in the end, and that's just because there's a lot of moving pieces, so we had Rust fix, which was able to automatically upgrade your code, but it wasn't perfect,
and there were some things that couldn't necessarily update, some of the fancier features, and people still have to validate and test and do these kind of things, and so it just takes a lot of time. The compiler did not shift to Rust 2018 for a little while, neither did other big projects, and I think that's also okay. Like, expecting everyone to upgrade immediately is part of an unrealistic expectation,
and I don't think we totally had that, but there's definitely some people that thought that should happen overall, like in the community or on the teams, because they worked on smaller projects, but some production users reported that it took them a while to upgrade to 2018, and it was a significant, costly upgrade. Even if it's easy to do, it still takes time, and time is money to companies,
and companies are the ones that have the biggest Rust projects because they're paying people to work on things full-time, and we sort of put a big burden on a lot of our biggest users, and that's tough, so I think that's a challenge. I think another thing that happened with the 2018 edition is it became feature-driven instead of timeboxed,
so normal Rust releases are timeboxed. That is, there's a schedule, and the release happens; the train leaves the station whether you make it or not. But Rust 2018 became designed around features instead: we said, hey, what features do we want to happen, and then we figured out how to make them happen on the schedule. When they took longer to implement, that became an issue, and so a lot of the struggle was: we need to get these features out, there's this deadline, and they have to happen before that time. That doesn't happen with normal Rust releases, because there's always a new train coming in six weeks. Like, part of the reason why we picked the train model in the first place is because we knew
that having a yearly or longer release schedule was a problem, but then with the edition system, we decided to do exactly that anyway, and so I think that was, like, a big issue. It kind of felt like the lead-up to Rust 1.0, where there was this big giant release, and everybody worked super hard. You know, this was a huge amount of hard work and Herculean effort by folks,
and some burnout happened as a result, and so I think that was a big problem with 2018: getting too caught up in, like, the edition means this set of features, and we're going to ship them on this day. A great example of how we avoided that trap, very successfully, was async await. So, you know, we realized that async await was not going to make it in 2018,
and so we said, hey, we're going to reserve the keyword so that we have it in the edition, but we're not actually going to release the feature. What that meant was that the edition shipped in December 2018, but we actually shipped async await in 2019, almost a year later, and I think that worked out really well,
and it's a great model for how this should work in the future: we don't force everything to happen, and we don't tie the schedule of the edition to specific features. We kind of just say, hey, the edition is going to be on this cadence, and we're going to ship each feature when it's ready, so that's important. This is kind of blending into the last part of my talk here: what should we be doing?
So this is kind of like my opinion on where we should go from here. Basically, we should have a Rust 2021 edition. I know that every talk with a question in the title is supposed to be answered no, gotcha, Betteridge's law of headlines, but the answer is yes. I think that we should have a 2021 edition, and we should commit to a train model for editions, and we should have one every three years, no matter what.
I think that this should be smaller than Rust 2018 was for a number of different reasons. First of all, I don't think we have as much need as we did in 2018, but secondly, like, you know, I think that it will just go smoother if we don't do as much, and I also think that a lot of people are sort of craving stability in the Rust world, and so I think that that's like a very positive thing.
So the edition would be much smaller than 2018, but I still think we should do it anyway. There's a question from Emanuel Lima in the chat: does Mozilla hire engineers to work on rustc, and would that help diminish burnout? No. Mozilla does pay some people, but it's a very small amount of the overall team,
and there's no indication that Mozilla is, like, gonna hire 10 more people, which is, like, what we would need. So yes, while they could, in theory, do that, I don't think that Mozilla is going to, and I don't think that just hiring people is always the right solution either, because, you know, it depends. Like, we could hire from the community, and that would be great, because they already know what's going on,
but that doesn't necessarily mean that's the right call, because, like, not everybody wants to do Rust as their job, and, like, it's complicated, basically. It's true that hiring can help diminish burnout, but on some level, work can also force you into burnout, because when it's your job, you have to do it even if you don't want to. So, like, deadlines are somehow weirdly more stressful with jobs than they are
with open-source projects, I think. So, you know, like I said, I was a Mozilla employee at the time of the 2018 edition, and I got burned out anyway, so I don't think it's a pure solution. I do think that it can help in some cases, yeah. Okay, so release trains. Like, the core of my argument for the 2021 edition is that release trains are good,
and editions are a kind of release, so we should have a train, and that should just be the end of it. Like, we should no longer pick feature-based releases. We should just always do trains, because they're a far better way to ship software. If six weeks go by without a new feature landing in the regular codebase, we still put out a new rustc anyway. Some releases are small, some are big.
You know, that's just the way it goes, and so I think that if three years go by and we don't need any big breaking changes, that's totally fine, but we should still release a new edition. Sprudalelle asked a question in the chat that I'm gonna be getting to very shortly, so stay tuned. But yeah, like, basically, I think the train model has been wonderful, and I think that we should do it with editions too.
There's a number of reasons. I'm not gonna get into all of them, but basically, like, the shortest version is: it's a model that works, and a model that we've demonstrated is feasible, so I think we should just do the same thing, but on a longer schedule. Some people argue that the social part of editions shouldn't matter, and that we should only make editions happen
when we have the feature need, but I think that that's wrong. Some of this is just the classic argument that happens back and forth every time an edition comes up: looking over the past three years matters, getting people outside of Rust to pay attention matters, and all of that still happens. I think that smaller editions are actually a nicer marketing message, so shipping them is somehow more important in some ways
than a big breaking-change release, because it's like, hey, it's been three years with Rust, and we don't feel the need for any big breaking changes, because we think the language is good enough. That is a very strong and powerful marketing message that makes people want to check things out. People are still, like, leaving comments on the internet all the time about, like, I don't know that Rust is actually stable, because of the six-week cadence.
Like, people don't necessarily know how much time and effort we put into making sure those releases are stable, and so, you know, every six weeks seems fast, so people like the idea of, oh, I only have to deal with Rust every three years. It feels better to a large number of people, so I think that really matters. So, yeah, there's another question on Twitch
I will get to in one second. So, okay, Sprudalelle, here is your question on the slide. So, what features should be in Rust 2021? I actually don't care, personally. There are some features I want, obviously, but I think that we should do it regardless, and I don't think that we need to have a specific feature to justify doing an edition.
In fact, I kind of think the argument is stronger purely on the, like, release-engineering and marketing angles, and not on the feature-based things. Part of this is also because, like, consistency and scheduling overall is key. If something misses a train, it can get on the next train, and that's fine, but, like, if we release editions based on features,
then every time we have a new set of features, we have to re-litigate this entire conversation. We already did it once with the 2018 edition, and we're doing it now to set up this policy, and I would not like the Rust teams to be arguing every three years about whether or not we should be releasing an edition, or arguing even more often than that. Like, you know, if we do it on a feature basis,
then it happens at random kinds of times. If we just say, like, hey, this is the mechanism, it happens on this schedule, period, then we can free up our time to actually work on making Rust a better language, rather than worrying about this policy, and so I think that's actually, like, the most important kind of thing.
I will answer that question, Predko, afterwards. So, why a three-year cadence? It's just, like, a nice length of time. C++ has a three-year cadence as of late, and yearly is too often, because, like, there's not enough stuff to sort of accumulate to make it a big enough deal every year. Five years would be far too long in my mind,
so I think three is kind of a nice compromise, and it's similar to C++, so I think that works. Specifically, I bring C++ up because C++0x, I think, is a cautionary tale that we didn't really learn enough from before the 2018 edition, and it feels very similar to the 2018 edition to me.
So, the first draft of that C++ standard was in 2008. C++03 was the previous version of the standard, and so Bjarne and some other people were hoping that this would be C++08 or 09, so they named it C++0x. The problem was that they ran it on a feature basis, and it took them so many years to get through all the details.
It ended up shipping in 2011, and so people used to joke that it was becoming C++0A, because with the 08-or-09 scheme it had slipped past 2010, so it should have been C++10, but they had already said 0x, so the x had to be read as a hexadecimal digit. So, like, you know, this demonstrates some of that kind of conflict, and that's when they decided to move to the train model for C++ releases: we're releasing every three years,
because, like, this was just not great, and so I think we had a similar kind of learning from 2018, so I think that that matters. I'm gonna answer some more of these questions afterwards, because I'm almost done with my slides, and then we can get into sort of the details. So, specifically, and this is kind of, like,
the second part of your question from earlier, Sprudalelle. So there was this comment on the internals forum recently, like, hey, the roadmap for the year, what things are we actually gonna do in 2021? So even though I don't think it matters, I should talk a little bit about what I see happening. So this is the text from the original RFC. I'm not gonna read all of it, but if you wanna go read it, you can,
but what we talked about this year was how we prepare for an edition: the goal should be that any changes we make for 2021 are completed by October of 2020, so we should know what's going on. You know, it's the end of July right now, so that gives us a couple months, but we should have the plan, like, in place. And also, we had not decided whether we were doing a 2021 edition or not, so we should also decide that we're gonna do it,
and so, like I said, this is my opinion, but technically the project has not decided, so that's also the work to do immediately, and part of deciding to do it is sort of deciding what should be in it too. So there were some things in the RFC about what might happen. We talked about how maybe, you know, error handling might be a good thing to address
because it's a big topic lately, or improvements to the trait system, or improvements to unsafe code, but the goal is kind of, like, to figure out what that is, and so we should have some specifics on exact features kind of soon-ish. The language team has had a few discussions on this, and as far as I know, the goal is still to have a specific plan about editions in October.
But as I said, we still have to kind of also decide if we even want to do an edition. My audio cut out. OK, so it's not formally done yet. So we also need to plan that. And so I think that even the stuff the language team is talking about is nothing like 2018. The scale is very, very small overall. There are kind of smaller details about things.
This is an example of one thing that's been talked about. I don't know if this will happen or not, or what the chances even are. But there's been some discussion that unsafe functions should be able to call unsafe code in their bodies without an unsafe block, or that you should need an unsafe block; I forget which way it goes. But there are things like that, really tiny edge-case kinds of things,
and nothing like the module system is being redone. So I think anything that happens in 2021 will be important and will matter, but it will be relatively small changes, not like what we saw happen in 2018. There is going to be an RFC on this question of policy. Niko and I were supposed to be working on this, but honestly, I got busy with my new job, and Niko did most of the work.
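To make the `unsafe` detail from a moment ago concrete, here is a hedged sketch of my understanding of that discussion (often referred to as the `unsafe_op_in_unsafe_fn` idea; the exact rule was still undecided at the time, and the function and names here are made up for illustration):

```rust
// Sketch of the discussion: today, the body of an `unsafe fn` acts like
// one big implicit unsafe block, so the raw-pointer dereference below
// could be written bare. The change being discussed would require the
// explicit `unsafe { }` shown here, which is already legal to write.
unsafe fn read_first(ptr: *const u8) -> u8 {
    unsafe { *ptr }
}

fn main() {
    let data = [42u8, 7];
    // Calling an `unsafe fn` always needs an unsafe block at the call site.
    let first = unsafe { read_first(data.as_ptr()) };
    println!("{}", first); // prints 42
}
```

Either way, the call site keeps its `unsafe` block; the debate is only about whether the function body gets one implicitly or explicitly.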
So we should have something published relatively soon to answer this question, and then we can have this discussion as a community overall. It's not that the teams don't get to decide; they do, but we want feedback from everyone to talk about all the details. So that stuff kind of matters, so you can expect to see an RFC in the near-ish future talking about the policy of whether we're having an edition
or not, and then, if we decide we're having one, the language team would decide what actually goes into the edition. So that's all going to come up in the next few months. So thank you for listening to me talk about that for an hour. That's the end of my presentation, and now there are some questions in the chat I definitely want to cover. So I'm going to go from the start to the end of the ones I did not answer already.
So OK, there was a question from someone whose name has a lot of numbers in it. I don't know how to pronounce it; I'm sorry. Does it seem reasonable to you to make an edition a snapshot of already-shipped features, so keeping features opt-in until the next edition, while making them shipped and stable as opt-in? I think that's nice in some ways, and I think it doesn't always work in some other ways.
So what I would like to see is that, for features that need an edition break, you're able to opt into the edition on nightly. It's kind of like that preview idea that happened before. So basically, the feature exists, and we're able to try it out on nightly, but it's only about tweaking the edition rather than tweaking all the details.
And I think, wherever possible, we shouldn't use the edition mechanism. Making things a breaking change should happen only if they have to be. So you talked about the async keyword being a breaking change, but the union keyword, for example, was a contextual keyword, and we didn't need to do that one in an edition, so we didn't. We could have, and maybe it would have been simpler.
But I think that where we can ship something on all editions, we should, and we should only use an edition when we actually need to. So that's the thing that matters. OK. So Pritko Sylvester asked a question about why we don't have function overloading.
Is this a historical situation, or is there some design background? It's not exactly related to my talk, other than that, in theory, this is a big feature that would make sense for an edition, I guess. So, I mean, one simple answer for why Rust doesn't have function overloading is that no one has ever written an RFC to suggest adding it. Some people have written pre-RFCs, and some people have done some of the work.
But at its core, most features in Rust don't exist because people haven't done the design work. Now, this is a little controversial with this specific feature, because, like I said, some people have gotten part of the way there, and I think there's actually an open RFC right now. So maybe what I want to say is there's no accepted RFC.
But I think that specifically function overloading is interesting, because Rust does have function overloading today in terms of traits, sort of. So it's not the same as when you think about, I write a function with different signatures, and you figure out how that works. But you can do that through traits, sort of, kind of. And that is, I think, enough for most people,
that the idea of regular overloading is a bit too much. I think also this is hard, because function overloading, optional arguments, and named arguments all are three features that are technically separate, but are something, as a language designer, you kind of want to consider holistically. And that's a really big design space.
And so there's also some people that are interested in those other two features as well. But, in my opinion, and I'm not on the lang team, so again, this is my personal opinion, these are big features that I don't actually think buy you that much, personally. So I don't really want to see these features added to Rust.
But that doesn't mean they won't happen. Like I said, I'm not on the relevant team. So my opinion is just contributing as a community member, just like anyone else's. But I think it's a huge design space. And I think that most of the benefit is already there. And so I personally don't think that it pulls its weight. One thing I will say on the named arguments front,
though, is that with rust-analyzer displaying the names of arguments inline in the editor, I have found seeing them more useful than I would have thought. So it's actually made me feel a little better about named arguments than I would have otherwise. But I think that, again, rust-analyzer doing that gives you all the benefit without changing the language. So putting it in the language seems like a lot of work that doesn't really get you a ton of benefit, for me personally.
I think the core challenge is that it's really hard to put all three of those features together and design them all at once. That's a huge space, and it's very difficult. So I don't blame anyone who cares about this feature and has put in some of the work; it just hasn't happened, because it's a really big topic and a really big area. It's really hard. So Flora B is talking about being in the "only make editions happen when there's a need" camp. A big part of that is that the social and technical aspects are conflated. Wouldn't it make sense to separate them more? The edition is a social event, and the edition is a key in Cargo.toml. I do think that this is a coherent point of view, but I just personally disagree.
I think that part of that is because you need the social side to advertise the technical side. They're inherently intertwined. And I know as programmers, we really like to separate out things, and we want everything to be in its own little box. But I think that specifically, with big, major changes to the language and ones that are breaking
that you're going to have to learn about and stuff, there has to be some sort of social component of letting people know that those things exist. And so yeah, you could argue that we should celebrate what we've done on a different cadence than the release of the actual breaking changes. But that just means we have two major points where
we need a bunch of communication. And I think that it all just works better by leaning into the fact that they're actually not separate. I do know that there are some people who feel very strongly that I'm wrong on this, and that's also totally fine. Just for me personally, I think that it's not really possible to untangle them, and it's actually a good thing that they are together.
Because you need to be able to communicate those things. You need to make it work out. So, I don't know. OK, Jan1Garner asks: should the six-week-cycle changes also be listed in the 2021 announcement, to be closer to the other language releases? So yeah, what we did last time was basically that: the blog post that I showed the title of included all the changes together. And you could argue that maybe they should be two separate posts. What we actually did was blog before the edition happened to talk about what was coming, and then the release post was more about describing the changes in detail.
So that's a way to split it up a little bit, to some degree. I think that when you start to separate it out too much, you get into the exact problem we were talking about with the last question, which is that they need to be around each other, because you want to communicate them at the same time. So I do think that including them all in one post, or at least within about a month of each other, is the way to go.
But yeah, I hope that answers your question. And Jack asks, would it be, Jack asks, to be clear, would it be another three years till it's available or just available in the next RUSC? So basically, I think that we should be flexible a little bit about the exact time we stabilize
the edition. I don't think we should commit to a specific release because I think that's what introduces a lot of the problems. It should just be sometime in that year. And so I think this is part of also why it's like the Lang team has to have its stuff planned by October of this year because we want to be able to release it sometime next year. And so if we have stuff done in 2020,
then there's a whole year's worth of time to have the schedule happen rather than trying to say, OK, we're going to start thinking about this at the beginning of the year. We're going to ship it by the end of the year. Just like if we're ready in advance, that helps alleviate that schedule pain. Excuse me.
OK, oops, talking too long. Also, wow, OK, I guess I just totally lost my voice. Sorry. All right, so I also think that having time before things are released matters,
because putting something in an edition that has a breaking change, as I said, impacts people a lot, so we should do it very carefully. And I think that one of the problems of 2018, too, was we had these grand plans, and we threw them all in together at the last minute, in some sense. Like, we should be more deliberate about what
goes in the edition. And therefore, we should plan it earlier in advance, because that means we're only willing to do it for things that really, really matter. And so I also think that things should be planned ahead of time because of that. So yeah, it should land in rustc at some point during the year, out of the three years, but I don't think the specifics matter that much.
And so another comment on that question was: what does it mean if something misses the train? Like, say we had a GC, which is not a thing that could happen, as you say, but it missed the train. Would it be another three years? Yeah, it would have to be another three years, because what missing the train would mean is that the stuff that needs to be backwards-incompatible would miss the release. So imagine if we had not made async a keyword in 2018. It would have to be a keyword in 2021, and that would mean async await couldn't have shipped yet, because the time wouldn't have come. So I think the feature implementation missing the train is totally fine. That's what happened with async, right?
The breaking change made it in, but the implementation didn't happen until later. Great, I think that's totally fine. And in fact, I think that's the model that should happen: for the stuff that's breaking but not ready yet, we don't worry about making sure the implementation has actually landed by the time the release happens.
And some of this, I think, is because it takes time for the ecosystem to catch up. So for example, with async: even if async await had landed on the day of Rust 2018, the ecosystem would not have been ready yet. And for most people, well, it's already been almost a year since async await happened. Time flies. And the ecosystem is really still only now finally getting
into a production-ready state. Async await wasn't really a 2018 feature; it was really a 2021 feature. And so I think a lot of the edition work should be looking back over the last three years rather than looking forward to the new stuff. In some sense, the stuff that we reserve in this edition gets used by folks and only really becomes part of the Rust world in the future. And I think there's some weird interplay there, and that's also why it's kind of hard to do these bigger, macro-level changes and think about things. So I hope that answers that question. OK. Andrew Lefebvre asks: have there been any advances with having a unified async runtime, instead of having conflicting runtimes like Tokio and Actix?
So yes and no. I think that there's two things that are tied up here. One is that I don't think it's possible to have a unified runtime for everything because Rust is used in too many places. So for example, I'm a big fan of both of those projects. But I also do embedded Rust work now.
And there's some stuff that maybe would be good as asynchronous. I'm not going to be running Tokio on my embedded device. So I would need an embedded executor that makes very different trade-offs than Tokio. Tokio is built for network services specifically. So it's made several trade-offs that make lots of sense for those things. But it wouldn't make sense for my microcontroller.
By the same token, if I was building a web service, I wouldn't want to use my microcontroller's async executor in bulk, because it makes the wrong trade-offs for me. So I think it's not possible, for what Rust is trying to accomplish, to have a unified runtime. That said, it should be possible to make libraries runtime-agnostic, so that it doesn't matter what runtime I'm using.
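As a rough illustration of the trade-off point, here is a deliberately naive, busy-polling executor built only on `std` (something in this spirit, though more refined, is what an embedded environment might want, while Tokio makes completely different choices for network services); the `async fn` itself is runtime-agnostic because it depends only on the `Future` trait, and all names here are made up:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: acceptable for a busy-polling executor,
// since we re-poll in a loop instead of waiting to be woken.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// A tiny "runtime": poll the future in a loop until it's ready.
// Real executors differ exactly here, in how and when they re-poll,
// which is the trade-off being discussed.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    // Safety: `fut` is shadowed and never moved after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(value) = fut.as_mut().poll(&mut cx) {
            return value;
        }
    }
}

// A runtime-agnostic library function: it depends only on `Future`
// from std, so it runs on any executor, including this toy one.
async fn double(x: u32) -> u32 {
    x * 2
}

fn main() {
    println!("{}", block_on(double(21))); // prints 42
}
```

The point is that the library code at the bottom never names an executor; only the binary that drives it picks one.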
And that's kind of one of those open things that's still being worked on. And I don't think this is the kind of thing that happens from a big top-down perspective. I think that it just takes time in the ecosystem for folks to hash out what they need and what's going on. And we compare experiences, and we consolidate where possible. So there hasn't been any big news on that front lately.
But all the people involved are still working on it. And I think everyone cares about making things agnostic, at least conceptually, because nobody wants things to be locked down into one specific area. So it's just really tricky. But a lot of libraries are agnostic today, but not as many as there could be. And there's still more work to do there, for sure.
All right, another question, from legleg11: is there any intent to sync the edition cycle with major distros' and/or Windows' major release cycles? Because that might be useful for CentOS, Debian, et cetera. Not really, because the problem is that all of those things have different cycles, and you can't really
release on the same schedule as all of them. Like, you're going to leave some people out no matter what. So really, we kind of just gotta pick our own schedule and be done with it. And that's sort of unfortunate. I definitely agree with what you're getting at; that would be really nice. But, like, it's not really possible, basically. And yeah, I am doing great now.
Thank you. It took a while to get over the burnout. OK, so, best question. Basically, there's only 15 minutes left, and I've almost lost my voice, as I said. So there are still more questions I have not answered; I apologize if I did not get to them. But we pretty much don't have a ton of time, so I'm going to now pick somebody
who asked a great question, to sort of raffle off the book. So we're going to leave it at that. So let me think here. Let's go with Flora B's question, because I think that it was big, and it was on topic with the talk.
And it was against what I'm saying, which is always fun, asked in a nice way, and a thing that matters. So Flora B on Twitch, you're getting picked as my best question, although I do appreciate all of them. And I believe that means you get a copy of Rust in Action, if I remember the details right. So that's cool. The last couple questions I didn't really get into,
sorry about that. But if you want to email me, I'm happy to talk over email or whatever else. So thank you so much for listening, everybody. My voice is dying, so I got to go.