
A Historical Architectural Tour of PowerShell

Formal Metadata

Title: A Historical Architectural Tour of PowerShell
Number of Parts: 60
License: CC Attribution - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Production Year: 2018

Content Metadata

Abstract
Join one of the original PowerShell team members for a look at how PowerShell has evolved over time. We'll look at both the technical and environmental drivers behind many of the decisions that made PowerShell what it is today.
Transcript: English (auto-generated)
Hi, I'm Bruce Payette. In this talk, we're going to try to trace the evolution of PowerShell from the very beginning up until I run out of energy and we run out of time,
because this is a 45-minute talk with three hours of content. I want to look at some of the cultural, technical, and business forces. There are artifacts in the source code, in particular, that have been impacted by some business decisions in the past.
This is the first round of at least two rounds of this talk. The ultimate goal is to gather all this information together into a document that will go on GitHub, so that it becomes part of the contributors' package, so that when people come, they can deeply understand what we're doing with PowerShell and why we're doing it that way.
And hopefully that will help people more easily evolve the language in a coherent way. So in the beginning, things always start there. PowerShell started in 2001. I joined Microsoft in December of 2001.
There was a project that was being funded out of the India Development Center, IDC, in Hyderabad, India, to investigate the idea of a new shell. The code name at the time was Kermit, because the person who was organizing this project had a child who had a
kid's book about Kermit the hermit crab who lost his shell. I think we kept the Kermit name for about six months; it does not show up in the source code anywhere. The project was staffed by people coming from the Services for UNIX team. Part of Services for UNIX was a product called Interix, which was an extended version of the POSIX subsystem for Windows that was almost fully Unix compliant. It made sense to have people who had some experience with the shell and utilities actually do a new shell. Why did we do it? Especially given that, at the time, Microsoft was very GUI-centric and the command line was pretty deprecated for a lot of stuff. We had commissioned a survey that had gone out and asked people about doing automation on Windows, and the survey had returned the result that it took ten times as long to automate something on Windows versus Unix, which is a bad thing, because we had started to care about Linux and Unix a lot at that point. It was clear that it was going to be a significant competitor in the future. And finally, there was an evolution in culture.
We were moving away from the era where you could have one machine in your data center and you were good. The number of machines in the data center was skyrocketing and automation was very important. So we have all of this going on. There's one thing that's really important that's missing and that's a certain gentleman sitting over there, Jeffrey. Where's Jeffrey in all of this? Because he wasn't actually involved in the shell project at the time.
However, he was over in Building 42, madly prototyping a new type of command line based on a radically different set of concepts: a command line based on the composition of functional units, or FUs for short, with an extended, beautiful type system. So you could describe a command line as being FU, FU, FU and really get your anger out. This prototype used a different composition model than what eventually showed up in PowerShell. If you look at the slide, you can see the second-to-last code line from the bottom. I think we were doing a verb and a noun at a time, and the separator was a slash. But conceptually it was more a case of doing extensions on existing commands rather than composing a set of discrete commands. When we looked at it as part of the design review, we decided we would go with something that was more similar to the traditional Unix model. There are functional aspects that changed, but not so much from the prototype. And that's a discussion that's too long for now.
So, Monad begins. First thing we had was a bunch of guiding principles. If you haven't read it, I'd recommend reading the Monad Manifesto. It dates from that time, but it's surprisingly still very, very accurate. And if you really want to understand what Jeffrey was thinking at the time, this is a great document to read.
We had a bunch of principles. Easy to use is very important, maybe more important than being simple. You needed to be able to get things done, to deal with sophisticated tasks in an effective way. And so (this is probably going to get me in trouble) consider C# versus Visual Basic: Visual Basic is very simple, but you have to take a lot of steps to get something done. With C# or PowerShell, you can get stuff done with fewer steps, but it requires a little bit more sophistication. So we wanted it to be easy to use for the sophisticated user.
But learning is also very important, and it's hard. So we need to facilitate learning and protect that investment. This is the sacred vow that Jeffrey talks about. If you learn something, you won't have to unlearn it. And one of the big drivers on this was consistency. So if you learned a principle once, then you'd be able to apply that principle over and over and over again.
The one thing we wanted to avoid was the foolish consistency, where everything has to look the same. And there are still maybe a couple of cases in the language where we were a little foolish. We also wanted to address the tension between "whipitupitude", which is a Perl word, and production coding.
The survey had said that sort of ad hoc automation was very important for Windows. So the ability to write small scripts, or even to do things directly on the command line, was really important; versus production coding, where you want to be able to support the code that you're producing down the road. This results in a very wide dynamic range in the language. And there are a lot of "ands", right? Methods and cmdlets, simple functions and advanced functions, command line and GUI (one of Jeffrey's favorites).
We also have some features that let you sort of scale your syntax, with things like aliases and short options. So you have the line at the top, which is the fully elaborated line, and it's very readable because everything's fully spelled out. And then the line at the bottom, which is a little less than half the length of the top line, is the same command but condensed, so it's much quicker to type. The bottom line is good for the command line and doing quick things, but it's not recommended for putting into a script.
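To make that concrete, here's a hypothetical pair in the same spirit (not the actual slide example):

    # Fully elaborated: everything spelled out, very readable, good in scripts
    Get-ChildItem -Path C:\Logs -Filter *.log -Recurse
    # Condensed: alias, positional arguments, unambiguous parameter prefix; quick to type
    gci C:\Logs *.log -rec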
Reuse: super important, right? We wanted to leverage reuse as much as possible, sort of extreme reuse, if you will. So: common argument processing for all commands, ubiquitous parameters, common runtime services like the way wildcards are processed, the way paths are resolved, and so forth. All of those services are made available by the PowerShell runtime to people who are writing cmdlets or scripts. Common formatting and outputting: nobody has to write formatting code in their cmdlet; we take care of that for you. Another one: people time is more important than processor time or disk space. At the time, it was reasonable for everybody to have a whole system to themselves, so throwing away resources to make the user faster was a non-issue. That's changed a little bit with multiple VMs on a physical machine, with IoT and resource-constrained environments. So some of the earlier decisions that we made
regarding resource consumption are biting us a little bit now as we move into these more constrained spaces. And you can see that in PowerShell v6 we're evolving and reducing some of the footprint. We wanted to provide friction on negative paths, to use a probably incorrect analogy: we'll let you shoot yourself in the foot, but we won't hold the gun. So we have -Force on destructive operations, and we have the whole -Confirm mechanism to confirm operations that are going to make significant changes to your system. And just in general, we wanted aspects of the system that let you do something but didn't make it easy if it was probably the wrong thing to do; you had to think about doing the wrong thing. We wanted to facilitate experimentation: it should be safe to try out commands. And so we have the -WhatIf ubiquitous parameter, and then we have declarative annotations for commands and their parameters, so the logic for binding parameters and doing checking and so forth is done declaratively through a set of attributes rather than everybody writing their own null checks.
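A minimal sketch of both mechanisms; the Remove-OldLog function is made up, but the attributes and -WhatIf are the real machinery:

    # Safe experimentation: preview a destructive operation without performing it
    Remove-Item C:\Temp\*.tmp -WhatIf

    # Declarative parameter constraints instead of hand-written null checks
    function Remove-OldLog {
        [CmdletBinding(SupportsShouldProcess)]
        param(
            [Parameter(Mandatory)]
            [ValidateNotNullOrEmpty()]
            [string] $Path
        )
        # ShouldProcess gives the function -WhatIf and -Confirm support
        if ($PSCmdlet.ShouldProcess($Path, 'Remove')) {
            Remove-Item -Path $Path
        }
    }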
The project begins and we acquire a team. I said it started out of IDC in Hyderabad; the architectural owners were all here in Redmond, but the dev team was going to be in India, for a bunch of reasons. But it turns out that radical experimentation does not flourish with this level of geographic distribution. At the time it was a 12-hour time difference (or 12 and a half, depending on daylight saving time), and we were just using mail to communicate, and occasionally video conferencing.
So it wasn't a great environment. I think it might be better now that we have better tools for communication, GitHub and so forth; geographically distributed development might succeed where it didn't work for us then. So anyway, that didn't work.
First we pulled back development, with the idea that testing could still be done there, but even that wasn't working very well because of the day lag on everything, because things were changing so fast. So that didn't work out. We went shopping and we acquired another team. This one was built out of team members from the
Windows Management Pack team, which was great because they had a lot of management experience for Windows. Now we have a team, we can begin for real. Oh yeah, and be ready to ship in two weeks. This was all in the Longhorn time frame. Does anybody remember Longhorn?
Yeah. So my impression of the Longhorn time frame was that Microsoft was starting to dip into the idea of continuous delivery. I could be wrong, but that's sure what it looked like, because we were supposed to be ready to ship every two weeks. So it was exciting. We did not have the infrastructure to support this
kind of thing. I'm now lost in my slides. OK. Anyway, this was clearly not an ideal situation. We would have shipped a PowerShell that had almost no features, that would have been not much more than Jeffrey's prototype, which was disturbingly functional, but still, it would have been missing a lot of the elements of a shell. You wouldn't have been able to write scripts in it, which a shell scripting language kind of needs to be able to do. So it didn't happen. That's good. We'll talk about that a little bit more later.
So now let's talk about some of the big ideas. We talked about the principles; they're very generic principles. The big ideas are more architectural characteristics of the engine, more concrete than abstract. One big idea is the idea of domain-specific vocabularies. If anybody's familiar with the Forth language, there's this notion of domain-specific vocabularies, where you would write a set of words for your domain environment. And this is different from a domain-specific language in that there are no syntactic elements that are different; it's just a set of nouns and verbs within your application domain. So we encouraged the use of a constrained set of verbs. We didn't enforce it at this point; we didn't start doing anything around enforcement until version two. Guidelines on which verbs to use made it very predictable. With a small set of verbs, the user can infer: if I need to get something, I probably need to
use the word get instead of fetch or whatever. Yes, the set of verbs has grown slightly over the years, but not very much; I think we've only added a handful of verbs since the very beginning, which is kind of amazing given how long ago it was that we were doing this. So we had verb pairings: start and stop, get and set. Again, prescriptive, so that if the user knew there was a start verb, they could infer that there's a stop verb. And then command aliases for interactive use, in support of that whipitupitude thing. There are two types of aliases. There are the canonical aliases, like gci for Get-ChildItem, which are specific to PowerShell. And then there are the convenience aliases, like ls or dir.
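For example (as it behaves on Windows PowerShell; the convenience aliases were later dropped on Linux and macOS, for the reason described next):

    # Canonical alias, derived from the PowerShell command name:
    Get-Alias gci       # gci -> Get-ChildItem
    # Convenience aliases, borrowed from other shells' muscle memory:
    Get-Alias ls, dir   # both -> Get-ChildItem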
This worked pretty well on Windows, but kind of got us into trouble when we moved to macOS and Linux, because we were now hiding some of the existing commands on those platforms, especially cURL. cURL was a big problem. So, a big idea: universal command line parsing. Unlike most shells, the cmdlets are not responsible for doing their own parsing. The common parser code does that and breaks everything into data structures that then get passed to command-type-specific parameter binders. So advanced functions and cmdlets each have their own parameter binder.
And you can find all of those things under the engine subdirectory of the SMA directory in GitHub if you want to take a look at how the parameter binders work. This approach gives us broad consistency across commands. Commands always have a dash. They can always be separated by spaces and so forth.
This works really, really well for all of our command types except native commands, that is, executables, because they still do their own command parsing: they take the argument string that they're passed and then they parse it up into arguments. And so the native command binder has to take its arguments and turn them back into a string, kind of like what we think the user meant when they typed them.
This works OK sometimes. Sometimes it doesn't. There are a couple of bugs in GitHub around this. And we're trying to fix them. We fixed some. But they're kind of hacks. They're not appealing from an architectural perspective. But they do make the user's experience better.
This is one of these cases where it's not quite the foolish consistency. But being super consistent makes the end user's experience a little rough. Declarative parameter constraints. Rather than writing imperative code to do all the checking, we have attributes to deal with that.
These attributes did a couple of things. One, they reduced the amount of code you had to write. Two, they reduced the number of error messages you had to write and internationalize and so on. And they also reduced the number of error messages that the user had to learn. So this was a win-win-win situation, except when it didn't work. You can get into situations that are sub-optimal, like ValidatePattern with a regular expression for a phone number: it doesn't tell you your phone number is invalid, it tells you that some strange sequence of characters, gobbledygook, is not matching this pattern.
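A sketch of that failure mode, with a made-up function and a deliberately simple pattern:

    function Send-Text {
        param(
            # The attribute does the checking and raises the error for us...
            [ValidatePattern('^\d{3}-\d{4}$')]
            [string] $PhoneNumber
        )
        "Texting $PhoneNumber"
    }
    # ...but the failure message quotes the regex, not "invalid phone number":
    Send-Text -PhoneNumber 'gobbledygook'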
So it's not a great user experience if you use that particular attribute. And then sometimes this whole idea of attributes just didn't work. This is a comment out of the Get-Process and Start-Process commands in the source code: basically, the process-name glob attribute was deeply wrong. I think it's a particularly funny comment. I'm removing this comment from the code, by the way; it has the programmer's name associated with it, so I'm taking it out. Providers. So we were sitting around in a conference room discussing how many get commands we were going to need.
And then somebody (I don't remember who it was exactly, or whether it was simultaneous invention) realized that, wait a second, if we had a standard set of cmdlets and just created this sort of pluggable model for hierarchical data stores, then we only have to have one get command, because it'll work for all of the stores. And so we developed what are called the core cmdlets: Item, ChildItem, ItemProperty, et cetera. And these sit on top of the providers, which, in the source code, are called namespaces. So if you want to see how all the providers work, look at that location, sma/namespaces. Again, in the effort to be consistent, we decided that everything could be accessed through providers. The file system makes sense. The registry is a hierarchical store, sure, that makes sense. Functions, okay, that's a little strange. And variables, and the environment, and things like the WSMan configuration (although that doesn't come until version two) could all be exposed to the same set of cmdlets. So you could navigate into these stores using cd, basically. And basic provider operations are also available through the variable syntax, so this was kind of dual-ported. You can, for example, define a function by assigning a script block into a name in the function: drive.
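For instance (the Greet function is made up; the drives are real):

    # The provider model: the same core cmdlets navigate any hierarchical store
    Set-Location function:    # 'cd' into the function drive
    Get-ChildItem             # list defined functions like files
    # Define a function by writing a script block into the store:
    Set-Item function:Greet -Value { param($Name) "Hello, $Name" }
    # The "dual ported" variable syntax does the same thing:
    $function:Greet = { param($Name) "Hi, $Name" }
    $env:PATH                 # the environment provider, via variable syntax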
A big idea: an extended type system. We wanted to build a management-oriented type system.
And we were going to layer it on top of the existing .NET type system. We had a wild dream of a central type extension system, where you'd be able to publish patches to the type system of running code. It was very cool, but it didn't happen. I leave this as an exercise to the audience. We worked with a variety of teams at the time, and a lot of people, not just the PowerShell team, thought this was kind of a good idea, having this management layer on the type system. And so we worked with them on some of the designs, but ultimately it got lost in the world of schedules, I suppose you could say. As near as I can remember, there's only one actual result, and that's the fact that arrays have a .Count property in PowerShell when they don't in .NET, so that you can always say .Count on a collection and get a number back.
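That one result, in action:

    $a = 1, 2, 3
    $a.Length   # 3 - what raw .NET arrays expose
    $a.Count    # 3 - added by PowerShell's extended type system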
We also had the synthetic type system that supported the idea of deserialized objects. A deserialized object remembers what it used to be; it doesn't deserialize as the original type, because the original type might not exist on that system anymore, but you can still work with it because it remembers what it used to be. And we also allowed you to create new custom objects that didn't have a type associated with them. You couldn't construct types dynamically in the CLR at the time (version one or version two, I guess it was); anyway, we didn't let you construct types.
Big idea: PSObject. This has come up a bit lately on some of the GitHub issues. Jeffrey's original type system only allowed you to do class-specific extensions, but we also had scenarios where we wanted to be able to extend an individual object, to attach information to an object. And so we have PSObject. PSObject is a class that wraps instances of other objects and holds all of their extensions; note properties, script properties, and code properties are all wrapped in the PSObject. It simplifies a bunch of things, because you always go through PSObject. But it also makes some things complex, because you always have to deal with the PSObject. C# code that has a parameter of type Object has to deal with a PSObject just in case, and then get the underlying object if it needs to. PSObject is mostly intended to be invisible to the script user, but it is not entirely invisible. It is part of the PowerShell API, which is part of the overall PowerShell user's experience, just to be really clear about that. But most of the time you don't need to worry about it. And then we have the typeless objects I mentioned earlier, the PSCustomObject type. The ability to cast hash tables into custom objects is a v3 thing, I think. But you could still do it in v1 with Add-Member; it was just harder.
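Roughly, the two styles side by side (the v1 form here is approximate):

    # v3 and later: cast a hash table straight into a typeless custom object
    $svc = [pscustomobject]@{ Name = 'spooler'; Status = 'Running' }

    # v1 style: build it up with Add-Member, one note property at a time
    $svc = New-Object psobject |
        Add-Member -MemberType NoteProperty -Name Name   -Value 'spooler' -PassThru |
        Add-Member -MemberType NoteProperty -Name Status -Value 'Running' -PassThru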
The implementation. The initial design was highly componentized. Most of the components in the source code live in System.Management.Automation/engine, so you'll find pretty much all of the pieces of the core PowerShell runtime in that directory or its subdirectories. Some of the pieces: in version one, there was runspace configuration, which was the set of initial things you had in your runspace when you started up; runspace configuration was largely supplanted by initial session state starting in v2. The language, the tokenizer, the parser, and all of the execution stuff was there. Execution context, which includes all of the stores, aggregating all of the various provider instances into one single object. The pipeline processor is the piece that runs pipelines, kind of obviously, and then there are all the command processors associated with that. And finally, there's the error handling subsystem.
The problem with this design is that you had a bunch of Lego blocks and you basically had to build your own shell. And it was not super trivial; it required a certain amount of understanding of how the pieces went together. It did mean you could do a very minimal shell. In theory, you could have the pipeline processor without the engine, or without the language at all. You'd have something that didn't have the language, that didn't have most of session state, that didn't have formatting and outputting, and it would give you a relatively small footprint. Anyway, to deal with the complexity, we decided that we would bundle everything together into an uber-class called the runspace. And what is a runspace? A runspace is a space where you run things. I've never understood why people find that hard. But people found it hard enough that we have a wrapper class called a session, for remoting, that sits on top of a runspace. And the runspace sits on top of something called the automation engine, so it kind of turtles all the way down. And then the runspaces are exposed through the APIs, but not necessarily at the shell level.
Context objects, so, execution context. I said we had a session that included a runspace, which included an automation engine, and they also included a context object. The context object held all the resources that the running engine needed, and it got passed around all over the place; every node in the expression tree of the compiled language had to get this thing. Eventually it got to the point where it was just too much trouble to keep passing it from class to class to class, so we ended up sticking it into thread-local storage, a thread-local variable, and it's accessible through GetExecutionContextFromTLS, a fairly straightforward name. You'll see that in
the code in a number of places. You'll see execution context all over the place, and then you'll see people calling this API to get their execution context. The execution context that's inside the engine is not the same as the execution context object that you see in $ExecutionContext, because that is a facade object that only exposes the public parts of the underlying object. (We'll talk about design patterns in a minute.) The command provider context did the same thing for providers; it provided all the provider context. By wrapping everything up into context objects, though, this gave us the ability to do multiple runspaces per process, or per AppDomain if you're a stickler for details. This means that we can do parallel execution, we can do runspace pools, all of those things. I'm not sure how deliberate that was; it was more something we did because it seemed obvious than an explicit "oh yeah, we should do this."
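A small sketch of what that enables through the public hosting API:

    # Two independent runspaces in one process, via a runspace pool:
    $pool = [runspacefactory]::CreateRunspacePool(1, 2)
    $pool.Open()
    $jobs = 1..2 | ForEach-Object {
        $ps = [powershell]::Create().AddScript("'hello from pipeline $_'")
        $ps.RunspacePool = $pool
        @{ Shell = $ps; Handle = $ps.BeginInvoke() }   # run them in parallel
    }
    $jobs | ForEach-Object { $_.Shell.EndInvoke($_.Handle) }
    $pool.Close()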
So, lots of design patterns. It was the era of Gang of Four patterns, and we used a lot of design patterns fairly informally throughout the PowerShell source code and the PowerShell design. We have the adapter pattern in the type adapters: we have a type adapter for various data stores, and .NET objects, synthetic objects, and so on all have type adapters. We used the facade pattern a lot for making all of the commands look similar. We used delegation in the command binders: the command processor orchestrates the cmdlet, but it doesn't do the cmdlet binding; it uses a separate delegated class to do that. And the cmdlet binder, for example, is used both by cmdlets, obviously, and by advanced functions. We have the factory pattern in the runspace factory, and the strategy pattern in command discovery, for how we go about finding commands. So again, if you're going through the source code, a lot of the stuff should look familiar because of all of the use of patterns. It should be relatively easy to figure out the role of
the different objects in the running process. Okay, now we get to the PowerShell language. Now, as I mentioned earlier, in the original architecture, the language was actually conceived of as being completely separate from the PowerShell engine.
You would be able to not have the language, or even to plug in another language. Somebody, at one point, had expressed interest in doing a cmd.exe-style language for PowerShell. But that didn't happen. I say that a lot.
In practice, the language has become pretty deeply mixed into everything because it's recursively used. The language uses the pipeline, which uses the formatting system, which uses the language. So it's pretty difficult to separate now. I think one of the reasons for making it separate
is we weren't sure that the language was going to be right. We had grave concerns about that early on. I think we're okay with it now; it's only been 16 years, I think we're fine. The language roots: where did it start? We started with the IEEE POSIX.2 shell grammar, which is essentially the Korn shell, which is more or less bash, all of which come from the Bourne shell. POSIX stood for Portable Operating System Interface, with an X on the end because it sounds cool; that's the official designation. There are big changes to the implementation, though,
because (and I'll talk about that more in a second) we tried to stick to the POSIX grammar as much as possible. We actually had a yacc grammar that we started with. And for a long time, I think, Jim Truher, who was the PM, was maintaining the language grammar in yacc and trying to update it. Of course, part of the problem is that PowerShell is not a context-free grammar. There are a lot of deals between the parser and the tokenizer, and tokens have different meanings depending on their context, so you can't really represent that type of grammar in yacc. And so after I don't know how many shift-reduce conflicts,
he gave up. We actually tried using other grammar tools to capture the grammar, and so far we have not succeeded. The source code is written in such a way that you can infer the grammar from the code. It actually has comments: all of the different parsing rules have comments associated with them saying what they parse. So you used to be able to do a grep through the source code looking for //g, and that would pull out all of the grammar comments. I think it's drifted from accuracy, though. So anyway, we started with the POSIX shell. The parameter syntax, though, was inspired to a large extent
by DCL, the DEC Command Language, courtesy of Jeffrey. So we have the long options and the use of the colon; I think those are the primary components. That's different from the Unix shell, which uses one-character options, or double dashes for long options, and so forth.
But it makes for a very readable shell. Then there were a whole bunch of concepts that we wanted to have that weren't in the POSIX shell. So we started by adapting elements of Perl. At the time, Perl was pretty much the dominant scripting language on the internet.
Perl had arrays, Perl had hash tables, it had regular expressions, all of the things that we wanted. Super significant is that Perl had CPAN, which was the archive of modules. The joke at the time was that CPAN was the language of the internet, and Perl was just syntax. And it was critically important
to the success of Perl at the time. And we have retained some syntactic elements from Perl. We used $_, dollar-underbar, which is the pipe operator that fell over. Just remember that: the pipe operator fell over. That's how you explain why it is the way it is. We use the at sign in a bunch of places: in array sub-expressions, and in @() for the empty array. We use the ampersand for our call operator. We use regular expressions all over the place. But not much else from the language. The reason is that, in the end,
Perl is kind of icky as a language. It also grew up from a shell background. But it didn't sort of rethink its things terribly clearly at any point. I haven't looked at the Perl 6 grammar these days. It might be cleaner. But at the time, with Perl 4 or 5,
you could basically dump arbitrary text into it, and it would be a valid script, right? You know, line noise should not compile. We wanted people to be able to distinctly tell whether their script was valid or not. So we switched our syntax model to align with C#, which essentially means that we're aligning with everybody else that derives from C, which coincidentally also includes Perl. So that's sort of ironic. And our value proposition became, in part, that if you learned PowerShell, then you could move to C#; you should be able to reuse a lot of your knowledge in C#. And likewise, if you knew C#, then you should be able to pick up PowerShell pretty easily. So again, we want to protect the student's investment; learning is hard. I'm not entirely sure how well this worked, because I see people who keep wanting to turn PowerShell into C#, which is a little weird. C# is great, but it exists; why would we do that? This is not your parental unit's shell. I'm being gender-agnostic here.
So, almost all shells are expand-and-parse. How many people know what an expand-and-parse shell is, or what expand-and-parse means? I see a few hands. The idea of an expand-and-parse shell is that the shell doesn't read your whole script; it processes your script line by line. It reads a line, it goes through and does variable expansion (all of the variables get expanded, which gives you a new string back), and then it tokenizes that string. The first position in the tokenized string is always the command. So if you had a variable, $command, in the original source string, during the expansion phase it would have expanded into a command name, and by the time you parse it, that expanded name is the one you execute. So you can say $foo and just run whatever $foo points to in an expand-and-parse shell. You can't do that in PowerShell.
PowerShell, on the other hand, parses the entire script. The whole script is read in, it is parsed from beginning to end, and it's parsed once. Again, expand-and-parse shells expand and parse every time they hit a line; PowerShell reads the whole thing once and parses it once. In version one, it built an expression tree that is an executable representation of the script. In PowerShell v3, it builds a proper abstract syntax tree, where most of the elements of the syntax are captured in the tree. And then that tree is used for execution.
And so if the user types the same command, but with an ampersand at the beginning because we need the call operator, it parses to call and then variable command. So you have a node in the tree that says this is a variable expression. So you have these expressions in the tree.
And the variable has not been expanded at this point; it won't be expanded until runtime. So then you take that representation and you execute it, and it recursively walks through the thing. It says: okay, this is my command, these are my arguments; let me see if I have to evaluate the arguments. Oh yes, this is a variable argument, so I'll evaluate that argument and substitute the string, and the string will become the argument, all the way through the rest of the arguments. This produces a bunch of significant differences from a traditional shell; the call operator is one. One of the ones that's kind of annoying is that it restricts what aliases can do. Aliases, again, live in code that is compiled once. Aliases get expanded at runtime, not at compile time, because you'd never be able to change an alias used in your compiled script if it got expanded at compile time. And the other thing is that an alias can only resolve to a command name; you can't have fragments of grammar in there.
In the expand-and-parse shell example, the string "ls -l" could have been the value in the variable $foo or $command, and then it would actually parse into a command plus an argument; you're effectively doing kind of an eval on the resulting string. But we needed to do this. We needed this different approach because we're dealing with objects everywhere, and we'll see that in a second.
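A small sketch of the difference:

    # PowerShell: a variable in command position is not re-parsed
    $foo = 'Get-Date'
    $foo         # just emits the string 'Get-Date'; nothing runs
    & $foo       # the call operator invokes the command the string names

    # Unlike an expand-and-parse shell, this fails: the whole string is
    # treated as one command name, never re-tokenized into 'ls' plus '-l'
    $foo = 'ls -l'
    & $foo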
Splatting. This was designed, or added, to solve the commands-calling-commands problem. In a lot of places, you have a command that wants to call a subcommand with the same parameters that were passed to it, and splatting does that, right? You take your $PSBoundParameters variable and you pass @PSBoundParameters to the child command.
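For instance, with a made-up wrapper function:

    function Get-ErrorLog {
        param([string] $Path, [switch] $Recurse)
        # Forward whatever parameters the caller actually supplied:
        Get-ChildItem @PSBoundParameters -Filter *.err
    }
    Get-ErrorLog -Path C:\Logs -Recurse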
Splatting comes from Ruby. It's in other languages too, I know, but I got it from Ruby; just to clear that up, because it matters to me for some reason. Ruby uses the asterisk to indicate a splatted variable. We use the at symbol because the asterisk is a glob, right? If I write my-command *foo, well, that's a glob expression; to pass that variable splatted, you'd see @foo or something. So we needed to use a different sigil character to distinguish it from globbing expressions. Script blocks and the call operator. I've talked about this a little bit. We needed the call operator because we're not an expand-and-parse shell; I need the call operator to be able to do an indirect call through an expression.
And this includes things like commands with spaces, which have parens around them, because that's an expression. Script blocks grew in a strange way. There was a gentleman on the MMC team who came to me and said: well, we're going to want to use PowerShell, but we need to have this idea of isolated fragments of code; is there a way you can do that? I thought, well, we don't have a way to do that now, but I've heard of this thing that might work; this seems like a job for a lambda function. And so we introduced script blocks, using brace notation, block notation. Lambda notations are much more common now, but at the time they were sort of notorious for being part of hard-to-use languages, this strange exotic thing that nobody understood. That would have been a bad thing to do to our less-than-technical users. So we wanted it to be more amenable, and so they became script blocks rather than lambdas. Take a look at an example with where or foreach; it looks pretty natural. Most users are not aware that they're using a script block there, or that anything strange is going on.
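For example:

    # Script blocks in pipelines look natural; users rarely notice the lambda:
    Get-Process | Where-Object { $_.WorkingSet -gt 100MB } | ForEach-Object { $_.Name }

    # An indirect call through an expression needs the call operator:
    & { param($n) $n * 2 } 21    # 42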
OK, so script blocks are pervasive throughout the language runtime. Every piece of code gets compiled into a script block first. When you run a script, it gets built into a script block and then evaluated. Really, a script is nothing more than a script block on disk: you could read it into memory, compile it with [scriptblock]::Create, and then evaluate it. Script blocks also allow things like delegates, which open up a whole bunch of areas of coding that we weren't able to do before. So we can do WinForms coding in PowerShell, so you can write GUIs in PowerShell. You can do XAML coding in PowerShell.
You can even do stuff with the DOM and HTML if you really want to. So that opens up a whole range of possibilities that wouldn't have been available otherwise. Yes?
Oh, yeah. Yeah, so the where command was very complicated, and then it got reduced down to: here's a script block, just execute it. And then it got complicated again for the simplified where syntax. Which is funny; be careful what you wish for, you might get it. And so, again, in this function example, it's a parameterized function, and it could equally be a script block with parameters. You can do the full equivalent of a function definition, with parameters and everything, by assigning a script block into a variable or into the function: drive. So there were a bunch of language design questions that we had to deal with. First off, PowerShell is an expression-oriented language. Again, does that ring a bell with anybody?
Total silence. In an expression-oriented language, everything returns a value, which is what happens in PowerShell. Even something like a foreach statement returns a value; the if statement returns a value, which is our argument against having the conditional operator. One difference from a lot of expression-oriented languages, though, is that it returns everything. Every statement, every line in a block of code, returns a value. In a lot of languages, like Lisp, the value of a collection of code is typically the value of the last statement in that collection. PowerShell is not like that; it's everything. The original version of the interpreter actually worked the way I said the Lisp interpreter worked: it only returned the value of the last statement. But that's not very shell-like. Shells write to streams, and streams get aggregated from every line within a function or a script.
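A quick illustration of both points:

    # Statements are expressions: 'if' produces a value you can assign
    $size = if (5 -gt 3) { 'big' } else { 'small' }

    # And every statement in a function writes to the output stream,
    # not just the last one:
    function Get-Numbers {
        1
        2 + 2
        foreach ($i in 1..2) { $i * 10 }
    }
    Get-Numbers    # 1, 4, 10, 20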
Lexical or dynamic scoping: this is another one. Ring a bell? No? Lexical scoping is most common in programming languages these days. As for dynamic scoping, the original Lisp interpreters were dynamically scoped. Basically, what it means is that when you're looking for a variable, you look in your immediate scope, and if it's not there, you look in your caller's scope, and if it's not there, you look in that caller's caller's scope, all the way up until you hit the global scope. And this is basically how PowerShell works. It makes a bunch of things simple: you can pass a whole bunch of context through scoped variables rather than having to have a lot of parameters. But it also makes things fragile, in that you can have false coupling between functions because a variable has leaked in from the wrong place.
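A minimal demonstration:

    # Dynamic scoping: variable lookup walks up the callers, not the source text
    function Inner { $setting }             # no local $setting here
    function Outer { $setting = 42; Inner } # Inner finds Outer's variable
    Outer          # 42
    $setting = 7
    Inner          # 7 - called directly, it falls through to the global scope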
So it's good for programming in the small, not great for programming in the large. We had discussions with some of the architects on the CLR team, and they were like, oh, you're going to regret it if you go with dynamic scoping. But it has largely worked, and one of the things that we did in v2 with modules is that we introduced this idea of module scope, so that you don't leak into the functions in the module. Should the option character be dash or slash? That was easy. We're competing against the Unix shell, not cmd.exe, so we'll go with the dash.
How to handle switches. Switches are parameters that are either present or absent. That's easy to deal with in expand-and-parse: if your variable is not empty and contains a switch parameter, then the switch parameter is there and it's on, and if the variable is empty, then the switch parameter is not there and it's off. Again, that doesn't work in a fully parsed environment, and so we needed a different mechanism. When we were addressing this, we thought, well, they'll just be Booleans. But people wanted to use Booleans for a whole bunch of other things; it seemed like kind of a large space to co-opt. And so we introduced this new class, SwitchParameter, that has this characteristic of being present or absent, and doesn't take an argument, until it does. Because, again, with commands calling commands, you do want to be able to pass an argument through to a switch parameter on a child command. Splatting and $PSBoundParameters made that a little bit less difficult, but there are still simple cases where you want to be able to pass it through.
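The pass-through case looks roughly like this, with made-up functions:

    function Invoke-Inner {
        param([switch] $Force)
        "Inner saw Force = $Force"
    }
    function Invoke-Outer {
        param([switch] $Force)
        # A switch normally takes no argument, "until it does":
        Invoke-Inner -Force:$Force    # forward the caller's switch value
    }
    Invoke-Outer -Force    # Inner saw Force = True
    Invoke-Outer           # Inner saw Force = False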
And you'd think this would be obscure enough that people wouldn't use it by accident, but I've seen lots of stuff online where people are writing -SomeSwitch:$false. I have no idea where that's coming from. More language questions.
So, expand-and-parse shells only deal with strings. All the arguments to a command are strings on the command line, and so it's up to the individual command to figure out what that string is for, whether it's a number or a string or a file name. But it means that everything just kind of automagically works, because the commands know what they want to do with strings; they have their own built-in parsers. In our environment, we don't want people writing all these custom parsers; we want to be able to pass real objects. And so the primary reason for having the fully parsed shell is that, because the variables are not expanded into text but are passed in as objects, we can get objects passed through to cmdlets. Some people thought at first that we were expanding everything into this big-ass XML string and then turning it back into an object inside, which would have been horrendously slow. No, we just pass in the object itself. But then we still have this idea that we want everything to just kind of work.
You shouldn't have to worry about doing a lot of explicit conversions. How do we get the user experience with a type-based shell to be similar to that of a string-based shell? And the answer to that was super aggressive type conversions. So the runtime, when you're passing stuff around,
the runtime makes a lot of implicit guesses at what you meant. There is a constraint on, one of the things that we considered doing was having an automatically chain conversion. And then we realized that that was a bad idea because everything inversed a string eventually.
And so we'd be right back where we started from. And so we don't do implicit chaining, but we can do explicit chaining. When we're doing this, we use an algorithm for searching for the converter between the source type and the target type. And we use an algorithm called the type distance algorithm so that in ambiguous cases,
we compute a distance from each candidate conversion to the target type, and whichever is closest to the target is the conversion we use. I think that's been tweaked a little over time, but I think we're still using the same basic type conversion algorithm.
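The no-implicit-chaining rule is easy to see with casts; nested casts are the explicit chaining he describes:

    [char]"65"        # fails: there is no one-step conversion from a two-character string
    [char][int]"65"   # works: an explicit chain, string -> int -> char, yielding 'A'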
So that's how generic type conversion works. The parameter binder does not do this. This is one of the things where, in version one, we said, well, we can ship it this way and we'll change it later. And ten years later, we haven't changed it, so I don't think it's ever going to change.
But anyway, binding in the parameter binder is done in two passes. The first pass goes through and tries to do an exact match on each of the parameters: if the type of the argument and the type of the parameter are the same, it can simply bind the parameter. If that doesn't succeed, it does a second pass where it looks for a type conversion: if there's a conversion from the source type to the target type, it can proceed with the bind. So everything is done twice. If you ever wonder why it's slow, well, that's one of the reasons: it does this extremely complicated process, using bitmaps or bitfields to determine the degree of closeness of the match, and so on. It makes parameter binding very heavyweight. But hey, we said we were not going to worry about CPU cycles and stuff back in the beginning.
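From the user's side, the two passes look roughly like this (the function is hypothetical; the passes themselves are internal to the binder):

    function Measure-Thing { param([int]$Count) $Count }

    Measure-Thing -Count 5     # first pass: the argument is already [int], exact match
    Measure-Thing -Count "5"   # second pass: no exact match, so the binder finds the
                               # [string] -> [int] conversion and binds the converted value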
This was designed to give the best user experience, at the cost of significant resources. We'd like to fix that, so let me give you some ideas. One of the things that we don't do right now in the parameter binder: the DLR does type caching, or method caching, where once it figures out how to do something, it caches that, so the next time through, if it's doing the same thing, it can reuse the cached mechanism. We don't do that for the parameter binder today, but we probably could: do the expensive computation once and then cache the result.
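A minimal sketch of the caching idea, written in script purely for illustration; the real binder is C# inside the engine, and nothing like this exists there today:

    # Hypothetical converter cache, keyed by (source type, target type):
    $converterCache = @{}

    function Convert-Cached {
        param($Value, [type]$Target)
        $key = '{0}->{1}' -f $Value.GetType().FullName, $Target.FullName
        if (-not $converterCache.ContainsKey($key)) {
            # stand-in for the expensive type-distance search:
            $converterCache[$key] = { param($v, $t) $v -as $t }
        }
        & $converterCache[$key] $Value $Target
    }

    Convert-Cached "42" ([int])   # computes and caches the string -> int converter
    Convert-Cached "7"  ([int])   # cache hit; skips the expensive search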
That would be fun. I'd like to work on that. Anybody else want to work on that? How am I doing? I allegedly have no time left at this point. So finally, we're near the end of what I was going to talk about today.
So we built everything. It's all spiffy and cool. We have all these great ideas. Now we need to ship. Shipping is fun. Because PowerShell started as an independent project inside Microsoft, it had its own build system; it had its own everything. But later on, we shifted to be part of Windows, and Windows has its own build system. So we switched from our build system to their build system. And they have a bunch of rules about how stuff looks, so we changed our stuff to look like their stuff. And then there's the Longhorn reset, and the world explodes. We were a brand-new .NET component, and we were kicked, someplace in the anatomy, out of Windows.
Crap, now what? Okay, so we're drifting aimlessly, and along comes the Exchange team, our new best friends. They were investigating an automation solution for managing Exchange servers, and PowerShell pretty much matched all of their requirements, just by simultaneous invention, which was very cool. So this was awesome. We have a new development partner, a great development partner, because they gave us a lot of feedback, very constructive feedback. And we have a new shipping vehicle: we can ship as an Exchange shell.
There was some impact on the product. First, the Longhorn reset gave us a lot of time to work on things, and so PowerShell became much, much more refined than it might have been. That was good. On the somewhat less good side, there was a focus on the Exchange team's features first, and that meant that some of the things that are applicable to general shells didn't get that much attention. Native command support is an example of that: the native command processor was barely there. I barely had time to get anything working in that timeframe. And it still sucks. I know that Sergei Vorobev did a bunch of work to slightly unsuck it for PowerShell 6, but it still needs a bunch more work. It still has parameter binding issues. The pipe doesn't work the way it should. You're sort of just barely able to launch Notepad. That wasn't bad for Exchange, but it was bad for a general-purpose shell, and that's something I would really like to make work better. Shipping is fun, take three. So okay, we finally get approval to ship as a Windows feature, not as an Exchange feature.
So now we're Windows PowerShell, not Exchange PowerShell, or Exchange Shell, or Exchange Monad, or whatever. But we have to pass a bunch of Windows quality gates. There are versioning requirements. There are a couple of ways you can go about versioning. You can have loose versioning, also known as DLL hell. Or you can have rigid versioning, which is what the .NET team did with the GAC, and that is completely brittle. I mean, you'll never be wrong, but you might be useless. As Jeffrey says, versioning is not a problem, because problems can be solved; versioning can only be mitigated. Part of it had to do with the way we dealt with plugins. There was a no-new-plugins requirement, because IE was IE: it had lots of plugins, and it hung a lot. It got to the point where, in the telemetry coming back, there were more hangs than crashes, and a lot of the hangs were not in IE's own code but in the plugins people were writing. So there was a big push not to have new plugin architectures. And DevDiv did a big effort to come up with,
how did I describe it? A very resilient plugin architecture. It was also complicated and heavyweight. And so we looked at it,
but it would have been wildly inappropriate for us. It was designed for things at the level of a component, not a cmdlet. But PowerShell is all about plugins, so we had to do something to satisfy this requirement. Our first attempt was mini shells.
So there wouldn't be one PowerShell. There would have been a base PowerShell, and an AD PowerShell, and an Exchange PowerShell, and so on. Because what you would have had to do is take the PowerShell libraries, take your cmdlets and some tooling, and out of that spit out a custom shell with your cmdlets hard-linked in. And so the only way to call from cmdlet to cmdlet would be from shell to shell. And there is a feature in PowerShell: if you're within PowerShell and you type powershell followed by a script block in braces, that will be evaluated as an expression in the child shell. That was actually added to make mini shells work tractably. I think it's the only part of mini shells that's left; all the tooling and everything else got cut ages ago.
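That feature looks like this:

    # From inside PowerShell, handing the powershell executable a script block
    # runs it in a child shell; results come back as deserialized objects, not raw text:
    powershell { Get-Date }
    powershell { 1..3 | ForEach-Object { $_ * 2 } }   # 2, 4, 6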
Exchange looked at this and said, no, are you guys idiots? This is not going to work for us. We need to be able to extend our shell with some kind of plugin mechanism. And so once again, we returned to the halls of the Windows core architecture team, supplicants on bended knee, begging for a solution.
And since MMC already shipped with Windows, and it had snap-ins, we thought, okay, well, this is shipping and it seems to be okay, so can we do this? And I think there was a diverse set of opinions
on the MMC snap-in mechanism that are probably not publicly consumable. But ultimately, they agreed that we could use the snap-in mechanism. And so we went off to build the snap-in mechanism, where you have to register the DLL and all that kind of stuff.
And this still has dregs in the code. You can see it in System.Management.Automation: there's "single shell", and another thing, "single shell utilities". These are left over from that era. The reason it's called single shell is that, in contrast to the mini shells, a single shell is one shell extended with plugins.
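For reference, the v1 snap-in workflow looked roughly like this; "MySnapin" is a hypothetical name, and the InstallUtil path varies by machine:

    # Register the snap-in assembly first (elevated; the path here is illustrative):
    #   & "$env:windir\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe" MySnapin.dll
    Get-PSSnapin -Registered    # list snap-ins registered on this machine
    Add-PSSnapin MySnapin       # load the snap-in's cmdlets into the current session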
So we got approval for snap-ins. It was not optimal, but we could ship. Replacing snap-ins was one of the first things we went after in PowerShell v2. Ah yes, the language reviews, because PowerShell is the only language product that ships outside of DevDiv. The developer division was a little concerned about this, because at the time we were all kind of siloed, and everybody was protecting their products, making sure we weren't confusing the user with the wrong thing, and all the normal stuff that businesses do,
or used to do, anyway. So we had to have a review with the CLR team and the relatively newly formed DLR team. One of the big issues was the architectural approach to passing objects through the pipeline. At the time, DevDiv was producing LINQ. LINQ works by composing enumerators, and it pulls objects through its pipeline. PowerShell's pipeline processor, on the other hand, pushes objects one by one through the pipeline. So we're a push model, they're a pull model, and there was some idea that we should have one model. Fortunately, Anders Hejlsberg was part of the review, and he pretty quickly figured out what we were doing, and he understood why we were doing it. And interestingly enough, some significant number of years later, DevDiv shipped the Reactive Framework, which works by pushing objects through a pipeline. So we felt slightly vindicated.
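The push model is easy to see from script: each stage's process block runs once per object, as soon as the upstream emits it:

    function Show-Arrival {
        process {
            # runs once per incoming object, the moment the upstream emits it
            Write-Host "got $_ at $(Get-Date -Format 'HH:mm:ss')"
            $_        # push the object along to the next stage
        }
    }

    1..3 | ForEach-Object { Start-Sleep -Seconds 1; $_ } | Show-Arrival
    # timestamps arrive a second apart: objects stream through, nothing is collected first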
And then we got stuck in the corporate layer. Oh, oops, somebody noticed that you could build software with PowerShell. Oh, my. MSBuild was new at that time; it was just coming out, and Microsoft had invested a lot of energy into building these new tools. Visual Studio was notoriously weak on build automation, so MSBuild was a tool for automating builds. Through mechanisms unknown, someone saw a slide talking about how PowerShell could be used as part of a software construction tool chain.
That slide deck was, I think, actually being used to justify the existence of PowerShell: look at all the places you can use it. So of course, it trod on somebody's toes. And it's not like the idea is unique; shells have always been part of the programmer's workbench on Unix. The programmer's workbench is a set of utilities, make and so forth, for building software, so it's not unexpected that PowerShell should be able to do this. But we had a review. And it was quite a review. In the end, it actually sort of clarified the relative roles of things: MSBuild is like Ant, PowerShell is like sh, and make is like make. And once people understood the relative roles of the tools, we got through that little hurdle without dying.
And then finally, we're through all the hurdles, we're through the gates, we've had our reviews. Oh yeah, I forgot the first Monad virus thing, or rather, the first "Vista virus". Some guy somewhere wrote self-replicating code. He wrote it in every language; he liked using it as a way to explore new languages. And so, just as he had written the same self-replicating virus in C#, VB, and the Unix shell, he wrote it in PowerShell, just to see if he could do it. And somehow this got picked up by the security community, probably on a Sunday morning after a big party when nobody was thinking clearly, and it got promoted in a big article. And that started a huge firestorm. So again, we had to calm everyone down and publish the appropriate discussions explaining what was going on. And again, another shipping hurdle was passed. But in that last period, blood pressure was going up and down like a yo-yo.
So finally, we're at the end. There's one teeny thing left we have to fix: marketing has decided that we will rename Monad, which everybody loved, to Windows PowerShell, which was not as well received. And it had to be Windows PowerShell; it couldn't be just PowerShell, because there were other things that used the name, and if we wanted a trademark, it had to be Windows PowerShell. So we had to change the name in the code. We had to change it everywhere: everywhere it was visible, but also a lot of places in the code where things might be visible through reflection. And so, with a bunch of scripts and a whole lot of scanning, everything got cleaned up. Well, almost everything. You will still see in the code that PSObject, the core of our type system, is defined in a file called MshObject. You'll still see bits and pieces like that.
So anyway, shipping is fun, and now we're ready to ship. Yay! And finally, we have shipped: PowerShell 1 shipped in 2006, to great acclaim and a huge sigh of relief, and we've been going steady ever since. And that's all the time I have. Oh boy, I'm way over.
So again, I encourage you, based on what you've heard today, if there are things that I haven't answered, or things that you'd like more information on, please let me know.
Drew has all my stuff in the back. Ah yes, about me: my email and my Twitter handle are here. Let me know what you think I should cover, what I should cover in more detail, what I should cover in less detail, and what I should omit completely. Please let me know. Anyway, thank you.