
Embracing Uncertainty


Formal Metadata

Title
Embracing Uncertainty
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.

Content Metadata

Abstract
Agile software development was born ten years ago, with a gathering of industry luminaries in Snowbird, Utah. They were frustrated that so much ceremony and effort was going into so little success, in failed project after failed project, across the software industry. They had each enjoyed amazing successes in their own right, and realised their approaches were more similar than different, so they met to agree on a common set of principles. Which we promptly abandoned. The problem is that Agile calls for us to embrace uncertainty, and we are desperately uncomfortable with uncertainty. So much so that we will replace it with anything, even things we know don’t work. We really do prefer the Devil we know. Over the last year or so Dan has been studying and talking about patterns of effective software delivery. In this talk he explains why Embracing Uncertainty is the most fundamental effectiveness pattern of all, and offers advice to help make uncertainty less scary. He is pretty sure he won’t succeed.
Transcript: English (auto-generated)
Good morning. Am I on? Hello. Can anyone hear me? Excellent. This is a much nicer room than yesterday. I can see you all. In those really steep rooms, I can't see anyone.
I might jump down and start running around in a minute. Okay, so this morning's talk's a bit different. I spent about eight years at a company called ThoughtWorks. They're an agile software delivery company, and I spent some time there writing software,
learning how to do all this agile stuff, and then coaching teams, and then ended up in organizational change and big programs and that kind of thing. I realized I was losing the tech stuff. I wasn't doing all the tech stuff anymore. And I was missing that. So I left there, and I joined a trading firm called DRW,
and I joined a very small team writing trading software. And I had culture shock; that's the only way I can describe it. Because I was fairly convinced that all these things that I knew and I'd figured out and learned and whatever, and had been teaching, I thought that these were all really good ways to write software.
And I'm doing a talk later on today that goes much more into what I learned there, in a theme I'm calling patterns of effective delivery. Because what I found was that these guys were able to deliver really, really good quality,
robust, production-ready software into a live environment, into a live trading environment, kind of tens of times a day, tens of times an hour. They could just bam, straight into production. And they're sitting in with traders and they're kind of writing these systems. And they were breaking all the rules, and I was really upset by that, because I like rules.
Or rather, I like understanding what's going on. I didn't know what was going on. I didn't get why what they were doing worked. And so I've spent the last two years kind of researching this and trying to document it and understand what they've been doing. And I've come up with this suite of patterns of effective delivery.
And as I've done this, I've realized that there's one pattern that kind of underpins all of these others, and they all sort of emerge from that. And that's this idea of uncertainty. And that what these teams were able to do, that other teams I'd seen weren't able to do, was embrace uncertainty and understand the nature of uncertainty and work with it,
rather than what we usually do, which is resist uncertainty. And so what I'm going to try and do with this talk is kind of explain to you a bit about what uncertainty is like, what it feels like, and why we're spectacularly bad at managing it, and why I think that even after I've told you that, I think we'll still struggle with it.
So who knows what's going to happen in the next hour. So I was looking at process, looking at how people do things. And process, I think process is a way of managing that uncertainty. It's a way of getting some degree of certainty and predictability and that kind of thing. And it's a way of managing risk.
And risk is just a business word that we associate with fear. Because if you say, oh, I fear this thing, then people point at you and laugh because you're weak. But if you say, oh, I see risk here, they go, oh, you are a sensible business person. So it's the same thing. So it kind of pans out like this. So with apologies to George Lucas, fear leads to risk,
risk leads to process, and process leads to hate. And Gantt charts and suffering. And in fact, I discovered a newborn baby is hardwired to fear only two things. One of them is sudden loud noises, and the other one is Gantt charts.
Actually, the other one isn't Gantt charts, but it's an interesting thing to go and Google. So what's the other thing that a newborn baby is hardwired to fear? What's interesting about that statement is that there are only two things. So every other thing in life that we fear, we learn to fear. Okay, think about that for a minute. All the things you fear, all the things you have anxiety about, you learn to do that. You can unlearn that if you want, but that's a different talk.
Today, I want to focus on this word risk. Okay, so risk is an interesting thing. Risk is multivariate. I'll come on to why risk is multivariate in a little while. But we have a very simple one-dimensional model of risk mostly, and actually it's more than that.
I want to go back now. I want to go right back to 2001. In 2001, a bunch of techies met in a log cabin in Snowbird in Utah, and they were the original Agile guys.
Well, that's where they wrote the Agile Manifesto. Who hasn't seen the Agile Manifesto? Excellent. Who has seen the Agile Manifesto? Oh, fantastic. People aren't putting their hands up this morning. Brilliant. Okay. So the Agile Manifesto, I think it's a work of genius.
It's one of these things that's really, really small, and so it fits in your head and you can carry it around with you. And it has this preface. It says, we are uncovering better ways of developing software. So just unpack that. We are uncovering. We haven't done it. We're still uncovering it. Better ways of developing software by doing it and helping others to do it. Through this work, we have come to value these things.
We've come to value individuals and interactions over processes and tools. There is value in processes and tools, but we think individuals and interactions are more valuable. We value working software over comprehensive documentation. You should have documentation, but that's not the point. Software is the point. Customer collaboration is more important than contract negotiation.
Yes, you need contracts, you need boundaries, you need all that certainty, but the actual collaboration aspect is more important. And again, having a plan is important, but responding to change is more important. And we embedded that into our DNA, into our psyche, and we said, we can take this forward and we can be agile. And then we came up with methodologies.
We came up with XP and Scrum and all this other stuff that goes around the fluffy edges like BDD and some of those other things. And I had a bit of an aha moment last year, and it was a really sad aha moment. So I'm going to have to share it with you so we can move on and be happy,
but I need to share my sadness with you. This is what I realized. By about, well, last year, but certainly now, we've kind of gone the other way. So I don't see teams talking about how to make individuals happy or how to figure out how to be more productive.
I see them arguing about whether we're doing Scrum or Kanban or Scrumban or Scrumbut or XP or Pure XP or, you know, what's Pure XP? Or whether we're going to do BDD or Scrum, like they're alternatives. I don't know. And the tooling. I've got to apologize. BDD is probably responsible for more tiny little tools than lots of things.
You get into these massive religious debates about what CI tool you're going to use, what deployment tool you're going to use. And this suddenly burns up a load of our energy. What else? Comprehensive documentation over working software.
This whole obsession we've got with executable specification. It's still a specification. It's not the software. It's not delivering the value to the business. It's a useful thing. It's a thing we need to be aware of. But we're valuing it over the working software. We're obsessing about the documentation side of things. What about contract negotiation over customer collaboration?
Surely we're not doing that. What are we doing when we commit to a delivery in a sprint? What are we doing in our sprint planning sessions or our iteration planning sessions? What are we doing when we commit to delivering a bunch of things and don't deliver them and then beat ourselves up and then sit around all glumly in the retrospective saying,
how can we go better rather than, aren't we great? This is fantastic. We're going as fast as we can be going because we're in a system and therefore we are currently the output of that system. If we want faster output, we need a different system. And finally, this whole thing about responding to change. Can you respond to change when you have a backlog of 600 stories?
I'll tell you, you can respond to change. You can spend hours and days grooming your backlog. What does that mean? Delete your backlog. You don't need your backlog. It's drag. It's slowing you down. And this is what I see. And all this process stuff, it seems to me,
is because we would rather be wrong than uncertain. And that's a really unsettling thing to discover. We would rather be wrong than uncertain. Like, we will take something we know definitely to be false
rather than say, well, we just don't know. I'll give you an example. I'm a Christian. I became a Christian a few years ago whilst researching Christianity to prove to myself it was a load of bunk. So don't do that because they might catch you as well. But one of the things I discovered while I was researching Christianity,
and some other faiths as well, was just what horrific things have been done in the name of Christianity and in the name of loads of other religions as well. And it's this idea that faith, which is a simple, small thing, C.S. Lewis wrote a fantastic book called Mere Christianity,
which kind of describes what Christianity is. Now, we put religion around it. It's a human thing. It's a man-made thing. And religion is all the rules and the structure and the hierarchy and all those kind of things that allow us to say, oh, I'm a Methodist or I'm a Baptist or I'm a Catholic or I'm a, you know,
we like to have denominations. We like to have these little separations. It turns out, when you read a bit of scripture, there's another great book I read called The End of Religion. Jesus in the Bible is a religious agitator. The one thing he does again and again is he bashes on the Pharisees, who are like the established church of the day. If he was around now, he'd be bashing on the Catholics
and bashing on the Anglican Church and bashing on all the big churches because that's not the point, he says. The point is, yeah, you have a relationship with God and that's it. Everything else is detail. And the church says, well, you've got your priests and your bishops and your archbishops and then you've got your... I'm particularly upset with the Catholic Church
when I've done all this research. So they've got this guy called the Pope. The Catholic Church believes the Pope is infallible. In the Bible, rule one, everyone's fallible. No exceptions, everyone. One guy ever wasn't fallible, that's Jesus. As long as the Pope isn't Jesus, he's fallible. But no, they have this rule that the Pope is infallible.
He's the pontiff. And I'm thinking, that can't be right. So, similarly, these complex questions become simplistic answers. Things that are complex are things that are necessarily hard to solve, or things that you can't know.
So, for instance, I'll give you another Christian example. The Trinity. So there's this idea of you've got God the Father, Son and Spirit. And are those three things or are they one thing or are they both three things and one thing? To a lot of people, it doesn't matter. The point is, you can't know. The answer isn't in Scripture and that's the only place it could be.
So what happens isn't a lot of people going along being Christians and saying, well, we're just going to have to live with that uncertainty. Instead they go, well, we are going to draw a line down here and we're going to say it's three things. We are going to draw a line down here and say it's one thing. Well, then we must go to war with you. Well, we're Christians. We're Christians.
I know, let's fight. What? Now, if you decide that it's one thing or decide that it's three things, you are wrong. You are guaranteed to be wrong because the one thing you can't be is you can't know which one of these is right. So, by making that decision, you are taking a thing that's wrong rather than living with the uncertainty.
Complex questions become simplistic answers. Buddhism has a fantastic model for this. They call them koans. Who's heard of koans? Right, so a few of you. Koans are things like, what's the sound of one hand clapping? If you read any Terry Pratchett books, it's 'cl'.
No, the sound of one hand clapping. The point about a koan isn't that you go, oh, I'm just going to look up the answer in the book of koan answers. It's that whilst you ponder the question, you have a moment of revelation, a moment of enlightenment. The Buddhists call it satori. So, the point of the question isn't to have an answer to the question.
It's to ponder it and to grow. So, I was pondering the question, what's the sound of one hand clapping? The point is that the clap sound is both hands, but you don't get a partial clap for only one hand. One hand on its own can't make any part of a clap sound. Both hands make a clap sound. And it occurred to me this is just like pairing.
This is just like pair programming. There's a dynamic that happens when two people sit and try and solve something that doesn't happen even partially when one person is sitting there. It's something about articulating a problem or sharing a thing or that kind of stuff. And I was thinking, two people pairing is like two hands clapping. There's some metaphor of clap that happens
when two people pair programming or pair solving anything that simply doesn't happen elsewhere. And I had this moment of satori. Oh, it's like two hands. Wow. Again, back to Christianity. It has these things in the Bible called mysteries. And they're underlined. It says this is a mystery.
It's not like, oh, do we think this is a mystery? It says: this is a mystery. Judging people. This is a mystery. Did you know that? No one is allowed to know whether you're going to go to hell. And according to Christian doctrine, only Jesus judges. He decides who goes to hell and who doesn't. So anyone who says, well, I know that so-and-so is going to hell
or I know that so-and-so is going to heaven is lying. They're wrong. You can't know. And so what we do instead is we introduce this idea of doctrine. And so interpretation then becomes dogma. And I think the saddest example of this is the Spanish Inquisition. Ha ha, no one was expecting that.
Sorry. So the Catholic Church, bless them, they invented the crime of heresy. And heresy is the crime of not believing what the Pope believes. That's basically it. So if you decide that your interpretation of the Bible
is different from the Pope's and the Church's interpretation of the Bible, we burn you. That's what we do, or we stick you on a spike. That's how understanding and tolerant and loving we are. That's the way we roll. And you look at that and you go, when did the wheels come off?
What part of love your neighbor and love your enemy and love God and love, love, love, love is, and we're going to put you on a spike? Because you don't agree with the guy that we decided is always right, which means you have to be wrong. And this is what we do. We build up these constructs. So we resist uncertainty.
OK, enough theology. Back to software delivery. So what do we do? We resist uncertainty of scope. How do we resist uncertainty of scope? We create this illusion of certainty with things like planning and estimation and scoping and our story backlogs.
The epitome of this that I saw, which, I don't know, caused me to write an article a few years ago called The Perils of Estimation, was a spreadsheet. And the spreadsheet for this project, and it was used on several projects, it became this spreadsheet. It had different columns. And so for each story, you would have
a high, medium, and low estimate. You have a variable for risk, a variable for volatility, how likely it was to change, a variable for, I can't remember what the other ones were. For every single story. On one project, after they'd done two weeks of planning, they had 400 line items in this.
Imagine the effort that's gone into this. Now let's just go right back to the old Standish report stuff. Let's assume that two-thirds of that aren't going to be useful. Only one-third of that list of stuff is ever going to be delivered, or rather, ever going to be used as features. Another third, roughly, is going to be used, but in a completely different form.
And the other third is just going to be deleted. It's never going to be used. And some other stuff is going to come in in the meantime. How much extra effort and work went into creating this massive thing? It's an illusion of certainty. It's a thing we do to give us a sense of security. We resist uncertainty of technology. So we have blueprints.
We have white-coated architects handing down tool chains and standardized tools and standardized development stacks. Because again, we want to resist this uncertainty. We resist uncertainty of effort. This is fantastic. Who's read the book Slack? Anyone? Read the book Slack.
Rachel's read the book Slack. Rachel's read all the books. Read Slack. It's fantastic. And especially read The Goal, by Eli Goldratt. I've seen it in the bookstore here; go and buy it. It will change how you think about software delivery, I promise you. The point is that in a globally optimal system,
you will have lots of local non-optimizations. Or rather, let me invert that. If you locally optimize everything, you will end up with all sorts of bottlenecks in the system. And it's really, really hard to do anything with those because everyone is working at maximum capacity. So you necessarily need areas of slack
and areas of low effort, if you like, activity in a system for it to be optimally conditioned for flow. Now, if you don't know how to measure the throughput of a system, you'll never see that because all you're doing is measuring the local effort. And so we end up with metrics like cost per use. We buy a really expensive license for, say,
an Oracle database or some IBM toolkit. And we say, well, this costs, what, $100,000 a year. And so if we divide it by the number of projects we can foist it on, it becomes cheaper. Except it doesn't. It's still the same amount of money. You've still spent it. You might as well have burnt it
because every single project you're putting on it, if it's not appropriate, not only have you spent the money, you've slowed down the project. Win. And we do this because we resist the uncertainty of this effort and this investment thing. And we resist uncertainty of structure. We are, man, this is where churches come from. Sorry, back to theology. This is where these hierarchical churches come from.
I'm reading about Islam at the moment as well. And Mohammed, it turns out, fairly early on says, don't have structure. He says, there's no hierarchy in Islam. Jesus, all the way through, says, oh, there's no hierarchy in Christianity. So what do we have? We have imams and we have archbishops
and we have cardinals and all this kind of nonsense. Because we need some certainty of structure. And particularly hierarchical structures. Why do you think we have hierarchical structures? I've got a theory. My theory is this. We want someone above us telling us what to do
because that gives us certainty. And they want someone above them telling them what to do because that gives them certainty. Or another way to look at it is there's somewhere to pass the buck when it all goes wrong. But at some point up that hierarchy, you can see it thinning out above you and you know you don't know where you're going. And so you start to think, right, now what I can't do is tell these guys
that I don't know what I'm doing because there's too many of them. And they'll be terrified. So I'll just act as though I know exactly where we're going forward and tell everybody, follow me. And suddenly you end up with these hierarchies. In the same way, the whole waterfall gated delivery model came out of a desire for certainty. Well, if we have these gates at these various different points,
we will know that something is true at this point. Well, you know that everyone's terrified of giving you bad news, so what we'll have is a series of rugs under which we put the bad news until the end, when it fails, but everyone's left by then so it's okay. So we resist uncertainty of the future.
The future is uncertain. A famous baseball player called Yogi Berra, he wasn't very well educated but he was a brilliant baseball player and he used to come out with these accidentally very, very profound statements like when you come to a fork in the road, take it.
And he famously said I don't like to make predictions especially about the future. Which I just love. So now let's have a look at what uncertainty looks like. We have a model of risk and our model of risk looks like this. Risk is a big messy space and it happens on two axes.
You have an axis of impact which is how bad it will be when something goes wrong and likelihood which is the probability of something going wrong. Does that sound reasonable? Right. Think about your software process. Think about how we do stuff and look at which of those we're trying to optimise for, we're trying to manage.
So our mental model of likelihood is a probability. We're scientists. It's a number, a real number in the range zero to one. And somewhere along there is the likelihood of a bad thing happening, and we want to make it as close to zero as we can
because then the bad thing won't happen. Does that sound reasonable? Risk mitigation is about minimising likelihood. Then we have this mental model of impact. What will happen if the bad thing goes wrong? And our mental model of impact is this. Infinity. When anything goes wrong, it'll be a disaster.
People will die. People won't die. So we optimise all of our software process around minimising likelihood rather than minimising impact or even looking at impact. And an awful lot of the ways that we can embrace uncertainty is about flipping the axis.
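A minimal sketch of that two-axis model (not from the talk, just an illustration): if you treat expected loss as likelihood times impact, you can see why driving likelihood down while leaving impact catastrophic is a worse bet than capping the impact of each failure.

```python
# Toy illustration of the two-axis risk model: expected loss = likelihood * impact.
def expected_loss(likelihood: float, impact: float) -> float:
    return likelihood * impact

# "Minimise likelihood" strategy: failures are rare, but ruinous when they hit.
print(expected_loss(0.001, 10_000_000))  # 10000.0

# "Minimise impact" strategy: failures are frequent, boring, and recoverable.
print(expected_loss(0.05, 1_000))        # 50.0
```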
Because the thing you can't do, you can't ever make that likelihood zero. And so going forward assuming that because it's small, it may as well be zero is a recipe for disaster. That's that lying to ourselves thing again. We would rather believe the lie that there is zero chance of this thing happening
than embracing the fact that since there is a non-zero probability, it will happen at some point. And we don't know when. Thinking about how we might change our behaviour because of that. So this is our model of risk. What would embracing uncertainty look like then? What would it look like if we went forward and said,
I want to embrace uncertainty. I want to accept that there is uncertainty and weave that into how I think about things. Kent Beck, when he first wrote down Extreme Programming, when he first documented it, the strapline for his book was Embrace Change.
And I think he missed a trick there. I think what he was talking about was embracing uncertainty. Because a lot of the resistance I hear to things like XP is, oh well, we know what this project is about. This project isn't going to stop being about this halfway through, so we don't need to embrace change. Right, fine, but you can't know everything that's going to happen going forward.
So there is uncertainty there, and so all these kinds of practices do apply. And it's actually about embracing uncertainty. So let's look at this. If we now go back to our idea of scope, how would you embrace uncertainty of scope? What are some things you could do? This is the interactive part. You're Norwegian... okay, I'll carry on talking.
Right, scrap most of the backlog. Who wants to burn this man? What could you do instead? If you scrap most of the backlog, what could you do?
Right, so you can say what you want to be better for the users. There's a lovely model called rolling wave planning. And rolling wave planning says we know roughly what we want to achieve with this spend, with this project, with this piece of work. We want to reduce operating costs. We want to make this thing more reliable. We want to enable online transactions.
We want to encrypt our passwords, LinkedIn.com. Why is anyone on the planet still storing passwords in clear text these days? Oh dear, I saw a tweet this morning that said, I wonder if the director of security for LinkedIn has updated his profile this morning.
By the way, if you haven't heard, LinkedIn.com was hacked yesterday. Your password is in clear text out on the internet somewhere. If you have a password for any other system that's the same, go change it. Okay, that's quite important. So the idea of rolling wave planning is you have these big rocks, boulders,
that are the themes, that are the point of this project. And then as some of those things become important, you kind of chip a bit off so you have smaller lumps, little rocks. And then kind of in the near term, which is maybe a week or a couple of weeks out, you then break those things down into manageable, deliverable chunks. And you're doing this continually.
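A toy sketch of what a rolling-wave backlog might hold (the theme names are invented for illustration): detail only exists near the front, and each week a rock gets chipped into deliverable chunks.

```python
# Toy rolling-wave backlog (theme names invented). Only the near-term wave is
# broken into deliverable chunks; everything further out stays coarse and
# can be re-cut as priorities change.
rolling_wave = {
    "this_week": [           # small, deliverable chunks
        "encrypt stored passwords",
        "show account balance on dashboard",
    ],
    "next_few_weeks": [      # smaller lumps, little rocks
        "enable online transactions",
    ],
    "big_themes": [          # boulders: the point of the project
        "reduce operating costs",
        "make the system more reliable",
    ],
}

def weekly_replan(backlog):
    """Chip the current highest-priority rock into this week's chunks."""
    if backlog["next_few_weeks"]:
        rock = backlog["next_few_weeks"].pop(0)
        backlog["this_week"].append(f"first thin slice of: {rock}")

weekly_replan(rolling_wave)
print(rolling_wave["this_week"])
```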
So continually your backlog is the next half a dozen things you're interested in. And then maybe the next two or three big-ish themes, in order of priority, current priority, because it might change tomorrow. And then going further out is, oh, and here's the other stuff we want to achieve. And it turns out you can have a surprising amount of certainty
in terms of delivery, in terms of direction, with just that. And that's a super lightweight way of doing things. The team I was working on at DRW, they have a whiteboard. And every Monday what they do with the whiteboard is they go like this. And then they write up the themes for the week. Okay, they just chat and they write them,
maybe three or four things they're going to try and do. They have a corkboard, which is a work in progress, index cards. And they move stuff across the corkboard. And so they'll have a weekly planning session. It takes them ten minutes. They'll have ten minutes of like, what are we going to try and do this week? And it's not just the developers, it's traders as well. And they're saying, well, what kind of stuff do you want to see? Or what should we experiment with? Or what should we explore?
And that stays on the board for the week. And then as they're coming up with things to do, they stick them on cards. And that's where they have the little stand-up is around the corkboard wall. And that's their process. That's all of the planning they do. And they're measured by how successful the software is that they produce. How about that, right? The software they put into production makes money or doesn't make money.
The stuff that makes money, they make it better, so it makes more money. The stuff that doesn't make money, they either make it make money or they throw it away. They've got a pretty good metric for whether software is successful or not, because they're trading. So that's how they can embrace uncertainty of scope. Embracing uncertainty of technology. There's a pattern I talked about yesterday, and I'll be mentioning again later on today, called Spike and Stabilize.
So the idea of Spike and Stabilize, and in engineering, they call it concurrent set-based engineering, which is a much bigger word. But concurrent set-based engineering is something like this. I'm Boeing, and I want to write a fly-by-wire system. What I will do is I will take two or three vendors,
and I will engage all of them to create me a fly-by-wire system in isolation. They're not allowed to talk to each other. And I'll pay all three of them. And then at some point, I will make a decision. I will exercise an option to go with one of those three systems. At that point, I then disengage the other two and just stay with that one.
Concurrent set-based engineering is wasteful. It's not efficient, but it's very effective. It means I get to where I want to be, which is having a really robust fly-by-wire system, much faster than going through a series of experiments and evaluations and committees and all that kind of nonsense. Spike and Stabilize is the same thing. Try loads of different things. Write lots of software.
Don't worry about making it robust and test-driven and production quality and all that stuff. Just get it out there and see if it seems like it's going to help. And if it does seem like it's going to help, then make it robust. Then stabilize it. Then give it the love. And so you can embrace the uncertainty of technology. You can accept that you don't know whether a particular technical solution is going to be the right one.
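A minimal sketch of the spike-and-stabilize shape (the function names here are invented, not from the talk): spikes are cheap and throwaway, and only the one that proves useful earns the cost of being made robust.

```python
# Sketch of spike-and-stabilize (names invented for illustration).
def spike_and_stabilize(ideas, looks_promising, stabilize):
    for build_spike in ideas:
        prototype = build_spike()       # quick and dirty, straight into use
        if looks_promising(prototype):  # e.g. the traders actually use it
            return stabilize(prototype) # now add the tests, the hardening, the love
        # otherwise throw it away: it cost almost nothing
    return None
```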
It's okay not to know. It's not okay to both not know and pretend you do, because we've got a word for that. Embracing uncertainty of effort. Again, I would ask you to look at
Theory of Constraints and Eli Goldratt's book, The Goal, which is a lovely book. It's a book about a guy who's in a factory, and it's failing, and he's in a marriage, and that's not going so well either. He has a schoolteacher who he bumps into. And the schoolteacher basically introduces him to Theory of Constraints through a series of conversations.
So it's a lovely narrative arc over the story, but it gives you this, I don't know, I had like a 90 degree shift. It was brilliant. Where you start thinking, rather than in terms of effort and activity, you're thinking in terms of throughput, you're thinking in terms of results. Why does it take this long to do this thing? If you break down the steps of this process, you discover that maybe 10% of the time you're actually doing stuff,
and the other 90% of the time you're waiting for stuff to happen. You're waiting for hand-offs or you're waiting for things. And what we do is we optimize the 10%. So now imagine you could optimize that 10% infinitely well. You now still have 90% left. So we obsess about this stuff rather than chopping down this stuff,
which is a much easier target, but we don't see it. It's behind the scenes because this is where the activity is happening. So we can embrace uncertainty of effort. We can say we know in order to get a globally optimal system, there will be areas of slack and there will be areas of waiting for things, and there will be areas of buffers and those kind of things,
and that's okay. Because if we start obsessing about that slack, we can get into all sorts of trouble. If instead we say, well, let's measure the throughput of this system. Let's see how much we're investing in this system, how much we're getting out of this system, and what the cost of operating this system is.
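The arithmetic behind that 10%/90% split is worth doing once (a worked example, using the talk's numbers): even optimizing the busy 10% down to nothing barely moves the elapsed time, while one easy change to the waiting nearly halves it.

```python
# Worked example with the talk's 10%/90% split (hours are illustrative).
touch_time, wait_time = 10.0, 90.0            # doing stuff vs. waiting for stuff

baseline       = touch_time + wait_time       # 100.0
perfect_work   = 0.0 + wait_time              # optimize the 10% infinitely: 90.0
halved_waiting = touch_time + wait_time / 2   # chop the hand-offs instead: 55.0

print(baseline, perfect_work, halved_waiting)
```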
If those three things, investment, throughput, and operating cost, seem to balance, the system's fine. And this goes back to a lot of the self-organizing stuff. If you weren't in Roy Osherove's session just now, you missed a lovely Scrum Master's lament to the tune of Adele. It was rather lovely. I kid you not, it'll end up on Vimeo at some point.
He's talking about self-organizing teams. Now the point about self-organizing teams is they can see the stuff that needs to be done if you give them a vision. If you say to them, the way you contribute to this bigger system, is by doing this thing. So if you can do this thing a bit better, the whole system will benefit because of this. And you go, oh, okay, well, we can go figure out what that means. That's a useful thing to do.
Rather than saying, well, what percentage are you utilized? What's your percentage utilization? How many of you are using a license for this thing that we have a site license for? Because I want to reduce my cost per license. And then we embrace uncertainty of structure.
So this is where we get into this idea of generalizing specialists. Again, in the teams I'm working in now, we don't have testers. We don't have analysts. We don't have programmers. We have a bunch of people. And all those bunch of people are wearing those different hats at different times. So we'll do our own build and deployment. We'll do our own programming. We'll do our own, sometimes we'll do our own trading.
So a couple of the programmers know more about trading and how the trades work than some of the traders. Likewise, the traders are becoming suspiciously technical. And what they'll do is they'll work up an idea they've got in Excel. And they'll say, we think there might be something interesting here. Hand it off to the programmers and they go, yeah.
Well, let's just carry on with this in Excel for a bit and see if we can... Right, well now I'm going to turn that into an application. How about that? Ka-ching, money. So the innovation comes from all over the place. And again, you don't get this siloed thing where as an analyst I'm allowed to think analyst thoughts. As a tester, I'm allowed to think tester thoughts. It's like I'm in a team and I'm trying to make
the team go faster. And so if I have an idea, I don't need to be able to solve it, if I can spot a thing that could help the team, that's okay. So what I want to do is give you a couple of tools, if you like, that I've encountered that help me embrace uncertainty.
And I hope you'll find them useful. So real options. A few years ago, a chap called Chris Matts, who was a business analyst at ThoughtWorks with me.
I was a developer there. And I was talking about this thing called behavior-driven development. And he was chatting to me about this, about behavior-driven development. And he realized that this thing I was trying to teach TDD with actually applied to analysis as well. And we went, oh, that's really cool. We should do that. And he carried on thinking about this, and he ended up, what he did was he took the idea of financial options and applied them
to any decision you make. So this is the complicated bit. Does anyone know what financial options are? Excellent. Could you explain to these guys what a financial option is? No, shaking your head. Okay, it's not that complicated. There are two bits to it. A future is a contract to transact
at some point in the future. So we might agree that it's now June, right? We might agree that in August, I'm going to buy so many krona from you and I'm going to give you so many pounds. And we decide that now. Now, in August, we're going to do that transaction. And it may be that the value of krona has gone down,
in which case you win. And it may be the value of krona has gone up, in which case I win. But that's a future. It's a contract to transact at some point in the future. There's uncertainty associated with that. Now, an option is the right but not the obligation to do one of those. So in other words, an option is like insurance.
So it says, I'm going to buy the right to buy so many krona for so many pounds in August if I choose to. That's an option. Now, come August, I've got two choices. If all these krona have gone up in value, I'm like, well, I should do this. And I do the transaction. And you have to transact with me.
So if the krona's value has gone down and the pound's gone up, well, I'm fine because my pound's gone up. And so I take this option, I tear it up. I don't exercise it. So that's an option. And the point about an option is options have value. At any point you can plot on a graph using all sorts of complicated mathematical models the value of an options contract over time.
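A minimal sketch of the two payoffs just described (standard finance shorthand, not from the talk): the future obliges both sides, while the option caps the downside at whatever you paid for it.

```python
# Payoff sketches for a future vs. an option (illustrative numbers).
def future_payoff(spot: float, agreed_price: float) -> float:
    # A future obliges both sides to transact: you take the difference, good or bad.
    return spot - agreed_price

def option_payoff(spot: float, strike: float, premium: float) -> float:
    # An option is the right, not the obligation: if exercising would lose money,
    # you tear it up, so the downside is capped at the premium you paid.
    return max(spot - strike, 0.0) - premium

print(future_payoff(95.0, 100.0))       # -5.0: the future forces the loss
print(option_payoff(95.0, 100.0, 2.0))  # -2.0: the option caps it at the premium
print(option_payoff(110.0, 100.0, 2.0)) #  8.0: the upside is kept either way
```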
So that's the first thing about an option. It has value, and the value is a function of time. The second thing about an option is it expires. There is some point at which the option is now worthless. It's now a contract that happened in the past or didn't happen in the past. Okay, so that's how options work. So what Chris Matts and a Dutch guy called Olav Maassen did
is they applied this idea to every decision and they said, well, look, options have value. Any option, not financial options, real options, any option, any choice you can make has value. So if it's the choice of when to buy my servers,
I could choose to buy my servers now. Well, what does that mean? Well, that means I've sunk some costs now that I could have kept in the bank for a bit. But it also means I have my servers on their way. I now know my servers are three weeks away because that's how long it takes them to deliver servers. So I've bought an amount of certainty of when my servers are going to be arriving.
What have I also done? Well, there's the opportunity cost. There's money that I just spent on servers I'm now not able to spend on some other stuff. So there's a bunch of factors in that decision to buy servers. Cloud versus data center. So do I buy my own servers and put them in a data center or do I, and that's called capital expense, I've bought stuff,
or instead do I basically rent space in the cloud? So that's now operating expenses, like a monthly rental. And again, there are different decisions associated with that. So all these options have value and that value changes over time. Now if you think about my server purchase decision, as I get towards when my project is due,
that option is only going to be useful up to a point, because I've got this three-week lead time. If I don't buy servers at least three weeks before the project ends, oh dear, I now know I'm going to be late because I won't even have the servers. Whatever else I've done in terms of software delivery, I won't have the servers.
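A toy version of that server-purchase option (the dates are invented): with a three-week lead time, the option to defer buying expires three weeks before the deadline, and after that it is worthless.

```python
# Toy expiry calculation for the server-purchase option (dates invented).
from datetime import date, timedelta

deadline = date(2012, 9, 1)          # when the project is due
lead_time = timedelta(weeks=3)       # how long the vendor takes to deliver

last_responsible_moment = deadline - lead_time
print(last_responsible_moment)       # 2012-08-11: after this, the option has expired
```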
So options expire. So they have a value that changes over time, and they expire. So what they said was, in that case, commit deliberately. In other words, never commit early unless you know why. And we commit early. And why do we commit early?
Because we fear uncertainty. Committing when we have insufficient information, i.e. when we've got a fairly high likelihood of being wrong, is more comfortable to us than living with that uncertainty. Even knowing that the uncertainty is the better place to be. We don't have a good reason to exercise an option,
so let's not yet. So we know that's the right thing to do. We know that committing early is a wrong thing to do because it eliminates a bunch of options that may be useful. We still do that. And we do it all over the place. We do it in our sprint planning.
Especially anyone who's doing four or six week sprints. You just close down the option of all the things you could do for the next six weeks. Then, bang. Fantastic. Well done. So that's making the assumption that you're going to learn nothing over the next six weeks. If that's actually true, that's really sad. I don't believe you. I believe you'll learn tons of stuff over the next six weeks,
but you won't be able to action any of that without causing pain and suffering in meetings. So commit deliberately. Don't commit early unless you know why. Buying hardware, buying licenses, any kind of cost decision is almost always an early commitment,
an early investment. Technical decisions. You can make technical decisions such that you leave options open. Bob Martin this morning was saying one of his criteria for good architecture is that it allows you to defer decisions about tooling and about the tool chain you use and about the technologies you use.
You can defer those decisions. Good architectures isolate little subsystems so that you can make independent decisions within those subsystems. You're not coupling decisions to each other. In other words, exercising an option here doesn't close off another option over there. So real options is a model.
Deliberate discovery. This is something I wrote about a couple of years ago. I wrote an article called The Perils of Estimation, basically saying I think that estimation in the process isn't useful. I'm on video saying that. That's going to be on the Internet.
I think estimation, certainly in the way we do it on projects, is pointless. Actually that's a bit mean. I think that by doing estimation you will learn stuff. So by doing the process of estimation and planning and all those kind of things you'll discover things
and those things might be important. You might discover that actually there's a whole bunch of technology decisions you haven't thought about. There's a whole bunch of interfaces with third-party systems that suddenly come up while you're discussing the detail of a story and planning. So actually there's enormous value in having those sessions,
getting everyone together, and doing the thing we call planning and estimation. But because what we're trying to do is create a big backlog, any of those really important discoveries are accidental. We do accidental discovery. So whilst doing the objective of the session, which is to come up with 400 stories with seven data points on each story,
we accidentally discover that there's some architectural decision we need to make. We accidentally discover, because the business sponsors are in the room, that when the guys are talking about where they're going to put the data center, that the business guys talk about latency and they say if you have the data center over here it's going to take too long to get messages to the exchange over here, and so that's bad.
And they go, oh crap, we didn't even think of that. Right, now we need to go and do stuff. But those things, there's an Australian agile coach I was working with. He called them oh shit moments. So I'm British, so I call them oh crap moments. And he says, you know those moments
that even when you've done all your agile and your BDD and your scrum and your whatever else, and you get in towards the end and you go, oh crap, oh crap, we didn't plan for that capacity. Oh crap, we didn't think that all these things would come out of testing. Oh crap. All these things. He says, what I want is to pull those old crap moments back.
I want to make those things happen earlier because I want the actual release and the path to production to be really, really boring. So deliberate discovery says this. What if instead of pretending that we made the likelihood come down to zero, we assume the following statement. Some unexpected bad things will happen.
I want you to write that and put it on a wall above your desk. I want to have posters of this. I'm going to unpack this. Some. Some is a non-zero amount. On your project, on the next project you do, in fact on the current project you do,
I'm going to go with three. Maybe two. But not none. You don't get to choose none. You get to choose a number bigger than none of things that are going to happen. A non-zero number of bad things are going to happen. Unexpected is this. You cannot plan for them. It's what Donald Rumsfeld calls the unknown unknowns.
All of the planning, all of the contingency you could possibly do, this thing will still jump out and bite you. Doesn't that suck? So a non-zero amount of things that you cannot possibly plan for are going to happen. Bad things means that they will adversely affect
the delivery of your project. They're not a thing that you can suddenly sidestep. They will get you. So let's do this again. Some unexpected bad things will happen. What would you do differently if you were to assume that?
What could you do differently? And this is where I was going with the whole perils of estimation thing. Yes, get all those stakeholders in a room, because that's really important. But don't do this story thing. Do whatever you can think of to do. And there's tons of exercises and
games and things you can do, group activities you can do that are going to help you discover this stuff. And this is deliberate discovery. So assume you're second-order ignorant. Do you know what I mean by second-order ignorant? Second-order ignorant is you don't know that you don't know. First-order ignorant is I don't know how to do something.
So if I'm 5 years old and my mum drives me everywhere, I don't know that I can't drive, because I don't care that I can't drive. It's not even in my world. Mum operates the car. In fact, mum and the car are the same thing. I get in the car with mum in it and I go places.
When I get out, I get out of the car with mum in it. And when I come home, the car turns up with mum in it. It's fantastic. It's like they were made for each other. At some point, I'm about maybe 17 and I want to take my girlfriend on a date. And I'm very, very aware that I can't drive. I'm now first-order ignorant. I now know that I don't know how to drive.
So second-order ignorant is I don't know what I don't know. In other words, you don't know what unexpected thing is going to happen. That's what I said about the unexpected thing you can't predict. What you can't do is make a list of all the things it might be. Or rather, you can, and the thing it is won't be on that list.
Make as long a list as you like. It still won't be on that list. You're second-order ignorant of the thing that's going to bite you, of the non-zero number of things. Are you uncomfortable yet? Because this is reality. This is what actually happens. So now let's assume that you can actively reduce your ignorance.
And the way you do that is you get together and you think of all the axes and it's a group activity. And the great thing with groupthink is they will come up with stuff on that list that you wouldn't have. Aha, there's your second order right there. Brilliant. So trading, market conditions might be a thing that one of the business guys, one of the traders comes up with, that as a techie I might not have come up with.
We end up with all these different axes, all these different vectors along which we might be variously ignorant. And then we could maybe put a number against those or a low, medium, high of how comfortable we are with that part of the domain. Maybe the technology stack, maybe how we're going to integrate with our third parties. Maybe one of the things that I was doing
was connecting to exchanges, financial trading exchanges. Every single financial exchange, when you connect to them, is subtly and evilly different from all the others. Writing software that connects to an exchange is a black art. I tried it once and I sucked at it. I spent a long time trying to be good at it,
and I sucked at it. We've got guys at DRW who are really, really good at it, and they scare me. But you can actively reduce your ignorance. If you know that you're going to be connecting to a new exchange, you know right there there's going to be dragons. Let's do that thing early. Let's do that really uncomfortable thing early because then we'll have a sense of certainty around it.
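A sketch of what such an ignorance map might look like (the axes and scores are invented): list the axes the group comes up with, score how comfortable you are with each, and attack the scariest ones first, while they're cheap.

```python
# Sketch of a deliberate-discovery ignorance map (axes and scores invented).
ignorance = {
    "domain knowledge":           "low",
    "technology stack":           "medium",
    "third-party integrations":   "high",
    "connecting to the exchange": "high",   # here be dragons: do this early
}

# Tackle the highest-ignorance axes first.
rank = {"high": 0, "medium": 1, "low": 2}
for axis, score in sorted(ignorance.items(), key=lambda kv: rank[kv[1]]):
    print(f"{score:>6}: {axis}")
```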
Then there's this idea of double loop learning. A guy called Chris Argyris talks about this. He says double loop learning. Single loop learning is your lean plan, do, check, adapt cycle. The idea is you decide that you're going to do something. That's your plan. You say, we're going to start automating our builds.
What do we want to make better? Let's choose the thing we want to make better. We want to make our mean time between deployments. We want to bring that down. We want to make it quicker to deploy stuff. Let's start automating builds because that takes a load of time. The plan stuff is you baseline. You say, we're going to measure how long it takes us to get stuff into production at the moment.
What's our mean time? Then do is you then do this thing like you maybe automate your build. Check is you then see now what kind of impact that has. Did it make it better? Then adapt is, well now what? That was really cool so let's do another cycle of that. Let's do it again. Actually that made things worse, bizarrely,
because the kind of thing we have has so much uncertainty in it that we're not ready to automate it yet. That was a dumb thing to do. We've introduced more issues. Let's back it out. That was cool and we're done now. What's the next problem? Plan, do, check, adapt is a common mantra, if you like.
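A minimal sketch of one such single-loop cycle around the build-automation example (the numbers and names are invented): baseline first, change one thing, re-measure, then keep it or back it out.

```python
# One plan-do-check-adapt cycle around the build-automation example
# (names and numbers invented for illustration).
def pdca_cycle(baseline_hours, change, measure):
    change()                          # do: e.g. automate the build
    after = measure()                 # check: re-measure mean time to deploy
    if after < baseline_hours:        # adapt: keep it and go around again...
        return "better: keep it and run another cycle"
    return "worse: back it out and try something else"

# plan: baseline first, or you can't tell whether you improved
print(pdca_cycle(8.0, lambda: None, lambda: 3.0))
```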
By the way, what most people do when they're doing any kind of process improvement is they just go do, do, do, do, do. That's what they do. They don't do any planning or checking or adapting. They just do stuff and keep on trying stuff. Unless you baseline where you are and measure things, you can't tell whether you're improving, but that's another topic.
Double loop learning says, we've got this cycle and we've got this way that we're moving forward. Let's step back and see if that cycle is the right cycle. Let's see if we can learn about how we're learning. Are there more effective ways we can learn? For instance, using deliberate discovery rather than story-based planning
is a double loop learning exercise. It says, let's look at using the time with those people in the room and see if there's a more effective way we could use that time to learn what we learned to reduce our uncertainty, reduce our ignorance on this delivery. Here's why you won't believe me.
I don't think you'll believe me. I don't think you'll believe me because you're hard-wired not to believe me. The first way you're hard-wired not to believe me is a thing called attribution bias. Attribution bias says this. It says, when a bad thing happens to Dan, it's because Dan's stupid.
Dan should have seen it coming. Come on, Dan. When a bad thing happens to me, well, it could have happened to anyone. How could I have seen that coming? That's called attribution bias. When bad things happen to other people, well, they should have seen it coming. When bad things happen to me, well, no one could have predicted that. I mean, surely that was just bad luck, right?
That's attribution bias. We are all wired to do this. It's how we protect our egos. There's a fantastic book by a lady called Cordelia Fine called A Mind of Its Own. There's a whole load of stuff in there about this. It's wonderful. It's a really great read. Confirmation bias is the next one.
Confirmation bias says, I will seek out data that confirms my position, and I will delete data that doesn't confirm my position. Again, we are all wired to do this. There's a great behavioral psychology experiment that demonstrates this, where you take two radically opposing positions,
two people or groups with radically opposing positions, give them the same data and see what happens. An example that Cordelia Fine uses in her book is, you take pro-life and pro-choice proponents, so pro- and anti-abortion, which is a really, really volatile topic, and you give them infant mortality data.
Both groups look at the same data and go, See? See? I'm right. What? They go, look at the data. See, this proves it. This proves you should be pro-choice. They will interpret the data in such a way that it confirms their belief.
Again, Chris Argyris has this model he calls a ladder of inference, which I'll just mention. Go and Google it. It's a very, very powerful model. Haven't got time to talk about it now. Confirmation bias says, I will seek out confirming data, and if what I want to believe is that this project will succeed, I really want it to succeed, I will rose-tint everything.
More importantly, if I want my boss's approval, I will rose-tint the message I give to him or her. Because I like being approved of. I like being liked. I'd rather be liked than, I don't know, be right. Be honest, have integrity.
Oh dear, there I go. My favorite one is bias bias. Bias bias is the reason that 84% of men are more than averagely sensitive lovers. Okay. And why in excess of 70%,
I think it is actually in the 80s again, I'm going to go with men again, are better than average drivers. Okay. We know this can't possibly be true. Because that's what average means. Yeah? But what happens is, you decide that on any scale, and again, it's about protecting your ego.
Well, I reckon I'm a better than average driver. I wonder what an average driver would do. Obviously, I'd be far more considerate than the average driver. I know my stopping distances better than the average driver. I'll let people out more than average. Yeah, I'm higher than average. Everyone's higher than average. My favorite thing about bias bias is it applies to bias bias.
So having told you all that, you're all sitting there going, yeah, but I'm not as biased as the next guy. So I'm okay. And this is this whole uncertainty thing. You're going to be really uncomfortable with that. So I'm just going to finish with this. I'm going to say, yeah, this craving for certainty.
This is why you're not going to believe me, because what you're going to do is you're going to put mental constructs in place that give you the illusion of certainty. Okay? I've been talking for the last two years about this idea of embracing uncertainty, and I'm still desperately uncomfortable with it. Chris Matts, he's built this whole model of real options for embracing uncertainty,
and it scares the bejesus out of me. It's an astonishingly powerful way to think, but it messes with your head. So this is the TLDR. This is the too-long-didn't-read. Expect the unexpected. Okay? Expect the unexpected because it's going to happen.
And in fact, I would go further. I'd say expect the unexpectable. Right? Even now I've said expect the unexpected. You can't go off and make a list of unexpecteds. You don't get off that lightly. Okay? You've got to live with this. Anticipate ignorance. Assume that there is stuff of which you are ignorant, and assume that there's stuff you can do about it.
And then I guess my parting message is embrace uncertainty. It's inevitable. The one thing you have absolute certainty of is uncertainty. The one thing you can be sure of is a non-zero number of bad things that you can't predict will happen to you on your next project. What are you going to do about that?
Thank you.