How to sell a big refactor or rewrite to the business?
Formal Metadata
Title of Series: EuroPython 2024 (talk 77 of 131)
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/69457 (DOI)
Language: English
Transcript: English (auto-generated)
00:04
So how many of you have code bases that you hate? Hands up. Absolute train wrecks, right? And we as an industry are really good at creating train wrecks.
00:20
And yet rewriting, or large-scale refactoring, seems to be something that a lot of people argue against. Take, for example, Joel Spolsky. He famously said in one of his blog posts that they did it by making the single worst
00:43
strategy mistake that any software company can make. They decided to rewrite the code from scratch. Now I'm gonna skip the fact that Netscape didn't only collapse because of that. There might have been other factors involved.
01:01
But Microsoft is a sponsor of this conference, so let's not be too hard on them. It is true, though, that part of the reason Netscape collapsed was that rewrite. But at the same time,
01:21
there is this very clear statement, and at first I believed it. And then I started noticing some interesting anomalies: company after company rewriting their software and being successful at it. These are companies I've worked for or worked with as a consultant.
01:42
With a few exceptions, which are ones where I just heard the story. But you know, I got curious: why is that? So this talk is going to explore that. What if we could rewrite? What if it wasn't a
02:01
perfectly horrible idea to rewrite things? And the structure of this talk will be this. We are going to start by me explaining the argument against code rewrites. This is going to be like a more or less fair representation. Of course, I cannot represent anyone.
02:23
Everyone has slightly different opinions, but I will try to represent the counter-argument as fairly as I can. After that, I will show you the case studies that make me reconsider, and in the end
02:41
I will just try to find the common thread among these case studies and tell you why those rewrites were successful and what to do to have those successful rewrites. And then the answer to the question, how do I convince the business, will be
03:03
you will do those things that I said, and you will tell them that this is the way we are going to do it: Ivett said that this way it's gonna be successful. Okay. So let's get started with the arguments. Why is this a bad idea? Maybe I will stand here anyway.
03:23
And move this over, because my clicker doesn't seem to work as well as I expected. So this is going to be a bit of a hand-wavy pseudoscience. But unfortunately, it seems like in our industry that is the
03:42
standard for proof. I would really like to change that. But still this thing that I'm going to show you, it is called a hypothesis for a reason. It is hand-wavy science, but it feels right. And this is also the argument against rewrites. So you have this graph of
04:03
cumulative features over time, depending on your design investment. And what we feel is that if you do no design at all, then at first we are going to move really quickly. We are going to just start out fast, because with a small code base you can always hack it together.
04:24
That feels right. It also feels right that after a while it's gonna be really painful, because your design will suck. So the alternative is that we do good design. We start with a months-long analysis of the requirements and so on and so on.
04:44
Now the problem with that is that it starts out slow, with only the hope that someday maybe we will have a good design. And here is the catch: there is a design payoff line. This is the amount of features you need to have for design to pay off.
05:04
And there is your minimum viable product. This is the smallest amount of features you need to figure out if your business is going to stay afloat. And that is the reason why every single startup is motivated to hack.
05:21
Because if they don't, then they take on all that initial investment as extra risk. And that is painful, because it means I can only run so many experiments, and I can only fail so many times. So the responsible thing to do with a new product is to hack.
05:45
But then, of course, we need to fix that somehow. So there is this idea of, okay, how about we first start out with hacking? We hit our product market fit, and then we slowly try to improve on the design. And we slow down a bit. Maybe it will be
06:04
even slower than just moving on. But over time we can get to this nice curve. Question: is that the rewrite? Maybe. Maybe not. I don't know. Maybe you can do this with small refactorings. By the way, today
06:22
I'm talking about refactorings where you cannot do them under the radar, right? If you can do it under the radar because it's a small refactor, please just do it. That's your job. These small decisions are your job. Don't ask for permission on small refactors.
06:41
Only ask if it's big enough that it's going to be visible on your management charts or whatever. But then, what if we don't have that option? Now, before I move on, I do want to make a slight adjustment to this.
07:01
Because this is the original version from Martin Fowler. But I want to make a small adjustment. Instead of cumulative features, I'm going to call it product attractiveness. The reason I do that is because I want to show you something I learned while running a
07:21
bunch of these workshops. These are people taking part in a workshop called Lean Poker. In Lean Poker, each team has their own little poker bot. They are playing against each other. But the trick is, the bots started playing even before the teams checked out their code from Git.
07:43
And they play all through the day. And every few seconds, someone is going to get points and someone else is going to lose points. And this happens to be a really good environment to experience in a short time frame what being a startup looks like over a year or two.
08:06
And when you're having this workshop, you will also see a chart like this. This is the cumulative points for each team after a couple of hours. Now, notice something. One is that the black dots are deployments.
08:22
This is less than eight hours. Do you deploy that many times in your job? That's a deploy every 10 to 20 minutes. Who is deploying that often? Interesting. Now, it turns out that's a competitive advantage. Why don't you do it?
08:44
But that's not my point today. My point is that when someone's code is a little less good than the competition's, they are going to start losing points. Right? So these bots are horrible at playing poker. You open the code, and if you're a poker player, it's painful to watch.
09:06
Because they don't know poker. They just know poker better than their competition. Right? And here is the learning that I had from this. Your product attractiveness minus the attractiveness of your competitor.
09:24
Take an integral of that. That's your market share. The minute your attractiveness is less than your competitor's, your market share will drop. And this is an important point to keep in mind when we look at this chart.
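One way to write that hunch down (my notation, not a formula from the slides): if A_you(t) and A_comp(t) are the attractiveness of your product and of your competitor's product over time, then roughly

$$\Delta\,\text{market share}(T) \;\propto\; \int_{0}^{T} \bigl(A_{\text{you}}(t) - A_{\text{comp}}(t)\bigr)\,dt,$$

so the moment the integrand turns negative, your share starts eroding, even while your absolute feature count keeps growing.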
09:43
Let's assume that we got to a point where we do the rewrite. That means that we are not going to deliver features for a while. Here's our competitor. They were behind us. We were beating them. And then a few weeks later, they are ahead of us, and we are losing market share.
10:02
What is a responsible CEO to do at this point? They're going to shut the project down, obviously, because they don't want to drive the company into the ground, right? And then what do we have? We have a
10:21
partially started rewrite or refactor that is probably just going to be in our way going forward. We made things worse. So what's the implicit assumption here? Because this is the argument against rewrites, but I made some implicit assumptions.
10:43
I don't want you to say it now, because what I'm going to do now is go through the case studies. I want you all to think about: what is it that makes these projects different from the situation that I explained until now? What is the difference?
11:05
So, case study number one. So there is this presentation software. It's called Prezi. Who uses Prezi? Not many people. Okay, it's a pretty nice presentation software. For a while, they were pretty successful.
11:23
Recently, they seem to have been falling out of favor, but when this happened, they were still very successful. And the thing that happened with them is that they built their whole product on a little
11:40
language called ActionScript and a platform called Flash. Yeah, I mean, yeah, that's a problem, right? And then they were thinking, okay, no problem, we are just going to rewrite this in HTML5. Now, how do you think that went?
12:00
Not great. The reason it didn't go great is because HTML5 is significantly different from Flash. There are things you can do in Flash that you cannot do in HTML5 and vice versa. They couldn't make a compatible product. This is impossible.
12:21
So what did they do? They created a competitor product. They created a new product. It was called Prezi Next at the time, or Prezi Business at first and then Prezi Next. And this new product was in HTML5. It was targeted at businesses
12:40
who couldn't install Flash anymore because of corporate policy. It had a lot fewer features, but it was running without Flash. And that was their unique selling point: this is not as good as Prezi, but almost as good, and you are actually allowed to use it in a corporate environment.
13:06
And they sold it at actually a higher price than their initial offering. And then over time they improved on this new product. At some point it just got better than Prezi Classic, and today you are not even able to create Prezi Classic presentations anymore.
13:28
That's an interesting story. Okay, let's move on to the next one. Catastrophic failure. That was actually, yeah, one of my birthdays. I was the lead of this team.
13:42
And we were creating a workflow engine. With this workflow engine, you could basically create a graph of events, and then those events would be executed in the background for you. So you could say, if the user clicks this button,
14:01
I'm going to wait seven days and send them this email, and if they still didn't respond, then we are going to send another email, and so on.
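To make that concrete, here is a purely hypothetical sketch of the kind of workflow definition such an engine might execute; the talk shows no code, and all of these names are invented:

```python
# Hypothetical illustration of the workflow described above; none of these
# step or template names come from the actual product in the talk.
workflow = [
    {"trigger": "button_clicked"},
    {"wait_days": 7},
    {"send_email": "reminder_1"},
    {"if_no_response": [
        {"wait_days": 7},
        {"send_email": "reminder_2"},
    ]},
]
```

The engine would pick these steps up and execute them in the background; the rest of this story is about what happens when one such step silently fails.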
14:24
Now, there was a really interesting thing about this. Originally, this was a monolithic architecture. Everything in the software was in one repo and in one service. And then we had the genius idea of moving some of the things into microservices. And one of those microservices was filtering.
14:40
Now, what happened there is that one of the workflows had a step that was supposed to filter out most of the users by checking the list against the customer's entire contact database. And that step didn't happen. So instead of sending out
15:04
a 70% coupon for three users, we sent it out for over 300 users. You can imagine that our customer wasn't happy, and we were really worried. And then we realized that there was a major issue with our design.
15:22
And that major issue was that it didn't handle failures correctly. There is actually a separate talk I gave, called Learning to Fail, about that specific solution, how we solved that problem; you can find it online.
15:40
But the bottom line, what's relevant here in this case, is that we didn't stop everything and rewrite that thing. What we did was take the different queries that were running and look at, okay,
16:03
what are the queries that are most likely to cause us pain? And we fixed those first and deployed it. And then we found the next query that had to be fixed and fixed it and deployed it, and so on and so on. After about
16:21
half a year, we had fixed all of the issues in the code base. And then we stopped. Basically, here we started out with a code base that was not stable, and we got to a stable state by fixing one problem at a time.
16:44
Then we have the story of the slow import. That's also a project I was working on. At the time we had three monthly releases, and one day my boss comes to me and says, hey,
17:02
you've been talking about this continuous deployment, and we are really not warming up to that, but here is this problem that we really need to fix. So you get to do continuous deployment, but you have to fix this. Okay, so the problem was there was a daily import. This daily import was running for 25 hours.
17:25
Yeah. There was one day in the year when it actually finished within the day. Which day? Who knows? Exactly, daylight saving time change because that's a 25 hour day.
17:46
So yeah, that doesn't sound like a daily import. So what we did there is we looked at it and said: okay, we have of course hundreds of customers. This is a problem for all of them, but for most of them
18:01
it's a problem on the scale of a few hours, and a few hours import is not great, but not catastrophic. But we have this one customer that has this daily import that runs for 25 hours. How about we do this? How about we
18:20
just fix it for them? We looked, and this import had lots of settings, but this customer was using just one set of settings. So if I fix just that one set of settings for this one customer, that's a win. So what we did, we basically
18:41
created a new class that was a router class. It would look at the incoming import request and see: okay, is this this customer or not? This customer? All right, do it the new way; otherwise do it the old way. This was literally an if customer ID equals,
19:04
but that's fine. At this point, that's fine. So then we released that, and the import for that customer took, any guesses? Okay, it was 90 seconds.
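A minimal sketch of that router idea, assuming a Python code base; the class and the names are mine, not the project's:

```python
# Hypothetical sketch of the router described above: exactly one customer gets
# the new, fast import path, everyone else keeps the old code path.
FAST_PATH_CUSTOMER_ID = 42  # stand-in for the one customer with the 25-hour import

class ImportRouter:
    def __init__(self, old_importer, new_importer):
        self.old_importer = old_importer
        self.new_importer = new_importer

    def run(self, request):
        # Literally "if customer ID equals ...", and at this point that's fine.
        if request.customer_id == FAST_PATH_CUSTOMER_ID:
            return self.new_importer.run(request)
        return self.old_importer.run(request)
```

The crudeness is the point: the if statement is an explicit, easily removed seam, and the release behind it ships a user-visible win.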
19:22
Yeah, that was a very inefficient way of doing the import that we had originally. So of course, that was a big win. And then my boss comes to me, great job Ivett. How about we fix it for other customers? Sure, why not? So what we did is we looked at
19:41
all of the settings that our customers were using. We immediately noticed that there were five settings that only one customer used each. We were like, okay, but let's be smart about this. What we did is we looked at the setting that was the most used and that would enable
20:06
this feature for the most users. We did that feature: we implemented it in the new way of importing. And then suddenly a bunch of other customers just got a faster import. At this point, our router changes. It no longer asks whether this is that one customer.
20:23
It asks: are the settings that are enabled ones the new import already supports, or not? If all the enabled settings are supported, good, use the new import. Otherwise, we are going to use the old import. And we got to the point where eventually, after about, I think it was six months or so,
20:42
we had an import that was good for like 98% of our customers. For the rest, it was running fast with the old one anyway. So we just left it there. And it was sitting there for a while until one day someone wanted to add a new feature to the import that would have been challenging to add in two places.
21:06
So we realized that it was easier to just finish the job. By the way, those five single-customer settings: we had lost those customers along the way, so we only had to implement two more features. Great. So that was another interesting story. And for some reason, my laptop just crashed.
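To round off the import story: by the end, the router's check has evolved from "is this that one customer?" into "does the new importer support every setting this customer has enabled?". Roughly, again with invented names:

```python
# Later stage of the same (hypothetical) router: dispatch on settings instead of
# on a hard-coded customer ID. The supported set grows release by release.
SUPPORTED_BY_NEW_IMPORTER = {"csv_source", "dedupe_contacts", "nightly_schedule"}

def run_import(request, old_importer, new_importer):
    enabled = {name for name, value in request.settings.items() if value}
    if enabled <= SUPPORTED_BY_NEW_IMPORTER:  # every enabled setting is supported
        return new_importer.run(request)
    return old_importer.run(request)          # otherwise fall back to the old path
```

Each newly supported setting immediately moves another batch of customers onto the fast path, which is what keeps the business value visible release after release.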
21:26
Becoming a platform. So this one I actually had to discuss with the company involved; I cannot give many details, but the gist of the story is that this company wanted to become a platform. And when
21:42
they told their developers... well, no worries, no worries: the rest of the slides are really just there for beauty, I can do the rest of the talk without them. So, they wanted to have
22:04
an API so that other people can build on their platform, right? And then they went to their engineers and said, hey, we want this: we want a platform, we want an API. We heard that you already have some sort of, you know, APIs.
22:21
Can we just expose that? And the engineers were like, are you crazy? Everyone is going to think that we are horrible engineers; no one should see that. Not even we should see that. So then, what was the solution? Well, how about we take the opportunity to fix our internal architecture?
22:43
So what they did is, okay, let's assume that we add the first API endpoint for our product. When we are adding that API endpoint, we are going to make an effort to clean up that part of the API, just that part, and then release it.
23:02
And then we are going to clean up another part of the API and release that as well. And so on and so on. One by one they were cleaning things up, and this was interesting because this is not refactoring within a single code base. This is refactoring across microservices.
23:21
Like that's crazy hard. But they did it. And while they were doing it, they were gradually exposing their API to their customers.
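A common shape for this kind of move is a thin public facade that only routes to internals that have already been cleaned up; what follows is a hedged sketch of that pattern with invented names, not the actual architecture of the company in the story:

```python
# Hypothetical strangler-style facade: only endpoints whose internals have
# already been cleaned up are exposed publicly; the registry grows one entry
# per release as more of the architecture gets cleaned up behind it.
PUBLIC_ENDPOINTS = {}

def expose(path):
    """Register a cleaned-up handler as part of the public API."""
    def decorator(handler):
        PUBLIC_ENDPOINTS[path] = handler
        return handler
    return decorator

@expose("/v1/contacts")
def list_contacts(account_id):
    # delegates to the already-cleaned-up internal contacts service
    ...

def handle_public_request(path, **params):
    handler = PUBLIC_ENDPOINTS.get(path)
    if handler is None:
        raise LookupError(f"{path} is not part of the public API yet")
    return handler(**params)
```

Every endpoint added to the registry is both a customer-visible platform feature and a forcing function to clean up the internals behind it.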
23:42
The last story is about reacting to new markets. So, who knows VS Code? Who uses VS Code? Surprisingly many people; given this is a Python conference, I would have expected you to use Vim. But if you are not using Vim, you are using VS Code. So, the interesting story about VS Code is that
24:04
Microsoft already had an IDE. It was called Visual Studio. They still released VS Code. Why? What was the reason? Because web developers hated them and they needed to get a better reputation in that
24:24
market segment. So, they created a smaller, more lightweight IDE that they could market to the segment that hated them. And they were like, hey, we know you hate us, but here is a product that you will like, we think, maybe.
24:46
And we were like, huh, Microsoft can actually do good stuff. Great. So, they actually created a new product, and it was a lot less capable than Visual Studio. It was actually a lot less capable than a lot of the things that we were using, but it was pleasant to use.
25:05
And they sold us the convenience. And that it was free at the time. And then, as time went by, they actually added new features. And by now, VS Code is pretty much a full-fledged IDE. At the time, it was
25:23
not really a full-fledged IDE, but now it's a very good IDE. And what's really interesting is that they didn't use a single line of code from Visual Studio. They actually built the whole thing on Node.
25:40
The reason they did that is because they realized that the Node community already has such a vast array of packages that they could create their product much more easily, much more cheaply, and much faster on that platform. So that was already interesting, but there is another story that is very similar.
26:03
This was an accounting software firm. For the life of me, I can't find their name anymore. I read about this story at some point, and I was trying to find the blog post about it again, but I can't. But anyway,
26:22
they had this product, and they realized that their product was so hard to use that if someone were to enter the market with a much simpler product, it could be a dangerous competitor for them. So their genius idea was to create that competitor themselves. They actually created
26:44
another company that was owned by this larger company, just to hide the fact that it's the same company, and they released this smaller, easier to use, cheaper competitor to themselves. And for a while, their market share was
27:02
dropping and dropping, and this new company was coming up, and the market was like, yeah, the old company is over. And then they announced: yeah, by the way, that's us too, and this is the product now. And they just simply replaced their existing product with this new version.
27:20
So, that's also an interesting story. Okay, so enough of the story time. I still have about a quarter of an hour to tell you about the takeaway. So, what was the implicit assumption? Who can tell me?
27:42
Yeah, that your software always increases and never decreases. Maybe that could also be one. Any other ideas?
28:00
Okay, the competitor might have the same issues. That's also an option, but not what I'm thinking of. Think about the attractiveness. That's really good: thinking about the attractiveness rather than the features you actually have. Okay, these are fairly close to what I actually found to be the
28:22
common thread, and I actually need my glasses for this. On the slide I have, during the rewrite, we don't deliver value to the business. This is one of the assumptions. The other assumption is the rewrite is an all-or-nothing project.
28:40
You either start it and finish it, and you have value, or you start it, don't finish it, and you don't have value. These are the two assumptions. So then, how do we sell the refactor? Well, you need to verbalize the value, and you need to make sure that the value comes early.
29:02
If you can make it so that your refactor has actual business value, actual customer benefits, and you can make sure that in the first sprint of that project you can show that that benefit exists.
29:22
That's a win-win. You are going to have a better code base, but also your customers are going to be happier. And basically I find two big groups here. One is creating your own competition, like VS Code and Prezi, where in the case of Prezi, they had to rewrite their software for platform reasons.
29:48
In the case of VS Code, they just wanted to enter a new market, but in both cases they basically created competition for themselves, and then waited for the competition to take over.
30:02
The other strategy is the incremental replace with user value. This is the story with the workflow engine, where we just looked at, okay, what is the most painful issue that we have that we need to fix right away?
30:20
This is what we did with the import: how can we help the most customers have a faster import, and with which feature can we move towards that? And again, with the API, with the platformization, we actually want to give our users more
30:42
ability to interact with the API, so we are going to expose more and more of that. And this is the part where it would be really nice to have slides, but my laptop has completely crashed, so, sad story.
31:01
Anyway, what I have on this slide: I just uploaded the slides to the EuroPython website, so you will probably find them later. Please don't look them up now, but you can download them and look at this later. But basically you have
31:21
quadrants here with two axes. One is all-or-nothing projects versus incremental delivery, and the other one is no verbalized customer value versus verbalized customer value. Okay, and then what I'm hypothesizing... I don't know, this is something that someone should do research on, whether this is true.
31:41
But my hunch, based on the data that I have seen, is that if you have an all-or-nothing project with no customer value, that's going to be a fail. Very likely fail. If you have an incremental delivery, where every step of the
32:01
delivery also creates new customer value, that is something that is likely to be successful. Even if you don't get to do the whole thing, like just a large part of it, you are actually going to be happier by the end of the project, because you have
32:20
already delivered some value out of this. And then you have two other options. You could have incremental delivery, but no verbalized customer value. Now, of course, you should have some value from your refactor. This is something we talked about yesterday at the speakers' dinner.
32:41
And someone said that, like, if you don't have business value from your refactor, then why are you doing it? And that's fair. But what I mean here is: can you verbalize it so that the business understands? And if you cannot, but you are doing it incrementally,
33:01
there is still some chance of success. This is something that Shopify did. There is a talk about that from Kirsten Westeinde at Craft Conference 2023. If you want to look it up, look it up. And then the other option is
33:20
an all-or-nothing project that has customer value. I think that is still very likely to fail, because you will have this long period of no value where business will be tempted to shut it down. Okay, so this was the end of the talk last time I gave it, but
33:42
there is a question here that is lingering in the room. What about LLMs? Because now we have LLMs, and we can refactor much faster. And that's a fair question. And the answer is, I don't know. I don't have enough data yet. So one of the things that I'm doing is actually
34:04
doing such projects right now, and I will collect data from that. But also, if any of you are doing big rewrites after this and are using LLMs, please get in contact and tell me your stories, because I want to know as much as possible about this.
34:24
And that is all I had for today, but I do want to give a short plug for two things. One is Lean Poker; I already mentioned it. If you are interested in bringing it to your company: leanpoker.org. And the other thing is, if you enjoyed my talk and you want to see more content like that,
34:44
I have a YouTube channel called Next Increment. There are QR codes in the slides, so you should be able to find it. Any questions? And please go to the microphone if you have questions.
35:11
Yes, I have a question. Would it be a good idea to do a refactor usually as part of implementing a new feature?
35:21
Yes, yeah, that's okay. But you cannot always do that without talking to business people, right? So this is the situation I mentioned at the beginning: if you are delivering a feature and you can do a refactor that is
35:44
small, that's fine. You don't even tell anyone. If it gets bigger, then you need to talk to the business. And if you already have customer value available, that's a win. Because now it's not like: I am an engineer,
36:05
I need to refactor this and I'm going to manufacture a customer benefit for that refactor that I want. You already have the benefit, and you just attach the refactor to it. And all you have to do is basically give a higher estimate. Yay! And you're done.
36:22
And also because it kind of creates optionality, right? That's what you usually want to achieve with refactoring: now we have the option to implement this feature, or another one, or another one. Yeah, and of course you can always say, hey, implementing this feature will be slightly slower if we do it with this refactor.
36:45
I would be cautious with that, though. Because if you say that, then all the product person is going to hear is: see, I get to test my first idea later, and I'm still at my first idea. So what you want to do is: when you are getting started with a brand new project, and
37:07
your product person is not even sure yet if this is the right product, don't bring up the refactor at that point. Wait a bit, hack it in the way you can, and when your product person is coming back with,
37:25
yes, we are starting to have adoption, we need this and this and this feature to get even more adoption, then you can say, yeah, about that, we need to do a bit of work. And then you can say, okay, we will have a bit longer time between the next two
37:44
and between each of the next releases, but while we are doing this feature, this is the part we are cleaning up; while we are doing that feature, that is the part we are cleaning up. But don't try to do it all at once; try to do it in chunks, and try to make sure that your refactor is just as incremental as
38:03
the delivery of the product. Yeah, because with the incremental development, you also learn continuously, incrementally about what the design should be. Exactly, exactly. So basically you are learning about what the product should be and what the design should be. So it's a double win.
38:23
Other questions? Okay, can you go to the microphone? Thank you. So I had this experience with time constraints and big refactors.
38:42
The time we had to do the feature was limited, but the code was just horrible: code smells everywhere, multiple inheritance, you name it. It was really bad. I tried to sell the refactor to the business. Eventually, I was told, okay, but only if you do it in two weeks, in one sprint.
39:06
In the end, it was okay. I was able to do it, though not as extensively as it should have been done. I will have an opportunity to do it again in the same code base, in the same feature
39:21
in the next few months, I'd say. How should I sell it to the business, saying, okay, I've done this, but we'd be better off if we did even bigger changes? Let's do the role play: I'm your manager, you are saying that to me. I'm gonna say, yeah, but we do need to deliver features. We cannot just stop. We need to have features.
39:49
Can we do it? What's the smallest thing that you can do right now? And then maybe you can add another small thing next. So that is the problem: the small things are what I did in the past, and now only the big things are left, big schema changes.
40:05
Big schema changes. Oh, schema changes are fun. Okay, so this is one where I'm just going to say: it's hard, but you can do it incrementally. It's just a bit harder. And this is the part where I'm gonna say, let's talk in person, because that's a can
40:21
of worms that I don't want to open during the questions. I wanted to ask if in your data collection you have also looked at the practices that the teams were using, so whether this slope, let's say, depends on, I don't know, Agile, Extreme Programming, whatever. And I have a hint for you: if you can verbalize the business
40:46
value for the management, maybe you can verbalize it for an LLM too, yeah? Yeah, okay. All right, so as I said, the reason I said it's a hypothesis is that I don't have solid data. And
41:04
doing this kind of research is expensive, and if anyone from academia is interested in doing this, I would be happy to collaborate on that, because I think this needs to be researched more. And not just this; I'm a big fan of the DORA reports, because that's finally actual science.
41:24
Like finally actual science that proves what works and what doesn't work. And sometimes it goes against my intuition, and then I'm happy because then I learn something, right? And this is the kind of stuff that we should do rigorous research on.
41:45
Yeah, so coming back to the question of what practices these teams were using: these were different teams. Mostly teams that called themselves agile. I wouldn't say that they were all agile,
42:01
not by my understanding of agile. But yeah, that was the basic idea, and I think we are almost out of time. Okay, then thank you for your attention.