An Optimistic Proposal for Making Horrible Code... Bearable
Formal Metadata
Title: An Optimistic Proposal for Making Horrible Code... Bearable
Series: RailsConf 2017
Part: 14 of 86
License: CC Attribution - ShareAlike 3.0 Unported: You may use, modify, copy, distribute, and make the work or its content publicly accessible for any legal and non-commercial purpose, in unchanged or changed form, provided you credit the author/rights holder in the manner they specify and pass on the work or content, including in modified form, only under the terms of this license.
Identifier: 10.5446/31476 (DOI)
Language: English
RailsConf 2017, Part 14 / 86
Transcript: English (auto-generated)
00:13
Hi. Welcome. My name is Joe Mastey. We're going to talk about horrible, terrible, no-good code bases. Can I get a quick show of hands? Who here is dealing currently with just horrifying code at work?
00:24
Yeah, awesome. So this is pretty much a support group, if nothing else. So I've worked at a lot of different companies over the course of my career, and this has really been one of the common threads across all the companies I work at. I guess I'm fortunate enough to work at places that are very profitable,
00:40
but the result of that is that they're usually places that have just terrifyingly bad code. And so we're going to talk about a lot of different things. I want to start off, I'll give you an example of one of the code bases I'm working with, because I'm going to use it as an example across the entire talk. This is a place I was working at when Rails 4.1 was being released.
01:00
The code base I was working with was Rails 1.2. We were actually super proud of being at Rails 1.2, because we'd actually started at 0.7. We'd managed to work our way all the way up to 1.2. It was fantastic. And it had a lot of the characteristics that you're probably thinking about for your own code bases. We had custom packages, somebody decided that they should write their own libSSL changes,
01:23
which meant that we were running our own Postgres system libraries. You know, pretty nasty stuff. We had a 6,000 line user model. Yeah, fantastic. The good news is that everything was in one place. The bad news is that everything was in one place.
01:41
The test suite took four days to run on a laptop. It was a little misleading, because, of course, if it takes four days to run your test suite, you don't run your test suite on a laptop. And so we actually paid, you know, five figures' worth of AWS bills to run across hundreds of machines. And we got it down to four or five hours before you could see whether your test passed.
02:03
And then when it did pass, about 10 percent of the tests flapped. Does everyone know what a flapping test is when I say that? Some people do. All right, so a flapping test is basically any test where it passes sometimes, but not all the time. So like 50 percent of the time it's green, or 30 percent of the time it's green.
02:21
Or it's green unless it's like a Friday at the end of the month. And so every time you got your test run back, you'd have, you know, somewhere between 150 and 300 failing tests, but it's not really certain which tests they are. And we had about a million lines of Ruby. So it's probably more accurate to say this wasn't a Rails app. This was a Ruby ecosystem with a Rails app hiding somewhere inside of it.
02:44
All right? And so we did exactly what you would do in this situation any time, right? We declared the entire legacy code base as deprecated. We split off 30 percent of our team, had them build a brand new copy of the code base, decided that a year from now we would completely tear down the old code base and be done, right?
03:05
It's going to be Rails 4. It's going to be, you know, microservices. It's awesome. And it went exactly the way that you would expect it to go, right? Isn't that a successful rewrite? Here's a show of hands. Has anyone actually deprecated their terrible code base before? I don't see.
03:20
One hand, two hands. Nice. All right. So, yeah, it happened the way that you would expect. Eighteen months later we had the new code base, which we had managed to move about 10 percent of the revenue onto it, so we couldn't kill it because now it was actually making money, and we had the old monster that continued to make all the money, right? And, of course, everybody really wanted to work on the new code base,
03:41
and you really can't blame them, right? The new code base is where all the more interesting code happens, you know, things that took us two weeks to do on the old code base, we could do in just a handful of days on the new code base. But the challenge here is, of course, that we have to continue running the business, right? Our requirements are still changing. We couldn't actually freeze or in our hubris attempt to deprecate the entire thing.
04:02
We had to keep updating it. And so this is a big challenge, right? And I became really interested in this topic because we did this to ourselves. At some point, somebody went into a class file that had 5,990 lines of Ruby in it and decided that the right thing to do was to add 10 more lines of Ruby, right?
04:25
And you know better. You know that you don't need to have 200 methods in one class. You know that that's not the right thing to do. And we made that decision anyway. And I wanted to understand why because, I mean, first of all, because we were about to build a second monster,
04:42
and because this to me is like such a common thing that I don't really understand how people who have all of these skills, who have, you know, you go through refactoring courses, you spend years on this, and yet you make these really bad decisions. And we keep adding barnacles upon barnacles until we get a completely untenable code base.
05:02
And so I was working with a bunch of developers on this code base, and I basically sat down and started to ask, like, you know, why is it that we don't split out a little bit of code? And I got responses like this. I can't even tell what code's in use anymore, right? Any time you have 200 methods, you go into an object,
05:20
you delete a little bit of code, and you go, ah, I wonder if that's still being used. Of course, it's practically impossible to do. We had some super nasty method-missing magic that we used all over the place. Somebody had just learned Ruby in the really early days of the code base, and they're like, man, I don't have to, like, declare anything. It's just all going to be method-missing. We'll dispatch that way.
05:41
And so it's impossible to tell what's going on, right? Changing the code breaks completely unrelated tests. I swear to God, one time I had somebody change a view and break a unit test. I do not know how that works. And yet, here we are.
06:01
Good luck merging your style pull requests. Hopefully you guys are of the belief that it's useful to have a somewhat consistent code style. So if you do what we did, you, you know, install RuboCop, you run RuboCop against your code base, and it's trouble. Yeah. You know, and you could do, there's a dash A option. So you can auto-fix all of your style violations.
06:22
You know, and then you just put up a pull request that has 69,000 changes. And then we can't trust the test suite. As I mentioned, it takes forever to run. There are lots of failures. You know, of course, anytime you change a piece of code, a bunch of new tests are breaking, but we don't know if that actually means functionality is broken.
06:42
And so you start to shy away from this, right? Like, I'm not going to wait days to run my tests. So what did I do instead? We became surgical in our precision. We learned how to not disturb any of the code around what we were trying to change. Make the tiniest possible change.
07:02
Push it up. Don't run your tests. And then run like hell. As long as my name is not on the git blame, we are okay. Right? And a couple times, you know, we tried, you know, we figured, we'll fix this all. We'll add, you know, whatever new libraries. We'll split everything up.
07:21
But it always felt like, you know, basically trying to bucket out the entire ocean with this little plastic bucket. Like, you keep going and keep going, and you never make any kind of progress. And so that leaves us here. Over time, we became afraid of changing the code base. This is an actual psychological condition.
07:41
Interesting side note. It's also one of the underlying conditions to clinical depression. Awesome. And we just, we became afraid of making changes. And so now, not only did we have the original problem of a code base that continues to get worse, we have the secondary problem of not being willing to change it.
08:01
Which leaves us here. A trough of despair. So this is a pretty nasty situation, right? Well, the good news is that once we actually took the time to, you know, take a look, take an honest look at why we continue to make everything worse, it gives us the opportunity to start fixing it. Right? So we're going to talk about
08:21
how to tackle these two problems for the rest of the talk. Number one, how do you become less afraid of your code base? And number two, how do you actually dismantle it in a way that is sane? And the first goal that we have is to name the thing. There's this whole concept of knowing a thing's true name is to have power over it, right? It's such a cool concept. And I think that this is actually true about
08:41
when we deal with our really problematic chunks of code. Because if I ask you to think about this really tough, terrible code base that you have, that you've dealt with, what you usually think of is the very worst of the thing. Right? I've put up a whole slide of it. It's a million lines of code. It's, you know, 6,000 lines of this. And it's, you know, a test suite that takes forever.
09:01
And it becomes a stand-in in your mind for everything about that code. Every time you try to deal with it, what you get is the image of this big, impossible thing. But that's a terrible cognitive trick, right? That's not the reality. Every code base I've seen, the badness is unevenly distributed.
09:21
And what I mean by that is there are usually those couple things that you would put on the slide. And those are the ones that are horrible. And everything else kind of trails off. So here's an example of how you can look into this. It's something we do. If you don't do a lot of bash, this might look kind of weird. You go into your project directory for this. Taking this from the inside out:
09:40
If you look inside the parentheses, we can do find with a name of star.rb. So this finds all your Ruby files. Pass it to wc -l. That is the word count command, and dash-l tells you how many lines there are. So that's how many lines in every Ruby file in your system. And then pipe it to sort on the right. So this is something that I do pretty frequently, actually,
10:00
when I'm looking at big chunks. People will say, well, our service objects are bad, or our tests are bad. And I'll say, okay, well, we'll go into the tests, and we'll just run this command on your tests. And what we find is this kind of thing. You get an output that is the number of lines sorted, a nice big total at the end. And I didn't want to call out anyone in particular, so I did this on the devise code base. Turns out devise, not so bad.
10:21
But the characteristics of this, I think, are really interesting. This follows a pattern that I've seen in most people's code bases. The very bottom file has 700 lines. Not great, not terrible. That's fine. But it only takes like four or five files before you're at something that is half the size. And most everything in that code base is much, much, much smaller.
10:42
And so what I think is that the badness is really, it's an exponential distribution, right? So at the very far left, we have the couple files that are infinitely terrible. As we move to the right, what we get is this, you know, the long tail of things that are, they're just kind of okay.
11:01
And that's actually a really useful realization as far as I'm concerned, because that means that we can just cut off a couple things, and we'll actually move really far down that line. There's another way we can quantify some bad things. Has anyone used rake stats before? Only a couple. So this is actually built into your Rails app already. If you have Rails, this is a command that you can run right now.
11:22
And what it does is it just runs through all of your files and gives you an output like this. Again, did it on a relatively small code base for illustration. There's a lot happening here, but just to give you a sense of something that can be useful to us, so this column will give you the lines of code in every section of your application. And the absolute numbers don't really matter here,
11:40
because you already know that you have a billion lines of horrible code, and you're really not gonna delete all of your Ruby. But it does tell you kind of where it's hiding. In this case, there's not much happening in mailers, and there's a ton happening in controllers. So we know this is a very controller-heavy application. And so we can start to quantify what's going on. And then over here, the last two columns,
12:01
the one on the left is the number of methods per controller, and the one on the right is the number of lines of code per method. And again, we can start to get these real metrics that are accurate so that we can quantify what's going wrong. So we can see that, yes, we know that our methods are too long, but the reality is that that's really only true inside of our models.
12:21
It's not evenly distributed. And then in our case, there were a lot of different metrics that we wanted to gather. Once we got the inference that we could, you know, start quantifying these things, we realized that the normal output of the tools we were looking at wasn't very helpful. And so we wrote some of our own. So we made a... It's cut off on my slide.
12:40
We made a quick method. It's the number of methods per Active Record class. Here's some fun. If you don't know, you can do this. You can actually look at all classes that exist on that first line, select for the ones that are children of Active Record. We can sort them by the number of methods, so klass.methods.count, reverse it because it's in the wrong order,
13:01
and then map it against some string. So not great code, not the kind of thing that I would put up on a pedestal, but it actually gives you this really interesting view of what's happening in your system. And once again, we see that the models at the very top have a lot more going on than the models at the bottom, and that it's distributed in the same kind of way.
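A console sketch along the lines described (the base class is kept generic here; in a Rails console you would pass `ActiveRecord::Base` after eager-loading the app):

```ruby
# Rank every subclass of a base class by how many methods it defines,
# largest first. In Rails: methods_per_class(ActiveRecord::Base).
def methods_per_class(base)
  ObjectSpace.each_object(Class)
             .select  { |klass| klass < base }
             .sort_by { |klass| klass.instance_methods(false).count }
             .reverse
             .map     { |klass| "#{klass}: #{klass.instance_methods(false).count} methods" }
end
```

Not great code, as the speaker says, but it puts the fattest models at the top of the list.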
13:23
And this is another thing I created. I had an inference in one of my code bases that my tests were slow because I was creating tons and tons of active record models. I know it's a blinding realization. It is exactly what people tell you. But it's not really good enough to just say, our tests are bad because there are lots of models being created.
13:42
The reality is that you usually don't know which ones are creating a lot of records. And so I built a simple formatter for RSpec, and what it does is whenever you run your tests, I don't know if you can see in the green there to the left, it shows you how many models were created and how many queries were run. It takes an uncertain bad and gives you a sense of measurement,
14:02
a sense of precision. And when you do this, when you add these sort of things to your code base, what you find is that the sort of overwhelming terror of fixing these code bases ends up becoming pretty manageable. There are a couple places that are horrible. There are a couple tests that are horrible, right?
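The speaker's actual formatter isn't shown, but one plausible sketch of that kind of per-test query counter is an RSpec around-hook subscribed to Active Record's real `sql.active_record` notification (everything else here is an assumption):

```ruby
# Hedged sketch: report how many SQL queries each example runs.
RSpec.configure do |config|
  config.around(:each) do |example|
    queries = 0
    subscription = ActiveSupport::Notifications
                     .subscribe("sql.active_record") { queries += 1 }
    example.run
    ActiveSupport::Notifications.unsubscribe(subscription)
    puts "#{example.full_description}: #{queries} queries"
  end
end
```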
14:21
And so the next thing we need to do is talk about how we actually approach fixing some of the code, right? This is the first thing. Stop the bleeding. We got where we are precisely because we kept making things incrementally worse. We cannot fix everything until we stop making the code base worse.
14:42
So it's difficult to enforce that line by just saying, please don't make everything bad. Because, of course, you know, people already are in that position where that's the obvious thing to do. And RuboCop, remember, gave us, you know, 70,000 style violations. But there's actually another way we can run it. If you run RuboCop against your code base,
15:01
find the very largest file, and then set the threshold to that file's number. So what I mean is, run RuboCop, it says, well, you've got, you know, 23 parameters on this method. You say, great, this is our new threshold. We will never be worse than 23 parameters on a method. This is, again, a very courageous statement, right?
15:23
And you go through and you do this for all the major metrics. And the goal here is to have a green RuboCop run. And mind you, it's kind of a fake green, right? You're like, oh, man, you know, no worse than 6,275 lines of code. But the usefulness of that green is something that you shouldn't take for granted. Because then, any time somebody goes back into that class,
15:44
and adds more code, they get one violation. They get an actionable violation. And every time you're able to go in and start cutting apart the very worst parts of your code base, you say, well, we have this method that has 23 parameters, let's take it down to 22. Great victory.
16:01
We can also tighten up our parameters in the configuration here. This is called ratcheting. This is one of my very favorite techniques for starting to get a handle on these kinds of code bases. Because they only move one way. At the very least, do not get any worse. And as you get better, you'll find that it's incredibly satisfying to be able to tighten down those ratchets.
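In `.rubocop.yml`, that ratchet looks something like this (the numbers are illustrative; use whatever your worst offender reports today):

```yaml
# Ratcheted thresholds: each Max is set to today's worst offender,
# then tightened as the code improves. Never loosened.
Metrics/ParameterLists:
  Max: 23      # worst method in the code base today
Metrics/ClassLength:
  Max: 6275    # the giant class; at least it cannot grow
```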
16:23
Now, again, in the situation where I was dealing with this, there were some metrics that RuboCop didn't have at the time. I think some of them are actually handled now. But there's another way that you can do this, too, if you don't want to do it with RuboCop. It's also useful if your engineers in your team don't really care very much about style violations,
16:40
if they're not the kind of folks who are gonna run that test. So this is the code that we had before. Remember, we grab all our objects, and we put it inside of a spec. And we set a threshold. And then for each of those classes, we create a describe block. Now, this is kind of devious RSpec. This is not something I would encourage you to do
17:02
in any other situation. But it's actually really easy to create these expectations. End, end, end. And what that gets for you is that now, every time they violate a rule, they actually end up failing a test. And this is a lot easier to catch in code review. It's a lot easier to catch overall. And it's one specific set of tests that you can look at
17:22
and know that these are not the kind of tests that would normally become flapping, right? So stop the bleeding, and then we need to actually make an improvement. That's the important part, right? But our skills that we had before
17:40
weren't helping us very much. Let's look at some ways that we can actually scope down to some smaller changes we can actually enact successfully. So first rule, before we get into an actual result, think globally, but act locally. What I mean by that is that if your goal is to make sure that all methods have no more than ten lines,
18:02
you will not go do that to your entire code base. There will never be that pull request. It's not gonna happen. And so you should keep that goal in mind. You're gonna have to become comfortable with the notion of fixing these things a small amount at a time. One of the ways that you fix bad code bases is typically to introduce new patterns, you know, decorators and service objects and such.
18:23
And, you know, yesterday Justin Serles was talking about the value of having consistent style across your entire code base. And what he said was that, you know, if you don't have that consistent style, you pay a penalty. And this is interesting, because what I'm encouraging you to do here is specifically to have different patterns in your code base
18:41
while you're in transition. But I think the reality is that when you're dealing with not-greenfield code, you're not doing new development, you're working on this monster, you have to accept that downside in order to make any progress. The pursuit of that consistency is destructive. Rule two: breakage is going to happen.
19:02
If you are a senior developer, do the junior developers a favor and fuck up every once in a while. It's scary, right? Like, the first time you take down production, the first time you make a change, that is like a truly frightening moment. And we do all make mistakes. Like, I should probably rephrase this,
19:21
not that you should make mistakes, but that you should be honest about the fact that you made mistakes. We are all screwing up consistently. But be honest enough about it that everyone else doesn't perceive you as never making mistakes. And remember that as we fix things, we are going to break things. That's reality. We do our best to mitigate it, but we can't get rid of it.
19:42
And then third, prioritize. Here's my order of things that I think you should fix. I think you should make your tests okay. You should eliminate dynamic code. You should behead the dragon, by which I mean you should kill those really bad pieces of code. And you should prioritize high-churn code. Let's walk through those.
20:00
So, tests. There are actually some really cool blog articles on how you can look at, you know, your flapping tests or how you can look at your slow tests and go in and fix each one. And I've had a lot of people inside of teams say, oh, I'm going to assign my two QAs, and they're going to look at every single flapping test, and then we're slowly going to move towards having them all be stable. It has never, ever worked.
20:22
I have never seen this work. And so I have a much better strategy for you to get your tests back to being green. Delete them. I mean it. Delete them. And the reason I mean this is that bad tests, a bad test suite, is worse than useless.
20:41
You still pay the entire cost of maintaining the suite. You pay the cost of writing new tests. You pay the cost, the literal dollar cost, of running them apparently on AWS across 100 instances. But at the end of that process, it doesn't tell you whether your software works or not. I don't care if you have 100 percent coverage if the output of it is useless.
21:02
You would be better to reduce your coverage and have a green run than to attempt to keep working your way back towards that stability. Because once you have a green run, you have real signal. If something goes red, you have to fix it. You will not get to that place incrementally. Number two, make writing new tests fast.
21:21
So all the techniques that we know about refactoring depend on the ability to run tests. And especially if you've taken my advice and deleted some of your tests, you're going to be writing some new tests. And this doesn't work if you need to spend an hour running any piece of your suite. Now, the realistic version of this
21:40
is that you should really endeavor to write tests that are not coupled to the Rails framework. You know, again, there was a little bit of what Justin was talking about yesterday: he writes these classes that do not need to interface with Active Record directly. And when you can do that, these tests are 10 times faster at least. And they allow you to avoid
22:00
some of those worst parts of what you've been doing. Second thing, eliminate dynamic code. So I'm a little bit sad about this because Ruby is, like, really cool with metaprogramming. Right? Like, we're really good at metaprogramming. You can do anything. It's one of those neat features that everybody loves.
22:21
But I'm going to say that you're not allowed to do it because you were, like, a teenager with a credit card, and you maxed it out, and you're, like, looking to score again. You cannot handle it. There are some people who are allowed to have dynamic code. Once you're in trouble with your code base, your privileges are revoked, which means that you need to find the places in your code base where you're using this dynamic code and remove them.
22:42
So, for example, this is a pattern that I've seen a lot. On the very first line, we grab something that is probably a string, constantize it, and then instantiate it. Here's the problem with this. If I ever wish to delete any code ever, I have to make sure that it's no longer in use.
23:00
This is impossible to grep through the code base for. And the more that you have this kind of magic, the worse off you are. There's another version of it, grabbing states and doing send. You know, somewhere there are probably, you know, methods actually defined for these mark as things, but, again, it's impossible to know where they're being used. Or if you're really evil, like Josh Cheek,
23:22
I'm going to call him out. I don't know if you can see the code in here. Inside of the class declaration, apparently you can write actual code snippets inside of your class. Not the block of the class, but where the name would go. You can put an if statement. Period.
23:42
So any time you really use any of these methods, consider getting rid of them. As much fun as it is to be dynamic and elegant and all of that, it's not as important as being able to understand your code base. And what you end up with is this sort of thing. And anybody who knows me knows that I abhor case statements.
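As a sketch of that trade-off (the class and method names here are hypothetical, not from the talk): the dynamic version cannot be grepped for, while the case statement makes every dispatch target findable by name:

```ruby
# Hypothetical exporter classes standing in for the real ones.
class CsvExporter;  end
class JsonExporter; end

# Before -- dynamic and un-greppable:
#   "#{format.capitalize}Exporter".constantize.new

# After -- verbose, but every usage of CsvExporter is now findable:
def exporter_for(format)
  case format
  when "csv"  then CsvExporter.new
  when "json" then JsonExporter.new
  else raise ArgumentError, "unknown format: #{format}"
  end
end
```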
24:01
I want to burn them all with fire. And yet this is better than not being able to figure out where your code is in use. It's better to be understandable than elegant. Next, behead the dragon. So what I mean by this is that, like I said,
24:21
we have this distribution where there are only a few things that are really, really terrible. And I think that you should spend time specifically dismantling those things, our 6,000 line user class. And this actually runs against the advice that you get from a lot of people who teach refactoring. The rule is usually that you should only refactor something
24:42
that has a new constraint or you need to fix a bug. And again, that's really good advice because normally you're undertaking some risk and some pain and you don't want to do that for no reason. The overriding reason here is, again, that those singular pieces of terror make people afraid to work with the rest of the code base.
25:01
And if we can cut them apart, our distribution looks like this. It's not so bad. So what this looks like specifically for your code base is really individual. I'm not going to cover every possible type of refactoring. Sadly, this is not that kind of talk. But one thing I want to get across to you is that when I say to get rid of these files
25:20
to make them less bad, I am not saying to go into your 6,000 line user file and refactor every line of it because once again, we do not have the ability to make that many changes. You will not refactor 200 different methods. What you can do is tear them apart. So I'm going to take as an example here a controller.
25:41
So controller, say it's got, you know, 50 routes defined underneath it. It's really, really difficult for us to go through all those routes and make a sensible change, right? What we can do is define a new class, initialize it with some parameters, and then take the 200 lines of misery and just plop it into the call method.
26:00
You can do this. Run it. It'll fail. Find whatever helper methods you need to pull into it, and through a repeated process, you're going to be able to create a single chunk of code for this action. And we go into our controller, and you see the first two lines of this method. All they do is grab that service, invoke .call on it,
26:21
and pull some instance variables back out. Now, when you do this, you're going to basically, instead of having one controller that has 3,000 lines of code, you're going to have 10 classes with 300 lines of code, and you'll still have some amount of controller left over. Again, explicitly, you will have more code than when you started.
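A minimal sketch of that extraction, with hypothetical names (CreateInvoice, InvoicesController) and a one-line body standing in for the 200 lines of misery:

```ruby
# The new class: initialized with the parameters the action needs,
# with the old action body moved into #call.
class CreateInvoice
  attr_reader :invoice

  def initialize(params)
    @params = params
  end

  def call
    # ...the 200 lines of misery land here, along with whatever helper
    # methods the repeated run-it-and-watch-it-fail process pulls in...
    @invoice = { amount: @params.fetch(:amount), status: "created" }
    self
  end
end

# The controller action shrinks to: build the service, invoke #call,
# and pull the results back out into instance variables for the view.
class InvoicesController
  attr_reader :invoice # stand-in for the view-visible @invoice

  def create(params) # params passed explicitly to keep the sketch self-contained
    service  = CreateInvoice.new(params).call
    @invoice = service.invoice
  end
end
```

The payoff described next is that CreateInvoice can now be instantiated and exercised directly in a test, without booting the controller at all.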
26:40
And you may say, that's dumb. But you get something in trade for this. What you get is that when you're working with any individual one of those services, you only need to think about what's happening inside of that service. That is an incredible improvement in your ability to make changes. You don't have to execute the controller itself
27:01
in order to test the service. That is a huge improvement. And you no longer have that one spot where humans, you know, don't dare to tread to try to fix things. That is a huge improvement. And last, let's talk about churn. Who doesn't know what churn is?
27:20
Has everyone heard this metric before? Cool. So churn is the notion that there are parts of your code which change a lot, and there are parts of your code that don't change a lot. Simply the word for that idea. And there are a couple things that fall out of this. The one idea is that if your code is churning, one of two things is probably true about it. Either the requirements change a lot,
27:40
or your code is bad, and probably both in reality. Well, there's another thing that we can take out of this for our own purposes, which is that any kind of badness that you have to deal with gets worse when you have to deal with it on a daily basis to keep changing it. And so once you get rid of the very worst parts of the code base, you're left with, you know, a large well of meh.
28:01
It's not horrible. It's not great. And so the next thing that you should prioritize is that code that churns. You know, we can't go through and fix our 70,000 offenses, but I can go through and fix one file worth. Another name for this is essentially the Boy Scout policy. You know, whenever you go into a file, leave it cleaner than you found it.
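One common way to find the churning files is to count how often each path shows up in git history. This is a sketch of that idea as a small Ruby function, assuming it's fed the output of `git log --format=format: --name-only` (one changed path per line, blank lines between commits):

```ruby
# Counts how many commits touched each file, most-churned first.
def churn_counts(git_log_output)
  git_log_output.lines
                .map(&:strip)
                .reject(&:empty?)
                .tally                  # path => number of commits touching it
                .sort_by { |_, count| -count }
end

sample_log = <<~LOG
  app/models/user.rb
  app/models/user.rb

  app/models/user.rb
  lib/billing.rb
LOG

churn_counts(sample_log)
# => [["app/models/user.rb", 3], ["lib/billing.rb", 1]]
```

The files at the top of that list are where Boy Scout cleanups pay off fastest, because you'll be back in them next week anyway.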
28:22
This is something that we can actually start to handle as an individual. So name the evil, stop the bleeding, and strategically improve. That's our strategy. And if you do this, things will get better.
28:41
And they're gonna get better slowly. And that's another one of the challenges that we face. And so number four is focus on the process. I don't know if any of you saw Nick Means's talk yesterday. He was talking about really focusing on those long-term goals, or DHH's keynote on the first day, talking about having faith. That's important in this case.
29:01
I should say, from DHH's keynote, I don't think you are Sisyphus. I actually had a Sisyphus slide in here, and then he used it, and like, that seemed like a really weird specific thing to repeat between two talks, and so I took it out. But I don't think you are, because in the legend of Sisyphus, he made no progress. And you may roll the rock up the hill, and it may roll back down,
29:20
but the reality is that you can make things better slowly. And so what you have to do is take some amount of solace in the good things that do happen. When you have these victories, celebrate them, because when things aren't going as well, you're gonna want to look back on them. So the one other thing I wanted to give you, and I'm throwing math at the end of the talk,
29:42
this has always been a really interesting idea to me. On the top, you have making things 1% worse every day for a year. At the end of the year, you've got a monster. Tiny change: make things 1% better instead. It's not such a big change. At the end of the year, you've got something incredible.
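The arithmetic behind that slide is just daily compounding, 0.99 versus 1.01 raised to the 365 days of a year:

```ruby
# Daily compounding over a year.
worse  = 0.99 ** 365  # 1% worse every day
better = 1.01 ** 365  # 1% better every day

format("%.2f", worse)  # => "0.03"  (almost nothing left)
format("%.2f", better) # => "37.78" (roughly 38x where you started)
```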
30:03
There's something really interesting to me about this. And I think you have the opportunity to be the latter of those two. And so that's what I think you should do. That's all I've got. My slides are up on, well, this bit.ly link. I'll post them on Twitter as well. Please feel free to follow me on Twitter
30:22
and come ask any questions you'd like. Thank you.