An Introduction to Spies in RSpec
Formal Metadata
Title | An Introduction to Spies in RSpec
Number of Parts | 65
License | CC Attribution - ShareAlike 3.0 Unported: You may use, adapt, and reproduce, distribute and make publicly available the work or content, in unchanged or adapted form, for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or this content, even in adapted form, only under the terms of this license.
Identifiers | 10.5446/37592 (DOI)
Transcript: English (automatically generated)
00:19
Thanks for coming. This is my talk, entitled
00:24
An Introduction to Spies in RSpec, and I think we'll get started. So, I'm Sam Phippen, I'm Sam Phippen on Twitter and Sam Phippen on GitHub. You can have a look at my various profiles on those sites if you want to.
00:42
And if you do have a look at my GitHub profile, you'll probably notice that I spend most of my time on GitHub working as a member of the RSpec core team. And that's sort of why I'm here giving this talk today, because to me, it's really important that RSpec is sort of represented
01:03
in community events like this, and also that we give introductions to beginners that enable them to more powerfully and quickly use the testing framework. So I hope that everyone goes away having learned something about RSpec today.
01:20
I work for a company called Fun and Plausible Solutions. We're a sort of consulting agency for data science problems, which means that we tend to work with companies that know how to build really great web and mobile applications, but don't necessarily know how to do things like machine learning or recommender systems or A-B testing and things of this ilk.
01:45
And if you're trying to do those in your own work and you're struggling for whatever reason, please do come and have a chat with me after we're done, because I love talking about this stuff. So I wanted to sort of preface this talk by saying that
02:04
my sort of conceit for this talk is not for me to drop some grand position on software engineering or present my ideas for what we should be doing ten years down the line, but instead sort of present an interesting slice of facts
02:24
about testing and how RSpec works. And I'd much rather everyone in the room learn something than I get to the end of my talk. So if I say something that you find confusing or you'd like me to expand upon,
02:43
I would ask you to interrupt me and ask a question, because more likely than not, if you have a question about something, someone else in the room will as well. So this talk is really about testing and how we actually go about testing our software in Ruby
03:03
and the tools that we use to do it. And one thing that I find amazing when I work with people that use different programming languages, like I work with a lot of people that do Python and Java and build apps for iOS and Android, is that the Ruby community is the community that has just embraced testing.
03:24
They've really sort of engulfed it and accepted it into their everyday practice. And that's not the case with many of the other programming languages and communities. And to me, it's amazing that we do as much testing as we do,
03:41
even at the beginner level, because even if those tests aren't perfect, even if for whatever reason they have problems, it still means that I can walk up to someone's app that I've never seen before and begin confidently making changes without having to worry about what's going to happen as I do those changes and I'm going to break something.
04:03
And I wanted to sort of provide my thoughts on why writing tests, why actually building automated testing for software is a really useful thing to do, or at least one perspective that you could take. And to me, this is really to do with mental models
04:23
of how we write our software. So in the beginning, when you're working on an application, I would argue that it's entirely possible for you to hold nearly everything that your software is doing in your brain. And that means that it's really easy for you to make changes with confidence
04:42
and adapt to the software that you're writing. But as time goes on and our product managers and our users come to us with feature requests or bug reports and we, you know, make changes and grow our software, that becomes more and more complex and our software begins to get sort of bent out of shape and it becomes very difficult to hold everything
05:02
that your software is doing in your brain. And to me, this is where the tests come in. They literally allow us to serialize knowledge about our application into an executable form. And I think this is one of the things that I often see beginners struggle with,
05:21
is actually a good impetus for why writing tests is useful. They understand sort of behavior verification and so on, but that doesn't seem to be a long-term goal. Once you've got the behavior written, that's it, right? Well, yes and no. And I think this idea of test suites just sort of soaking up knowledge
05:40
is really useful to have in the applications that we write. And I think that that knowledge as it grows over time allows your team to expand and allows you to continue to work with your software. It's also true that when you're writing tests alongside software together, so growing your software and your tests at the same time,
06:01
you're able to find bugs in new features as you're developing them. And what that means is that you can build your feature and deliver it, knowing that more likely than not, it's going to work when it's integrated with all of the others. And also, that all of those bugs that you encountered whilst you were developing your software
06:22
are less likely to actually be present ever again in the future. If you just refresh a web browser every time you make a change to the software that you're writing, it's more likely that those things are going to come back. And perhaps more than this,
06:41
when a bug in our software does make it all the way from us through our managers through our QA and all the way out to our users, if you write a test that demonstrates that bug and fails when that bug is present, and then you implement the fix to your software
07:02
that allows you to actually verify that the bug is gone, you can be pretty certain that that bug is never going to come back. And I think that that's a really useful property to have when writing software. It's also true that we can write tests
07:21
that actually help us improve the design of the software that we're writing. Some kinds of tests, when you write them, allow you to focus so deep on a single piece of code in your application that the natural result is that the actual design, the software architecture of the system that you're writing, improves.
07:42
And this is a really useful property to have when writing tests. But to talk about that in any detail, we need to talk about the kinds of tests that it's possible to write. And I wanted to start with the sort of test that I think most beginners write when they're thinking about how to test applications.
08:03
And that's an integrated test. And the idea behind an integrated test is that you're going to take your entire application, your database, things that talk to the internet, email systems, Amazon access, whatever, and just box it up. And then you're going to sort of interrogate that entire system as one piece,
08:23
effectively interacting with your application as a user would and faking no part of the world in which your application lives. And this kind of testing, this integrated testing, I think is really useful for certain kinds of behavior
08:42
that we expect when writing systems. So it's generally true when you're writing integrated tests that if an integrated test fails, your application is definitely broken. And if it's passing, that means that your application might be working. And those sort of information keywords there are really important.
09:05
The opposite end of the testing spectrum is an isolated test. And an isolated test takes a single piece of your application, a class or perhaps even an individual method, and isolates it from all of its dependencies
09:21
and all of its collaborators and forces you to focus on the very specific implementation of just that piece of functionality. And in order to achieve isolation testing, you necessarily have to fake part of the world that that test is going to touch. A way to think about it is that when you're writing an isolated test,
09:46
you're basically hiding as much of your application as you can from that piece of code in order to be able to test it on its own. And isolated tests, due to their extremely focused nature,
10:00
are what give us our ability to exert design pressure on the software that we're writing. Because if you find that it's difficult to write an isolated test, you'll generally find that the design of your system has some problem. The software architecture that you're working on is not as flexible as it might be.
10:22
And the sort of intuitive explanation here uses the idea of coupling. If your component in your system is highly decoupled from the rest of your application, it's very easy to isolate it. And if it's highly coupled to random parts of your system, that's not the case.
10:40
And that's sort of why isolated tests are useful. It's also true that a spectrum of tests exists in between integrated and isolated tests. For example, if you're building an application that uses a service-oriented architecture, you could take a single service out of your application
11:02
and test that on its own by faking the other services that it's going to talk to, but not faking any of the objects that are internal to that service. And that would be like a partially integrated and partially isolated test. But that's just to sort of demonstrate that there are a spectrum of isolations that you can apply to tests,
11:24
and that's an example. But I wanted to sort of talk as well about the use of actually faking different components of our system, and to do so, I wanted to use an analogy that I've actually borrowed from Justin Searls, who's giving the closing keynote of this conference.
11:42
He gave a talk about isolation in testing that I found quite useful. And the idea here is that let's imagine that we're building a GPS system for a new Boeing 747. Well, we could do an integrated test as we were building our GPS system. Literally put it in a plane and fly it,
12:02
and the plane crashes because we wrote our GPS wrong. We do this again and again and again until our system works. But that's obviously going to be very expensive and slow and destroy a lot of planes. And if we take that GPS unit and we isolate it,
12:21
we can test and get fairly confident that it's going to work before we put it in any plane whatsoever. And that's going to be a lot faster and a lot cheaper and a lot more useful. And we can sort of draw a parallel in building computer systems where talking to a database server and an email server and Amazon is really kind of going to be expensive.
12:45
And it's not going to work all of the time if your network is down for whatever reason or you're in a foreign country and your Wi-Fi isn't working and you can't have roaming. Anyway, so yeah, let's talk about ways
13:01
that we can actually fake different parts of our application and talk about how those are useful. And this is where we're going to deviate from sort of talking about testing in general and start talking about RSpec in specifics. So in RSpec and in fact some other testing frameworks,
13:21
we have a concept of a stub. And the job of a stub is to take some object that our system collaborates with, sorry, a component that we're isolating collaborates with and fake out a response to one of that object's method calls. So what you're going to do with a stub is you're going to pick an arbitrary object
13:42
that your object collaborates with and replace one of its methods with the stubbed implementation. And the idea with a stubbed implementation is that it's so simple that it allows you to not have to worry about the implementation of that collaborating object, but instead just allows you to focus on the implementation of the object
14:02
that you actually care about whilst ignoring that particular collaborator. Stubs are really useful for taking an object and isolating it, but they don't allow you to verify that any collaborations between objects actually happen. And it can be a desirable property to actually test
14:22
that our objects are collaborating with each other, right? If my object depends on something else, I may want to verify that I'm actually calling that other object's methods. And to do that, you use what's called a mock. And mocks are very similar to stubs. They take the implementation of some method on another object
14:42
and they replace it with what's called a mocked implementation whose job is to actually check that a call occurs and then cause the test to fail if no call is made. And so where stubs just merely allow you to isolate yourself from a dependency, mocks allow you to verify that you're actually interacting with that dependency.
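To make that distinction concrete, here is a minimal RSpec sketch of the stub/mock difference just described; it is not from the talk's slides, and the mailer collaborator and its deliver method are invented purely for illustration.

    RSpec.describe "stubs versus mocks" do
      it "uses a stub to silence a collaborator without verifying it" do
        mailer = double("mailer")
        # Stub: replace #deliver with a canned response; the test does not
        # care whether the method is ever actually called.
        allow(mailer).to receive(:deliver).and_return(true)
        expect(mailer.deliver).to eq(true)
      end

      it "uses a mock to verify the collaboration really happens" do
        mailer = double("mailer")
        # Mock: the expectation is set up front, and the test fails at the
        # end if #deliver was never called.
        expect(mailer).to receive(:deliver)
        mailer.deliver
      end
    end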
15:04
And so now we should talk about spies. That's sort of what the title of this talk is about. And spies are different to stubs and mocks. In that their job is not actually to replace the implementation of any individual method,
15:21
but they are objects in their own right. So when you're using a spy in your tests, you're actually creating a new object and then pushing that object into your test in a way that allows you to sort of isolate yourself from your collaborators. And if all of those words didn't make all that much sense,
15:42
don't worry, because now I'm going to do some live coding and hopefully it's going to go fine. So let me just mirror my displays here. I think everyone should be able to read that. I ran to the back of the room and checked, but please holler if you can't. So has anyone here never used RSpec before?
16:06
Cool. Oh, that guy. So this is a really simple just like template RSpec test file. And all we're doing here is we're loading a file called spec helper via require.
16:21
And the spec helper just sort of sets up some common defaults for our RSpec test suite. And then we're using the describe method here to actually set up a group of tests. And we're going to write all of our tests inside this describe block to actually do some things. So let's have a look at how spies in RSpec actually operate.
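The file on screen isn't reproduced in the transcript; reconstructed from that description, it would look roughly like this (the filename is an assumption).

    # spec/spies_spec.rb -- filename assumed; only the contents are described.
    require "spec_helper" # sets up common defaults for the RSpec suite

    RSpec.describe "spies" do
      # the individual tests (`it` blocks) go inside this describe group
    end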
16:43
Then I'm going to move on to an example using an existing piece of code, and then I'm sort of going to wrap up and take questions, and we can play with the technology. So it records method calls. So in RSpec, the way that you get a handle to a spy object
17:02
is you just invoke the spy method anywhere in the body of your test. When you're writing tests in RSpec, you use this it method to create a new test, and then everything in here is actually the sort of body of our test. So I've got this handle to a spy object, which is called mySpy. And the way that I set expectations
17:22
that methods actually get called is with some sort of normal RSpec syntax where we say expect mySpy to have received foo. And what this line of code does is it will check all of the method calls
17:42
that have been sent to the mySpy object and see if any of them match the method name foo. And so if I run this test, it's going to fail. And the reason that the test has failed is that it says right here double.foo any args expected one times with any arguments
18:02
and received zero times with any arguments. So all I need to do to make this test pass is invoke mySpy.foo. Now if I run the test again, it's passing. So what's actually happened here is that when I've invoked the foo method on the spy object,
18:20
it's recorded that that call has been made, and then I'm actually checking which calls have been made here on this final line of the test. You can also match arguments when you're writing tests with spies. So if I copy the body of this test and drop it in here, I can add this with call to the have received call,
18:44
which will actually validate the arguments get passed. So what I'm expecting now is that there will be an invocation of the foo method with the arguments one, two, and three. And if I execute this test, it will fail, because it expected one, two, three,
19:02
and it got no arguments. If I delete the call altogether, it will go back and say that it was expecting one times with that one, two, three argument list, but it was received zero times. But if I just add the call back and actually provide the correct argument list,
19:22
that test will now pass. Finally, you can actually also check that method calls happen a specific number of times. So what I'm gonna do is check that the method was received four times, and then actually call it that many times.
19:40
Let's just copy the body of this. And the way that you do that is with this slightly sort of funky syntax where you say exactly and then a number and then dot times. This is all just method chaining. So what's actually happening there is have received creates an object,
20:01
and then all of these calls are just calling back onto that same object. And so this test, as you would expect, it's gonna fail, because it was expecting to receive four times and it only got it once. So if I copy this and paste it out four times and then run the test,
20:20
everything is now passing. So this is the basic things that you can do with spies. You can check which methods are being called, you can match against arguments, and you can also validate that calls are being made a certain number of times. So I'm now gonna move on to looking at a test for an actual piece of Ruby code.
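Before that, here is a sketch pulling the three spy checks just demonstrated into one place; it is reconstructed from the description above rather than copied from the talk's file, and my_spy and foo are the names used verbally in the talk.

    RSpec.describe "what a spy records" do
      it "knows which methods were called on it" do
        my_spy = spy
        my_spy.foo
        expect(my_spy).to have_received(:foo)
      end

      it "can match the arguments a method was called with" do
        my_spy = spy
        my_spy.foo(1, 2, 3)
        expect(my_spy).to have_received(:foo).with(1, 2, 3)
      end

      it "can check how many times a method was called" do
        my_spy = spy
        4.times { my_spy.foo }
        expect(my_spy).to have_received(:foo).exactly(4).times
      end
    end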
20:41
And so what I've done here is I've written an object called counter client. And the job of counter client is to provide an API wrapper around an extremely simple HTTP service that I've written, which stores counts that are provided to string keys. And so what this is really doing is it's making HTTP requests
21:02
out to, like, some external service and then sort of providing responses to those. And so basically what I've got here is a set of integrated tests for my counter client, right? And so the behavior that it has is that if I don't increment a key at all
21:21
and I call the get method on that key, I get the integer zero back. If I call increment once and then I call the get method on that key, I get the integer one back. And finally, if I do this a random number of times, I get that random number back. And so these three tests
21:40
sort of provide all of the coverage that you need to actually validate that this object is correctly counting string keys. Just to prove to you that the implementation actually works, if I run my tests, they all pass for the counter client. And you can see here that the run time is significantly higher,
22:03
and it is actually making HTTP requests. So one thing to note here, though, is that nothing about these tests actually dictates that HTTP requests are getting made. They're all just doing simple interactions
22:22
with the counter client object, and we don't actually have any proof that any kind of talking to the network is occurring. If we look at the actual implementation of the counter client object, we can see here that it's using this LHTTP thing
22:40
to actually make HTTP requests to the service base URL, which is just a hard-coded local host 4567 string under the key. And then when we make that HTTP request, because the API returns the count as a string and like an HTTP response body, we need to convert that back to an integer
23:02
to have the behavior that we actually want. So we sort of established that our existing set of tests don't actually validate that we're making HTTP requests. So let's do that for one of the methods using mocks, and then we'll do it with spies
23:21
and see what the difference sort of is. So I'm actually just going to do describe get here, and this is a kind of RSpec idiom. When you're describing instance methods on individual objects, so when you're just testing an instance method on a particular object,
23:41
in the string that you provide with the describe block, you typically do like the hashtag symbol and then the word get, or rather the word that is the same as the method. So it calls the get method on the LHTTP client, right? That is what the behavior actually does.
24:02
You can see here. So let's implement that. And this is how you do a mock in RSpec. You say expect, and then the thing that you want to mock, to receive, and then the method name, and then I'm going to do the argument here, which is the http://localhost:4567 URL with the key on the end.
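The test being built over the next minute or so ends up roughly like this; LHTTP is the HTTP library named in the transcript, while the URL format and the counter_client name are reconstructions rather than the talk's exact code.

    describe "#get" do
      it "calls the get method on the LHTTP client" do
        # `key` and `counter_client` are assumed to come from `let` blocks
        # defined further up the spec file (see the next step in the talk).
        expect(LHTTP).to receive(:get).with("http://localhost:4567/#{key}")

        counter_client.get(key)
      end
    end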
24:26
One thing we do need to do is lift key up to the top level of the test. So RSpec has this mechanism called let, which allows you to take common pieces of the test that you're writing and extract them so that you can reuse them
24:42
without having to repeat yourself inside the individual tests. And I've currently got this let inside the describe for the integration tests as opposed to describe the entire class. So I'm just going to move this up a level so that it becomes available to all of the tests in this file.
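That move looks roughly like this; the class constant and the key's value are reconstructions, since neither is read out in the talk.

    RSpec.describe CounterClient do
      # Moved up from the integration-test group so that the new "#get"
      # group can use the same key; the actual value isn't given in the
      # talk, so this one is invented.
      let(:key) { "some-key" }

      # ... the integration tests and the "#get" group sit below this ...
    end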
25:00
And so now that key reference will be the same as the value that I've got up here. And then I'm just going to call counter client dot get key, and that should make the request right. Because we're calling the method and that method collaborates with that object,
25:21
we should be fine. Let's do that. Cool. And so the new test that we've just added is passing. That's useful, but I always think that it's a good idea to see a test that fails as well as a test that passes. And so what I'm going to do
25:41
is I'm actually just going to comment this line out and then go back to my tests and run them. And we see here now that all of our tests are failing, but the one that uses the mock, the one that sets this expect to receive, is also failing. So we know that it's possible for that test to fail.
26:00
We've seen it in both states, and so I'm sort of happy with that test now, and I'm going to move on by fixing the implementation. So we've sort of now verified using mocks that our object is collaborating with the HTTP client correctly by making the get request. But there's a couple of problems
26:22
with this test that we've written. The first one is that the order of operations in this test is different to the order of operations in all of the other tests in this file. So if you look at this one above, there are three distinct steps which I'll highlight here by pushing them apart. We have this sort of setup step
26:41
where we actually generate the random number of calls that we're going to make. We have this action step where we're actually doing the calls, and then we have what's called the expectation step, where we're actually checking that the state of our system is correct. And this is a sort of idiomatic test design pattern called arrange, act, assert.
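Annotated against the random-increment test described earlier, the pattern looks something like this; the exact test body is reconstructed from the transcript, not quoted from the talk, and counter_client and key come from let blocks.

    it "returns the number of times a key has been incremented" do
      # Arrange: set up the data the test needs.
      count = rand(1..10)

      # Act: exercise the object under test.
      count.times { counter_client.increment(key) }

      # Assert: check that the resulting state is what we expect.
      expect(counter_client.get(key)).to eq(count)
    end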
27:03
And the idea is that if you cause all of your tests to follow this pattern, it becomes extremely obvious for other people to work out what your tests are describing and how they actually work. If you have those operations in any order, it can become much harder to understand
27:20
what the test is actually doing. And so, whilst this test above does follow that pattern, this test below doesn't. There's no sort of setup step, but that's fine, because our setup step is basically to do nothing. But then we've got action and assertion the wrong way around. Right? We have this assert step coming
27:42
straight before our action step. And it's really common to see that when you're using mocks because mocks set an expectation at the beginning of the test that an interaction will occur somewhere in the duration of the test. And the reason that that's kind of problematic is it breaks this arrange, act, assert pattern
28:02
in a way that means it can be slightly harder to work out what your tests mean. And this is a trivial example because we've only really got two lines in our test and our client is really, really simple. But as you build more complex systems and your test becomes more complicated this can cause a real pain.
28:21
And so we can fix that using spies. The other problem that this test has is that it's reaching out and monkey patching a library that we don't have any control over. So LHTTP is a random constant that is provided to us by that library and is not something that we
28:42
really wrote or have any ownership of the API of. And generally speaking, it's a bad idea to sort of change the implementation of things that you don't control. So this sort of leads me to want to make some changes to the design of both the counter client and the test that we're writing.
29:02
And so let's go ahead and do that. So what I'm actually going to do is I'm going to create a new let called HTTP which up here will just reference the LHTTP constant. We're actually going to pass that into the counter client and then in the implementation of counter client we're going to change
29:22
the constructor to actually take the passed HTTP client in and then we're going to change the implementation of this HTTP client method to just return the instance variable of the passed in
29:40
HTTP client. If we go back to our tests and run them all they should all still pass. And this is an example of an extremely small refactoring we've made that will allow us to improve the design of our tests in just a second. What? Oh, thank you.
30:03
Typing is hard. Great! All of our tests are passing. Thank you pair programmers in the audience. So that's great because now what we can do is we can go down to this test and override the definition of HTTP to just be a new object.
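The injection refactor just described would look roughly like this; the class and method names are reconstructed from the transcript rather than taken from the talk's code.

    class CounterClient
      # The HTTP client is now injected, so a test can pass in a double or
      # a spy instead of the real LHTTP library.
      def initialize(http_client)
        @http_client = http_client
      end

      private

      # #get and #increment make their requests through this method.
      def http_client
        @http_client
      end
    end

    # And in the spec file, near the top of the describe block:
    #   let(:http)           { LHTTP }
    #   let(:counter_client) { CounterClient.new(http) }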
30:24
And now when I do this, that just went ding: object does not implement get. Oh, um, let's use a double. Sorry, I screwed up. So doubles in RSpec are
30:41
objects that let you simply give them a dictionary of method names to return values, and they will just implement stubs on themselves for those methods. So what we've got here is a simple double object which just implements get and returns nil, and then we should be able to expect
31:01
on that. And again all of our tests are passing. But now, because we're at a state where we can actually provide anything in place of the HTTP client and write our tests in the order we want, we can use a spy to get back to that arrange act assert model
31:21
for our tests. And so I'm going to replace the use of a double here with a spy. And then I'm going to move this down here to say expect HTTP to have received get with that argument. Now this is going to work because spies respond to all methods
31:41
when you pass them into your test and then you set expectations afterwards. So if I run this then it will pass. And so now what we've got in our test is much better, right, because we're not reaching onto the LHTTP object and replacing the implementation
32:01
of one of its methods. And we haven't got our test out of order. We're following this arrange act assert pattern that allows us to simply and obviously have a structure for our tests. And so that's sort of all of the code that I had to write directly.
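Putting the pieces together, the final spy-based version of the test described above would look roughly like this; again a reconstruction with assumed names, not the talk's actual file.

    describe "#get" do
      # Override the injected client with a spy, which responds to any
      # method it is sent and records every call it receives.
      let(:http) { spy }

      it "calls the get method on the HTTP client" do
        # Act first...
        counter_client.get(key)

        # ...then assert, which restores the arrange-act-assert ordering.
        expect(http).to have_received(:get).with("http://localhost:4567/#{key}")
      end
    end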
32:20
I'm going to tab back to my slides for just a moment and then I'll take sort of finishing questions. So as you might have been able to pick up from my accent I'm not really from these parts I am in fact actually British
32:40
or as my friend that lives in Boston likes to say, really British. It's quite a ways to come. I'm really, really tired, I woke up at 5am this morning, and I have one small rant that I have to deliver
33:00
as a British person to a room full of Americans. And it's to do with the quality of the tea in this country. So I really like tea. I think tea is a really good way to relax, calm down, etc. But I cannot for the life of me get a good cup of tea in this country.
33:22
And like some of my friends who I was visiting in Atlanta before I came to this conference took me to a tea place there, and like it's a professional tea place, and it gave me bad tea. I actually understand the reason: it's to do with how you prepare water for your
33:42
hot beverages in this country because nearly everyone drinks coffee and the ideal temperature to prepare coffee at is about 92 to 95 degrees centigrade. For tea the water has to be boiling when it hits the tea bag. So Americans, if you do nothing else, if you have learnt nothing from this talk
34:02
learn to boil your water properly when you put it in your tea. Thanks very much for listening. Some people have questions.
34:20
I don't care. The question was what about tea at a higher elevation. Sorry, yes? Is that a real question? Right, give me a second and then I'll be back. RSpec 3 was released about two or three months ago. It's the new major version of RSpec and that means it has
34:41
breaking changes, and I know that sounds scary because, like, your test suites are the lifeblood of your applications, right? I think it's fair to say that most people couldn't confidently delete their test suite today and be happy to continue working on their applications, and similarly doing a major version upgrade is scary.
35:01
There's a really good upgrade process for RSpec 3 that many people don't know about because it's not very obnoxious and in your face. I would highly encourage you to look for the RSpec upgrade guide on the internet because it will make your life easy and there are automated tools to help you. Okay, now I
35:21
really am honestly done. I'm Sam Phippen on Twitter and GitHub. My email address is sam@funandplausible.com if you want to talk more and you can't find me at the conference. Thank you very much for listening to me rant at you. Let's have... Aww, you guys.