
Faking It - The Art of Testing Using Verified Fakes


Formal Metadata

Title
Faking It - The Art of Testing Using Verified Fakes
Series Title
Part
140
Number of Parts
173
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, change and reproduce the work or content, and distribute and make it publicly accessible in unchanged or changed form, for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or content, including in changed form, only under the terms of this license.
Identifiers
Publisher
Year of Publication
Language
Production Place
Bilbao, Euskadi, Spain

Content Metadata

Subject Area
Genre
Abstract
Richard Wall - Faking It - The Art of Testing Using Verified Fakes

Have you ever worried that your tests aren't as good because they're running against a fake or mock instead of the real thing? Verified fakes solve this problem. Verified fakes allow for simplified testing using fakes while still providing the assurance that code tested using a fake implementation will behave the same way when used with a real implementation. The talk will begin with a case-study, demonstrating what it means to write a "verified fake" implementation of a public API. I will show how to write tests that verify a fake implementation of a well defined API and I will show how those same tests can be re-used to verify and test real implementations of the same API. The talk will end with a proposal that more libraries should include verified fakes. I will show, with real-world examples, how verified fakes can be used by integrators and discuss how they are superior to ad-hoc, unverified, mocking.

During the talk I will refer to various real world, Open Source examples. Including:

* Flocker's Pluggable "Block Device Backend" - This API allows Flocker to manipulate file systems on OpenStack Cinder Blocks and AWS EBS devices. It also makes it easy for third parties to implement their own Flocker block device backends.
* Eliot's Memory Logger - and its use in testing and verifying logged messages.
* LibCloud's DummyNodeDriver - and its limitations.
* Boto - as an example of a library that could benefit from a verified, introspectable fake.
* Docker-py - as an example of a library for which we have written a verified fake.

There will be at least 5 minutes for discussion at the end of the talk.
Keywords
Transcript: English (automatically generated)
Can everyone hear me at the back? Yeah, welcome everyone. Thank you for coming. It's nice to see a full house. I'm Richard Wall, a senior developer with ClusterHQ.
We're hiring, so come and see us at the booth. I'm gonna talk to you today about a technique we use at work for testing our software, a piece of software that I work on called Flocker; in particular, a technique that we use for ensuring that the APIs that we write
and the implementations of those APIs are easily testable and a technique we use to ensure that all of the implementations of the API, the real implementations and the fake implementation, are in sync and that they behave the same. So, I'm gonna start with an introduction.
The talk is gonna comprise a quick discussion of the problems that we're trying to solve using these verified fakes. Hopefully, an in-depth discussion of the solution that we've come up with and then I hope that I'll have some time
at the end for questions. But if anyone has a burning question, just put your hand up while I'm talking; it'll be nice for me, because it'll be an interruption and I won't have to be speaking all the time. Okay, so, first of all, let's talk about the problems: there are a number of problems with testing APIs using unverified fakes,
or sort of ad hoc mocks or stubs, in your tests. And these aren't ideas of mine. These have been discussed at previous PyCons, for example, by a couple of guys called Augie,
I think that's how you pronounce it, Augie Fackler, and Nathaniel Manista. And they gave a great talk entitled Stop Mocking and Start Testing. And so they say, well, I'm reading here from a sort of transcript written by Ned Batchelder:
everyone at Google, where they worked, made their own mock objects. We had N different implementations of the mock, and when the real code changed, you had to find all of those mocks and update them all. And this is a problem that I've seen in Twisted, for example. Oops, I'm not used to it.
So, I've got quite an interest in the Twisted project. I did some work on the Twisted names module last year. And so, I wondered if this proliferation of unverified mock objects affects Twisted. And it does, because Twisted is about 12
or more years old. And so I grepped for fake classes across the Twisted code base. And there are at least seven fake implementations of the Twisted base protocol, and six fake reactors. And the one I'm probably most guilty of
or responsible for is the five copies of the fake resolver in the Twisted names package. There may be good reasons for some of this duplication, but I'm sure that some of these are examples of fakes that could be replaced
with a single verified fake, which could be used throughout the code base. So that's Twisted, and it's guilty of having these unverified fakes. The next problem with unverified fakes and mocks is that it's easy to write some ad hoc class
which behaves just about the same as the API that you're trying to test. But the trouble is that they're often inaccurate from the very start, and then they grow more inaccurate as the actual real API develops. And so, while writing this talk, I was looking frantically over the last couple of days
for an example of this, and someone mentioned to me at the stand a library which I'm quite interested in called nova-docker, which is a driver for OpenStack Nova that allows you to create Nova instances as Docker containers.
And so I wondered, how do they test that the Nova driver, that the Docker driver that they've written, behaves properly with a real Docker daemon, which would be slow and expensive to run, versus an in-memory version
of the Docker client. And I found that they have actually implemented a fake Docker client, and it suffers from these problems that I've just described in that the fake has got out of sync with the real Docker client.
It may always have been out of sync; because they don't run the tests against both the fake and the real client, it may never have been in sync. So here's an example of what can go wrong. So we've imported here both the mock client
from nova-docker, and we've imported the real client from docker-py, and we instantiate them both, and we see how the real client behaves when we create a new Docker container. All you have to supply when you create a new Docker container using docker-py
is the image name upon which the container is based. And it returns a dictionary containing the ID, the 64-character ID of that container, and a list, or a summary, of any warnings that occurred in creating that container.
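To make that concrete, here is a minimal sketch of the comparison being described, using the docker-py client API of roughly that era (exact fields and defaults may differ between versions):

    from docker import Client

    real = Client()  # talks to a running Docker daemon over its REST API
    result = real.create_container(image='busybox')
    # docker-py hands back a plain dictionary, along the lines of:
    # {'Id': '4f8c...<64 hex characters>...', 'Warnings': None}
    print(result['Id'])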
And that's not the greatest API, it's pretty ugly, it should really be returning something like a class or a record of some sort containing that information, rather than just a dictionary, but it is what it is. And then we look at nova-docker,
the fake implementation that I found, and we can see that the first thing I tried to do was to supply it with the same keyword arguments, and it fails immediately, because they haven't used the same argument names. And that's just one example. I went on, and I tried and tried to create a fake container
using their API, and I found it doesn't return the same result, it doesn't raise the same exceptions, and it doesn't have the same optional arguments as the real API. Some of this may be because they've based their fake
on an earlier version of docker-py, but I think there are some real problems here, which mean that I can't use this fake in my code, which I'd quite like to do. Okay, so, can we do any better?
Well, let's think about the ideal for a verified fake. Ideally, we want something that provides the same interface as the real API. We want to be able to run the same tests against the fake as we run against the real client.
And ideally, we'd like the fake to be maintained by the same author that wrote the real implementation. That's quite rare, but I hope that that will become more common. And I'll show you an example in a minute
of someone requesting just that. So, I haven't really explained what docker-py is, but I think probably most of you are familiar with it. It's a library that wraps the Docker daemon's REST API
in Python, and it's something that we use at work quite a lot, so it's something which I'm familiar with. It seemed an easy target to pick on, but there are lots of examples like this, so don't think I'm picking on this one in particular. Okay, so I wondered when I started writing the talk,
is anyone else thinking the same thing that I am? And funnily enough, there is an issue on docker-py's GitHub. Someone has come along and made the statement that any usage of docker-py requires unit testing, and the latter requires a fake client
that does not require a Docker daemon to be running. Providing a mock implementation will avoid every single user having to re-implement its own mock. He says this is a nice-to-have. Well, it is a nice-to-have, but ideally every library would come with such a fake, so that we don't all have to re-implement our own dodgy, unverified fakes.
Okay, so what I'm gonna show you now is my crude attempt to write a verified fake for docker-py. Any questions so far?
If not, I'll carry on.
So, the first thing I wrote is an interface. That's something I'm familiar with
from the Twisted project; there are alternatives, such as the abstract base classes that are available in the standard library, but I haven't used those here. We start with a zope Interface describing the client,
and a test which asserts that an implementation provides all of the methods of that interface.
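As a rough sketch of what that might look like (my own reconstruction rather than the code from the slides; the IDockerClient name and the method subset are illustrative):

    from zope.interface import Interface
    from zope.interface.verify import verifyObject

    class IDockerClient(Interface):
        """The subset of the Docker client API that our tests rely on."""

        def create_container(image):
            """Create a container from ``image``; return a dict with its Id."""

        def containers(all=False):
            """Return a list of dictionaries describing containers."""

        def remove_container(container, force=False):
            """Remove the container with the given Id."""

    class IDockerClientTestsMixin(object):
        """Tests that any IDockerClient implementation must pass."""

        def test_provides_interface(self):
            # verifyObject raises if a declared method or attribute is missing.
            self.assertTrue(verifyObject(IDockerClient, self.make_client()))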
With that test in place, notice that the test is defined in a mix-in class; we're not actually defining a test case here. I'll show you how this works in a moment. But with that test written, we can run the tests against both the real and the fake Docker client, having declared that both classes,
the docker-py client class and our fake client, implement the interface.
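Declaring the two implementations might look something like this (again a sketch; classImplements is how zope.interface lets you declare an interface for a third-party class that you can't decorate directly):

    from zope.interface import classImplements, implementer
    import docker

    @implementer(IDockerClient)
    class FakeDockerClient(object):
        """In-memory fake; its methods get fleshed out test by test."""
        def __init__(self):
            self._containers = {}

    # docker.Client is third-party code, so we declare its interface from outside.
    classImplements(docker.Client, IDockerClient)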
Again, I just want to show you some results. To get this result I'm running, with the Twisted test runner,
the integration tests, the real Docker client tests, and I'm also running the unit tests, the fake Docker client tests. So we have those two: the integration tests, the slower tests which exercise the real Docker daemon,
live in a separate module from the fake's tests. That's the way we organize our tests.
There are existing test modules in docker-py, but it's nice to keep the two sets of tests separate. It's also nice to organize the code in such a way that
the test mix-in, which we saw earlier, and the fake implementation are put in a public module, which means that they're not hidden; they can be imported by any libraries that consume this package.
The test cases that actually get discovered by the test runner are produced by way of a test case factory function.
What this function does is combine the test mix-in
with a setup that constructs whichever version of the Docker client you pass to it.
And we have a similar test case passing, this time for the fake Docker class,
which has been produced by this factory. It's worth noting also that this dynamically produced test class gives us a place to put documentation.
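A test case factory along those lines might look like this sketch (the names are mine, not necessarily the ones on the slides):

    import unittest

    def make_idockerclient_tests(client_factory):
        """Build a TestCase class whose tests run against whatever client
        ``client_factory`` returns, real or fake."""
        class IDockerClientTests(IDockerClientTestsMixin, unittest.TestCase):
            """IDockerClient conformance tests for one concrete implementation."""
            def make_client(self):
                return client_factory()
        return IDockerClientTests

    # In the public, fast test module (no daemon needed):
    FakeDockerClientTests = make_idockerclient_tests(FakeDockerClient)
    # In the slow integration test module (needs a real Docker daemon):
    RealDockerClientTests = make_idockerclient_tests(docker.Client)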
We can now start fleshing out the interface. So far, all we've got is a test for the interface, the zope Interface provider test we saw a moment ago.
We write a test for creating a container, and it passes,
and then a test for listing containers created in the same way, with the same image and the same parameters. And you also want to be able to destroy containers once you've finished with them.
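An early behavioural test in the mix-in might be as simple as this sketch (note that it doesn't clean up after itself yet, which is exactly the problem described next):

    def test_create_container(self):
        """Both the real and the fake client accept the same arguments and
        return a dictionary containing the new container's Id."""
        client = self.make_client()
        result = client.create_container(image='busybox')
        self.assertIn('Id', result)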
The tricky part is that, in testing container creation,
you need to create the container and then ensure that that container is listed, and then you want to clean up. Listing all the available containers is easy, but in order to test that,
you need to create some containers first. So you can't quite implement it method by method; you have to do it in chunks.
And if you're running the tests against a real Docker client, you need a way of cleaning up the containers that you've created after the tests have completed. To do that, and if your aim is to only use the public APIs, you have to implement a remove_container method,
and you can't test the remove_container method until you've got a way of listing and creating containers. So again, you have to be practical and work in chunks.
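Once remove_container exists, the tests can clean up through the public API alone, so the same test remains safe to run against a real daemon. Something like this sketch, continuing the mix-in:

    def test_list_containers(self):
        """A newly created container shows up in the container listing."""
        client = self.make_client()
        created = client.create_container(image='busybox')
        # Clean up using only the public API, whichever implementation this is.
        self.addCleanup(client.remove_container, created['Id'], force=True)
        listed = [c['Id'] for c in client.containers(all=True)]
        self.assertIn(created['Id'], listed)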
It's tempting, when you're trying to overcome the problems I've just described, to test the implementation details. You might have private helper methods inside your implementation, and you might start trying to write tests for those, but usually it's not worth it. It's usually better to stick to testing
only the public parts of the interface, the parts of the class which have been defined in our zope Interface. So that's as far as I'm gonna go with showing you the implementation I've done
of this fake Docker client, but I wanted to briefly describe some other examples of this; maybe not all of them are quite the same. The first one is the one I've worked on most recently: at work we were tasked with creating software
which allows us to create, attach and destroy block devices on OpenStack Cinder and AWS EBS.
And we don't want to have to run all of our tests against OpenStack, though we do have to do that; they're very slow tests to run. What we really want to do is run our tests for this IBlockDeviceAPI against a fake, a sort of simulation of those two cloud block device services.
So you can have a look at GitHub here. I'll post all of these slides after the talk. You can have a look at the code and you'll see that we've implemented a loopback simulation of these. We've got an implementation of IBlockDeviceAPI
which creates loopback file systems, attaches them to one process, then detaches them and attaches them to another process. By doing that, we have a way of running our tests for this API much faster than we would if we had to run them against the real Rackspace or AWS APIs, for example.
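The loopback trick itself is simple; a rough sketch of the underlying idea (not Flocker's actual implementation) is just a sparse backing file attached as a loop device:

    import os
    import subprocess
    import uuid

    def create_loopback_device(directory, size_bytes):
        """Create a sparse backing file and attach it as a /dev/loopN device."""
        backing_file = os.path.join(directory, str(uuid.uuid4()))
        with open(backing_file, 'wb') as f:
            f.truncate(size_bytes)  # sparse, so cheap and quick to create
        # --find picks a free loop device, --show prints its path
        return subprocess.check_output(
            ['losetup', '--find', '--show', backing_file]).strip()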
Another example, one I'm interested in but haven't yet tried, was developed by, amongst others, some of the Twisted developers, including Glyph from Twisted Matrix.
This is a system which allows you to create a fake version of the OpenStack REST APIs, and it's something we'd definitely like to use at work. The trouble is it doesn't yet implement the Cinder APIs, but that's, again, something
which I'd like to try and do. What this is is a web service which can be primed with both successful and error cases, a web service which you can make REST API requests to, and it tracks state as the requests come in.
So you can say create a Nova instance, and then you can ask that fake API for a list of Nova instances, and it'll return you the details of the fake instance that you just created. And it even, I think, simulates the Keystone authentication for OpenStack as well, which will be really useful.
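Stripped of all the HTTP plumbing, the core of such a stateful fake is just an in-memory store that later requests consult; this is purely illustrative and not the API of the tool being described:

    import uuid

    class FakeComputeBackend(object):
        """Tracks 'created' servers in memory, so list requests reflect
        earlier create requests, and error responses can be primed."""

        def __init__(self):
            self._servers = {}
            self._next_error = None

        def prime_error(self, exception):
            self._next_error = exception  # make the next create request fail

        def create_server(self, name):
            if self._next_error is not None:
                error, self._next_error = self._next_error, None
                raise error
            server = {'id': str(uuid.uuid4()), 'name': name, 'status': 'ACTIVE'}
            self._servers[server['id']] = server
            return server

        def list_servers(self):
            return list(self._servers.values())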
Another one from work: my colleague Itamar develops a library called Eliot, which is what we call a structured logging library. From the start, he's built into it various features which make it easy for us to test
the code that uses Eliot. So he's implemented a memory logger, and he has done it the way I described: he's got an interface called ILogger and a MemoryLogger which implements that interface. And he's gone further than that. In the public Eliot package,
he provides tools for testing that your code logs the correct messages and logs the correct errors. So that's a really good example of this sort of technique.
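A rough sketch of how that kind of MemoryLogger gets used in a test, based on Eliot's testing API from around that time (exact names and message fields may have changed since, and the logger-accepting create_widget function is my own example, not part of Eliot):

    from eliot import Message
    from eliot.testing import MemoryLogger

    def create_widget(logger):
        # Code under test that takes a logger explicitly, so tests can
        # substitute the in-memory implementation.
        Message.new(message_type='widget:created', count=1).write(logger)

    logger = MemoryLogger()
    create_widget(logger)
    logger.validate()  # checks the logged messages can be serialized
    assert logger.messages[0]['message_type'] == 'widget:created'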
And finally, I thought it worth mentioning a recent blog post by Glyph about going even further and writing your fakes in such a way that you don't pollute the fake with attributes that aren't actually present in the real implementation. It's got a really neat way of doing that separation,
which, again, I haven't yet done with my code for this talk, but that's the next thing I'm intending to do. So I think we're about out of time. Quickly to summarize: we've seen some of the problems with unverified fakes and mocks, and we've looked at some examples of those problems.
We've briefly shown how we might go about writing a verified fake from scratch, and we've seen some other Python examples of this technique.
And that's really it; it's only a half-hour talk, so that's all I've got to say. I think I'll leave it there. Thank you for your attention. Right, officially we don't have any time for Q&A, so you're free to go to lunch. But if you'd like to stay and ask questions,
then you're free to do that. Does anybody have questions?