
Testing in Layers


Speech Transcript
Some of you may know me. I apologize in advance if the presentation is not up to my usual standard, but I'm not used to presenting while standing in one spot instead of walking around. I'm probably best known for "Python in a Nutshell", whose third edition is just out, co-authored with my wife Anna Martelli Ravenscroft and with Steve Holden, who used to be the organizer of the very first PyCons. Python here is only used for the examples: what I'm trying to show is just about as useful in any programming language you may be using.
In any programming language, a typical software system can be seen as a directed acyclic graph in which there is a lowest layer of modules, or services, or components, call them as you will, that provide some functionality but don't depend on any other code that you've written; they may of course interface to external entities, like a database and a DNS system in this example. There are middle layers, which both depend on some modules and are depended upon, and there are top layers, which are not depended upon but depend on other subsystems. As long as you don't have any cycle in your directed graph, any directed acyclic graph can have its nodes classified this way. If you do have cycles, you have far bigger problems than anything I can hope to address here: the arrows are dependencies, so a cycle means A depends on B and B depends on A, and nothing can help you except completely refactoring to break the dependency cycle. That is much more important than anything I or anybody else can teach you, so leave the conference and go break your dependencies; you really want to do that. If you don't have dependency cycles, this layering will always hold. I had some questions about why you would need multiple top layers; well, it's 2017, so of course you will have an API, and a web interface, and probably a graphical user interface and a command line interface. You will certainly have multiple top-level entry points if your system is rich and complicated enough.

So the next issue is: OK, we have that; why do we test it? Unfortunately in 45 minutes I cannot compress a few hours' worth of explanation of why testing is a crucial discipline of development. I recommend you go find other talks I have given, and many talks given by other people, to understand why you really have to test; we won't cover that today. What I cover here is the how: not very wide, but the how. The most antique, traditional form of testing distinguishes tests into white box, meaning tests written with full knowledge of what's inside the component being tested, and black box, which is supposed to rely only on the external interfaces the component makes available. That distinction was dropped long ago in professional practice; it's not a very useful one. However, the modern way of doing things looks like the old way with new names, and is not much more useful: nowadays we tend to have unit tests, which are really white box, typically looking a lot inside the component they test, written by developers for developers just to ease development (nothing wrong with that, but it's one extreme), and then at the other extreme we have integration tests, which are end to end, so they have to go from soup to nuts, as I think the expression goes. These often include steps that cannot really be automated and therefore need a human being in the loop. But if you need a human being in the loop, by my lights you don't really have a test: you have a separate step of your software development and delivery cycle, which I like to call quality assurance, deliberately a different term from testing, because from my point of view testing has to be automated.
How complete an end-to-end run you can automate depends on what the top layer is: if it's an API, a command line interface, or a web page, you can drive it with Selenium or some similar tool; if it's a graphical user interface running locally, there are tricks for that too, but you'll never get it quite perfect. And why do we want everything automated? Because we'd like to use the tests in a continuous integration environment, so that something gets fully integrated, and eventually deployed, only when all tests pass. If there has to be a human in the testing loop, you just can't do that: humans are unreliable, not repeatable, very bad at mechanically repeating a series of operations, very slow, very costly; there are a million reasons you must not have a human in the loop. So I have a completely different proposal.
We have a system composed of components, modules, services, microservices nowadays, whatever; it doesn't matter what we call the things whose dependencies we're looking at. It naturally forms layers. So why not structure our test suites in the same way we inevitably, naturally structure our software, assuming we make it modular at all, as opposed to one big monolithic program, which I hope none of us would write? We will then, of course, have unit tests. They have to be very fast, because they run all the time, and they focus strictly on one component's (or module's, or service's) internal logic, so that at the limit you can mock out every dependency. Above all, the top priority for your unit tests is: make them fast. Then we build up on that, and we'll see how we'll have higher-layer tests, but not one single big jump from unit tests all the way to end-to-end, takes-forever tests: we'll do layers and layers of testing, as we'll see. You can see this as a pattern language of testing structures; pattern languages are mostly understood in this community for design purposes, but they apply to a lot of other creative human activity, and one of those is testing. In a sense we're talking about how to design, but also how to execute, the tests.

Sometimes I get interesting objections at this point: "what do you mean, fast? Why do tests need to be fast?" Yes, they do. In a modern integrated development environment your unit tests should be running all the time in the background: as soon as the IDE sees you've saved some changes to a file, it should rebuild that file and every dependency, find every test that can be affected, and rerun them for you. If that's the setup you have, and I hope it is, because it really multiplies your productivity, and just about any IDE can do it today, then if the set of affected tests takes 10 seconds to run, any problem shows up within 10 seconds of saving the problematic code. It's still at the top of your mind; you see the error and can probably
see immediately what you did wrong, fix it, and proceed. If it takes 5 minutes, you've lost mental context: you've moved on to another task, and you now need quite a bit of time to get your mind back to "what was I thinking when I wrote that?". You're losing all that time, which can be up to an order of magnitude of impact on your productivity, just because you didn't make sure your tests were, first of all, fast, fast, fast.

"But integration tests certainly can't afford to be that fast." Well, I have a very recent case study showing why they need to be: Python 3.6.1 Release Candidate 1, what was it, three months ago or so. In the speaker's notes I have the URL to the discussion on python-committers about what was going on. Essentially, Brett Cannon had to announce that he had turned off the gating on the integration tests in the continuous integration setup for the 3.6.1 Release Candidate 1, because they were taking forever, so actually integrating a pull request had become painfully slow. If your integration tests aren't fast enough, you might as well not have them. "But professor, how do I make my integration tests fast?" There is a difference here: if you are well funded, a rich company or one with a generous sponsor, you can run your integration tests on a huge number of machines (a million would be a bit over the top, but a lot), so as long as the slowest single test is fast enough, the others run in parallel and everything is rosy. Most open source projects don't have unlimited machines at hand; we get charity from donors, and we use what we get. Python does have sponsors helping release engineering, but I believe the number of machines available is in the single digits or thereabouts. So you need to be fast, fast, fast.

Now, everything that applies to all other forms of automated testing still applies here; I wish I had a couple more hours to recap it all, and probably more than half of you know it anyway. The first thing is that all tests must be reproducible. That seems obvious, but people keep running into problems. One example: "but my module uses random numbers". Well, then make sure you're able to inject a fixed seed, so that your test will actually be getting the same sequence of random numbers. There is some subtlety there, because maybe on one code path you call the generator five times and on another path seven times, so the same seed may not actually give the same behavior; but if you are using random numbers, presumably you know all that, and you can and should keep it under control. Much more common: "but my code does something different depending on what day of the week it is, or what time of day", because it needs to do one thing between 9 AM and 5 PM Monday to Friday and something different out of hours. Great, but then you have to somehow fake out the time, and make sure to test both behaviors, the one your program should have in office hours and the one out of hours; otherwise, if you just let the time be whatever time you happen to run the tests at, that's essentially random, because you may be running them at any time. There are many other excellent, mandatory qualities of tests, layered or not; the fundamental things all apply.
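As a minimal sketch of those two reproducibility points, assuming a hypothetical billing module that reads the clock via time.localtime() and accepts an injected random generator (the function names are invented for illustration):

```python
import random
import time
import unittest
from unittest import mock

import billing  # hypothetical module under test


class ReproducibilityTests(unittest.TestCase):
    def test_fixed_seed_gives_reproducible_results(self):
        # Inject a fixed seed: the "random" choices are then identical on every run.
        first = billing.pick_discount(rng=random.Random(42))
        second = billing.pick_discount(rng=random.Random(42))
        self.assertEqual(first, second)

    def test_office_hours_and_after_hours(self):
        # Fake the clock so both branches are exercised no matter when the tests run.
        monday_10am = time.localtime(time.mktime((2017, 7, 10, 10, 0, 0, 0, 0, -1)))
        sunday_3am = time.localtime(time.mktime((2017, 7, 9, 3, 0, 0, 0, 0, -1)))
        with mock.patch('billing.time.localtime', return_value=monday_10am):
            self.assertTrue(billing.is_office_hours())
        with mock.patch('billing.time.localtime', return_value=sunday_3am):
            self.assertFalse(billing.is_office_hours())
```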
Now let's get a bit more concrete, to show what I'm talking about: how do we test this database adapter, the module at the lowest layer?
Maybe I'm just testing my own logic; then all I need to do is mock out the external DB component. Incidentally, there's a beginners' talk about mocks this afternoon which, unless you're completely familiar with mocking and patching, is highly recommended; it's kind of a prerequisite of this talk, though even getting to it afterwards is better than not getting it at all. Mocking is fine as long as you are certain you understand one hundred percent of the behavior and characteristics of that external DB. But there is a second possibility which, when feasible, can often be better: use a fake, some related form of the DB that is local, so you don't pay for network traffic, that is entirely under your control (I'll point out some details in a bit), and that is in memory, because you don't need gigabytes and gigabytes for tests, so you can make a smaller version. The crucial thing is that it must respect all the same semantic constraints as the real system you'll be running in production.

What's a semantic constraint? One example that is unfortunately common among various databases: after close() is called on a connection, no other method may be called on that connection; if any other method is called, a runtime error is raised. If this is the way the real DB behaves, then it's absolutely crucial that the fake emulates this behavior: it keeps track of whether the connection has been closed, and if it has been closed and some other method gets called, it raises that exception. A mock, of course, will not do that naturally, unless you know you have to specifically watch for it. If you do know, make sure your mock has that check, because future maintainers of the code may miss that subtle semantic constraint, and the mock's completeness will help them. But this is an example of a general problem with mocking: the mocks, the stuff you write to help your testing, reflect the same understanding of the external system that your code reflects. If you understood that close must be the last method called on the connection, then you won't call anything else in your code, and your mock will check and give an error if you do; but if you didn't understand that, the test will pass anyway, because the mock won't do the check. So there is a common mode of potential failure between the test not catching something and your code having that defect. The real solution to that is the fake, which we'll get back to again and again; there is also another talk specifically about verified fakes, which addresses exactly this common-mode problem. Incidentally, our fake, in addition to having to respect all the semantic constraints of the thing it is faking, may add constraints of its own; the most common example for something like a database is that the fake could allow no more than, say, 32 megabytes of data, just because it's in memory, and that should be plenty for testing.
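To make the semantic-constraint point concrete, here is a toy sketch (nothing like a complete fake DB, and the method names are invented) of a fake connection that enforces the close-then-nothing rule, so any test using it fails if the code under test keeps using a closed connection:

```python
class FakeConnection:
    """Toy fake enforcing one semantic constraint of the real DB:
    after close(), no other method may be called on the connection."""

    def __init__(self):
        self._closed = False
        self._rows = []          # tiny in-memory stand-in for real storage

    def _check_open(self):
        if self._closed:
            raise RuntimeError('operation on closed connection')

    def execute(self, statement, params=()):
        self._check_open()
        self._rows.append((statement, params))   # stand-in for real SQL handling

    def commit(self):
        self._check_open()

    def close(self):
        self._check_open()       # the real DB also forbids double close
        self._closed = True
```

A bare mock.Mock() standing in for the connection would happily accept calls after close(), which is exactly the common-mode failure just described.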
Given this set of constraints, here is the set of approaches. This is where I use Python for the examples; I assume that the mock module has been imported from the unittest package, and I'm not going to repeat that line, it applies to all my successive examples. Using mock first: mock.patch is such a great way to temporarily substitute something for a real component and then take it away automatically. Mock offers many ways to do that; the one I always use in these examples is the with statement, because it's such a natural way to say "do this temporarily, and at the end of the with block undo it". So I'm patching with mock.patch; the mock talk will, I believe, cover spec_set and autospec, which are specific to unittest.mock and well worth pondering (another reason I recommend this afternoon's talk), but essentially it puts in place something that will emulate almost anything, and you control the details of that behavior by setting things like the side_effect field. Here, in particular, I'm setting the side effect of the cursor of the connection of the database. And then comes the body of tests, which as we'll see is a big chunk of code, presumably split into functions and so forth, which exercises every meaningful path of the code I'm testing right now.

If what I'm doing is the second layer, so using a fake instead of a mock, then the typical structure is: make the fake with appropriate parameters; patch it in (again with mock.patch, which simply puts an existing object in place of the real one); populate it, in this case the database, by actually executing SQL statements against the fake; and then run the same body of tests as before, because we've set up exactly the same situation except the fake is there instead of the mock. For a full integration test, I presumably start an instance of the real database, ideally locally, for example on the machine I'm using for the tests, so the connection can use Unix sockets, which are faster than a network socket (it doesn't have to be local; it could actually be on the net); then I populate it somehow, maybe by executing statements or importing a dump, so it starts in a known situation; and then, again, the same body of tests that I was using in the unit tests. That is where the novelty lies: the body of tests is a core, reusable part of the test suite.
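A rough sketch of that reusable-body structure, under assumed names (myapp.db_adapter as the module under test, dbdriver as the client library it wraps, body_of_tests as the shared body), might look like this; the same body runs once against a mock and once against an in-memory sqlite3 fake:

```python
import sqlite3
import unittest
from unittest import mock

from myapp import db_adapter   # hypothetical module under test; it calls dbdriver.connect()


def body_of_tests(test, adapter):
    # Shared body: exercises every meaningful path of the adapter's own logic.
    # Reused unchanged by the mock-based, fake-based and integration suites.
    adapter.add_user('alice')
    test.assertEqual(adapter.count_users(), 1)


class UnitLayerWithMock(unittest.TestCase):
    def test_adapter_logic(self):
        # Unit layer: the external DB client library is replaced by an autospec'd mock.
        with mock.patch('myapp.db_adapter.dbdriver.connect', autospec=True) as connect:
            cursor = connect.return_value.cursor.return_value
            cursor.fetchone.return_value = (1,)   # canned answer for count_users()
            body_of_tests(self, db_adapter.Adapter('ignored-dsn'))


class SecondLayerWithFake(unittest.TestCase):
    def test_adapter_logic(self):
        # Fake layer: local, in-memory, populated by actually executing SQL.
        fake_conn = sqlite3.connect(':memory:')
        fake_conn.execute('CREATE TABLE users (name TEXT PRIMARY KEY)')
        with mock.patch('myapp.db_adapter.dbdriver.connect', return_value=fake_conn):
            body_of_tests(self, db_adapter.Adapter('ignored-dsn'))
```

The integration suite keeps body_of_tests unchanged and only swaps the preparation step: start a real server locally (ideally reachable over a Unix socket), load a schema dump, and connect for real instead of patching.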
For certain, that body exercises all meaningful paths, and that must include simulated errors. Incidentally, if you're using mocks or other forms of spies (we'll briefly summarize the various test doubles later), you can also check what calls are being made and with what arguments. Be careful not to fall into the trap of full white box, though: you don't want your tests to have exactly the same structure as the code, so that any change to the code causes the test to fail. That is not the purpose of the test; the purpose is to give you confidence that the code is still working, so the tests must not mirror the code's internal structure. If it is indifferent whether A happens first and then B, or vice versa, make sure it's indifferent in your tests too. Such checks are optional; mocks are also always spies, so they make them easy, but you can always wrap anything with a spy to do those checks if you're keen on them.

The big point, in any case, is the difference between mocks and fakes, and there are many other kinds of test doubles. Unfortunately the classic article on the matter, Martin Fowler's (the URL is on the slide), is very Java-oriented; still, as I mentioned at the start, these concepts apply to just about any programming language you might want to use, except of course that giving examples in Java, where every variable apparently must be at least 45 characters long with several capitals in the middle, takes up more pixels. From my point of view, rather than the fine-grained distinctions between a dummy, a fake, a mock, a stub, a spy and so on, what matters is who maintains it and who releases it. A fake, the way I'm using the term, is something that is maintained and released by the same group who maintains the thing being faked: if I am part of an open-source group maintaining a database, I will also provide a fake version of that database ready for testing. An example, not quite complete, is sqlite, which comes with the standard library and is perfectly usable for reasonably small storage, a few gigabytes of stuff; it also has the special ':memory:' magic word to use instead of a filename, which makes the database live in memory. That can be useful for very small databases, but even more so for tests. It's not complete, though, as we'll see: it's not everything you want from a fake. Again, the talk on verified fakes later this morning, which I strongly recommend, will go deeper into what I can barely mention here.

A mock is very flexible; it can simulate anything. But exactly because of that, it can simulate something you think should exist but is not what actually exists; the fake, at least, embodies exact information about the set of things that do exist. Both should (and this is where sqlite falls short as a fake of itself) be able to simulate errors: you should be able to set them up so that, instead of giving a result, they raise a specified exception. That's trivial with a mock: you just assign the exception to side_effect instead of assigning a return value. For a fake, the fake must have been written to support it, or you can hack around it by wrapping a mock around the fake for this sole purpose, though that gets clumsy.
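A hedged sketch of that one-line error injection, with the module, driver, and exception names all assumed for illustration:

```python
import unittest
from unittest import mock

import dbdriver                 # hypothetical real DB client library
from myapp import db_adapter    # hypothetical module under test


class ErrorHandlingTests(unittest.TestCase):
    def test_backend_outage_is_reported_cleanly(self):
        # Assigning an exception (class or instance) to side_effect makes every call raise it.
        with mock.patch('myapp.db_adapter.dbdriver.connect',
                        side_effect=dbdriver.ConnectionError('simulated outage')):
            adapter = db_adapter.Adapter('ignored-dsn')   # assumes the adapter connects lazily
            with self.assertRaises(db_adapter.BackendUnavailable):
                adapter.count_users()
```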
The reason this matters is that certain errors, which are crucial to handle correctly, are almost impossible to trigger for real, so you cannot verify your code handles them sensibly unless your mock or fake can simulate them. For example, what do you do if the CPU catches fire? Well, you presumably catch the CPU-on-fire exception and proceed to shut down cleanly. But how do you test that? It takes a lot to get a CPU burning, and it's hard to automate. If you really need to test it, by far the best way is a mock (or a fake able to do it) that raises the CPU-on-fire error, after which you check that you shut down properly.

Now, moving to a middle layer, what changes? For the pure unit tests, you can mock out the lower-level modules this middle module depends on. Here you run fewer risks, because presumably all the modules and components we're drawing are by the same team, so there's good shared understanding; you just need to get your mocks reviewed by the specialists who wrote the module you're mocking. But there is an interesting alternative for a middle layer: what if I use the actual lower-level module it depends on? In this case there are no further dependencies below it, so no problem there; it works if it's fast enough. If you want to be sure about that, that's what timeit is for: measuring the speed of a specific fragment of code. If the real modules are fast enough, you don't need the mocks at all, so you make the real code work for you; you spend the same verification effort up front, but then you have no mocks of those modules to maintain going forward. Remember, though, that there are requirements your lower levels must meet to be usable this way, which include being able to simulate errors, as well as whatever is needed for speed, like sqlite's ':memory:' trick.
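A small sketch of such a speed check with timeit from the standard library, assuming a hypothetical lower-level pricing module and an arbitrary time budget:

```python
import timeit
import unittest

from myapp import pricing   # hypothetical lower-level module this layer depends on


class PricingIsFastEnough(unittest.TestCase):
    # Budget chosen so the whole middle-layer suite still runs in seconds.
    BUDGET_SECONDS = 0.01

    def test_hot_path_speed(self):
        elapsed = timeit.timeit(
            lambda: pricing.quote(items=(('widget', 3), ('gadget', 1))),
            number=100,
        )
        self.assertLess(elapsed / 100, self.BUDGET_SECONDS)
```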
The shape of the tests is the same as before: a prepare step, the mock primed with its side effects, and then the shared body; or the real (or fake) lower module, primed with data, and then the same body.
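Roughly, with hypothetical reporting (middle) and db_adapter (lower) modules, the two middle-layer variants sharing one body might look like this:

```python
import unittest
from unittest import mock

from myapp import db_adapter, reporting   # hypothetical lower and middle modules


def reporting_body(test, report):
    # Shared body for the middle-layer module's own logic.
    test.assertEqual(report.total_users(), 3)


class ReportingWithMockedAdapter(unittest.TestCase):
    def test_logic(self):
        # Prepare: an autospec'd mock of the lower module, primed with canned answers.
        adapter = mock.create_autospec(db_adapter.Adapter, instance=True)
        adapter.count_users.return_value = 3
        reporting_body(self, reporting.Report(adapter=adapter))


class ReportingWithRealAdapter(unittest.TestCase):
    def test_logic(self):
        # Prepare: the real lower module, primed with data, sitting on an in-memory fake.
        adapter = db_adapter.Adapter(':memory:')   # assumes it accepts an sqlite-style DSN
        for name in ('ann', 'bob', 'cyd'):
            adapter.add_user(name)
        reporting_body(self, reporting.Report(adapter=adapter))
```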
What about a high-level module, one at the top? Then you have several potential chains of dependencies. You could do a pure unit test by mocking out the layer just below; you could do a second layer by using the actual adjacent layer but mocking things further down; sometimes you can use the real thing all the way down. There are so many possibilities that you have to pick a sensible subset: if you try every possible combination you suffer a combinatorial explosion and get no extra useful coverage for the effort. So I'm picking one: mocking the layer below, versus actually using it and faking only the lowest layer. And this is the code
for that single case. This time I show the body of tests only once, but that's because this is only one of the several layers I recommend. So which one do you use? Well, the
decision depends a lot on the characteristics of your code. The mock is probably fastest. The real thing only works here if it is fast enough and if it was designed to be primable, for speed and for other needs like error injection. The fake is probably best if you're using software whose maintainers release one, and incidentally it need not be open source: for example, Cloud Platform services are getting emulators on the side, so that you can run your tests locally without necessarily connecting. I know, because I do technical support for Cloud Platform: an engineering colleague keeps saying that every outage that bites a customer could have been avoided if they had run their tests, and I get to answer "see, they couldn't run those tests, because you didn't release an emulator for this service, and that service went down", and so on and so forth. Anyway, one of the criteria for the choice is controlling complexity, and sometimes that is not obvious. Take DNS, the Domain Name System: for most people it means "take foo.example.com and translate it into 1.2.3.4", which is known as an A record. But in real life you may rely on more record types, say MX records for mail routing, and maybe you need a TXT record to validate ownership of a domain, and so on; at that point it's not trivial anymore, as it would be if you only needed A records.

Before I finish, I have been asked a very interesting question: "but doesn't this apply to load testing?" There is a whole talk about measuring performance, it takes the whole afternoon today, so you may want to go to that if you really need performance tests. Unfortunately load testing does not quite fit the layering concept, simply because you can't measure speed layer by layer; there, end to end is needed. You can take correctness for granted, since that is tested by separate tests, but for speed, if you need to measure precisely, you need end to end, and a different body of tests: you don't want the correctness tests, which mostly use small inputs, you want to exercise the heavy computational parts under realistic load. You can, however, get bounds if your lower-level modules come with a service level agreement, the kind that says 90% of queries complete in less than 30 ms, and you need to guarantee something similar to your own users. There is an approach which gives you a worst-case estimate: essentially, you use the same layered tests to measure the actual time spent in your own code, and count the number of calls to the external services; since the external services give you a service level agreement, you can bound how much any single call to one of them can cost, and add it all up. That is a different body of tests, of course.
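A hedged sketch of that worst-case bound: time the body with the external service mocked out, count the calls the mock recorded, and add one SLA's worth of latency per call. The 30 ms figure comes from the example above; everything else (module, function, budget) is assumed for illustration.

```python
import time
import unittest
from unittest import mock

from myapp import reporting    # hypothetical module under test

EXTERNAL_SLA_SECONDS = 0.030    # e.g. "90% of queries complete in under 30 ms"
LATENCY_BUDGET_SECONDS = 0.200  # what we promise our own users


class WorstCaseLatencyEstimate(unittest.TestCase):
    def test_worst_case_budget(self):
        with mock.patch('myapp.reporting.backend_query', autospec=True) as query:
            query.return_value = {'rows': []}
            start = time.perf_counter()
            reporting.build_dashboard()
            own_time = time.perf_counter() - start
        worst_case = own_time + query.call_count * EXTERNAL_SLA_SECONDS
        self.assertLess(worst_case, LATENCY_BUDGET_SECONDS)
```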
On the other hand, another question is: can I use this approach when what I want to test is a refactoring? Of course you can. Refactoring, incidentally, means changing the internals of the code without (theoretically) changing any of the externally observable behavior. If it's all within one module, that's the base case: everything in this talk applies directly to testing that module. However, you may need to tweak the test bodies, just to maintain coverage, maybe because something has gone away or you need some more, or because you're moving functionality between modules. So the first thing you do is change the code and check that the unit tests of those modules, at least the ones from which you have taken things away, fail. This is the typical test-first approach, except it comes for free, because you already have the tests before you start the refactoring. (Remember: never refactor code that has no tests; that is pretty much the definition of legacy code, so always put some tests in place first.) You make the tests fail, then you adjust the test bodies, and potentially the module mocks and fakes, and they pass; and then the intermediate levels, the tests that use the actual lower-level modules, show that the higher-level modules are still working and everybody is happy.

Finally, one problem with unit tests having to be so fast is that sometimes just checking that a condition was actually satisfied can be too time-consuming to fit in the very short time I want my unit tests to take. When that happens, what I sometimes do is take a snapshot of the state of the whole system, if that is quick, and leave a nice blob record from which the whole state of the system can be reconstructed, much like taking infrastructure snapshots; then, in the background, asynchronously, background jobs keep checking those snapshots for sanity, or whatever else needs checking. This has worked so well for me that I have started doing snapshots, when performance affords it, even in production: lots of state gets snapshotted in production, nobody complains, everything seems fine, and afterwards, with some probability, on a random sample, I sanity-check the snapshots. Once in a while this lets you catch a problem that was just barely hidden, that hadn't hurt your users yet, before your users catch it, which is by far the best outcome.
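A rough sketch of that snapshot trick, with all names hypothetical: the fast test only pays for a deep copy and hands the snapshot to a background worker, while the expensive invariant check runs off the fast path.

```python
import copy
import queue
import threading

from myapp import invariants    # hypothetical module with the expensive sanity checks

_snapshots = queue.Queue()
failures = []                   # inspected once, at the end of the test run


def _checker():
    while True:
        label, state = _snapshots.get()
        try:
            invariants.check_full_consistency(state)   # the slow, thorough check
        except AssertionError as exc:
            failures.append((label, exc))
        finally:
            _snapshots.task_done()


threading.Thread(target=_checker, daemon=True).start()


def snapshot_for_later_check(label, system_state):
    """Called from a fast unit test: pay only for a deep copy now,
    run the expensive verification asynchronously in the background."""
    _snapshots.put((label, copy.deepcopy(system_state)))
```

A session-level hook can then wait on _snapshots.join() and assert that failures is empty, reporting any problems once, after the fast tests have already given their quick feedback.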
And with that I'm ready for questions; everything, including the speaker's notes with the links I've been mentioning, is on my website.

[Question] What do you think about using a real database as the simulated database, but one that is entirely in memory? In your unit tests you don't mock anything: you just spin it up in memory, put your records there, and it's fast, but at the same time it gives you confidence that everything will also work with your real database, because there can always be some place where there's a material difference between a test double and the real thing. [Answer] Yes, of course; sqlite's ':memory:' mode is exactly one example of that, with the caveats about where it falls short as a fake that we discussed.

[Question] If the software is fast enough, say on the order of minutes, but it still has layers, would you say that skipping mocking and faking is a good idea, using integration tests directly, since it cuts down the workload of writing mocks? [Answer] I need to split that into cases. If your code is pure, internal, totally CPU-bound computation, then its speed is constrained by the CPU, and such code is normally best moved into NumPy or similar libraries, which you can reasonably assume to be correct; you know Eric Raymond's most famous quote, "given enough eyeballs, all bugs are shallow", although even with a million users of NumPy a bug can still have a very long life there.
Still, this has the nice property that, by taking that library for granted as correct, you can mock it out without qualms and your test as a whole remains trustworthy. If, like most programs, yours is I/O-bound, then that is exactly where mocking, faking and so on make a big difference, because a lot of what you do makes calls that may go to a magnetic disk (if it stays in memory instead, that's faster), or over the network (if it stays local, that's faster), and so on and so forth; little bit by little bit, you can easily gain an order of magnitude by sufficient simulation. How much longer do we have? So, one or two more questions.

[Question] It's lovely to hear that your teams produce good fakes. How common would you say that is? In my experience, ten years of startups, the fakes simply do not exist elsewhere, and you burn a lot of time failing to fully understand the system. Would you say it is becoming more common? You mentioned that you push teams to produce fakes; does that work in practice?

[Question] I want to ask about integration tests: would you include online, distributed services in your integration testing? It doesn't seem feasible in CI, but then you need to use mocks or fakes for your online services. [Answer] I'm sorry, the acoustics are hard; if I understood correctly, you're talking about software with some kind of real-time or connectivity constraint, say it needs a DB on another server somewhere, it requires an Internet connection, sockets, something, and you want end-to-end integration tests in CI. Although I'm still not sure I've pinpointed the question, I believe that, in general, the more real-life constraints the real software has, the more the tests will be living in a simulated universe, a universe where things can go well or badly in a simulated and controlled way. I normally get asked this about Internet of Things applications, where indeed the big deal is "how do I deal with a million tiny gadgets all over the place?" Well, you deal with that in your code, but not in your tests: even the so-called end-to-end tests are not going to have a million little devices buzzing around a huge room. There will inevitably be some level of simulation, otherwise the tests would be far too costly. It's like asking how you test the software that controls a rocket putting someone on Mars: the hardware tests will not be end to end, because then you'd have to get the hardware back from Mars each time.

Metadata

Formal Metadata

Title Testing in Layers
Series Title EuroPython 2017
Author Martelli, Alex
License CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, modify, and reproduce, distribute, and make publicly accessible the work or its content in unchanged or changed form for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they specify and distribute the work or this content, including in changed form, only under the terms of this license.
DOI 10.5446/33664
Publisher EuroPython
Publication Year 2017
Language English

Content Metadata

Subject Area Computer Science
Abstract Testing in Layers [EuroPython 2017 - Talk - 2017-07-10 - PythonAnywhere Room] [Rimini, Italy] The role of automated testing at the heart of modern development and operations is a given. However, the traditional approach to testing, separating too-developer-focused unit testing and (often only semi-automated) end-to-end integration testing, is not optimal in the modern, fluid world of DevOps. Nothing short of full automation is suitable for continuous integration; any “testing” requiring humans has a drastically different place in the continuum of development and deployment and should best be called by a completely different name like quality assurance. Within the realm of fully automated testing, the best approach, just as for other kinds of software, is modular and layered. This talk highlights the proper design of components for testing purposes and explains how such a design lets you compose multiple, layered testing suites that span the gamut from fast, light-weight unit tests meant to run all the time during development, to full-fledged end-to-end tests of whole systems—and, crucially, the often-neglected intermediate layers, bridging the thoroughness of end-to-end tests with unit tests’ speed and ability to pinpoint the location of any problems that emerge, enabling rapid fixes of most such problems. The talk also discusses the use of modular, layered testing components to validate software refactoring, and (when deployed in a load-testing arrangement) identify and validate software (and architectural) optimizations.
