
But how do you know your mock is valid? Verified fakes of web services


Formal Metadata

Title: But how do you know your mock is valid? Verified fakes of web services
Number of Parts: 160
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt, copy, distribute and transmit the work or content, in adapted or unchanged form, for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content, also in adapted form, is shared only under the conditions of this license.

Abstract
But how do you know your mock is valid? Verified fakes of web services [EuroPython 2017 - Talk - 2017-07-10 - Arengo] [Rimini, Italy]

If your code calls a third-party service, then you may want to test that your code works, but you don't want to call the service in your tests. It may be expensive, slow or impossible to call that service. For example, if you are making a Slack bot, you want to create tests which don't make calls across the network to Slack. One approach is to create a mock of that service. Our tests can now run quickly, cheaply and reliably. But if we copy the service incorrectly, or if the service changes, our tests will pass while our code does not work. Verified fakes solve this problem. You can write tests which confirm that your mock is an accurate representation of the service being mocked. Those tests can be a small subset of your test suite, and they can be run periodically to verify the validity of the many tests which use the mock. This talk will follow the example of VWS-Python, a verified fake for a proprietary web service. It will discuss the practicalities of creating such a fake, and it will focus on the trade-offs, tooling and approaches involved. By the end of this talk the audience will understand how to tie together pytest, Travis CI, requests and Responses to create a verified fake. The talk is aimed at people who have an interest in writing correct software. It is assumed that the audience is familiar with basic testing techniques.
Transcript: English(auto-generated)
Thank you, and thank you everyone for coming. My name is Adam Dangoor. I work at a company called Mesosphere, building an operating system for data centers, but last year I was working on something quite different: I was working on the back end of an iPhone app.

What you would do as a user is take a photo of a wine label with your phone, and the app would tell you all kinds of details about that wine. Or at least it was something like that; I'm going to protect my NDA here today. Our app was a Flask app. If you don't know Flask, it's a really simple web framework, and it looks something like this. Now, a really cool thing about Flask is that it provides, give me a second, a Werkzeug test client, if I got that right. What that means is that you can make requests against an in-memory application, and you can get response objects which you can inspect.
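As an illustration of the kind of in-memory testing described here, a minimal sketch; the route and payload are made up for this example, not the talk's actual app:

```python
from flask import Flask, jsonify

# A tiny Flask app standing in for the talk's wine app (hypothetical route).
app = Flask(__name__)

@app.route("/wines/<name>")
def get_wine(name):
    return jsonify({"name": name, "rating": "unknown"})

# Flask's test client (built on Werkzeug) exercises the app entirely
# in memory: no socket is opened and no server process is started.
client = app.test_client()
response = client.get("/wines/merlot")

assert response.status_code == 200
assert response.get_json()["name"] == "merlot"
```

The `response` object here behaves much like one from a real HTTP call, which is what makes the test read like an HTTP request even though nothing touches the network.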
So if we look at this test here, it kind of looks like we've made an HTTP request, but actually everything's being done in memory in our tests. Now, in our wine recognition app we also used something called the Vuforia Web Services. Basically, what Vuforia Web Services is, is a tool that lets you upload a whole bunch of images, let's say in our case images of wine labels. Then, when a user uploaded a photo to us, we could send that image to Vuforia, and Vuforia would tell us which one of our previously uploaded images their photo most closely matched. Then what we could do is fetch details about that wine from our database and tell the users details about the wine: how much it should cost, how well rated it was, exactly where it's from, that kind of thing.
But when we built our prototype, we kept finding loads of problems, loads of bugs, and in particular those bugs came from assumptions that we'd made about Vuforia which weren't quite right. Often that came from reading their documentation and trusting that it was truthful and complete, and, well, you can't always make those assumptions. So what we wanted to do was add tests for our matching workflow, and that matching workflow of course used Vuforia, and we wanted those tests to be in our existing test suite. Now, Vuforia here was accessed over HTTP, and that's what I'm going to focus on today.
But the general idea really isn't specific to HTTP, because you might want to test code, let's say, that uses a database for local storage, or you might want to test a deployment workflow which uses Docker, or maybe you even want to test code which uses Amazon S3 or some other cloud storage as a storage backend. Now, we were lucky: we had a very clear idea of what we wanted our first test to be. I know this is quite a lot of code to have on a slide, but simply what we wanted to test was that if a user uploaded a photo of a wine label which matched a photo that we had already added, then they would get details about that wine.
So I wrote a test that looked a little bit like this. I had two wines here. add_wine, let's say, adds the wine to our database, but it also uploads it to Vuforia. Then I check that I get the right one back when I query the match function; that match function uses Vuforia on the back end. Now, with some third-party tools, maybe even some of the ones I mentioned like Docker, you might be totally fine, totally cool, calling the real tool in your test suite. But when we called Vuforia in our tests, we actually hit some problems.
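The first test described a moment ago might be sketched roughly like this; everything below, including the `add_wine` and `match` names and the byte-equality "matching", is a reconstruction for illustration, not the slide's actual code:

```python
class WineApp:
    """Hypothetical reconstruction of the app under test, not the real code."""

    def __init__(self):
        self.wines = {}  # wine name -> label photo bytes

    def add_wine(self, name, label_photo):
        # In the real app this stored the wine in the database AND
        # uploaded the label image to the matching service.
        self.wines[name] = label_photo

    def match(self, photo):
        # Stand-in for the image-matching service: exact byte equality.
        for name, label in self.wines.items():
            if label == photo:
                return name
        return None

def test_match_returns_right_wine():
    # Add two wines, then check we get the right one back for a photo
    # which matches one of the labels we already added.
    app = WineApp()
    app.add_wine("merlot", b"merlot-label-photo")
    app.add_wine("shiraz", b"shiraz-label-photo")
    assert app.match(b"shiraz-label-photo") == "shiraz"

test_match_returns_right_wine()
```

In the real test, `match` went over HTTP to the matching service, which is exactly where the problems below come from.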
First of all, we were at the mercy of the network. What that meant is that when our CI system had a little network glitch, our whole test suite would fail, because our tests made HTTP requests against the internet, and we didn't know if those failures were because of the network failure or because there was some kind of flakiness in our code. But we were also at the mercy of Vuforia. Similarly, when Vuforia went down, even temporarily, our test suite would fail. And it really does slow down development if you're constantly worrying: have you made a mistake, or is it on their end?
Now, say you're using a real service like S3. S3 might be pretty stable, probably even more stable than your software, so you might not have to worry too much about flakiness. But S3 charges you per megabyte used, so if you want to use it in your test suite, it might actually become really expensive to run your tests; you might have to pay per megabyte and just spend quite a lot of money. Another problem that you might run into is resource limits. This is definitely something that I've hit: a lot of services have resource limits, a certain number of requests that your account can make. So if you call something in your test suite very heavily, especially if, let's say, you're doing
performance benchmarking and you're making a load of calls, then you might hit those resource limits, you can't run your tests anymore, and you're pretty much blocked on development. And even when those things weren't problems, everything was really slow. Vuforia is quite advanced software: it does a lot of processing magic so that it can do the image matching, and that means that after you've uploaded an image, it takes a few minutes until that image can be matched. That's totally reasonable; I don't think that I could really expect them to do it instantly. But in our test suite, well, I didn't really want to have to wait a few minutes to know if our get-match code worked.
So we called these tests, like the one that I showed you before, integration tests, because, well, they tested the integration of our software with Vuforia. I think a lot of people get confused about the terminology; some people call these things acceptance tests or end-to-end tests, but I think we can agree that they're high-level tests. They were definitely useful, and they really did help us track down some bugs. But we also wanted unit tests, because unit tests give us a lot of benefits over integration tests. In particular, they tell us if our code calls Vuforia correctly, in this case even when Vuforia is down. Unit tests are also really small in scope, and what that means is, well, let's say one fails: not all the time, but often, you know exactly which part of your code failed. And if you change that bit of your code to make the unit test pass, that can be a small, isolated change. And when you've got unit tests that run quickly and are small, you can even use some tools, maybe like Hypothesis, to generate a whole bunch of unit tests. So what we want to do is turn a code base which can currently be tested only by
integration tests into one which can also be tested with unit tests. One way that some people achieve this is by using mocks. Now, roughly, a mock is some code which provides the same interface as something that your code calls, but it reduces or removes some cost. In this case the main costs that we cared about, like I mentioned, were time, because we cared about those slow tests, and flakiness. But again, you might want to avoid financial costs, resource limits, or all kinds of other costs that can come into your test suite.
So my goal was that wherever code under test made a request to Vuforia, at least in our unit test suite, the tests would make sure that that HTTP request was actually handled by a mock function rather than going over the web. Now, we were very fortunate: we were using the requests library, which I'm sure some of you at least are familiar with, and there are a few ways in Python to get requests which are made with the requests library to point to some mock code. The tool I chose is this one; it's called requests-mock. I know there's also another one, by the folks who make Sentry, called Responses. There's also something called HTTPretty, if you're on Python 2 or maybe you're not using the requests library. Now, the simple requests-mock example is this one. What you can say is: here, when I make a GET request to test.com, return the string that says 'data'.
It's pretty simple. And at the same time as using requests-mock (I'm sorry, person who tried to take a photo; the slides will be online), at the same time as using requests-mock, we were also using pytest. What pytest is, is a test runner which gives you a really neat way to do setup and teardown for test requirements. Now, that feature is called fixtures, and we have a fixture right here. What this one says is: hey, if I use this fixture, then requests in this test will be handled by mock code. You can see we yield while we're in the context manager.
But I'm sure that if you're using a more traditional test framework, you can just use the normal kind of setup and teardown methods. Now, I didn't want to just return the string 'data' or something like that; I wanted some quite advanced features in my mock. In particular, I wanted to have a stateful mock, and that would allow me to give different responses based on previous requests, so I could give a different match response if someone had already uploaded a picture of a matching label to the mock. So I used a requests-mock feature which let me use a callable instead of a predefined response, and that callable takes a request-like object that gives me all the details of the request. So we created a whole bunch of small mock functions, one for every endpoint we used, and at this point we'd pretty much achieved our goal, right? We could test our code without touching the real Vuforia.
But then we hit some more problems, problems when we were using that mock, and I actually think that these are problems that a lot of mocks face. Sometimes we found that we hadn't copied the interface correctly; you know, it can be pretty hard, there are lots of edge cases. What if the image is too big, do we give the right error back, that kind of thing? Humans make mistakes, even with code review, and so we found that we'd copied a lot of things incorrectly. But then, even when we were extra careful, we found that the mock quickly became outdated whenever Vuforia changed. If they sent out a really nice changelog, we could change our mock to match it, but that's not always the case, especially for very minor things, and this isn't, you know, a Python library where you can at least inspect the code changes; this is a web service. Now, when you have an outdated mock, you have quite a serious problem, or at least it was serious for us: your tests pass, but your software is actually failing in production. And when you've got that, you can have a really difficult time tracking down exactly why your code is broken, because everything looks like it should be working, and you have to find out: oh, actually, my mock is wrong.
Where is it wrong? Basically remaking those manual requests to check your mock is very tedious. So, that was a contract gig, and that contract ended, and I kind of felt like I'd built an okay solution. It was working all right for the client, but I really felt like the problem could be tackled in a better way, and that I could have provided a better solution if I'd had more time, in particular because we kept hitting those issues of Vuforia changing and of human error. And at the same time, I really believed, and I still do, that Vuforia could be a genuinely useful tool for a bunch of people, and it could be especially useful if it was easy to develop against. So I set out to make VWS-Python, which is basically an open source library for using the Vuforia Web Services with Python. It's in progress, hopefully coming very soon to PyPI.
But I also had another goal. I started testing it with an open source mock as part of that library, but I realized that the mock itself is very useful whether or not you're using my library, and I wanted to ship that mock to people, so that if they were writing code which used Vuforia, they could have the mock for their own tests. So I wrote some integration tests for the library, and I wrote some unit tests for that library which used the mock, and I put the test suite on Travis CI, because, well, because I knew it and because it was free for open source projects. One really cool feature of Travis, and I'm sure a lot of other CI systems share it, is that I can give it the credentials for Vuforia, and I don't have to have those credentials show up in the code base, where someone could abuse them, but I also don't have to have them show up in the logs. So I could really use the real service, even from a CI system, and every time I made a change to the library, the tests would run, and those integration tests ran against the real Vuforia. But if you remember the goal I set, I wanted people to be able to use my mock to test their code whether or not they were using my library. And there's a cool way to let even people who use different programming languages, not just Python, use your mock, while you're still keeping the interface really nice and pleasant. If you remember, we had a pytest fixture, or if you're not using pytest, just a context manager or a decorator. You want to keep that for Python users, but you want to let other people use your code as well.
The way that I did this is, well, I built the mock in a way that meant it could be run as a standalone server, and what that meant was ditching the requests-mock syntax that we had before. But at the same time... well, no, I'll move on. So I wrote this little bit of code. I'm not going to get into it too deeply, because maybe I'm a little bit embarrassed; it's a bit of a hairy hack. But really, it let me rewrite the mock as a Flask app and keep using it with requests-mock. So that means that I've got a Flask app that I can just run as a standalone server, but if I use this code, it ties into requests-mock. What it does is it translates those request objects from requests-mock into something that can be used by the, I guess, the Werkzeug test client again, and then it also translates responses from that test client into something that requests-mock can use. All this code will be online later.
So if you're not using Python, what you can do is spin up the Flask app, let's say in a Docker container, for every test, and then you can route your requests to that container using whatever kind of requests-mock alternative your language has. And that can be particularly useful especially if you're on an old Python version that doesn't support my mock's code. So I'd say this: if you're in an organization and you're writing a mock, and you want that mock to be used across your organization, even if people there use different languages, this is a really cool way to do it. So, back to writing the mock. This time around,
the mock was definitely part of my product, so I didn't want to just do it in an ad hoc manner. I wanted to test it thoroughly, and I wanted to write tests that confirmed it was doing what I wanted. So if you think about it, at this point I'm probably duplicating a lot of the work that the people at Vuforia did, right? I'm rewriting a bit of their service, and I'm also thinking about edge cases for it. And what I'm doing is very manual: I'm making requests to their servers with those kinds of edge cases that I'm thinking about, then I'm noting the responses down in tests, and then I'm making sure that each test passes for my mock. And I especially test things that aren't mentioned in the documentation. One example is that they take a width for the image, in centimeters. Well, what happens if you give it a negative width? I tried it, and I found that they gave an error. I copied that exact error into my mock, and then the library, which is kind of the main product, handles that error and raises a nice, appropriate Python exception for it. So at this point I have three sets of tests. I have a few integration tests which use the library with the real Vuforia. I have a whole bunch of unit tests for the library, maybe hundreds, and thousands if you count those which are generated by Hypothesis, and those use the mock. And then I have some unit tests for the mock itself. But I'm still vulnerable to those problems that I mentioned earlier: copying incorrectly, and Vuforia changing, which would render my mock inaccurate, and now my library possibly even broken.
So turning a mock into a verified fake, which is the title of this talk, is all about avoiding those problems. Now, what a verified fake is, roughly, is a fake implementation which is verified against a subset of the same test suite as the real implementation. Now, I don't have the Vuforia code, and I definitely don't have their test suite, if they've even got one. So if I wanted to make a verified fake, which I did, I needed to have my own test suite.
So turning the mock into a verified fake really meant making a test suite which ran both against the mock and the real thing. If you recall that simple pytest fixture from before, well, I expanded it. pytest has this really cool feature called parametrization, and you can parametrize fixtures, so that tests which use those fixtures are run once with each parameter option. So here I've got a simple true/false, and I map that to 'use the real Vuforia' or not. And so any test which uses this fixture is run twice: once with the real Vuforia, and then once with the mock Vuforia.
So, these are the test results; they look something like this, and you can see each test runs twice. Fortunately, I already had at least the start of a test suite for the mock, so the first thing I did was apply this to those tests, so they ran against the mock and the real thing, and of course I found that I'd made a whole bunch of mistakes. So now we've got a verified fake, and we have a test suite which runs against both the fake implementation and the real implementation. Because the mock has been turned into a verified fake, we actually trust that it's representative of the real Vuforia. So we have loads of confidence in those hundreds of tests that we had for the library, and we know that they don't just rely on an unrealistic mock. But we also had another problem, if you remember: we were worried that Vuforia would change and that that would make our mock inaccurate. Well, now, whenever these tests pass, I know that the mock is still a faithful representation of Vuforia, and we only incur the cost of running a hundred tests against Vuforia, but we get almost the whole benefit of running thousands of tests against Vuforia. So we lessen that cost of flakiness and slow tests. But at this point our tests only run when we make a change to the code, which might not be that often, especially once
it's quite mature. So we want to know what happens if Vuforia changes at that point. Well, a cool feature of Travis, and I'm sure a lot of other build systems, is that you can actually set tests to run on a schedule. So there's this trade-off: if you run them all the time, you find out about problems quickly, but you hit those costs; if you run them very rarely, it takes you a long time to find out about problems. The trade-off that I chose was to trigger them every night, but you can do it every release, every week, whatever works for your particular situation. Now, back to that width example. In the wine application I talked about at the beginning, we really didn't care about the physical width of a wine label. It wasn't a differentiating factor, and it was also actually really hard to get, which is why we didn't care about it that much. But we told Vuforia all the time that the width was zero. It didn't matter to us, and that always worked, and our mock supported it. And when we get to the verified fake, now a few months later, the verified fake also supports it and has a test that a width of zero is okay:
no error is returned, and the image is added. But one morning I get an email from Travis, and it looks something like this, and it tells me that the build failed. So I look at the logs, and I see that we actually have a very precise data point of exactly what's changed in Vuforia. Yeah, so the mock passes for this test, but the real implementation fails, and the test is: well, what if I add an image with a width of zero? So now what I do is I just change the mock function and the test, so that the new behavior is represented by the mock, and that's very easy. But now, if you remember, the library's tests themselves depended on the mock. So now the library expects that a width of zero is valid, but it's invalid, so as soon as I changed the mock, well, then the library's tests immediately started failing, and I could change the library to give a nice Python exception when you use a width of zero. What that really demonstrates is that within a few hours, Vuforia made an undocumented change, that change introduced an incompatibility with my library, and then this incompatibility was fixed without any really complex debugging.
To me, that shows the value of having a verified fake to any developer, really, who's writing code which integrates with third-party software. Now, you can imagine that building a verified fake when you have the original source code is much simpler than when you don't, because a lot of the fake can share code with the real implementation. And hardly any open web services are open source, so this can be really valuable. If you're shipping software to people which they might want to call in tests, well, you can actually add tremendous value to that software by shipping your own verified fake, and it might even cause someone like me to choose your software over competitors'. And if you make a verified fake as the author of the software, well, it's much easier, because you can get told, before merging any changes, that a change would make the fake unrealistic. So you know when to make changes to your code, without the need for that once-per-day test run. So I'm hoping that maybe in the future, having an API which is easily tested against will become kind of table stakes. And one cool thing about making a verified fake: you don't really have to ship your secret sauce. You can just ship something that does the bare minimum of your API interface. Let's say you're making something like Vuforia: you can have a really rubbishy kind of image matching thing, because that's the core of your business, and you don't need to ship that to people. So I hope now that you have a rough idea, at least, of what a verified fake is, why it might be useful, and how you can start making one for yourself, and maybe for your users. So, thank you very much. Okay,
some questions. [Audience] Hi, a very great talk, and 80% overlapping with the one I gave two talks ago, but you've got a case study, which is great; I only had the general discussion. And I think your ending is exactly where it should be: there is no justification for releasing a component without a fake. The terminology I'm trying to use is Martin Fowler's, so the distinction between fake and mock. For example, one thing that comes up is, essentially, whether a fake should be a spy; that's not in the original terminology. Should a mock be a spy? Should a fake provide an introspection API, perhaps? [Adam] In this case, I didn't worry about it so much, because the API itself provides introspection abilities, right? [Audience] The big thing that's missing here, in my view, is the ability to simulate errors. The example I give is 'CPU on fire': you don't really want to be there with a lighter, setting fire to your CPU, to check that your code is handling it. The mock, fake, whatever, should be programmable to raise an error. [Adam] So actually, I've got a response to that. First of all, it's very difficult, for the on-fire case, to verify it, right? Because, and thank you for your question as well, your comment,
how do you have a test that checks that when you say this is going to give a 500, it will give a 500, just like when their servers are down? Because their servers aren't down right now. But actually, if you check out the source code for VWS-Python, it takes a state object, and so I have various states, like 'on fire', well, not quite. So you can say, just like I had this mock Vuforia fixture or mock Vuforia context manager, you can give that a parameter which says broken, inactive, slow, and then you can see if your tests work even when there is a five-minute delay in the matching ability. I hope that gives you a little bit of insight into how I've dealt with that issue. [Audience] I really appreciated the talk, but I was wondering:
in this kind of service you were mocking, the response was basically depending on the data you put in before. How would you go about mocking a service for which you don't have the ability to specify the data? For example, if I want to know the events in a specific location, they change every day; they are not in my control. How can I write tests against this data which I don't have? [Adam] Sure. So you can imagine that that API, that event-consuming API that you're using, let's say, I think Eventbrite is one of those companies, or meetup.com, that they also have an 'add event' API, right? But that API might not be public to you. So what you've got to do is act as if you are the meetup.com person, right, you're the meetup.com servers, and you just make some ability to add an event, even if there isn't a real API that will be exactly like this, and then you can know: okay, given that I've already added an event, it works in this same structure. Now, if you want to verify it, what you can do is have a test account that has an event in a particular location, with, you know, a particular image, and then you can make your test run against that test account, having uploaded that kind of event into your mock already, and then you can say: okay, I want to check this event and check that the response is exactly the same. I hope that roughly answers your question, but you're right, it's not a solved issue, it's not
always that easy, and it is context-specific. [Audience] Thank you. Is there any way that you could integrate this with fuzzing, to find out the API responses that you may not be able to think of, that the application's not using? [Adam] Sure. So I mentioned Hypothesis before; that's the closest tool that I've personally used to fuzzing. If anyone doesn't know it, it's a property-based testing tool, and it generates a lot of tests, which is kind of what fuzzing is. I haven't actually done it for this, because the request limit was just so low and the requests took so long. Actually, the point of doing this for me was so that I could add fuzzing to my code. But you can imagine that if those problems weren't the case, well, you could say: hey, Hypothesis, or my fuzzing tool, please run random requests against my mock and the real implementation, and check that they either are exactly the same in response, or share some properties, like having the same keys. That would be ideal, but it really wasn't suitable in this case.
[Audience] Hi, a really nice talk, thank you. I wonder, you have libraries like VCR.py or Betamax, which is ported from Ruby, right? And with those you can record a response, and it's recorded in JSON, and I wonder why you wouldn't just use that for day-to-day testing, and then at midnight, or once a day, disable the cache and see if the tests pass then. [Adam] So yeah, VCR tools are definitely something that I've used a bit, but how do you know that the... I'll put it this way, maybe: you have a very similar case, right, in that the API can change, and then when you disable the cache, then you have to update your VCR responses, and then you've kind of got a very similar thing. But you might not have the 'add' component: if I want to, here, add an image, what do I do in a VCR system? Sorry, I don't have a great answer for that; I'm going to pass on to the next one.
[Audience] Is this an alternative, I guess, to a VCR system? [Adam] No, I think that people use VCR to record some other service. I've certainly, very briefly, contributed to PyGithub, a GitHub API library, and what they do is they record responses with VCR. Really, I tried to avoid it, because it came with its own set of problems, and it was more painful for me to use than this system.
Thank you