
Observing Change: A Gold Master Test in Practice

Transcript
Hello and welcome, everyone, and thank you for being here. I recently revisited one of the most sophisticated features I have ever been a part of. It belonged to a complex application: a huge production database fed through a Ruby function consisting of hundreds of lines of raw SQL. That function took the data and spit it out into a CSV file that was between five and ten thousand lines long, representing every possible permutation of the data, and so was far too long to be reasonably understood by humans. This was a complex feature in a complex application.

Now, given that I was one of the original authors of this feature, you might think it was easy for me to just jump right back in and start working on it again, but that was not my initial experience. In support of this, I'll reference something known as Eagleson's Law: any code of your own that you haven't looked at for six or more months might as well have been written by somebody else. My point is that all code is challenging, and it doesn't matter whether you wrote it or somebody else did. We call this code
legacy code, and many of us must deal with it every day. For those who haven't experienced it, let me walk through some of the hallmarks of a legacy codebase. First, you'll see old interfaces. In this application we were dealing with an aging version of Ruby on Rails, which opens up the chance that you're going to see abandoned or forked gems, gems that are no longer being maintained, and gems that have no path to upgrade, and each of these presents its own unique obstacle to future development. You'll see vulnerabilities: because you're not updating the software or taking security patches, your code is more vulnerable. And you'll see dead code: poor cleanup over time leads to code that never, ever gets called. The saying "you aren't gonna need it" wasn't followed; instead, waterfall was. We have code that relies on abandoned design patterns and things that we now consider anti-patterns.

Those are the downsides of legacy code, but there are some benefits too, and I would summarize them as profit and users. Starting with profit: a legacy application is very often an application that is making somebody some money, and if you're a developer working in a business on a legacy codebase, the fact that you're there strongly suggests that somebody is making money. And you hopefully have users, users who care. They've invested in your product and have a contract: they expect it to do a certain thing, time and time again, and that contract is very, very special. One thing I know for sure is that developing legacy code is inevitable, and I mean this in two ways. First, it's inevitable for you: if you are not working on a legacy codebase now, you will be at some point in your career. It's also inevitable for the applications themselves. I believe that no application is ever truly feature-complete; we are always going to want to develop and add features, and progress is going to continue. When that happens, we hopefully have tests. In the application I was working on, we did, luckily, and we had coverage and a design that still made sense to us a year down the road. But what happens when we don't? Then we're talking about something a lot worse, and that is untested legacy code. If you're going to continue developing a Ruby on Rails app that doesn't have tests, you're going to have to retroactively find a way to add tests, or you're going to add features in a way that negates that contract with the user.
There are three types of tests most of us are familiar with: unit tests, API tests, and user interface tests. Unit tests test individual functions and classes. API tests test the way different parts of the application talk to each other and to external parties. User interface tests, also known as integration tests or feature tests, test behavior from a high level. So if we need to start from scratch testing an untested legacy codebase, are these three types of tests enough? Well, they each have their own shortcomings. For API tests, there could be thousands of functions and endpoints, and you have to know which ones are actually being used or you'll waste your time running a lot of tests. For user interface tests, you have to know what a user actually does on the site. Figuring out which types of tests to write, and in what order, is hard and pretty subjective, and each type of test has its own unique blind spots.

So I have a metaphor that I'd like to introduce now and that I'll be using throughout my talk, and that is taking a watermelon and throwing it into a fan: a big fan that can chop up the watermelon and splatter it onto a wall. Let me explain. We start with a complex input: that is the watermelon of the metaphor. In this case, that's the production database, a large production database. We take the watermelon and throw it as fast as we can into the fan, and the fan chops up the watermelon; following the metaphor, the fan here is that Ruby function with all the SQL. Then it splatters the watermelon onto the wall, and that is a complicated output: the 5,000-plus-line CSV file. A feature of this type of system, and this is what's interesting, is that changes to the fan are really hard to detect. If I take a watermelon and throw it into the fan today, then take another one and throw it tomorrow, and try to examine the two splatters on the wall, it is very difficult to tell whether the fan changed at all. But detecting changes to the fan is the only thing the stakeholders really care about: they want to know that we can guarantee consistent output time and time again. Which leads me to a question: are any of the traditional tests (unit tests, API tests, user interface tests) really equipped to cover this feature? The closest is the unit test, but the isolation of a test database is never going to come close to the complexity of the production database, our watermelon. So do we have to test? Yes, we must, because we want to keep adding features while we also must preserve behavior. This is a problem. So what can we do? I have a solution, and it's something I built in
production, called the gold master test. My name is Jake Worth, and I'm a developer at Hashrocket in Chicago. This talk will be 38 minutes total and 61 slides, and it is not a Rails-specific talk; it's a general programming talk. Here's my agenda: I'll start by defining the gold master test, then I'll talk about writing the test, and finally I'll discuss working with the test.

Part one: defining the gold master test. To define this test, I want to talk about a test that it's similar to, the characterization test, and then use that definition to build the definition of the gold master test. The seeds of this idea come from a book that came out around 2005 called Working Effectively with Legacy Code by Michael Feathers. In the preface to his book, Feathers defines legacy code as code without tests, and that perfectly fits our working definition of untested legacy code. He sums up a key idea in the book with the following quote: "In nearly every legacy system, what the system does is more important than what it's supposed to do." So the behavior of the legacy system isn't right or wrong; those terms don't really have a meaning here. It simply is what it is, it does what it does, and that is a contract with the user. This comes from a chapter in the book about a type of test called a characterization test, and here's the definition: a characterization test is a test that characterizes the actual behavior of the code. Once again, it isn't right or wrong; it simply does what it does, and that is the contract the users have come to expect.

In order to write a test like this, Feathers introduces a process, an unnamed process which I'm calling the characterization test process. Here it is. Step one: use a piece of code in a test harness. Step two: write an assertion that you know will fail. Step three: let the failure tell you what the behavior is. Step four: change the test so that it expects the behavior the code actually produces. Here's what such a test might look like. We start off by saying "expect something to equal 2". We run the test one time and it fails: "expected 2, got 1". Then we change the test to say "expect something to equal 1", run it again, and it passes. Has anyone ever written a test this way? In any other context this is a very lazy way to write a test, because you're avoiding all the upfront work of figuring out what the code does. But if you accept the premise, the premise that all that matters is what the legacy code does, then this actually makes perfect sense.
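The four-step process can be sketched in plain Ruby; `legacy_total` here is a hypothetical stand-in for the legacy code, not an example from the talk:

```ruby
# A hypothetical legacy function whose behavior we want to pin down.
# Prices are integer cents; the tax rate is buried in old code.
def legacy_total(cents)
  cents.sum { |c| c + (c * 6 / 100) }
end

# Step 1: exercise the code in a test harness.
actual = legacy_total([1000, 2000])

# Step 2: assert a value we know is wrong, e.g. `raise unless actual == 0`.
# Step 3: the failure tells us the actual behavior (3180).
# Step 4: rewrite the assertion to expect what the code really does:
raise "characterization broken" unless actual == 3180
```

Note that the final assertion does not claim 3180 is correct; it simply records the observed behavior as the contract.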
Feathers goes further with a heuristic to explain when such a test is applicable, and I've abridged it to fit on the slide. The heuristic for writing characterization tests: step one, write tests where you will make changes, writing as many cases as you feel you need; step two, look at the specific things you're going to change and attempt to write tests for those; step three, write additional tests on a case-by-case basis. So if that is a characterization test, here's how it differs from a gold master test. The characterization test focuses on areas where you will make changes, as you can see in the first bullet. It cares about the micro level: it cares about the specific things you're going to change, from the second bullet. And these are not black-box tests. In black-box testing you make assertions about the dimensions of the black box, but you can't open it up and see what's inside, and Feathers says that a characterization test is not a black-box test. This is all the opposite of a gold master test. A gold master test focuses on the entire application as a whole, it only cares about the macro level, not the micro level, and it is intentionally ignorant of what's happening inside the black box.

With all that in mind, here is my definition of a gold master test: a gold master test is a regression test for complex, untested systems that asserts a consistent macro-level behavior. The image on the right is one of the Voyager Golden Records, launched into space in 1977 to show the galaxy the sounds we had produced up to that point. Let me break the definition down a little. It's a regression test: we have features that we like, and we don't want them to go away, so we're writing a test that tries to prevent that. It is for complex, untested systems, and it asserts a consistent macro-level behavior. Macro-level behavior, for me, means the application works in the broadest possible sense, and we want it to continue to work in the broadest possible sense. As I said, this definition is mine, and that's because, like a lot of ideas in software, this one seems to have come from many different places at once; it's hard to find one canonical definition, but this is what I'm going with.

Now let's walk through a sample workflow for a gold master test. The first run is kind of boring. In step one, we restore a production database into our test database; that's the watermelon. In step two, we run the gold master test, which transforms the data, like the fan. In step three, we capture the output, which is the splat on the wall. And in step four, we ignore the output. That's all that happens in the first run: you basically set up the artifact that you will need for your subsequent runs. The second run is more interesting. We do the same thing: restore the production database into the test database, run the gold master test again, and capture the output. But then we compare that output to the previous output, and if there is a failure, we get to step five: we have to either change the code or anoint the new output as the new gold master, the new standard we're holding everything to. A failure is going to prompt some sort of decision, and unless you delete the test entirely, you can't bypass that failure; you have to decide what to do.

The ideal application for a gold master test has three things in common: it is mature, it is complex, and we expect minimal change to the output. It's mature: there's behavior in there that we think is important, but it's not covered sufficiently by tests. It's complex: complex enough that piling unit tests and integration tests on top is not going to be sufficient. And we expect minimal change to the output: there is a contract established with the user that we want to persist. There are some benefits to adding a test like this to your codebase. First, you get a rigorous development standard. This is a very high bar for the developers on your team: you're basically saying that nothing in the entire application, or in a giant wing of the application, should change in any way. And if you are running tests, and you should, you get that tight testing cycle where the tests are green, green, green, and suddenly red, and you realize that you've changed something. It will expose surprising behavior: if you have code that is nondeterministic, that returns a different result based on, say, the operating system it's running on, a gold master test is going to catch that much more quickly, I would argue, than a unit test or an integration test, because of how coarse-grained the test is. And it's useful for forensic analysis: because the test covers the whole application, if something breaks, we can go back through time using a tool like git bisect and figure out exactly when it broke. So once again, here's my definition: a gold master test is a regression test for complex, untested systems that asserts a consistent macro-level behavior. Now that we have a working definition, on to writing one.
Part two: writing the test. We'll be looking at a little code now, in Ruby, RSpec, and Postgres, but quickly, back to the feature. We have a large production database, it's fed through a complex Postgres-backed function, and its output is a large CSV file, and that makes this pretty much the ideal application for a gold master test. When I write a test like this, I like to break it into three phases: preparation, testing, and evaluation.

Starting with preparation: we have to build that watermelon. The way this works is we acquire a dump from production; you get that from your production database server and pull it down to your local machine. The very first step after that is to scrub the database of sensitive information. You want to get rid of email addresses, IP addresses, sessions, encrypted information, financial information. This is really, really important, because at the end of this step you're going to check some or all of that database into version control, so if you don't scrub the data, you're inviting a vulnerability. The way I recommend doing it is to use a local utility database that you can dump the data into and then run a scrub script against, which makes the process very repeatable, because this is something you will have to do more than once. Once we have that scrubbed data, we need to dump it out as plain-text SQL, and on our team we wrote a small Rake task just for that export.
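Roughly, such an export task can be sketched like this; the destination file name follows the talk, but the `pg_dump` flags and database name here are assumptions, since they differ per application:

```ruby
# Sketch: export a sanitized utility database to plain-text SQL that can
# be checked into version control and replayed into a test database.
DESTINATION = "spec/fixtures/gold_master.sql"

# --no-owner and --no-privileges keep the dump portable across
# environments; --inserts produces plain SQL INSERT statements.
def export_command(db = "myapp_sanitized")
  "pg_dump --no-owner --no-privileges --inserts #{db} > #{DESTINATION}"
end

# In a real Rake task you would shell out, e.g.:
#   task :create do
#     sh export_command
#   end
```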
Our task is called create. We name the destination file, which is gold_master.sql, and then we shell out using the pg_dump utility. We list our sanitized utility database, which is where the production data is currently stored, we pass some Postgres flags, and we send it all to our destination. Notice there's a bit of hand-waving inside those flags on the slide, and that's because dumping a production database into a test database takes a small amount of massaging; it's not the same environment, and the details will differ based on your application. I'll post a link to a GitHub branch at the end of this talk that shows an example of some flags we found useful. When this is done, we check that plain-text SQL into version control, and this is the artifact that starts off the test.

Next, the testing phase. I like to start with an empty test file any time I'm writing a test, just to validate the setup, and here's what that empty test file looks like: I'm describing a class, a stand-in for a function called shred, and I'm saying that it produces a consistent result, and it should pass with nothing in the test. The first thing we have to do is take the production data and load it into the test database, and here's one way you could do that: ActiveRecord::Base.connection.execute with a heredoc. We start by truncating the schema_migrations table; we found that table to be a frequent source of conflicts any time we tried this, so we just empty it, and then we read our gold_master.sql into the test database. The result is a test database that is full of your production data, and this is a richer testing environment than most of us have probably ever gotten to use before. The next thing we do is perform the transformation, the fan of the metaphor: we call the function, which we'll call shred, and we assign its return value to a variable called actual. Shred is written in such a way that it returns something meaningful, something we can make an assertion about; in this case it returns the CSV output. That's something you'd have to decide on if you were writing such a test. We assign it to a variable called actual, and that's what we're going to make our assertions about.

Here's my strategy for making those assertions. The test can do two things: on the first run, it generates the gold master, and on subsequent runs, it compares the current result to the gold master. This is a literal interpretation of that flow chart I showed earlier, where the test can either make the gold master or compare against the gold master. I like that, because I want any developer to be able to run it at any time without prior knowledge of what a gold master test is, and I think it also makes it easier to regenerate the gold master over time. That's a decision we made: I would rather the test always run than require a bunch of information to be absorbed first. So now that we have our actual variable assigned, we start making assertions about it. We name a file called gold_master.txt, and that is going to be the location of the present and future gold masters. The first thing we do is check whether it exists. If it does not exist, then we write actual to that file, and this passes on the first run because the write returns truthy; it's kind of a no-op in a way, which leaves a file behind for us to use, and that is the end of the first run: the test passes and all is well. The second run, again, is where things get more interesting. Our "file does not exist" check is no longer true, so we move into the else branch: the gold master exists, so we read the file and compare the gold master to actual. If the gold master file does not match actual, then we write actual over the gold master file, and if you've checked in the gold master, this shows up as unstaged changes in your version control, which you'll have to deal with; that's a deliberate decision I'll talk about in a second. Finally we make the assertion: if actual does not equal the gold master, the test fails, and it will fail pretty loudly depending on how you've written it. The entire test file, for people watching this in the future, is 19 lines.

Next we move into the evaluation phase. On a stable system, this test should just be passing, and passing, and passing, and if it fails, that is an alarm that you have broken the contract with the user; like any good regression test, it is simply trying to prevent that type of thing from happening. If it does fail, and you've checked in the gold master, the change will be noted by your version control, because you've changed that file, and you're going to have to make a decision about what to do next.
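The generate-or-compare logic at the heart of this test can be sketched in plain Ruby; the helper name and file handling here are assumptions, a sketch rather than the talk's 19-line spec:

```ruby
require "tmpdir"

# First run: write the gold master and pass. Later runs: compare against it,
# refresh the file on a mismatch (so version control flags the change), and fail.
def gold_master_check(actual, path)
  unless File.exist?(path)
    File.write(path, actual)     # first run: anoint the gold master
    return true
  end
  gold_master = File.read(path)
  # Overwrite on mismatch so `git diff` shows exactly what changed.
  File.write(path, actual) unless actual == gold_master
  actual == gold_master
end
```

Wired into a spec, the returned boolean would feed the final assertion, something like `expect(actual).to eq(gold_master)`.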
Well, here's a flow chart to explain. We start with a test failure. We look at the failure and ask: is it a valid change, a desired change? If it is a valid change, then we check in the new gold master and continue on our way. If it is not a valid change, then we need to pause and re-evaluate; we need to grab a wrench, open up the fan, and figure out what it is that we broke, because we have broken the contract with the user. So now we know what it's like to write a test like this. What is it like to work in a codebase that has such a test?

Part three: working with the test. I'd like to look at a simple workflow for a developer who has this test in their suite, and that developer is me, and then I'd like to explore some advanced applications. First, the real-world example. This is Today I Learned, available at til.hashrocket.com, an open source project that I help maintain at Hashrocket. It allows my co-workers to publish posts of 200 words or less about things they're working on every day, with code samples and writing. The site always lists the newest posts on top, which gives it that constantly refreshing feel, like a social media site, and this incentivizes people on my team to always try to generate new content, because that's the prime position to be in on the site. So let's say I want to write a gold master test for Today I Learned. First off, does that even make sense? Here's the checklist for an ideal
application. Is it mature? Today I Learned is over two years old, and in the world of web development that's not really new anymore; I feel it is close enough to mature for our purposes. Is it complex? Beneath the veneer of Today I Learned, which is basically one page that anyone looks at, there is a pretty complicated user interface that allows people to write posts in a way that we like a lot, and beneath that seemingly simple web application you have Rails, you have Ruby, you have the entire technology stack, and that is a very complicated system. So we'll say that Today I Learned is complex. And finally, do we expect minimal change to the output? This is very true for me. As I said, nothing is ever truly feature-complete, but people have come to the site for a couple of years, and my co-workers use it on a daily basis; they expect it to do certain things, and they would be upset if we were to break them.

So here's my assertion: the homepage, given the same data, should not change without us knowing why. If you check out Hashrocket's Today I Learned repository on GitHub, there is a branch called gold-master-demo where the test from this talk is currently available, so there's an example test you can look at. That's my assertion: the homepage shouldn't change without us knowing why. How would I go about writing a test that does that? First we have to prepare: we get the production database, scrub it of sensitive information, dump it as plain-text SQL, and check in the SQL. This is the scrub script I wrote for this test; it's called sanitize_production.sql, and it touches three tables in the database: developers, sessions, and posts. Developers are, obviously, our users, so I go through username, email, Twitter handle, admin, and Slack name, and I set them to somewhat innocuous values that are still unique for each developer. There's nothing in that table that's really sensitive, but I feel it's a best practice to scrub as much as you can, because in a client project there probably would be sensitive information. We use a gem for session management; that data is both irrelevant and not something I want to check into an open source project, so I delete everything from there. And finally I delete the posts where the id is greater than 200, and this is just data massaging: we have over a thousand posts, and I don't want to dump all of those into the test database, so I whittle it down to 200. That is a compromise I'm making: I'm choosing a faster test and giving up a perfectly accurate representation of the production database. But I know this application very well, and I know that nothing in it particularly cares whether there are more than 50 posts, which is the pagination breakpoint, so as far as Today I Learned is concerned, 200 posts is about the same as 1,200 posts.

To write the test, we follow almost exactly the same pattern as before, except the thing we capture has to be a little different. To start off, we restore the data; I don't have to read this, because it's similar to my previous slide: we dump that production data into the test database, so we have the watermelon. Next, we visit the root: we use Capybara's visit method to visit the root path, which is aliased to the posts path, and, following Rails convention, that is the posts controller's index action. This kicks off a very complex chain of events that ends with the browser having Today I Learned available to look at. Once we've done that, we need to make an assertion about what comes back, and we use page.html, another Capybara method, to assign the entire HTML of the web page to a variable called page_html. Once this is done, we move to a similar kind of conditional: on the first test run we generate the gold master, and on subsequent test runs we compare to previous runs. Here's that test file; once again, it's on GitHub in the TIL repository. If any HTML changes at all on the page, in any way, this test is going to fail. And I have actually worked on a project where I had to come back and work underneath a test like this, one I had written, and to my surprise it was a really great experience: the test watched everything I did and allowed me to develop when I wanted to develop, and that was very reassuring.
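The scrub step described earlier, sanitize_production.sql, can be sketched as plain Ruby over row hashes; the column names follow the talk, but the replacement values are assumptions:

```ruby
# Replace identifying developer fields with innocuous values that stay
# unique per row, mirroring what the SQL scrub script does.
def scrub_developer(row, index)
  row.merge(
    "username"       => "developer#{index}",
    "email"          => "developer#{index}@example.com",
    "twitter_handle" => "@developer#{index}",
    "admin"          => false,
    "slack_name"     => "developer#{index}"
  )
end

# Whittle the posts down for a faster test, like `DELETE ... WHERE id > 200`.
def scrub_posts(rows, keep = 200)
  rows.select { |r| r["id"] <= keep }
end
```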
Now let's look at an example workflow of a developer working with this type of test. This is a video of myself working in tmux a couple of weeks ago; it's just like live coding, except there's no way I can make a mistake. A quick orientation: in the upper left there's a terminal, and in the lower left I'm running the watch command, watching ls -al, which shows all the files in the spec fixtures directory; every two seconds it updates if there's a new file. Right now the only thing in there is gold_master.sql, but we expect there to be a gold_master.txt after the first test run: that becomes the gold master. On the right we have the test. So the first thing we do is run the test, and if this works, the test passes, because the write returns truthy, and it puts a new file in that directory called gold_master.txt. There it is; it passed on the first run. If we run it again, and our code is deterministic on the machine we run it on, it should continue to pass, and that gives us that nice, virtuous development cycle we have when working in a test harness. The test passes on the second run, and gold_master.txt does not change, because the data doesn't change. At this point we want to check in the gold master; this is an artifact we will be using for every subsequent test run. So I check it in with my short commit message.

Now I want to work on a change to the app that should cause the test to break. There are lots of ways we could do that: we could change the template, we could change some of the data, we could change many different things, but for me an interesting thing to change would be a controller action. This is the posts controller's index action, the thing that actually creates the homepage. If you look at line 25, we assign an instance variable, posts: we take our posts, eager-load some associations, scope them with a scope called published, and order the posts by published_at. That ordering is the thing that puts the newest posts at the top of the page, and if we were to change it, the gold master should definitely break, because all those posts would now be in a different order: the titles, the developer names, everything changes, and there's no way this should pass the gold master test. So let's imagine an enterprising young developer comes onto the team and says: I think we should order by likes; the most-liked post is much more interesting than the newest post. This causes the test to fail, and fail loudly. The output isn't great, because it's comparing two large HTML files, and you can do a lot of things to make that better, but the conclusion is that the gold master test failed.
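The ordering change can be illustrated in plain Ruby; the `likes` attribute and sample data here are hypothetical stand-ins for the ActiveRecord `order` call in the index action:

```ruby
# Newest-first: the behavior the gold master has locked in.
def newest_first(posts)
  posts.sort_by { |p| p[:published_at] }.reverse
end

# The proposed change: most-liked first. Same posts, different order,
# therefore different homepage HTML, and the gold master test fails.
def most_liked_first(posts)
  posts.sort_by { |p| p[:likes] }.reverse
end
```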
Now, an obvious question: the gold master test failed, but did the other tests fail too? This is a suite with 107 examples; the application was developed with TDD, so we have Cucumber integration tests and RSpec unit tests, and surely other tests should fail with such a significant change. Let's test that theory by running all the other tests, with the gold master set aside as it was generated on a previous run. We run Cucumber, and rake, which kicks off our specs, so that covers all of our tests, and I'll fast-forward because this is a slow machine. OK: we had 107 test examples run, and only one test failed: the gold master test. As I mentioned, this was an application that, according to the code-coverage gems, had somewhere between 95 and 100 percent test coverage, something we tried to aggressively test from the beginning. We put all our trust in our test suite, and yet only the gold master test failed; all the other tests said: good job, ship it.

So it matters to the gold master test, but a more important question is always: does it actually matter to a user? Well, here's Today I Learned with that change: the post you're seeing at the top was published on July 15, 2015. Somebody who comes to this site today, with such a change, would think that nobody has published to the site in over two years; it's just the most-liked post, not the most recent post, and I think it was written before we even had syntax highlighting. So somebody comes here and sees this, and they're going to have a really confusing experience visiting Today I Learned, and only the gold master test caught it. This example is a little bit contrived, but I hope it gives you a sense of what a gold master test could look like in practice.

This test is not without challenges. It requires maintenance: any time the schema changes, you're going to want to generate a new gold master, and that's why I would advocate for automating the process as much as you possibly can. It's an investment: like any type of test, you put in the time to make it work. It can be slower, but as I demonstrated, you can optimize the data in a way that makes sense. And I think some people might say it implies correctness: somebody coming in and seeing this test might think that what it's doing is right, but that's an opportunity for a conversation about what a gold master test is, because that's not what it is; it's simply saying: this is the behavior. As for future plans, I'm working on a test for Today I Learned that takes a screenshot of the page on the first test run, and a screenshot on the second test run, using Capybara's save_screenshot method, and then compares the two images using ImageMagick's compare function; I'm not the first person to try a test like this, but it's been a fun experiment. It addresses a problem with my example: what happens when only the CSS changes? The HTML test is not going to catch that; only this one will.
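The image comparison can be sketched as a shell-out; the file names are hypothetical, and `compare -metric AE` is standard ImageMagick usage, counting the pixels that differ between two images:

```ruby
# Build the ImageMagick command that diffs two screenshots and writes a
# highlight image; a nonzero pixel count (or exit status) means they differ.
def screenshot_diff_command(gold, current, diff_out)
  ["compare", "-metric", "AE", gold, current, diff_out]
end

# In the test you might shell out and assert on the result, e.g.:
#   system(*screenshot_diff_command("gold.png", "current.png", "diff.png"))
```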
uh failing on a subsequent test run and the stuff that is different is in green the thing that I change here was are removed a global font that was included and was shown on every site so so that since the browser did have a global font tried to give that use whatever font about what's best and that change the way that the header looks in almost every part of the post so that's a small tiny what changes someone could make accidentally but with this type of test you would catch it so to wrap up if you have a mature
complex stable application considerable Nasser testing it can simulate a much larger tests we if you don't have 1 to start with and from my experience just writing the test is going to tell you some surprising things about your code and if I could go for a slightly broader conclusion I would say that applications of the future are going to require creative testing strategies many many rails applications are in this situation that we've talked about today is becoming legacy applications with no tests we or partial tests we and development has to go forward on those there are new frameworks ideas also coming along that are continuing to challenge the boundaries of what it has can be that's what I came here to rails come to talk about and I would love to talk more people who share that interest for I don't like to say thank you thank you to will start to pass rocket to Brian done no rapine sending that's and Jennifer heart at the University of Chicago and thank you for coming here in the past if the
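The workflow described above, generating the output, comparing it to a stored gold master, and deliberately regenerating the master after an intentional schema change, can be sketched in plain Ruby. This is a minimal illustration, not the code from the talk: the report generator and the `REGENERATE_GOLD_MASTER` flag are hypothetical stand-ins for the real CSV export and whatever regeneration mechanism you automate.

```ruby
require "tmpdir"

# Hypothetical stand-in for the real export: in the talk, a Ruby function
# dumps thousands of CSV rows covering every permutation of the data.
def generate_report(rows)
  rows.map { |row| row.join(",") }.join("\n") + "\n"
end

GOLD_MASTER = File.join(Dir.tmpdir, "report.gold")

# Compare the current output against the stored gold master.
# Set REGENERATE_GOLD_MASTER=1 after an intentional schema change.
def gold_master_status(current)
  if ENV["REGENERATE_GOLD_MASTER"] || !File.exist?(GOLD_MASTER)
    File.write(GOLD_MASTER, current)  # record current behavior as the master
    :regenerated
  elsif File.read(GOLD_MASTER) == current
    :match      # behavior unchanged
  else
    :mismatch   # behavior drifted; investigate before regenerating
  end
end
```

The test asserts nothing about correctness, only that behavior has not drifted since the master was last blessed, which is exactly the teaching point above: a gold master says "this is its behavior," not "this is right."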

Metadata

Formal metadata

Title: Observing Chance: A Gold Master Test in practice
Series: RailsConf 2017
Part: 67
Number of parts: 86
Author: Worth, Jake
License: CC Attribution - ShareAlike 3.0 Unported:
You may use, modify, and reproduce, distribute, and make publicly available the work or its content, in unchanged or modified form, for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or this content, including in modified form, only under the terms of this license.
DOI: 10.5446/31262
Publisher: Confreaks, LLC
Publication year: 2017
Language: English

Content metadata

Subject: Computer Science
Abstract: It’s what everyone is talking about: cyber security, hacking and the safety of our data. Many of us are anxiously asking what can do we do? We can implement security best practices to protect our user’s personal identifiable information from harm. We each have the power and duty to be a force for good. Security is a moving target and a full team effort, so whether you are a beginner or senior level Rails developer, this talk will cover important measures and resources to make sure your Rails app is best secured.
