
Taking charge of your race conditions

Formal Metadata

Title
Taking charge of your race conditions
Title of Series
Number of Parts
112
Author
License
CC Attribution - NonCommercial - ShareAlike 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Anybody working with concurrency (threads, processes, or more abstractly workers) eventually encounters a _race condition_. This kind of hair-pulling problem is a staple in computer science. Sometimes you can offload the problem to a lower layer, usually to the database engine, bundling it with atomic transactions. But there are times when you have to take charge of worst-case concurrency scenarios, when your architecture and constraints do not leave room for implicit concurrency management. Then, you need to carefully craft the paths in and out of the critical section, avoiding deadlocks along the way. In this talk, we will focus on how to test your program with a manufactured race condition. If you know it can happen, you'd better cover it in your test suite. We will define a server program called in parallel by multiple clients to illustrate the problem at hand. Then, we will manually set up a race condition during testing to explore broken behaviour and problematic code. The Python standard library offers all the tools we need to cater for race conditions. We hope the tour we will give allows you to exercise your own race conditions more easily. Why, we even think you will have all the cards in hand to counter them!
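As an illustration of the kind of manufactured race condition the abstract describes, here is a minimal sketch (not taken from the talk itself) using only the Python standard library. The `Counter`, `increment`, and `after_read` names are hypothetical; the idea is to expose a pause point between the read and the write of a non-atomic update, then use a `threading.Barrier` in the test to force two workers into the lost-update interleaving deterministically.

```python
# Minimal sketch: deterministically reproducing a lost-update race in a test.
# Two workers perform a read-modify-write on a shared counter; a Barrier makes
# both read the old value before either writes, so one increment is lost.
import threading
import unittest


class Counter:
    """Naive counter with a non-atomic read-modify-write increment."""

    def __init__(self):
        self.value = 0

    def increment(self, after_read=None):
        current = self.value          # read
        if after_read is not None:
            after_read()              # test hook: pause between read and write
        self.value = current + 1      # write


class RaceConditionTest(unittest.TestCase):
    def test_lost_update(self):
        counter = Counter()
        barrier = threading.Barrier(2)  # both workers must read before writing

        def worker():
            counter.increment(after_read=barrier.wait)

        threads = [threading.Thread(target=worker) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # With the manufactured interleaving, one of the two increments is lost.
        self.assertEqual(counter.value, 1)


if __name__ == "__main__":
    unittest.main()
```

The same hook-plus-synchronisation-primitive pattern scales to the server-and-clients scenario mentioned above: the test injects a pause inside the critical section and coordinates the clients with an `Event` or `Barrier`, rather than hoping the scheduler produces the bad interleaving by chance.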