pytest is undoubtedly the most popular test framework for Python. Its fixture and parametrization mechanisms, its detailed hook API, and its vibrant plugin ecosystem make it a must-know for any developer wishing to create quality software. Some key limitations in its engine and API, however, prevent users from truly unleashing their testing scenarios: creating tests with complex parametrization is complicated, and users exploring this direction may lose the legendary elegance and maintainability of pytest tests along the way.

pytest-cases extends pytest so that users can manage their parameters the same way they manage tests: as Python functions (see the first sketch below). This separates test cases from test functions in an elegant way, encouraging developers to add more cases without decreasing readability. With this new paradigm it becomes ridiculously simple to create tests where several kinds of parameters coexist nicely: datasets can come from files, databases, and simulations; distinct algorithms can be evaluated and compared, with possible variants; and so on. Fixtures can be leveraged by any of these in an intuitive manner.

Finally, pytest-cases can also be used in combination with pytest-harvest to generate tables of scientific results in a reproducible manner (see the second sketch below). Reproducible research projects may therefore wish to use it to replace an ad-hoc benchmark engine: adding datasets and algorithms to the benchmark becomes as easy as creating new Python functions.

This presentation is for Python developers and data scientists at ease with Python, with basic experience of pytest.
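
To make the "cases as Python functions" idea concrete, here is a minimal sketch using the parametrize_with_cases decorator from pytest-cases (the case functions and the tested property are made up for illustration; exact collection behavior may vary across pytest-cases versions):

    from pytest_cases import parametrize_with_cases

    # Each case is a plain Python function returning a dataset.
    # Functions prefixed with "case_" are collected automatically.
    def case_small_list():
        return [1, 2, 3]

    def case_empty_list():
        return []

    # cases="." collects every case_* function from the current module;
    # the test below runs once per case, like classic parametrization,
    # but the data lives in readable, reusable functions.
    @parametrize_with_cases("data", cases=".")
    def test_sum_of_squares_is_nonnegative(data):
        assert sum(x * x for x in data) >= 0

Adding a new dataset to the suite then amounts to writing one more case_* function, with no change to the test itself.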
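
The pytest-harvest combination mentioned above can be sketched in the same spirit, assuming the results_bag and module_results_df fixtures that the pytest-harvest plugin provides (the metric and case functions here are hypothetical):

    from pytest_cases import parametrize_with_cases

    def case_ones():
        return [1.0, 1.0, 1.0]

    def case_ramp():
        return [0.1, 0.5, 0.9]

    # results_bag is a pytest-harvest fixture: anything stored on it
    # is recorded for the current test run.
    @parametrize_with_cases("data", cases=".")
    def test_mean(data, results_bag):
        results_bag.mean = sum(data) / len(data)

    # A final "synthesis" test: module_results_df is a pandas DataFrame
    # with one row per test run above, including the stored metrics.
    def test_synthesis(module_results_df):
        print(module_results_df["mean"])

This is the pattern behind the benchmark-engine claim: each row of the resulting table corresponds to one test run over one case, produced reproducibly by pytest.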