Powerful tests and reproducible benchmarks with `pytest-cases`

Formal Metadata

Title
Powerful tests and reproducible benchmarks with `pytest-cases`
Title of Series
Number of Parts
115
Author
Contributors
License
CC Attribution - NonCommercial - ShareAlike 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
pytest is undoubtedly the most popular test framework for Python. Its fixture and parametrization mechanisms, as well as its detailed hook API and vibrant plugin ecosystem, make it a must-know for any developer wishing to create quality software. Some key limitations in its engine and API, however, prevent users from truly unleashing their testing scenarios. Creating tests with complex parametrization is complicated, and users exploring this direction may lose the legendary elegance and maintainability of pytest tests along the way.

pytest-cases extends pytest so that users can manage their parameters the same way they manage tests: as Python functions. This separates test cases from test functions in an elegant way, encouraging developers to add more cases without decreasing readability. With this new paradigm it becomes ridiculously simple to create tests where several kinds of parameters coexist nicely: datasets can come from files, databases, or simulations; distinct algorithms can be evaluated and compared, with possible variants; and so on. Fixtures can be leveraged by any of these in an intuitive manner.

Finally, pytest-cases can also be used in combination with pytest-harvest to generate scientific results tables in a reproducible manner. Reproducible research projects may therefore wish to use it to replace an ad-hoc benchmark engine: adding datasets and algorithms to the benchmark becomes as easy as creating new Python functions. This presentation is for Python developers and data scientists at ease with Python, with basic experience of pytest.
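As a rough illustration of the paradigm the abstract describes, a minimal sketch (not taken from the talk; the case and test names here are invented, and pytest-cases must be installed) might look as follows. Case functions hold the test data, while the test function stays generic:

```python
# Minimal sketch of the pytest-cases paradigm (hypothetical example).
# Requires: pip install pytest pytest-cases
from pytest_cases import parametrize_with_cases


# Each "case" is a plain Python function; the default prefix is "case_".
def case_from_literal():
    return [1, 2, 3]


def case_from_simulation():
    # Stand-in for data produced by a simulation
    return [i * i for i in range(10)]


# cases="." collects all case_* functions from the current module.
@parametrize_with_cases("data", cases=".")
def test_mean_is_nonnegative(data):
    assert sum(data) / len(data) >= 0
```

Adding a new dataset then amounts to writing one more `case_*` function; no test code changes.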
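The benchmark use case mentioned at the end can be sketched by combining the two plugins: pytest-harvest's `results_bag` fixture stores per-test results, and its `module_results_df` fixture gathers them into a pandas DataFrame. Again, this is a hypothetical sketch under the assumption that pytest-cases, pytest-harvest, and pandas are installed; all dataset and algorithm names are invented:

```python
# Hypothetical benchmark sketch combining pytest-cases and pytest-harvest.
# Requires: pip install pytest pytest-cases pytest-harvest pandas
from pytest_cases import parametrize_with_cases


# Datasets and algorithms are both ordinary Python functions.
def data_ramp():
    return [float(i) for i in range(100)]


def data_constant():
    return [1.0] * 100


def algo_sum():
    return sum


def algo_max():
    return max


@parametrize_with_cases("data", cases=".", prefix="data_")
@parametrize_with_cases("algo", cases=".", prefix="algo_")
def test_benchmark(algo, data, results_bag):
    # results_bag is a fixture provided by the pytest-harvest plugin;
    # anything stored on it is collected for the synthesis below.
    results_bag.score = algo(data)


def test_synthesis(module_results_df):
    # module_results_df (pytest-harvest) holds one row per test run above,
    # with its parameters and stored results: a reproducible results table.
    # It runs last because pytest executes tests in file order by default.
    print(module_results_df)
```

Growing the benchmark then means adding `data_*` or `algo_*` functions, which is the workflow the abstract advertises.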