
Production-time Profiling for Python

Formal Metadata

Title: Production-time Profiling for Python
Number of Parts: 490
License: CC Attribution 2.0 Belgium
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Learn how to scrutinize your Python application in order to optimize it and make it run faster. Getting inside knowledge of how your Python application runs is critical to achieving the best performance. Profiling is a means to achieve this: by gathering all the runtime information available about the execution of your program, you may be able to understand how to optimize it. However, profiling code running in production can be a real challenge, as it requires the profiler to be non-invasive and to have low overhead. Therefore, to profile production services, statistical profiling is the preferred analysis method. By regularly sampling your program's activity, you'll be able to find production code bottlenecks down to the line of code. Profiling services that are running with a real workload ensures that you are collecting valuable data and not guessing at what the performance barrier might be.

This talk explains how it is possible to build a statistical profiler that collects information about CPU time usage, memory allocation, and more, all while respecting the need for low overhead, a usable data export format, and fine granularity. We'll dig into some operating system and CPython internals to understand how to build the best profiler possible.
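The sampling idea can be sketched in pure Python: the snippet below periodically grabs every thread's call stack through CPython's sys._current_frames() and counts how often each stack appears, which is the essence of statistical profiling. This is only an illustration of the approach under discussion, not code from the talk; the SamplingProfiler class, SAMPLE_INTERVAL constant, and busy_work demo are hypothetical names chosen for the example.

import collections
import sys
import threading
import time


SAMPLE_INTERVAL = 0.01  # seconds between samples (about 100 Hz); illustrative value


class SamplingProfiler:
    """Minimal statistical profiler sketch: sample thread stacks at a fixed interval."""

    def __init__(self, interval=SAMPLE_INTERVAL):
        self.interval = interval
        self.samples = collections.Counter()  # call stack -> number of times observed
        self._stop = threading.Event()
        self._thread = None

    def _sample_loop(self):
        # Wake up every `interval` seconds until stop() is called.
        while not self._stop.wait(self.interval):
            # Snapshot the call stack of every thread; this periodic check is
            # what keeps overhead low compared with tracing every function call.
            for thread_id, frame in sys._current_frames().items():
                if thread_id == threading.get_ident():
                    continue  # skip the sampler thread itself
                stack = []
                while frame is not None:
                    code = frame.f_code
                    stack.append(f"{code.co_filename}:{code.co_name}:{frame.f_lineno}")
                    frame = frame.f_back
                self.samples[tuple(reversed(stack))] += 1

    def start(self):
        self._thread = threading.Thread(target=self._sample_loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def report(self, top=5):
        # Stacks that appear in many samples are where CPU time is being spent.
        for stack, count in self.samples.most_common(top):
            print(f"{count:6d}  {' -> '.join(stack)}")


if __name__ == "__main__":
    def busy_work():
        total = 0
        for i in range(2_000_000):
            total += i * i
        return total

    profiler = SamplingProfiler()
    profiler.start()
    busy_work()
    profiler.stop()
    profiler.report()

Stacks that dominate the sample counts mark the hot code paths. Because this sampler runs as an ordinary Python thread, it competes for the GIL and adds measurable overhead, which is presumably why production-grade profilers tend to do the stack walking in native code or sample the process from outside; memory allocation can likewise be sampled statistically, for example with the standard library's tracemalloc module.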