
Walking the Random Forest and boosting the trees

Formal Metadata

Title
Walking the Random Forest and boosting the trees
Title of Series
Number of Parts
132
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Deep Learning is all the rage, but ensemble models are still in the game. With libraries such as the recent and performant LightGBM, the Kaggle superstar XGBoost, or the classic Random Forest from scikit-learn, ensemble models are a must-have in a data scientist's toolbox. They have been proven to deliver good performance on a wide range of problems, and are usually simpler to tune and interpret. This talk focuses on two of the most popular tree-based ensemble models: Random Forest and Gradient Boosting, which rely respectively on bagging and boosting. The talk attempts to build a bridge between the theory of ensemble models and their implementation in Python.
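
As a rough illustration of the two families the abstract names, here is a minimal scikit-learn sketch. It is not taken from the talk itself; the synthetic dataset and the hyperparameter values are placeholder assumptions, chosen only to show the two APIs side by side.

# Minimal sketch (not from the talk): fitting the two ensemble
# families discussed, bagging (Random Forest) and boosting
# (Gradient Boosting), on a synthetic classification dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; any tabular classification task works here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: many deep trees trained independently on bootstrap samples,
# with predictions averaged to reduce variance.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("Random Forest accuracy:", rf.score(X_test, y_test))

# Boosting: shallow trees trained sequentially, each one correcting
# the errors of the ensemble built so far, to reduce bias.
gb = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
gb.fit(X_train, y_train)
print("Gradient Boosting accuracy:", gb.score(X_test, y_test))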