(Boosted) Decision Tree
- Multiclassifiers; Ensembles and Hybrids; Bagging, Boosting, and Stacking
- Gradient Boosting Machine (GBM)
- Boosting | Wikipedia
- Introduction to Boosted Trees | XGBoost
- What is a Decision Tree? | Daniel Nelson - Unite.ai
- What Is Machine Learning and Why Does It Matter? | Jeremy Fischer - Gear Patrol
A boosted decision tree is an ensemble learning method in which each new tree corrects the errors of the trees before it: the second tree corrects the errors of the first, the third corrects the errors of the first and second, and so on. Predictions are made by the entire ensemble of trees together. For further technical details, see the Research section of this article. When properly configured, boosted decision trees are generally among the easiest methods for achieving top performance on a wide variety of machine learning tasks. However, they are also one of the more memory-intensive learners, and the current implementation holds everything in memory. A boosted decision tree model may therefore be unable to process the very large datasets that some linear learners can handle.
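The sequential error-correction idea above can be sketched in a few lines. This is a minimal illustrative example (not any particular library's implementation): each new one-split tree ("stump") is fit to the residual errors left by the ensemble built so far, and the final prediction sums the contributions of all stumps. All function names here are hypothetical.

```python
def fit_stump(xs, residuals):
    """Find the split threshold and leaf means minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm = sum(left) / len(left)
        rm = sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    # The stump predicts one constant per side of the threshold.
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def boost(xs, ys, n_trees=50, lr=0.1):
    """Sequentially fit stumps: each one targets the current residuals."""
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(n_trees):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    # The whole ensemble makes the prediction, not any single tree.
    return lambda x: sum(lr * s(x) for s in stumps)

# Usage: learn a simple step function from six points.
xs = [0, 1, 2, 3, 4, 5]
ys = [0, 0, 0, 1, 1, 1]
model = boost(xs, ys)
```

Note that the whole training set is kept in memory throughout, which reflects the memory-intensive behavior described above; the learning rate `lr` shrinks each tree's contribution so later trees can keep refining earlier ones.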
Two-Class Boosted Decision Tree