Lookahead mechanisms in decision trees can produce better predictions

Image by Steve Buissinne from Pixabay

TL;DR: I show that decision trees with a single-step lookahead mechanism can outperform standard, greedy decision trees (no lookahead). No overfitting or lookahead pathology is observed in the sample dataset.

Suppose we are trying to predict whether a potential job candidate will be successful in the role.
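The preview cuts off before the implementation, so here is a rough sketch of what a single-step lookahead split means, under my own assumptions (integer class labels, a Gini criterion, exhaustive threshold search): each candidate split is scored by the best impurity achievable one level deeper in its children, rather than by the children's own impurity as a greedy tree would do.

```python
import numpy as np

def gini(y):
    """Gini impurity; assumes y contains non-negative integer class labels."""
    if len(y) == 0:
        return 0.0
    p = np.bincount(y) / len(y)
    return 1.0 - np.sum(p ** 2)

def weighted_child_impurity(X, y, feature, threshold):
    """Size-weighted impurity of the two children produced by one split."""
    mask = X[:, feature] <= threshold
    n = len(y)
    return (mask.sum() / n) * gini(y[mask]) + ((~mask).sum() / n) * gini(y[~mask])

def best_greedy_impurity(X, y):
    """Lowest weighted impurity reachable with a single (greedy) split."""
    best = gini(y)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            best = min(best, weighted_child_impurity(X, y, f, t))
    return best

def best_split_with_lookahead(X, y):
    """Pick the split that minimises impurity one level deeper (single-step lookahead)."""
    best_score, best_split = np.inf, None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            mask = X[:, f] <= t
            n = len(y)
            # Score each child by the best split available *inside* it,
            # instead of by its own impurity (the greedy criterion).
            score = (mask.sum() / n) * best_greedy_impurity(X[mask], y[mask]) \
                  + ((~mask).sum() / n) * best_greedy_impurity(X[~mask], y[~mask])
            if score < best_score:
                best_score, best_split = score, (f, t)
    return best_split, best_score
```

This is only an illustration of the mechanism, not the article's code; a full tree would apply the same selection recursively and add the usual stopping rules.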


Getting Started

Can sales of vanilla ice cream overtake chocolate?

Image by Nicky from Pixabay

Table of contents:

  • Introduction
  • Problem Statement
  • Data preparation
  • Wrong method 1 — Independent simulation (parametric)
  • Wrong method 2 — Independent simulation (non-parametric)
  • Method 1 — Multivariate distribution
  • Method 2 — Copulas with marginal distributions
  • Method 3 — Simulating historical combinations of sales growth
  • Method 4 — Decorrelating store sales growth using PCA

Monte Carlo simulation is a great forecasting tool for sales, asset returns, project ROI, and more.

In a previous article, I provide a practical introduction to how Monte Carlo simulations can be used in a business setting to predict a range of possible business outcomes and their associated probabilities.

In this…
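The full walkthrough sits behind the preview, but as a minimal sketch of the "multivariate distribution" approach (Method 1), with entirely hypothetical growth figures and starting sales levels, one can fit a joint normal distribution to historical monthly growth for the two flavours and simulate a year forward while preserving their correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical monthly sales growth (%) for chocolate and vanilla.
hist_growth = rng.multivariate_normal([1.0, 1.5], [[4.0, 2.5], [2.5, 4.0]], size=36)

# Fit a multivariate normal to keep the correlation between the two flavours,
# then simulate 12 months of growth, 10,000 times.
mu, cov = hist_growth.mean(axis=0), np.cov(hist_growth, rowvar=False)
sims = rng.multivariate_normal(mu, cov, size=(10_000, 12))  # (runs, months, flavours)

# Compound simulated growth from hypothetical current sales levels.
start = np.array([100.0, 80.0])  # chocolate, vanilla
end_sales = start * np.prod(1 + sims / 100, axis=1)

# Probability that vanilla overtakes chocolate within the simulated year.
p_overtake = (end_sales[:, 1] > end_sales[:, 0]).mean()
print(f"P(vanilla > chocolate after 12 months) ~ {p_overtake:.1%}")
```

The other methods in the list above swap out the joint-normal assumption (copulas, resampled historical combinations, PCA decorrelation) while keeping the same simulate-and-count structure.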


Crossover/recombination oversampling adds novelty to a dataset and can score well on classification metrics vs. SMOTE and random oversampling

Image by liyuanalison at Pixabay

TL;DR — There are many ways to oversample imbalanced data, other than random oversampling, SMOTE, and its variants. In a classification dataset generated using scikit-learn’s make_classification default settings, samples generated using crossover operations outperform SMOTE and random oversampling on the most relevant metrics.

  • Introduction
  • Dataset preparation
  • Random oversampling and SMOTE
  • Crossover oversampling
  • Evaluation of performance metrics
  • Conclusion

Many of us have been in the situation of working on a predictive model with an imbalanced dataset.

The most popular approaches to handling the imbalance include:

  • Increasing class weights for the underrepresented class(es)
  • Oversampling techniques
  • Undersampling techniques
  • Combinations of over and under…


Assess probabilities of various business outcomes

Photo by Mark de Jong on Unsplash

Monte Carlo simulation is a computational technique with a wide range of uses, from solving difficult mathematical problems to risk management.

We will go through two examples that demonstrate how Monte Carlo simulations can help you quantify the risks in your next project or business decision.

Suppose you have an innovative product that you have been selling for the past year.
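The worked examples continue beyond this preview; as a minimal illustration with made-up numbers, the following simulates next year's profit for such a product under uncertain demand and unit cost, and reports the probability of a loss:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Hypothetical assumptions: demand is lognormal around last year's 10,000
# units, unit cost is uncertain, selling price and fixed costs are known.
demand = rng.lognormal(mean=np.log(10_000), sigma=0.25, size=n_sims)
unit_cost = rng.normal(loc=6.0, scale=0.8, size=n_sims)
price, fixed_costs = 9.0, 25_000

profit = demand * (price - unit_cost) - fixed_costs

print(f"Expected profit: {profit.mean():,.0f}")
print(f"P(loss): {(profit < 0).mean():.1%}")
print(f"5th percentile of profit: {np.percentile(profit, 5):,.0f}")
```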


Model Interpretability

Tree-based ensembles and other popular algorithms often produce counter-intuitive predictions when left unchecked

Photo by Jose Vega from Pexels

  • Intro to model controllability
  • Preparing a sample dataset (House Sales in King County, USA)
  • Finding the model with the top cross-validation score (CatBoost)
  • Linear model’s outperformance in sanity checks
  • Conclusion

Gradient boosted trees have been used to win many Kaggle competitions. It is no surprise that, for most tabular datasets you work with, XGBoost or another boosted-tree implementation is likely to be the model with the best cross-validation score on your metric(s).

Question — How many times have you deployed a gradient boosted trees model with a supposedly good cross-validation score, but your…
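The full comparison is cut off here; as a sketch of the cross-validation setup the article describes, assuming the King County house-sales CSV is available locally (the path and feature subset below are placeholders, not the article's exact choices):

```python
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Placeholder path and features; the article uses the
# "House Sales in King County, USA" dataset from Kaggle.
df = pd.read_csv("kc_house_data.csv")
features = ["sqft_living", "bedrooms", "bathrooms", "grade", "yr_built"]
X, y = df[features], df["price"]

models = {
    "CatBoost": CatBoostRegressor(verbose=0, random_state=0),
    "Ridge": Ridge(alpha=1.0),
}

# Compare 5-fold cross-validation R^2 before moving on to sanity checks
# (e.g. whether predicted price moves sensibly with living area).
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```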

Bassel Karami

Leading a data science team building retail analytics for shopping malls in the MENA region. MSc Econometrics | CFA, FRM, and CMA.
