Learning Decision Trees with Gradient Descent - Cornell University

Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability. However, learning a DT from data is a difficult optimization problem, as it is non-convex and non-differentiable. Therefore, common approaches learn DTs using a greedy growth algorithm that minimizes the impurity locally at each internal node. Unfortunately, this greedy procedure can lead to suboptimal trees. In this paper, we present a novel approach for learning hard, axis-aligned DTs with gradient descent. The proposed method uses backpropagation with a straight-through operator on a dense DT representation to jointly optimize all tree parameters. Our approach outperforms existing methods on binary classification benchmarks and achieves competitive results for multi-class tasks.
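The key trick the abstract describes is the straight-through operator: the forward pass uses a hard, non-differentiable split, while the backward pass substitutes the gradient of a smooth relaxation. This is not the paper's GradTree implementation; it is a minimal sketch of that idea on a single axis-aligned split with hypothetical toy data, using a sigmoid as the soft surrogate and manual gradients instead of an autograd framework:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy data: 1-D inputs, class 1 iff x > 0.3.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = (x > 0.3).astype(float)

t = 0.0   # learnable split threshold
lr = 0.5

for _ in range(300):
    z = x - t
    hard = (z > 0).astype(float)   # forward: hard, axis-aligned routing
    soft = sigmoid(z)              # smooth relaxation of the same split
    # Squared loss on the *hard* predictions; its gradient w.r.t. the
    # routing decision is passed "straight through" to the soft surrogate,
    # i.e. we use d(soft)/dt where d(hard)/dt would be zero everywhere.
    grad_pred = 2.0 * (hard - y)
    dsoft_dt = -soft * (1.0 - soft)
    t -= lr * np.mean(grad_pred * dsoft_dt)

accuracy = np.mean(((x - t) > 0).astype(float) == y)
```

Because the hard step has zero gradient almost everywhere, plain backpropagation would never move the threshold; the straight-through substitution is what makes gradient descent on a hard tree possible. The paper applies the same principle jointly to all split thresholds, feature choices, and leaf values of a dense tree representation.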

Authors:

  • Sascha Marton - Institute for Enterprise Systems, University of Mannheim
  • Stefan Lüdtke - ScaDS.AI, University of Leipzig
  • Christian Bartelt - Institute for Enterprise Systems, University of Mannheim
  • Heiner Stuckenschmidt - Data and Web Science Group, University of Mannheim

Read the paper: https://doi.org/10.48550/arXiv.2305.03515
