Abstract: We investigate the asymptotic behaviour of gradient boosting algorithms when the learning rate converges to zero and the number of iterations is rescaled accordingly. To this end, we introduce a new class of regression trees, which we call $(\beta,K,d)$-regression trees, and we work in a suitable function space that we call the space of tree functions. Our main result is a deterministic limit in the vanishing-learning-rate asymptotic, together with a characterization of this limit as the unique solution of a differential equation in an infinite-dimensional function space.
Joint work with Jean-Jil Duchamps.
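A minimal numerical sketch (not the authors' construction) of the regime studied in the talk: the learning rate nu is sent to zero while the number of boosting iterations is rescaled as T/nu, so that the predictor is followed over a fixed "time horizon" T. The $(\beta,K,d)$-regression trees of the abstract are replaced here by ordinary CART trees, purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)

def boost(X, y, nu, T, max_depth=2):
    """Gradient boosting for squared loss with learning rate nu,
    run for floor(T / nu) iterations (fixed time horizon T)."""
    F = np.zeros(len(y))                      # current predictor on the sample
    for _ in range(int(T / nu)):
        residuals = y - F                     # negative gradient of the squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        F += nu * tree.predict(X)             # F_{m+1} = F_m + nu * f_m
    return F

# As nu -> 0 with the iteration count rescaled by 1/nu, the trained predictors
# are expected to stabilize around a deterministic limit (the talk's main result).
for nu in [0.5, 0.1, 0.02]:
    F = boost(X, y, nu=nu, T=2.0)
    print(f"nu = {nu:4}: training MSE = {np.mean((y - F) ** 2):.4f}")
```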
Hybrid talk: in person at the LMV or online (Zoom link on request).