[SFdS] Information from the BFA group
WG Risk - 5 January 2023 - Dr. Julien CHHOR

Dear All,

Our warmest Season's Greetings from the ESSEC CREAR, together with our partners: the ESSEC IDS department, the Institut des Actuaires, the LabEx MME-DII, and the BFA group (SFdS). We thank once again all the speakers of the Fall term for their valuable contributions.
We invite you to view online the program of the WG Risk for the Winter term, coordinated by Olga Klopp: the first seminar will be given by

Dr. Julien CHHOR
Harvard University, USA

Date: Thursday, January 5th at 12:30 pm (Paris) and 6:30 pm (Singapore)

Dual format: ESSEC Paris La Défense (CNIT), Room TBA, and via Zoom, please click here
(Password/Code : WGRisk)

« Benign overfitting and adaptive nonparametric regression »

Benign overfitting is a counter-intuitive phenomenon recently discovered in the deep-learning community. In specific cases, it has been experimentally observed that deep neural networks can perfectly overfit a noisy training dataset while still exhibiting excellent generalization performance on new data points. This goes against the conventional statistical viewpoint that there is a necessary tradeoff between bias and variance. This talk aims to understand benign overfitting in the simplified setting of nonparametric regression. We propose using local polynomials to construct an estimator of the regression function with the following two properties. First, this estimator is minimax-optimal over Hölder classes. Second, it is a continuous function that interpolates the set of observations with high probability. We then propose a second overfitting estimator that attains optimality adaptively to the unknown Hölder smoothness. Our results highlight that, in the nonparametric regression model, interpolation can be fundamentally decoupled from the bias-variance tradeoff.
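The phenomenon described in the abstract can be illustrated with a much simpler interpolating estimator than the one studied in the talk. The sketch below (an assumption for illustration, not the speaker's construction) uses a Nadaraya-Watson estimator with a singular kernel: the weight blows up at zero, so the fit passes through every noisy training label, yet away from the data points it still averages over neighbours and tracks the underlying regression function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations y_i = f(X_i) + noise of a smooth regression function f.
f = lambda x: np.sin(2 * np.pi * x)
n = 200
X = np.sort(rng.uniform(0.0, 1.0, n))
y = f(X) + 0.3 * rng.normal(size=n)

def singular_kernel_estimate(x, X, y, h=0.05, a=0.5):
    """Nadaraya-Watson estimate with singular kernel K(u) = |u|^{-a} 1{|u| <= 1}.

    The singularity at u = 0 gives an (effectively) infinite weight to a
    training point located exactly at x, which forces interpolation.
    """
    u = (x - X) / h
    w = np.where(np.abs(u) <= 1.0,
                 1.0 / np.maximum(np.abs(u), 1e-12) ** a,
                 0.0)
    if w.sum() == 0.0:  # no neighbour within bandwidth: fall back to nearest point
        return y[np.argmin(np.abs(x - X))]
    return np.sum(w * y) / np.sum(w)

# At every training point the estimator essentially returns the noisy label:
# it (over)fits the noise.
train_fit = np.array([singular_kernel_estimate(x, X, y) for x in X])
print("max train error:", np.max(np.abs(train_fit - y)))  # close to 0

# Yet on new points it averages over many observations and stays close to f,
# with an error well below the noise level 0.3.
grid = np.linspace(0.05, 0.95, 50)
pred = np.array([singular_kernel_estimate(x, X, y) for x in grid])
print("test RMSE vs f:", np.sqrt(np.mean((pred - f(grid)) ** 2)))
```

This is only a toy version of the decoupling mentioned in the abstract: interpolation is achieved by the shape of the kernel, while the prediction error is governed by the bandwidth, so fitting the noise exactly does not by itself ruin generalization.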

Kind regards,
Jeremy Heng, Olga Klopp and Marie Kratz
and Riada Djebbar (Singapore Actuarial Society - ERM)

SFdS - Société Française de Statistique
©2023 SFdS