
Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed / Refinetti, M.; Goldt, S.; Krzakala, F.; Zdeborova, L. - 139:(2021), pp. 8936-8947. (Paper presented at the International Conference on Machine Learning, 18-24 July 2021, Virtual.)

Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed

Goldt S.; Krzakala F.; Zdeborova L.
2021-01-01

Abstract

A recent series of theoretical works showed that the dynamics of neural networks with a certain initialisation are well-captured by kernel methods. Concurrent empirical work demonstrated that kernel methods can come close to the performance of neural networks on some image classification tasks. These results raise the question of whether neural networks only learn successfully if kernels also learn successfully, despite neural networks being more expressive. Here, we show theoretically that two-layer neural networks (2LNN) with only a few neurons can beat the performance of kernel learning on a simple Gaussian mixture classification task. We study the high-dimensional limit, i.e. where the number of samples is linearly proportional to the dimension, and show that while small 2LNN achieve near-optimal performance on this task, lazy training approaches such as random features and kernel methods do not. Our analysis is based on the derivation of a closed set of equations that track the learning dynamics of the 2LNN and thus allow us to extract the asymptotic performance of the network as a function of the signal-to-noise ratio and other hyperparameters. Finally, we illustrate how over-parametrising the neural network leads to faster convergence, but does not improve its final performance.
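To make the comparison described in the abstract concrete, the following Python sketch samples an XOR-like Gaussian mixture, trains a small two-layer ReLU network with online SGD, and fits a random-features ridge classifier whose first layer is frozen at random, a common stand-in for lazy training and kernel methods. This is not the authors' code: the cluster geometry, network size, and all hyperparameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative task: four Gaussian clusters in d dimensions with an XOR-like
# label assignment (an assumption for this sketch, not the paper's exact setup).
d, n_train, n_test, snr = 100, 4000, 4000, 3.0
e1, e2 = np.eye(d)[0], np.eye(d)[1]
centres = np.stack([snr * e1, -snr * e1, snr * e2, -snr * e2])
labels = np.array([1.0, 1.0, -1.0, -1.0])

def sample(n):
    c = rng.integers(0, 4, size=n)                 # cluster index per sample
    return centres[c] + rng.standard_normal((n, d)), labels[c]

X, y = sample(n_train)
Xt, yt = sample(n_test)

# Small two-layer ReLU network (K hidden units) trained by online SGD on the squared loss.
K, lr, steps = 4, 0.1, 200_000
W = rng.standard_normal((K, d)) / np.sqrt(d)       # first-layer weights
v = rng.standard_normal(K) / np.sqrt(K)            # second-layer weights

for _ in range(steps):
    i = rng.integers(n_train)
    x, target = X[i], y[i]
    h = np.maximum(W @ x, 0.0)                     # hidden-layer activations
    err = v @ h - target
    grad_v = err * h
    grad_W = err * np.outer(v * (h > 0), x)
    v -= lr / d * grad_v
    W -= lr / d * grad_W

pred_nn = np.sign(np.maximum(Xt @ W.T, 0.0) @ v)
print("2LNN test error:", np.mean(pred_nn != yt))

# Lazy baseline: fixed random ReLU features followed by ridge regression.
p, lam = 2000, 1e-2
F = rng.standard_normal((p, d)) / np.sqrt(d)       # frozen random first layer
phi, phit = np.maximum(X @ F.T, 0.0), np.maximum(Xt @ F.T, 0.0)
a = np.linalg.solve(phi.T @ phi + lam * np.eye(p), phi.T @ y)
print("random-features test error:", np.mean(np.sign(phit @ a) != yt))

Comparing the two printed test errors mirrors, at a small and purely illustrative scale, the kind of comparison between trained 2LNN and lazy-training approaches that the paper carries out analytically.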
Year: 2021
Conference: International Conference on Machine Learning
Volume: 139
Pages: 8936-8947
Preprint: https://arxiv.org/abs/2102.11742
Authors: Refinetti, M.; Goldt, S.; Krzakala, F.; Zdeborova, L.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11767/135751
Citations
  • PubMed Central: ND
  • Scopus: 25
  • Web of Science: 0