
Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup / Goldt, Sebastian; Advani, Madhu; Saxe, Andrew; Krzakala, Florent; Zdeborová, Lenka. - 32:(2019). (Paper presented at the 33rd Conference on Neural Information Processing Systems (NeurIPS), held in Vancouver, Canada, 08-14/12/2019.)

Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup

Goldt, Sebastian; Krzakala, Florent
2019-01-01

Abstract

Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study this phenomenon by analysing the dynamics and the performance of over-parameterised two-layer neural networks in the teacher-student setup, where one network, the student, is trained on data generated by another network, called the teacher. We show how the dynamics of stochastic gradient descent (SGD) is captured by a set of differential equations and prove that this description is asymptotically exact in the limit of large inputs. Using this framework, we calculate the final generalisation error of student networks that have more parameters than their teachers. We find that the final generalisation error of the student increases with network size when training only the first layer, but stays constant or even decreases with size when training both layers. We show that these different behaviours have their root in the different solutions SGD finds for different activation functions. Our results indicate that achieving good generalisation in neural networks goes beyond the properties of SGD alone and depends on the interplay of at least the algorithm, the model architecture, and the data set.
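
To make the setting concrete, the sketch below simulates the setup the abstract describes: an over-parameterised two-layer student trained by online SGD on data labelled by a fixed two-layer teacher, with the generalisation error estimated by Monte Carlo. This is a minimal illustration, not the authors' code; the concrete choices (input dimension N, teacher/student widths M and K, learning rate, tanh activation, i.i.d. Gaussian inputs, and the 1/sqrt(N) scalings) are standard assumptions for this line of work rather than details taken from the paper itself.

```python
# Hypothetical sketch of online SGD in the teacher-student setup (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

N = 500           # input dimension (the ODE description becomes exact as N grows large)
M, K = 2, 4       # teacher width M, over-parameterised student width K > M
lr = 0.2          # learning rate (illustrative)
g = np.tanh       # activation function (illustrative choice)

# Fixed teacher: phi*(x) = sum_m v*_m g(w*_m . x / sqrt(N))
w_star = rng.standard_normal((M, N))
v_star = rng.standard_normal(M)

# Student: phi(x) = sum_k v_k g(w_k . x / sqrt(N)); here both layers are trained
w = 1e-3 * rng.standard_normal((K, N))
v = 1e-3 * rng.standard_normal(K)

def output(weights, second_layer, x):
    """Pre-activations and output of a two-layer network on a single input x."""
    pre = weights @ x / np.sqrt(N)
    return pre, second_layer @ g(pre)

for step in range(200_000):
    x = rng.standard_normal(N)               # fresh i.i.d. Gaussian input at every step (online learning)
    _, y = output(w_star, v_star, x)         # teacher label
    pre, y_hat = output(w, v, x)             # student prediction
    err = y_hat - y                          # error on the quadratic loss 0.5 * err**2
    dg = 1.0 - g(pre) ** 2                   # derivative of tanh at the pre-activations
    w -= lr / np.sqrt(N) * np.outer(err * v * dg, x)   # first-layer SGD step
    v -= lr / N * err * g(pre)                          # second-layer SGD step

# Monte-Carlo estimate of the generalisation error eps_g = 0.5 * E[(phi(x) - phi*(x))^2]
xs = rng.standard_normal((10_000, N))
student = g(xs @ w.T / np.sqrt(N)) @ v
teacher = g(xs @ w_star.T / np.sqrt(N)) @ v_star
print("estimated generalisation error:", 0.5 * np.mean((student - teacher) ** 2))
```

The comparison the abstract refers to, training only the first layer versus training both layers, corresponds in this sketch to freezing v at its initial values (dropping the second-layer update line) versus keeping both update lines.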
2019
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, vol. 32
Goldt, Sebastian; Advani, Madhu; Saxe, Andrew; Krzakala, Florent; Zdeborová, Lenka
Files associated with this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11767/117831
Citations
  • PMC: not available
  • Scopus: 81
  • Web of Science (ISI): 19