Sliding down the stairs: how correlated latent variables accelerate learning with neural networks / Bardone, Lorenzo; Goldt, Sebastian. - 235:(2024), pp. 3024-3045. (Paper presented at the International Conference on Machine Learning, held in Wien, 21-27 July 2024).

Sliding down the stairs: how correlated latent variables accelerate learning with neural networks

Lorenzo Bardone;Sebastian Goldt
2024-01-01

Abstract

Neural networks extract features from data using stochastic gradient descent (SGD). In particular, higher-order input cumulants (HOCs) are crucial for their performance. However, extracting information from the p-th cumulant of d-dimensional inputs is computationally hard: the number of samples required to recover a single direction from an order-p tensor (tensor PCA) using SGD grows as d^(p−1), which is prohibitive for high-dimensional inputs. This result raises the question of how neural networks extract relevant directions from the HOCs of their inputs efficiently. Here, we show that correlations between latent variables along the directions encoded in different input cumulants speed up learning from higher-order correlations. We show this effect analytically by deriving nearly sharp thresholds for the number of samples required by a single neuron to recover these directions using online SGD from a random start in high dimensions. Our analytical results are confirmed in simulations of two-layer neural networks and unveil a new mechanism for hierarchical learning in neural networks.
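The abstract describes a single neuron recovering a hidden direction with online SGD from a random start in high dimensions. The sketch below is purely illustrative and is not the paper's spiked-cumulant setting: it trains a single tanh neuron with spherical online SGD on a single-index target and tracks the overlap between the student weight w and the hidden direction u. The dimension, learning rate, activation, and target model are assumptions chosen for the demonstration; in the regime studied in the paper, where the relevant direction is carried only by higher-order cumulants, the number of samples needed for this overlap to grow is exactly what the sample-complexity thresholds quantify.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 500              # input dimension (illustrative)
n_steps = 20_000     # online SGD steps; one fresh sample per step
lr = 1.0 / d         # learning rate; the 1/d scaling is a common choice in high dimensions

# Hidden "teacher" direction that the neuron should recover.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)

# Random start for the student neuron, on the unit sphere.
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

def g(h):
    return np.tanh(h)

def g_prime(h):
    return 1.0 - np.tanh(h) ** 2

for step in range(n_steps):
    x = rng.standard_normal(d)            # fresh Gaussian input (online setting)
    y = g(x @ u)                          # label generated along the hidden direction
    pred = g(x @ w)
    # Gradient of the squared loss 0.5 * (pred - y)**2 with respect to w.
    grad = (pred - y) * g_prime(x @ w) * x
    w -= lr * grad
    w /= np.linalg.norm(w)                # project back to the unit sphere (spherical online SGD)
    if step % 2_000 == 0:
        print(f"step {step:>6d}  overlap w.u = {w @ u:+.3f}")

print(f"final overlap w.u = {w @ u:+.3f}")
```

Running the sketch shows the overlap climbing from its random-start value of order 1/sqrt(d) toward 1 once enough fresh samples have been processed, which is the quantity the thresholds in the paper control.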
Year: 2024
Conference: International Conference on Machine Learning
Volume: 235
Pages: 3024-3045
URL: https://proceedings.mlr.press/v235/bardone24a.html
Authors: Bardone, Lorenzo; Goldt, Sebastian
Files in this record:

File: bardone24a.pdf
Access: open access
Type: Editorial Version (PDF)
License: Not specified
Size: 679.25 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11767/143214
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a