Learning from higher-order statistics, efficiently: hypothesis tests, random features, and neural networks / Székely, E.; Bardone, L.; Gerace, F.; Goldt, S. - (2024). (Paper presented at NeurIPS 2024, the Thirty-Eighth Annual Conference on Neural Information Processing Systems, held in Vancouver, Canada, 10-15 December 2024).

Learning from higher-order statistics, efficiently: hypothesis tests, random features, and neural networks

L. Bardone; F. Gerace; S. Goldt
2024-01-01

Abstract

Neural networks excel at discovering statistical patterns in high-dimensional data sets. In practice, higher-order cumulants, which quantify the non-Gaussian correlations between three or more variables, are particularly important for the performance of neural networks. But how efficient are neural networks at extracting features from higher-order cumulants? We study this question in the spiked cumulant model, where the statistician needs to recover a privileged direction or "spike" from the order-p ≥ 4 cumulants of d-dimensional inputs. We first discuss the fundamental statistical and computational limits of recovering the spike by analysing the number of samples n required to strongly distinguish between inputs from the spiked cumulant model and isotropic Gaussian inputs. Existing literature established the presence of a wide statistical-to-computational gap in this problem. We deepen this line of work by finding an exact formula for the likelihood ratio norm which proves that statistical distinguishability requires n ≳ d samples, while distinguishing the two distributions in polynomial time requires n ≳ d² samples for a wide class of algorithms, i.e. those covered by the low-degree conjecture. Numerical experiments show that neural networks do indeed learn to distinguish the two distributions with quadratic sample complexity, while "lazy" methods like random features are no better than random guessing in this regime. Our results show that neural networks extract information from higher-order correlations in the spiked cumulant model efficiently, and reveal a large gap in the amount of data required by neural networks and random features to learn from higher-order cumulants.
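To make the hypothesis-testing setup concrete, the following is a minimal Python sketch of the kind of pair of distributions involved: isotropic Gaussian inputs versus inputs whose covariance is still exactly isotropic but whose fourth-order cumulant carries a hidden spike. The Rademacher-latent construction, the dimensions, and all function names below are illustrative assumptions, not the exact spiked cumulant model analysed in the paper.

```python
# Minimal sketch (illustrative, not the paper's exact construction): inputs whose
# covariance is exactly isotropic but whose fourth-order cumulant carries a spike u.
# Along u the coordinate is Rademacher (+-1, unit variance, excess kurtosis -2);
# orthogonal to u it is standard Gaussian, so only cumulants of order >= 4 differ
# from those of an isotropic Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def sample_isotropic(n, d):
    """Null hypothesis: x ~ N(0, I_d)."""
    return rng.standard_normal((n, d))

def sample_spiked_cumulant(n, d, u):
    """Toy spiked-cumulant inputs: Rademacher latent along u, Gaussian elsewhere."""
    z = rng.standard_normal((n, d))
    z -= np.outer(z @ u, u)              # project out the spike direction
    s = rng.choice([-1.0, 1.0], size=n)  # non-Gaussian latent, unit variance
    return z + np.outer(s, u)

d, n = 64, 20_000
u = rng.standard_normal(d)
u /= np.linalg.norm(u)

x0 = sample_isotropic(n, d)
x1 = sample_spiked_cumulant(n, d, u)

def excess_kurtosis(proj):
    """Empirical fourth cumulant of a 1-d projection."""
    return np.mean(proj**4) - 3 * np.mean(proj**2) ** 2

# Second-order statistics match; the fourth-order cumulant along u does not.
print("covariance gap            :", np.linalg.norm(np.cov(x1.T) - np.eye(d)) / d)
print("kurtosis along u (Gauss)  :", excess_kurtosis(x0 @ u))   # ~ 0
print("kurtosis along u (spiked) :", excess_kurtosis(x1 @ u))   # ~ -2
```

Keeping the latent variance at one makes the covariance exactly isotropic, so any test based on second-order statistics alone cannot separate the two samples; the signal appears only in cumulants of order four and above, which is the regime the abstract discusses.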
2024
Advances in Neural Information Processing Systems 37
https://openreview.net/pdf?id=uHml6eyoVF
Székely, E.; Bardone, L.; Gerace, F.; Goldt, S.
Files in this item:

File: 17639_Learning_from_higher_ord.pdf
Access: open access
Type: Publisher's version (PDF)
License: Creative Commons
Size: 2.12 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11767/143215