Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks / Mastrogiuseppe, Francesca; Ostojic, Srdjan. - In: NEURON. - ISSN 0896-6273. - 99:3(2018), pp. 609-623. [10.1016/j.neuron.2018.07.003]

Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks

Mastrogiuseppe, Francesca; Ostojic, Srdjan
2018-01-01

Abstract

Large-scale neural recordings have established that the transformation of sensory stimuli into motor outputs relies on low-dimensional dynamics at the population level, while individual neurons exhibit complex selectivity. Understanding how low-dimensional computations on mixed, distributed representations emerge from the structure of the recurrent connectivity and inputs to cortical networks is a major challenge. Here, we study a class of recurrent network models in which the connectivity is a sum of a random part and a minimal, low-dimensional structure. We show that, in such networks, the dynamics are low dimensional and can be directly inferred from connectivity using a geometrical approach. We exploit this understanding to determine minimal connectivity required to implement specific computations and find that the dynamical range and computational capacity quickly increase with the dimensionality of the connectivity structure. This framework produces testable experimental predictions for the relationship between connectivity, low-dimensional dynamics, and computational features of recorded neurons.

In Brief

Neural recordings show that cortical computations rely on low-dimensional dynamics over distributed representations. How are these generated by the underlying connectivity? Mastrogiuseppe et al. use a theoretical approach to infer low-dimensional dynamics and computations from connectivity and produce predictions linking connectivity and functional properties of neurons.
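To illustrate the model class described in the abstract, the following is a minimal Python sketch (not the authors' code) of a recurrent network whose connectivity is the sum of a random matrix and a rank-one structure. The specific parameterization (variables g, m, n, the tanh transfer function, and the latent variable kappa) is an assumption based on standard low-rank network formulations; see the paper and the arXiv preprint for the exact model.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the model class described in the
# abstract: connectivity J is a sum of a random matrix and a rank-one
# ("minimal low-dimensional") structure, J = g*chi + m n^T / N.
# The parameter names (g, m, n), the tanh transfer function, and the latent
# variable kappa are assumptions based on standard low-rank formulations.

rng = np.random.default_rng(0)
N = 1000                                   # network size
g = 0.8                                    # strength of the random component
chi = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random part
m = rng.normal(0.0, 1.0, N)                # left structure vector
n = 2.0 * m + rng.normal(0.0, 1.0, N)      # right vector, correlated with m
J = g * chi + np.outer(m, n) / N           # random + rank-one structure

# Rate dynamics dx/dt = -x + J*tanh(x), integrated with forward Euler.
dt, steps = 0.1, 2000
x = rng.normal(0.0, 0.5, N)
kappa = np.empty(steps)                    # activity projected on the structure
for t in range(steps):
    x = x + dt * (-x + J @ np.tanh(x))
    kappa[t] = np.dot(n, np.tanh(x)) / N   # one-dimensional latent variable

# When the overlap between m and n exceeds one (as here), the network is
# expected to settle into one of two nontrivial states, with the population
# activity concentrating along the direction m: the dynamics are effectively
# one-dimensional and readable from the connectivity vectors.
print("final kappa:", kappa[-1])
```

The sketch only demonstrates the qualitative point made in the abstract, namely that the low-dimensional dynamics follow from the structured part of the connectivity; the paper derives this relationship analytically with a mean-field, geometrical analysis.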
Year: 2018
Volume: 99
Issue: 3
Pages: 609-623
Preprint: https://arxiv.org/abs/1711.09672
Authors: Mastrogiuseppe, Francesca; Ostojic, Srdjan
Files in this record:
mastrogiuseppe_neuron_2018.pdf
Description: publisher's PDF
Type: Published version (PDF)
License: Not specified
Size: 3.89 MB
Format: Adobe PDF
Access: not available (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/20.500.11767/148430
Citations
  • PMC: not available
  • Scopus: 232
  • Web of Science: 219