Time as a supervisor: temporal regularity and auditory object learning / Ditullio, Ronald W; Parthiban, Chetan; Piasini, Eugenio; Chaudhari, Pratik; Balasubramanian, Vijay; Cohen, Yale E. - In: FRONTIERS IN COMPUTATIONAL NEUROSCIENCE. - ISSN 1662-5188. - 17:(2023), pp. 1-16. [10.3389/fncom.2023.1150300]

Time as a supervisor: temporal regularity and auditory object learning

Piasini, Eugenio; Balasubramanian, Vijay
2023-01-01

Abstract

Sensory systems appear to learn to transform incoming sensory information into perceptual representations, or "objects," that can inform and guide behavior with minimal explicit supervision. Here, we propose that the auditory system can achieve this goal by using time as a supervisor, i.e., by learning features of a stimulus that are temporally regular. We show that this procedure generates a feature space sufficient to support fundamental computations of auditory perception. Specifically, we consider the problem of discriminating between instances of a prototypical class of natural auditory objects, i.e., rhesus macaque vocalizations. We test discrimination in two ethologically relevant tasks: discrimination in a cluttered acoustic background and generalization to discriminate between novel exemplars. We show that an algorithm that learns these temporally regular features affords discrimination and generalization that are better than, or equivalent to, those of conventional feature-selection algorithms, i.e., principal component analysis and independent component analysis. Our findings suggest that the slow temporal features of auditory stimuli may be sufficient for parsing auditory scenes and that the auditory brain could utilize these slowly changing temporal features.
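The "time as a supervisor" idea described in the abstract amounts to extracting stimulus features that vary as slowly (i.e., as regularly) as possible over time. As a rough illustration only, and not the authors' code, the following minimal NumPy sketch shows one standard way such temporally regular features can be learned, via linear slow-feature extraction; the function name slow_features, the toy signals, and all parameters are illustrative assumptions.

import numpy as np

def slow_features(x, n_features=5):
    """Illustrative linear slow-feature extraction: find projections of x
    (shape: time x dimensions) whose outputs vary as slowly as possible,
    under unit-variance and decorrelation constraints."""
    # Center the signal
    x = x - x.mean(axis=0)
    # Whiten via an eigendecomposition of the covariance matrix
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10                      # drop near-zero directions
    whitener = evecs[:, keep] / np.sqrt(evals[keep])
    z = x @ whitener                          # whitened signal
    # Slowness = small variance of the temporal derivative; the slowest
    # directions are the eigenvectors of cov(dz) with smallest eigenvalues
    dz = np.diff(z, axis=0)
    d_evals, d_evecs = np.linalg.eigh(np.cov(dz, rowvar=False))
    w = d_evecs[:, :n_features]               # eigh sorts eigenvalues ascending
    return z @ w, whitener @ w                # slow outputs and their filters

# Toy usage (illustrative): a slow sinusoid hidden in a fast, mixed signal
t = np.linspace(0, 10, 2000)
source = np.column_stack([np.sin(0.5 * t), np.sin(11 * t)])
mixed = source @ np.random.default_rng(0).normal(size=(2, 2))
y, filters = slow_features(mixed, n_features=1)

In this toy example the recovered one-dimensional output tracks the slow sinusoid (up to sign and scale), mirroring how slowly varying spectrotemporal features could, in principle, be pulled out of a vocalization's time-frequency representation.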
Year: 2023
Volume: 17
Pages: 1-16
Article number: 1150300
DOI: 10.3389/fncom.2023.1150300
Preprint: https://www.biorxiv.org/content/10.1101/2022.11.10.515986v3.abstract
Authors: Ditullio, Ronald W; Parthiban, Chetan; Piasini, Eugenio; Chaudhari, Pratik; Balasubramanian, Vijay; Cohen, Yale E
Files in this record:
fncom-17-1150300 (1).pdf (open access)
Description: publisher's PDF
Type: Published version (PDF)
License: Creative Commons
Size: 1.89 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11767/135630
Citations
  • PMC 3
  • Scopus 0
  • Web of Science 0