
Exploring Language Mechanisms: The Mass-Count Distinction and The Potts Neural Network / Kulkarni, Ritwik. - (2014 May 29).

Exploring Language Mechanisms: The Mass-Count Distinction and The Potts Neural Network

Kulkarni, Ritwik
2014-05-29

Abstract

The aim of this thesis is to explore language mechanisms from two angles: first, the statistical properties of syntax and semantics, and second, the neural mechanisms that may help explain how the brain learns those statistical properties. In the first part of the thesis (Part A) we focus on a detailed statistical study of the syntax and semantics of the mass-count distinction in nouns. We collected a database of how 1,434 nouns are used with respect to the mass-count distinction in six languages; additional informants characterised the semantics of the underlying concepts. Results indicate only weak correlations between semantics and syntactic usage. Rather than being bimodal, the classification follows a graded distribution that is similar across languages; however, syntactic classes do not map onto each other across languages, nor do they reflect, beyond weak correlations, semantic attributes of the concepts. These findings are in line with the hypothesis that much of the mass/count syntax emerges from language- and even speaker-specific grammaticalisation. Further, in Chapter 3 we test the ability of a simple neural network to learn the syntactic and semantic relations of nouns, in the hope of shedding light on the challenges in modelling the acquisition of mass-count syntax. We show that although a simple self-organising neural network is insufficient to learn a mapping implementing a syntactic-semantic link, it is nevertheless able to extract the concept of 'count', and to some extent that of 'mass', from both the syntactic and the semantic data, without any explicit definition.

The second part of the thesis (Part B) is dedicated to studying the properties of the Potts neural network. The Potts neural network, with its adaptive dynamics, represents a simplified model of cortical mechanisms. Among other cognitive phenomena, it is intended to model language production through the latching behaviour seen in the network. We expect a model of language processing to handle robustly the various syntactic-semantic correlations among the words of a language. With this aim, we test the effect on the storage capacity of the Potts network when the memories stored in it share non-trivial correlations. The increase in interference between stored memories due to these correlations is studied, along with modifications of the learning rule that reduce it. We find that when strongly correlated memories are incorporated in the definition of storage capacity, the network is able to regain its storage capacity at low sparsity. Strong correlations also affect the latching behaviour of the Potts network, leaving it unable to latch from one memory to another; however, latching can be restored by modifying the learning rule. Lastly, we examine another feature of the Potts neural network: the indication that it may exhibit spin-glass characteristics. The network is consistently shown to exhibit multiple stable degenerate energy states other than those of the pure memories. This is tested for different degrees of correlation in the patterns, low and high connectivity, and different levels of global and local noise. We state some of the implications that the spin-glass nature of the Potts neural network may have for language processing.
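To make the ingredients of Part B concrete, the following is a minimal sketch of a Potts attractor network with a Hebbian-like learning rule, written in Python with NumPy. It is not the thesis model: it omits the quiescent state, dilute connectivity, the adaptive dynamics that produce latching, and thermal noise, and all sizes (N, S, P) are illustrative. It only shows how Potts memories can be stored as couplings between unit states and retrieved from a corrupted cue.

    import numpy as np

    rng = np.random.default_rng(0)
    N, S, P = 200, 4, 5                          # units, states per unit, stored patterns

    # Random (uncorrelated) Potts patterns: each unit takes one of S states.
    patterns = rng.integers(0, S, size=(P, N))

    def encode(sigma):
        """Zero-mean one-hot encoding of a state vector, shape (N, S)."""
        return np.eye(S)[sigma] - 1.0 / S

    # Hebbian-like couplings J[i, j, k, l]: reinforce co-occurrence of
    # state k on unit i with state l on unit j across stored patterns.
    J = np.zeros((N, N, S, S))
    for mu in range(P):
        v = encode(patterns[mu])
        J += np.einsum('ik,jl->ijkl', v, v)
    J /= N
    J[np.arange(N), np.arange(N)] = 0.0          # no self-couplings

    def recall(cue, sweeps=10):
        """Zero-temperature asynchronous dynamics from a cued state."""
        sigma = cue.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                h = np.einsum('jkl,jl->k', J[i], encode(sigma))
                sigma[i] = int(np.argmax(h))     # pick the state with largest field
        return sigma

    # Corrupt a quarter of the units of pattern 0, then let the network relax.
    cue = patterns[0].copy()
    flip = rng.choice(N, size=N // 4, replace=False)
    cue[flip] = rng.integers(0, S, size=flip.size)
    out = recall(cue)
    print("fraction of units matching pattern 0:", np.mean(out == patterns[0]))

With uncorrelated patterns the cued memory is recovered almost perfectly (chance level is 1/S). Replacing the random patterns with correlated ones, and compensating in the learning rule, is the starting point for the interference and latching effects studied in the thesis.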
29-May-2014
Supervisor: Treves, Alessandro
Files in this product:

File: 1963_7323_thesis.pdf
Access: open access
Type: Thesis
License: Not specified
Size: 3.81 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11767/4130