
Mapping of attention mechanisms to a generalized Potts model / Rende, Riccardo; Gerace, Federica; Laio, Alessandro; Goldt, Sebastian. - In: PHYSICAL REVIEW RESEARCH. - ISSN 2643-1564. - 6:2(2024), pp. 1-10. [10.1103/physrevresearch.6.023057]

Mapping of attention mechanisms to a generalized Potts model

Rende, Riccardo; Gerace, Federica; Laio, Alessandro; Goldt, Sebastian
2024-01-01

Abstract

Transformers are neural networks that revolutionized natural language processing and machine learning. They process sequences of inputs, like words, using a mechanism called self-attention, which is trained via masked language modeling (MLM). In MLM, a word is randomly masked in an input sequence, and the network is trained to predict the missing word. Despite the practical success of transformers, it remains unclear what type of data distribution self-attention can learn efficiently. Here, we show analytically that if one decouples the treatment of word positions and embeddings, a single layer of self-attention learns the conditionals of a generalized Potts model with interactions between sites and Potts colors. Moreover, we show that training this neural network is exactly equivalent to solving the inverse Potts problem by the so-called pseudolikelihood method, well known in statistical physics. Using this mapping, we compute the generalization error of self-attention in a model scenario analytically using the replica method.
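As a minimal illustration of the mapping described above (the notation here is assumed for the sketch and is not necessarily that of the paper: J_{ij} denotes couplings between positions, U_{ab} couplings between Potts colors, q the number of colors, L the sequence length, and M the number of training sequences), a generalized Potts model with factorized site-color interactions can be written as

H(s) = -\frac{1}{2} \sum_{i \neq j} J_{ij}\, U_{s_i s_j},

where s = (s_1, \dots, s_L) and each s_i takes one of q colors. The conditional distribution of a single site given the rest is then a softmax,

p(s_i = a \mid s_{\setminus i}) = \frac{\exp\!\left( \sum_{j \neq i} J_{ij} U_{a s_j} \right)}{\sum_{b=1}^{q} \exp\!\left( \sum_{j \neq i} J_{ij} U_{b s_j} \right)},

which is exactly the quantity a masked language modeling objective asks the network to output when site i is masked. Maximizing the sum of log-conditionals over a data set of M sequences,

\max_{J,\, U} \; \sum_{\mu=1}^{M} \sum_{i=1}^{L} \log p\!\left( s_i^\mu \mid s_{\setminus i}^\mu \right),

is the pseudolikelihood estimator for the inverse Potts problem, which is the equivalence with MLM training stated in the abstract.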
Year: 2024
Volume: 6
Issue: 2
Pages: 1-10
Article number: 023057
DOI: https://doi.org/10.1103/PhysRevResearch.6.023057
arXiv: https://arxiv.org/abs/2304.07235
Files in this record:

PhysRevResearch.6.023057.pdf (open access)
Type: Publisher's version (PDF)
License: Creative Commons
Size: 497.5 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11767/143211
Citations
  • Scopus: 5
  • Web of Science: 2