Abstract: Attention plays a fundamental role in both natural and artificial intelligence systems. In deep learning, several attention-based neural network architectures have been proposed to tackle problems in natural language processing (NLP) and beyond, including transformer architectures, which currently achieve state-of-the-art performance on NLP tasks. In this presentation we will:
- identify and classify the most fundamental building blocks (quarks) of attention, both within and beyond the standard model of deep learning;
- identify how these building blocks are used in all current attention-based architectures, including transformers;
- demonstrate how transformers can be effectively applied to new problems in physics, from particle physics to astronomy; and
- present a mathematical theory of attention capacity where, paradoxically, one of the main tools in the proofs is itself an attention mechanism.
Joint work with Roman Vershynin.
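For readers less familiar with the mechanism referenced throughout, the sketch below shows standard scaled dot-product attention, the core operation inside transformer layers. This is a minimal illustrative NumPy implementation, not the taxonomy of attention building blocks or the capacity theory developed in the talk; the function names, shapes, and toy data are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the per-row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (n, d), K: (m, d), V: (m, d_v) -> output of shape (n, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # attention distribution per query
    return weights @ V                  # weighted average of the values

# Toy usage: 3 queries attending over 5 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(5, 4))
V = rng.normal(size=(5, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

In transformer architectures this operation is applied in parallel across multiple heads and composed with learned projections and feed-forward layers; the sketch above isolates only the attention step itself.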