Researchers are figuring out how large language models work

August 11, 2024

https://www.economist.com/science-and-technology/2024/07/11/researchers-are-figuring-out-how-large-language-models-work
QT:{{”
A sparse autoencoder is, essentially, a second, smaller neural network that is trained on the activity of an LLM, looking for distinct patterns in activity when “sparse” (ie, very small) groups of its neurons fire together. Once many such patterns, known as features, have been identified, the researchers can determine which words trigger which features. The Anthropic team found individual features that corresponded to specific cities, people, animals and chemical elements, as well as higher-level concepts such as transport infrastructure, famous female tennis players, or the notion of secrecy. They performed this exercise three times, identifying 1m, 4m and, on the last go, 34m features within the Sonnet LLM.
“}}
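
The excerpt above describes the basic recipe: train a second network to reconstruct an LLM's internal activations while penalising it so that only a sparse handful of its features fire at once. The sketch below is a minimal illustration of that idea, assuming PyTorch; random tensors stand in for real LLM activations, and the dimensions and L1 penalty weight are illustrative values, not Anthropic's actual settings.

```python
# Minimal sparse-autoencoder sketch (illustrative hyperparameters, fake data).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        # Encoder maps an activation vector into a wide feature space.
        self.encoder = nn.Linear(d_model, n_features)
        # Decoder reconstructs the original activation from the features.
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, x: torch.Tensor):
        features = F.relu(self.encoder(x))      # non-negative feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

d_model, n_features, l1_coeff = 512, 16384, 1e-3   # made-up sizes for illustration
sae = SparseAutoencoder(d_model, n_features)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)

# Stand-in for activations collected from the LLM over a large corpus.
activations = torch.randn(1024, d_model)

for step in range(100):
    recon, feats = sae(activations)
    # Reconstruction loss keeps the features faithful to the LLM's activity;
    # the L1 penalty pushes most features to zero, so only a "sparse" group
    # of them fires for any given input.
    loss = F.mse_loss(recon, activations) + l1_coeff * feats.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The autoencoder is far smaller than the LLM itself, but its feature layer is typically wider than the activation vector it reads; that expansion is what lets individual features stay sparse, so that interpreting a feature comes down to checking which words or passages make it fire most strongly.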