What Is Claude? Anthropic Doesn’t Know, Either | The New Yorker
https://www.newyorker.com/magazine/2026/02/16/what-is-claude-anthropic-doesnt-know-either
QT:{{” When the model predicts the next word, it is not doing so just on the basis of the words that came before. It is also “keeping in mind” all the words that might plausibly come after. It predicts the immediate future in the light of its predictions of the more distant future. Anthropic’s techniques verify this. When Batson clicked on the words “grab it,” at the end of the prompt, the network lit up with possibilities for not only the next word (“His”) but also those on the more distant horizon—the endgame of “habit” or “rabbit.” Batson compared Claude to a veteran backpacker on the Appalachian Trail: “Experienced through-hikers know to mail themselves peanut butter at some further stage. What the model is doing is like mailing itself the peanut butter of ‘rabbit.’ ”
“}}
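A loose way to picture the claim, with made-up numbers and not Anthropic's actual interpretability technique: if a model scores whole continuations, the score of the immediate next word falls out of the futures that begin with it, so the choice of "His" is already shaped by the "habit" and "rabbit" endgames.

```python
from collections import defaultdict

# Hypothetical continuation probabilities after the prompt "...grab it."
# Each entry: (a full continuation, its probability). All values invented
# for illustration.
continuations = [
    (("His", "hunger", "was", "a", "habit"), 0.30),
    (("His", "paws", "closed", "on", "the", "rabbit"), 0.45),
    (("The", "wind", "took", "it"), 0.25),
]

def next_token_scores(paths):
    """Marginalize over futures: a next token's score is the total
    probability of every continuation that starts with it."""
    scores = defaultdict(float)
    for tokens, p in paths:
        scores[tokens[0]] += p
    return dict(scores)

scores = next_token_scores(continuations)
print(scores)
# "His" dominates because both the habit and the rabbit endgames begin
# with it -- the model has, in effect, mailed itself the peanut butter.
```

This is only a sketch of the idea of lookahead; the article's point is that the network's internal activations, not an explicit enumeration of continuations, carry these future-word features.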