QT:{{"
Users ask questions, prompting the A.I. to produce individual units of intelligence called “tokens.” A token might be a small square of pixels or a fragment of a word. To write a college term paper, an A.I. might produce about five thousand tokens, consuming enough electricity to run a microwave oven at full power for about three minutes. As A.I. fields increasingly complex requests—for video, for audio, for therapy—the need for computing power will increase many times over.
Multiply that by the more than eight hundred million people who use ChatGPT every week, and the data-center explosion makes sense. ChatGPT is now more popular than Wikipedia.
…
Data centers “are perhaps bigger, by an order of magnitude, than anything we’ve connected to the grid before,” he said. “If you think about the city of Philadelphia, its load is about one gigawatt. Now imagine adding one-gigawatt-sized data centers to the grid, and not just one, but multiples of them.”
…
The modern approach to A.I. development has been to vacuum up any online data available—including audio, video, practically all published work in English, and more than three billion web pages—and let lawyers sort through the mess.
But there is now talk of a data shortage. There are thought to be about four hundred trillion words on the indexed internet, but, as the OpenAI co-founder Andrej Karpathy has noted, much of that is “total garbage.” High-quality text is harder to find. If trends continue, researchers say, A.I. developers could exhaust the usable supply of human text between 2026 and 2032.
"}}
Witt, S. (2025, October 27). Inside the data centers that train A.I. and drain the electrical grid. The New Yorker.
https://www.newyorker.com/magazine/2025/11/03/inside-the-data-centers-that-train-ai-and-drain-the-electrical-grid
Nice numbers on energy usage
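Quick sanity check on those figures. A minimal sketch, assuming (these are my assumptions, not the article's) a microwave at full power draws about 1 kW, and, purely hypothetically, that each weekly user generates one term paper's worth of tokens per week:

```python
# Back-of-envelope check on the article's energy figures.
# Article's numbers: ~5,000 tokens per term paper, ~3 minutes of
# microwave-equivalent electricity per paper, 800+ million weekly users.
# My assumption: a microwave at full power draws ~1.0 kW.

MICROWAVE_KW = 1.0          # assumed full-power draw (not from the article)
MINUTES = 3                 # article: ~3 minutes per term paper
TOKENS_PER_PAPER = 5_000    # article: ~5,000 tokens per paper
WEEKLY_USERS = 800_000_000  # article: 800+ million weekly users

kwh_per_paper = MICROWAVE_KW * MINUTES / 60             # 0.05 kWh per paper
wh_per_token = kwh_per_paper * 1000 / TOKENS_PER_PAPER  # 0.01 Wh per token

# Hypothetical scaling: if every weekly user generated one paper's
# worth of tokens, total weekly energy in gigawatt-hours:
weekly_gwh = kwh_per_paper * WEEKLY_USERS / 1_000_000

print(f"{wh_per_token:.3f} Wh per token")
print(f"{weekly_gwh:.0f} GWh per week under those assumptions")
```

Under those assumptions that works out to roughly 0.01 Wh per token and about 40 GWh per week, which gives a feel for why multi-gigawatt data centers (the Philadelphia comparison above) are on the table.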