Is AI making us dumb? No.
MIT researchers recently found that ChatGPT users showed dramatically reduced brain activity compared to unassisted writers. News outlets, of course, pounced on the simple narrative: "AI is making us dumber". Yet this interpretation fundamentally misunderstands what the data actually reveals about intelligence and adaptation.
The experiment's context matters: short, twenty-minute sessions; formulaic SAT essays; participants encountering ChatGPT for the first time, restricted solely to that tool. What the researchers captured wasn't cognitive decline but immediate surrender: novices offloading their mental effort to a powerful system. These weren't aspiring writers seeking mastery but paid participants completing a narrow task. The study design practically guaranteed superficial use.
This distinction is crucial because we're conflating neural activity with cognitive engagement.
Neuroscience has long understood that expertise manifests as neural efficiency, not excess. Chess grandmasters show less brain activity than novices during play. Professional pianists exhibit minimal motor cortex activity where beginners show scattered activation. High brain activation often signals struggle and adaptation, not sophistication.
The study's own citations reveal a more nuanced picture. Yang et al. found that "higher-competence learners engaged more deeply with content through reading-intensive behaviours" when using AI. Strategic users maintained substantive engagement whilst leveraging assistance, achieving genuine cognitive partnership rather than wholesale delegation. They knew what to offload and what to retain. The MIT participants had no incentive to reach this level of engagement, and no pathway to become these high-competence learners.
The concerning patterns in the MIT study (fragmented ownership, an inability to quote one's own work) reflect the shallow engagement of both first contact and low incentives. When humans encounter cognitive augmentation without foundation or proper motivation, they often surrender rather than collaborate. Yet even this immediate capitulation reveals something profound: how readily we redistribute cognitive effort when tools appear.
Just as calculators freed engineers from basic arithmetic and high-level languages freed programmers to tackle complex algorithmic challenges, AI tools promise similar liberation. Not from thinking itself, but from the mechanical aspects that constrain our capacity to solve harder problems. The question isn't whether we'll think less, but whether we'll think better about more ambitious challenges.
The study captures one end of a spectrum between mental efficiency and mental offloading. Testing basic arithmetic with a calculator produces the same low neural firing we see here: someone delegating a simple task to a tool. We don't panic about reduced brain activity when people use phone contacts instead of memorising numbers. We recognise it as sensible resource allocation.
Yang's strategic users and those chess grandmasters occupy the other end of this spectrum. They demonstrate that reduced neural activity can signal sophisticated adaptation, not abandonment. The difference lies in understanding what you're delegating and why.
The study's limitations, of course, don't invalidate its findings; they reveal we're witnessing the messy beginning of a cognitive partnership whose potential we've barely begun to explore. A crude measurement of reduced neural activity doesn't mean we're all going to, or indeed should be required to, surrender our mental faculties to AI tools.