The Jagged Frontier
The most dangerous misconception about artificial intelligence isn't that it will replace us. It's that we misunderstand where it excels and where it fails. We imagine AI capability as a smooth gradient: easy tasks should be easy and hard tasks should be hard. In reality, AI competence forms what researchers call a "jagged frontier", an unpredictable landscape of peaks and valleys that defies our intuitions about intelligence itself.
This jagged topology reveals itself in surprising ways. The same system that writes sophisticated poetry might struggle with basic arithmetic. An AI that passes a legal exam might fail to beat us at a game of noughts and crosses (tic-tac-toe, for my American readers). These paradoxes force us to reconsider what capability actually means in the age of machine learning.
The creative potential of these systems has proved particularly surprising. When ChatGPT scored in the 99th percentile on the Torrance Test of Creative Thinking, it challenged the traditional assumption that only humans are capable of true creativity. In other studies, where humans and AI competed to generate product ideas, the AI produced seven times as many top-rated ideas. This suggests creativity might be less about some sort of "divine" inspiration and more about systematically exploring possibilities.
I recently finished "Co-Intelligence" by Ethan Mollick, in which he draws a compelling parallel with the Wright brothers, who succeeded where others failed by combining different knowledge domains. Their observations of birds merged with their practical experience as bicycle mechanics to unlock flight. AI serves a similar function today, connecting ideas from the entire corpus of human knowledge. The technology doesn't replace human insight but amplifies it, becoming the comprehensive reference the Wright brothers never had.
This understanding transforms how we work with these systems. Traditional approaches position humans as, at best, quality control, but Mollick's framework for the future suggests something richer, of which I am far more fond. Rather than just catching errors, we become conductors who orchestrate intelligent systems in pursuit of a goal, weaving machine competence with human judgment and values. The relationship evolves from mere tool use into genuine collaboration.
AI generates options and identifies patterns at scale. Humans contribute something different but equally vital. We spot the crucial detail, recognise emotional resonance, and grasp implications about the context of our particular goal that machine learning might miss. Our value lies not in competing with this ever-advancing computational power but in providing the wisdom that transforms raw computation into meaningful output.
People rightly worry about AI flooding our world with mediocre content, yet this misses the deeper opportunity. When we shift from viewing AI as a production machine to seeing it as a thinking partner, the dynamic changes entirely. The question becomes not how much we can generate, but how deeply we can explore ideas that matter. We can all now explore complex ideas in ways we never could before, a transformation comparable only to the dawn of the computer and, later, the internet.
Critics might also rightly argue that future AI systems will work without humans at all. Why worry about being in the loop when we'll be obsolete? However, this fatalistic view robs us of the agency to use these tools as they exist today. Waiting for some hypothetical future where AI needs no human input means missing the immediate benefits of collaboration. The jagged frontier exists now, and it isn't going anywhere anytime soon.
There's a particular hubris in dismissing AI's creative potential, a failure to recognise that these systems have absorbed more human knowledge than any individual could comprehend in a thousand lifetimes. Just as we've always stood on the shoulders of books (as I am arguably doing here with Co-Intelligence), mentors, and the collective wisdom of the internet, we now have access to a different kind of intelligence. One that doesn't replace human insight but amplifies it in ways we're only beginning to understand.
The transformation of work through AI isn't a question of if, but how we choose to engage with this jagged frontier. Not as adversaries competing for relevance, but as collaborators exploring uncharted territory. The future belongs not to human or machine alone, but to the unexpected possibilities we create together.