It's me!

Karl Daniel

Tales of a Vibe Coder

I've been on a journey to explore and understand the fate of engineers, indulging myself in a future where traditional software developers like myself no longer need to comprehend the code they generate, fully relinquishing responsibility for implementation to the AI.

Alas. That future does not yet exist, nor do I see it being a simple inevitability.

Having now forked out a few hundred pounds on AI tools, including the very impressive Claude Code (other tools such as Codex and Friday are also available), I'm here to give you a rundown on what works, what doesn't, and why you should probably be using AI more - especially as an engineer, even if that doesn't mean it's going to let you play the latest Doom game while your AI agent picks up the day job.

(Obligatory Notice: Given AI's breakneck development pace, some of these issues might be fixed by next Tuesday)

Eager to Please

As an engineer, I'm naturally sceptical about embracing a fully agentic approach to writing code - however, my interactions with Claude Code (plus various MCP integrations) were incredibly impressive, especially compared with some of my early experiences with Copilot-like assistants.

Thanks to the backing of Anthropic's latest models, particularly Opus 4, Claude Code is incredibly good at understanding the wider context of the codebase. What really elevates this capability, though, are the tools it has access to. With full control over the CLI, it can navigate directories and run commands, gathering the relevant context to implement changes effectively.

It's able to stand up basic UIs within minutes, implement functions and even entire features in some cases, all in the blink of an eye. Powerful stuff. Maybe, I thought, vibe coding really does work?

Unfortunately, not quite. The issue is that this rapid generation often comes at the expense of architectural decision-making. AI in its current form tends to be all too happy to comply without much resistance, which makes it an excellent tool but not always a great engineer.

Vault as Lunchbox

There are a number of common missteps that I noticed from my experience. I'm going to skip over the obvious mention of hallucinations and generated errors, which are often as easily corrected as they are produced. Instead, I want to focus on the fundamental flaws AI needs to overcome to become a fully autonomous coder.

Firstly, AI tends to lean towards verbosity: it finds a solution, a bit like using a vault as a lunchbox is a solution - sure, it'll keep your sandwich secure, but you're going to throw out your back carrying it around. Endless amounts of code seem to come at the expense of principles such as DRY.

Indeed, due to current limitations on the context window, the AI will often need to compact its conversation history, meaning it can fail to clean up previous changes it's made, leading to yet more bloat and verbosity. I've seen an AI vibe itself into a monolithic JS file so large it couldn't fit properly inside its own context window.

AI often takes such a solution-first approach and runs with it, seeming keen to reinvent the wheel rather than reuse robust packages that already exist - in one example I was working on, it went so far as to invent an entire non-working OCR solution.

AI Gibberish Generator

Thanks, I guess?

I should be quick to point out here that the guiding hand of an experienced engineer could easily have kept this AI on track, but I'm just here for the vibes right now... and it feels bad man. Nevertheless, I persevered.

Context Monopoly

Finally, the most fundamental flaw with vibe coding requires an understanding of how these AI models operate. LLMs like ChatGPT and Claude are auto-regressive: they predict the next token in a sequence based only on what came before it. This distinguishes them from Masked Language Models, which can predict missing tokens using the full context around them.

This means that in order to leverage the most value from an auto-regressive LLM, you need to be able to provide it with sufficient context, such that it can effectively sequence the tokens (code) that you want.
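The auto-regressive point is easier to see in miniature. Here's a toy sketch - the hard-coded bigram table is obviously a stand-in for a real model, which predicts with a neural network rather than a lookup, but the left-to-right loop is the same shape:

```python
# Toy illustration of auto-regressive generation: each token is chosen
# using only the tokens that came *before* it, never the ones after.

BIGRAMS = {
    "the": "button",
    "button": "should",
    "should": "be",
    "be": "red",
}

def generate(prompt: str, max_new_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        # The prediction can only condition on prior context.
        next_token = BIGRAMS.get(tokens[-1])
        if next_token is None:
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("make the"))  # make the button should be red
```

A masked language model, by contrast, could fill in a blank in the middle of that sentence using the words on both sides; the generator above never looks ahead.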

Context however, is still our domain - humans maintain a monopoly on the subjectivity of value (at least for now). Sure, the AI will dutifully create your red button, but it doesn't understand why red matters more than blue for your purpose. That 'why' is the context only we can provide.

At the danger of becoming as verbose here as the AI models I'm referencing, the principal issue this causes is that the less you understand about the code you are writing, the harder it is to provide the necessary context to direct the AI effectively. How can you provide effective context if you don't understand the thing you are giving context about?

Guardrails vs Chaos

Early on, pasting error logs to the AI might be enough. However, as complexity grows, it needs explicit guidance: 'Don't put everything in one file. Don't build OCR from scratch. Use existing packages.'* Individual missteps are manageable with oversight, but their collective damage can be catastrophic. As engineers, we take these interventions for granted - but it's precisely this guardrail experience that unlocks AI's true value.

*(I'm being intentionally brief here)
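One way to make this kind of guidance persistent, rather than re-typing it every session, is project-level instructions - Claude Code, for instance, reads a CLAUDE.md file from the repository root. The file name is Claude Code's real convention; the specific rules below are just my own illustrative examples:

```markdown
# Guardrails for the AI agent working in this repo

- Prefer well-maintained existing packages over hand-rolled implementations
  (e.g. use an established OCR library rather than building OCR from scratch).
- Keep modules small; don't accumulate logic into a single monolithic file.
- Follow DRY: extract shared helpers instead of duplicating code.
- Clean up any code made redundant by your change before finishing.
```

Encoding the guardrails once up front is exactly the kind of engineering judgment the AI can't supply for itself.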

This guardrail is what separates a viber saying 'fix this or you go to jail' whilst on their 129th iteration of a modal from a skilled engineer implementing a microservice in a few hours.

This brings me, though, to the most important lesson this has demonstrated to me, and hopefully to you: the fundamental value engineers bring is not their ability to write code. Hell, I've written code in JavaScript, TypeScript, Go, Java, C#, Bash, Lua, Visual Basic, Liberty Basic (anyone remember that one?) and countless more. The point is that languages serve a purpose in pursuit of solving a problem. They are tools, as is AI - an incredibly powerful one at that, but not beyond comprehension or yet free from fault.

Despite everything I've just complained about, my bank statement shows I'm still subscribed. Why? Simple: in the hands of an engineer who gets architecture and design patterns, these tools are force multipliers, not chaos generators.

I've learned to take full responsibility for my main branch not despite the AI, but because of it. When I catch those critical moments where context matters - guiding it toward the right package or pattern instead of letting it build a 10k-line monstrosity - that's when the real productivity gains emerge.

These tools quite simply are reshaping engineering - not by replacing skilful engineers, but by amplifying what they can accomplish. It isn't about AI agents writing all the software right now - it's about engineers leveraging AI like a tradesperson using a power tool, knowing exactly when to guide, when to override, and when to let it run. The engineers who thrive will be those who embrace this partnership, providing the judgment and context that transforms raw AI capability into elegant, maintainable solutions.

#ai #development #thoughts