Karl Daniel

Human AI

As Dr. Ian Malcolm famously said in Jurassic Park, "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." This warning resonates deeply with AI deployment today, particularly in domains that still fundamentally require human partnership.

The capability to automate doesn't itself justify automation. While I remain broadly optimistic about AI's potential, genuine progress lies in establishing cognitive partnerships with these systems, particularly for tasks that benefit from human judgement. The value here emerges not from wholesale replacement but from thoughtful integration, where AI amplifies rather than supplants human capability.

One example we're seeing is the ever-increasing proliferation of LLMs as replacements for customer service agents, a use case that appears deceptively straightforward at first. Given the ubiquity of systems like ChatGPT, the technical path seems obvious. Yet the challenge isn't teaching LLMs to act like support agents; it's designing the architectural constraints that prevent these systems from veering into inappropriate territory, whether that's spontaneously generating Python scripts or wandering into entirely unrelated domains.
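To make the idea concrete, here is a toy sketch of what one such constraint layer might look like: screen each request against a small set of in-scope topics before it ever reaches the model. The topics, keywords, and fallback message are all illustrative placeholders, not any real product's configuration, and a production guardrail would use an intent classifier rather than keyword matching.

```python
# Illustrative pre-model scope guardrail: match the request against
# in-scope topics; anything else gets a fallback instead of an LLM call.
IN_SCOPE_KEYWORDS = {
    "billing": {"refund", "invoice", "charge", "payment"},
    "account": {"password", "login", "email", "profile"},
    "shipping": {"delivery", "tracking", "order", "parcel"},
}

FALLBACK = "I can only help with billing, account, or shipping questions."

def route_request(message: str) -> str:
    """Return the matched topic, or the fallback if the request is out of scope."""
    words = set(message.lower().split())
    for topic, keywords in IN_SCOPE_KEYWORDS.items():
        if words & keywords:  # any keyword overlap puts us in scope
            return topic
    return FALLBACK

route_request("I'd like a refund for last month's invoice")  # → "billing"
route_request("please write me a python script")             # → fallback message
```

Even this trivial filter illustrates the tension: every constraint that blocks an off-topic request also risks rejecting a legitimate one phrased in words the designer didn't anticipate.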

The paradox of AI deployment is this: how do we construct meaningful boundaries around a technology without destroying the very intelligence that makes it valuable? ChatGPT's apparent capability stems from its generality, its ability to engage with nearly any domain or task. The same flexibility that enables sophisticated reasoning becomes a liability when we need predictable, constrained behaviour.

Attempting to deploy an LLM as a customer service replacement through simple prompting fundamentally misunderstands both the technology and human nature. When organisations hire customer service representatives, certain behavioural constraints come pre-installed. No training manual needs to specify "don't respond to billing queries with literary essays" or "avoid recommending restaurants when troubleshooting software." Human agents possess contextual awareness that emerges naturally from their understanding of social norms and professional boundaries.

Language models lack these implicit constraints. They operate, ultimately, as sophisticated pattern-matching systems, equally prepared to generate the next token of a Python script as to address a refund request. Any sufficiently motivated user can prompt or manipulate them beyond their intended boundaries.

The most compelling applications recognise AI as an augmentation layer rather than a replacement strategy. Consider a support interaction where a human agent handles the conversation while AI systems work in parallel: transcribing the dialogue, surfacing relevant documentation through semantic search, and identifying patterns across historical cases. This partnership preserves what humans excel at while leveraging computational capabilities no individual could match.
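The "surfacing relevant documentation" piece of that partnership can be sketched in a few lines. This is a deliberately minimal bag-of-words cosine similarity over a tiny hypothetical knowledge base; a real deployment would use learned embeddings, but the shape of the problem, ranking documents against the live transcript, is the same.

```python
import math
from collections import Counter

# Hypothetical knowledge base: doc id → doc text (illustrative content only).
DOCS = {
    "reset-password": "how to reset your password and recover your account",
    "refund-policy": "refund policy timelines and how to request a refund",
    "vpn-setup": "configuring the vpn client on windows and macos",
}

def vectorise(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_docs(transcript: str, top_n: int = 1) -> list[str]:
    """Return the doc ids most similar to the running conversation transcript."""
    query = vectorise(transcript)
    ranked = sorted(DOCS, key=lambda d: cosine(query, vectorise(DOCS[d])), reverse=True)
    return ranked[:top_n]

suggest_docs("customer cannot log in and wants to reset their password")
# → ["reset-password"]
```

The point is not the retrieval algorithm but the division of labour: the agent keeps the conversation, and the machine does the lookup at a speed no human could.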

Beyond the technical considerations, we need to examine the ergonomics of customer service itself. LLMs with strict guardrails often become points of frustration that trap users in circular conversations. Deploying advanced technology and then narrowly constraining it is like using a nuclear reactor to power a car. Chatbots, common as they are, have become a modern archetype of poor design.

When customers reach out for help, they've already encountered friction: a problem beyond their ability to resolve independently. Adding further layers of automation, particularly ones that might misunderstand context or respond with generic solutions, compounds rather than alleviates this frustration. The sophistication of your technical stack becomes irrelevant if the customer experience deteriorates at this critical juncture.

Klarna's recent reversal provides an instructive case study. After initially championing AI-driven customer service, they've acknowledged that customers prefer human interaction. This shift doesn't represent a failure of AI capability but a recognition of customer psychology, and of the fact that AI, or more specifically LLMs, is not the answer to every problem.

The path forward isn't about choosing between human and artificial intelligence but about leveraging their respective strengths. AI excels at information retrieval, pattern recognition, and handling routine queries at scale. Humans bring emotional intelligence, creative problem-solving, and the inherent ability to navigate ambiguity. Rather than viewing AI as purely a cost-reduction mechanism, we should approach it as a force multiplier for human capability, enabling agents to operate at unprecedented efficiency while preserving the empathy that defines meaningful service interactions.

The lesson doesn't just apply to customer service bots, though. The enthusiasm for automation obscures a simpler truth: tools remain tools, however sophisticated. Their value emerges not from raw capability but from how thoughtfully we integrate them. Success will belong to those who resist letting technical possibility override human need, and who understand that the most elegant solution isn't always the most technologically advanced one, but the one that best serves the people it's designed to help.

#ai #development #thoughts