Human AI
As Dr. Ian Malcolm famously said in Jurassic Park, "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." It's a warning that resonates particularly deeply with AI deployment in its current state.
I say this because the capability to automate doesn't itself justify automation. While I remain broadly optimistic about AI's potential, progress lies in establishing a cognitive partnership with these systems, particularly for tasks that benefit from human judgement. Rather than wholesale replacement, we need thoughtful integration, where AI amplifies rather than supplants.
One example we are seeing is the ever-increasing proliferation of LLMs as replacements for customer service agents, a use case that appears deceptively straightforward at first. Given the ubiquity of systems like ChatGPT, the technical path seems obvious. Yet the challenge isn't teaching LLMs to act like support agents; it's the architectural constraints we must impose to prevent these systems from veering into inappropriate territory.
The paradox of AI deployment is this: how do we construct meaningful boundaries around a system without destroying the very intelligence that makes it valuable? ChatGPT's capability stems from its generality, its ability to engage with nearly any domain or task. The same flexibility that enables diverse reasoning becomes a liability when we need predictable, constrained behaviour, such as from a support agent.
Attempting to deploy an LLM to completely supplant a human support agent fundamentally misunderstands the respective strengths of the technology and human nature. When organisations hire customer service representatives, certain behavioural constraints come pre-installed. No training manual needs to specify "don't respond to billing queries with literary essays" or "avoid recommending restaurants when troubleshooting software." Human agents possess contextual awareness that emerges naturally from their understanding of social norms and professional boundaries.
Language models lack these implicit constraints. They operate, ultimately, as sophisticated pattern-matching systems, equally prepared to generate the next token of a Python script as to address a refund request. Any sufficiently motivated user can prompt or manipulate them beyond their intended boundaries.
The most compelling approach is to treat AI as an augmentation layer rather than a replacement strategy. Consider a support interaction where a human agent handles the conversation while AI systems work in parallel: transcribing the dialogue, surfacing relevant documentation through semantic search, and identifying patterns across historical cases. This partnership preserves what humans excel at, interacting with other humans, while adding computational capabilities no individual could match.
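To make the "surfacing relevant documentation" idea concrete, here is a minimal sketch of the retrieval step. It is purely illustrative: the `embed`, `cosine`, and `surface_docs` names are hypothetical, and a toy bag-of-words vector stands in for the learned sentence embeddings a real semantic-search system would use.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A production system would use a learned sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def surface_docs(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Rank documentation snippets by similarity to the live conversation
    # and return the top matches for the human agent to review.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "How to reset your account password",
    "Refund policy for annual subscriptions",
    "Troubleshooting failed card payments",
]
print(surface_docs("customer asks about a refund on their subscription",
                   docs, top_k=1))
```

The key design point is that the ranked snippets go to the agent, not the customer: the model narrows the search space, and the human decides what is actually relevant.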
Beyond the technical considerations, we need to examine the ergonomics of customer service itself. LLMs with strict guardrails often become points of frustration that trap users in circular conversations. Deploying advanced technology and then narrowly constraining it in this manner is like using a nuclear reactor to power your car. Chatbots, as common as they are, have become a modern archetype of poor design.
When customers reach out for help, they've already encountered friction. They've faced a problem beyond their ability to resolve independently. Adding further layers of automation, particularly ones that might misunderstand context or respond with generic solutions, compounds rather than alleviates this frustration. The sophistication of your technical stack becomes irrelevant if the customer experience deteriorates at this critical juncture.
Klarna's recent reversal provides a case study. After initially championing AI-driven customer service, they've acknowledged that customers prefer human interaction. This shift doesn't represent a failure of AI capability but rather a recognition of customer psychology, and of the fact that AI, or more specifically LLMs, are not the answer to every problem.
The path forward isn't about choosing between human and artificial intelligence but about leveraging their respective strengths. AI excels at information retrieval, pattern recognition, and handling routine queries at scale. Humans bring emotional intelligence, creative problem-solving, and the inherent ability to navigate ambiguity. Rather than viewing AI purely as a cost-reduction mechanism, we should approach it as a force multiplier for human capability, enabling agents to operate at unprecedented efficiency while preserving the empathy that defines meaningful service interactions.
The lesson doesn't just apply to customer service bots, though: the enthusiasm for automation obscures a simpler truth. Tools remain tools, however sophisticated. Their value emerges not purely from raw capability but from how thoughtfully we integrate them. Success will belong to those who resist letting technical possibility override human need, who understand that the most elegant solution isn't always the most technologically advanced one, but the one that best serves the people it's designed to help.