Karl Daniel

AI Meets Social Simulation

AI is creeping into policy-making. Not dramatically, but incrementally - decision by decision, algorithm by algorithm. As these systems begin shaping social outcomes, we're left with an uncomfortable question: how do we trust qualitative assessments from systems we can't properly interrogate?

During my exploration of agent-based models with Julia (because apparently that's what passes for a hobby these days), I stumbled onto something interesting. What if we could combine ABMs with large language models?

The EU AI Act demands transparency in high-risk applications. Transparency, though, isn't just ticking boxes and filing documentation. Real transparency means cracking open the decision-making process itself - letting people see not just what an AI concluded, but how it actually reasons about the world.

Black Boxes

Agent-based models (ABMs) simulate complex systems from the ground up. Thousands of individual agents stand in for the actors in a system - people, businesses, vehicles - each following simple rules. Let these agents interact, and sophisticated patterns emerge: traffic flows, market dynamics, social movements. It's a pretty literal expression of social dynamics in computational form.
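
To make "simple rules" concrete, here's a toy pedestrian agent whose entire behaviour is one rule: move to the least crowded neighbouring cell. Everything here is illustrative - no particular ABM library assumed:

struct Pedestrian
    id::Int
    position::Tuple{Int,Int}
end

function move(agent::Pedestrian, density::Dict{Tuple{Int,Int},Int})
    neighbours = [(agent.position[1] + dx, agent.position[2] + dy)
                  for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    # The whole rule: pick the adjacent cell with the fewest other agents
    target = argmin(cell -> get(density, cell, 0), neighbours)
    return Pedestrian(agent.id, target)
end

Nothing in that rule predicts a crowd crush or a smooth evacuation - those patterns emerge only when thousands of these agents run together.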

Now picture this: an AI doesn't just spit out recommendations. It builds a simulation showing its logic:

AI: "Increase bus frequency on Route 47 by 30%"

Also AI: "Week 6: commuters switch from cars to buses. 
Property values rise near stations. Foot traffic up 23%. 
New bottlenecks form where we didn't expect. 
The simulation shows what we missed."
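
Mechanically, that week-by-week walkthrough is just a time-stepped loop: apply every agent's rule, record the aggregate outcome, feed the result back in, repeat. A stripped-down sketch - choose_transport is the rule shown below, while update! stands in for whatever feedback your model applies:

function simulate!(commuters, model; weeks = 12)
    history = NamedTuple[]
    for week in 1:weeks
        modes = [choose_transport(c, model) for c in commuters]
        push!(history, (week = week,
                        bus = count(==(:transit), modes),
                        car = count(==(:car), modes),
                        walk = count(==(:walk), modes)))
        update!(model, modes)  # this week's choices reshape next week's model
    end
    return history
end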

Traditional AI systems are fascinating, but suffer from opacity. Layers of neural connections form during training - mathematical relationships so tangled that even their creators can't fully explain them. These networks encode decisions in ways fundamentally alien to how humans think. The ultimate black box, if you will.

Agent-based models work differently. They tell stories we can follow. Combine them with AI, and suddenly those opaque recommendations become transparent simulations. Black boxes with windows - imagine that.

Take this Julia code - readable by anyone with basic programming knowledge:

function choose_transport(commuter::Agent, model::CityModel)
    # Look, actual logic you can argue with
    # (distance_to_work, has_parking, median_income and friends are
    # query helpers defined elsewhere in the model)
    if distance_to_work(commuter) < 1.0  # miles
        return :walk
    elseif has_parking(commuter.workplace) && commuter.income > median_income(model)
        return :car
    elseif distance_to_transit(commuter) < 0.5
        return :transit
    else
        return :car  # the British default
    end
end

The AI could generate these rules from patterns in data. They're not perfect - but at least they're debatable and editable. Each parameter represents an assumption about human behaviour that we can actually check against reality.

Understanding

Modern language models are, unsurprisingly, very good at extracting structure from natural language. Feed them a policy challenge, and they could construct a living simulation where assumptions are explicit and outcomes emerge from interactions. Not predictions carved in stone, but explorable possibilities.
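
A hypothetical pipeline might look like this - query_llm stands in for whatever API client you use, and the crucial step is that a human reads the generated rules before anything runs:

function rules_from_policy(policy_text::String)
    prompt = """
    Translate this policy question into Julia agent rules:
    one if/elseif chain per agent type, with every threshold
    written as a named, commented constant.

    $policy_text
    """
    code = query_llm(prompt)  # hypothetical client; returns Julia source as a String
    println(code)             # the whole point: a human reviews the rules first
    return code               # eval only after review, e.g. include_string(Main, code)
end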

This approach could generate multiple models - each representing different assumptions about how people (or whatever the model simulates) behave, how quickly they adapt, what constraints they face. Instead of one "optimal" answer, policymakers could explore a range of futures.
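
In code, that "range of futures" is just a sweep over explicit assumption sets. Here build_model is a placeholder for your own setup (returning a model and its agents), and simulate! is the loop sketched earlier; the scenario names and numbers are made up:

scenarios = [
    (name = "sticky habits", adapt_rate = 0.05, fare = 2.50),
    (name = "fast adopters", adapt_rate = 0.30, fare = 2.50),
    (name = "fare cut",      adapt_rate = 0.15, fare = 1.00),
]

results = map(scenarios) do s
    model, commuters = build_model(adapt_rate = s.adapt_rate, fare = s.fare)
    (scenario = s.name, history = simulate!(commuters, model))
end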

The stakes transcend academic debate here. AI systems trained on historical data don't just reflect past biases - they amplify and entrench them. When an algorithm learns that certain postcodes correlate with loan defaults, it doesn't question whether that's due to systemic discrimination. It just denies more loans to those areas, creating the very reality it predicted.

Recent research into LLM-augmented ABMs reveals a deeper concern: the risk of "illusions of understanding" where sophisticated AI outputs mask our ignorance of underlying social dynamics.

ABMs crack this cycle open. Instead of hiding these assumptions in neural weights, they expose each rule: "if income < X then mobility = limited". Suddenly we can see - and challenge - the logic before it becomes policy.
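
Written out, that exposed rule is a single line anyone can read and dispute (the threshold here is illustrative, not a real policy figure):

# The assumption, stated where anyone can challenge it
mobility(income) = income < 20_000 ? :limited : :full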

KISS

Critics aren't wrong though - ABMs do simplify complex systems. That's precisely their value. A simplified model that shows its workings beats a black box built on opaque, flawed reasoning every time. At least with ABMs, we know what we're missing, and humans can intervene and iterate on the models to improve the simulations.

Agent-based modelling, combined with LLMs, offers something radical: a way for black boxes to operate transparently. Rather than hiding decisions in neural networks, ABMs expose the mechanics - each agent's rules visible, the results reproducible.

What's more, these simulations create a new form of machine-readable insight. Other AI systems can consume and learn from these outcomes, building layers of interpretable decision-making.
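
As one sketch of what "machine-readable" could mean, a run's assumptions and outcomes can be serialised together - here using the JSON3 package, with an illustrative record layout and made-up numbers:

using JSON3

record = (policy = "Route 47: +30% bus frequency",
          assumptions = (adapt_rate = 0.15, fare = 2.50),
          outcome = (bus_share = 0.41, car_share = 0.47, walk_share = 0.12))

open("run_001.json", "w") do io
    JSON3.write(io, record)  # another system can now ingest the run, assumptions included
end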

Fundamentally, AI will influence policy whether we like it or not. That ship has sailed. The engineering question is how we architect these systems to remain comprehensible to the people they affect, rather than treating them as oracles to be taken on faith.

#abm #ai #modelling #social #thoughts