Human Error
Why we trust people to make mistakes, and what that says about us.
A few days ago, I caught myself recoiling at a headline.
It was about AI in healthcare: algorithms trained to interpret scans, read test results, and even suggest diagnoses. The article was calm, factual, optimistic. It wasn’t science fiction. It was science happening.
And yet, something in me resisted.
The idea of a computer telling me what’s wrong with my body made me uneasy. I pictured a machine parsing data, pattern matching, assigning probabilities. No warmth. No hesitation. No humanity.
Then, later that day, I hit “chat” on a company website, got the usual AI customer service agent, and typed “I’d like to speak to a real person.”
No hesitation that time either.
That’s when it struck me: I seem to be far more comfortable with human error than with artificial error.
And I’m not entirely sure why.
When Mistakes Feel Human
We all make mistakes. We misread, misjudge, misunderstand. But when a person makes a mistake, we see the whole of them around it: their tone, their fatigue, their good intentions.
We can imagine what it felt like from their side.
When a doctor misreads a scan, we can tell ourselves they did their best.
When a customer service agent gets something wrong, we can sense they’re juggling a dozen other tasks.
When a teacher miscalculates a grade, we know they’ve been staring at papers for hours.
Human error feels… forgivable.
Because it sits in context: a story we can understand.
Artificial error doesn’t.
A human mistake has a face.
An algorithm’s mistake is faceless.
It’s hard to empathise with something that never tires, never worries, never second-guesses itself, because the absence of those things is what makes the mistake feel so jarring.
In theory, AI should be the safer pair of hands. It doesn’t daydream, doesn’t hold grudges, doesn’t skip breakfast.
But maybe that’s the problem.
When we hand something precious — our health, our livelihood, our complaint — to a system without emotion, we remove the one thing that’s always balanced out human imperfection: empathy.
We don’t just want accuracy.
We want understanding.
A computer can tell you your blood work is irregular.
A person can look you in the eye and say, “I know that must feel frightening.”
Those two sentences can carry the same information. But only one holds compassion.
And maybe that’s what we’re really mourning when we talk about AI replacing people. Not just the jobs. The connection.
The Paradox of Trust
Here’s the paradox: we trust human systems less because they make mistakes, but we distrust artificial ones because they don’t seem allowed to.
If an AI gets something wrong, we don’t call it human error; we call it failure.
If a person gets something wrong, we shrug and say, “Nobody’s perfect.”
We expect machines to be flawless. And in that expectation, we leave no room for empathy.
Because empathy depends on imperfection.
It’s what allows us to forgive, to relate, to see ourselves in others.
Maybe that’s why we want a person on the other end of a helpline. We’re not just looking for a solution; we’re looking for reassurance that our frustration, our confusion, our smallness in that moment is seen.
And that’s not something code can do.
AI doesn’t tire. It doesn’t complain. It doesn’t need holidays or wages.
But it also doesn’t care if you’re scared.
It doesn’t sense the tremor in your voice.
It doesn’t choose kindness.
When we remove people from the loop, we lose the invisible grace that lives in human interaction: the pauses, the apologies, the small acts of understanding that make difficult moments bearable.
Human error is messy. But it’s also merciful.
It leaves space for recovery.
When a person makes a mistake, they can say sorry and mean it.
When a machine makes a mistake, it just updates its parameters.
Maybe what unsettles me about AI isn’t its intelligence, but its indifference.
We can forgive a person because we know they care.
We can’t forgive an algorithm because it can’t.
And maybe that’s the line we’re trying to draw: not between humans and machines, but between logic and compassion.
Because one day, an AI might be able to diagnose us more accurately than any doctor alive.
But until it can sit beside us and say, “I know this must be hard,” it won’t replace what we actually trust doctors for.
The Value of Human Error
Human error slows us down.
It makes us double-check.
It teaches us humility.
The world will keep automating. That’s inevitable.
But maybe the goal isn’t to eliminate human error entirely.
Maybe it’s to remember what that error represents: humanity itself.
The instinct to care, to wonder, to hesitate before we act.
Because when we lose the ability to tolerate human error, we start to lose the patience that makes us human in the first place.