SETH GODIN Trusting AI
For generations, humans have been entrusting their lives to computers. Air Traffic Control, statistical analysis of bridge resilience, bar codes for drug delivery, even the way stop lights are controlled. But computers aren’t the same as the LLMs that run on them.
Claude.ai is my favorite LLM, but even Claude makes errors. Should we wait until it’s perfect before we use it?
If a perfect and reliable world is the standard, we’d never leave the house.
There are two kinds of tasks where it’s clearly useful to trust the output of an AI:
Recoverable: If the AI makes a mistake, you can backtrack without a lot of hassle or expense.
Verifiable: You can inspect the work before you trust it.
Having an AI invest your entire retirement portfolio without oversight seems foolish to me. You won’t know it’s made an error until it’s too late.
On the other hand, taking a photo of the wine list in a restaurant and asking Claude to pick a good value and explain its reasoning meets both criteria for a useful task.
This is one reason why areas like medical diagnosis are so exciting. Confronted with a list of symptoms and given the opportunity for dialogue, an AI can outperform a human doctor in some situations. And even when it doesn't, the cost of an error can be minimized while a unique insight could be lifesaving.
Why wouldn’t you want your doctor using AI well?
Pause for a second and consider all the useful ways we can put this easily awarded trust to work. Every time we create a proposal, confront a decision or need to brainstorm, there’s an AI tool at hand, and perhaps we could get better at using and understanding it.
The challenge we’re already facing: Once we see a pattern of AI getting tasks right, we’re inclined to trust it more and more, verifying less often and moving on to tasks that don’t meet these standards.
AI mistakes can be more erratic than human ones (and way less reliable than traditional computers), though, and we don’t know nearly enough to predict their patterns. Once all the human experts have left the building, we might regret our misplaced confidence.
The smart thing is to make these irrevocable choices about trust based on experience and insight, not simply accepting the inevitable short-term economic rationale. And that means leaning into the experiments we can verify and recover from.
You’re either going to work for an AI or have an AI work for you. Which would you prefer?
January 22, 2025