The Bitter Lesson (of Decision Making)
Why simple rules often beat human judgment over time
Fellow Data Tinkerers!
Today we will look at how decision making can be improved.
But before that, I wanted to share with you what you could unlock if you share Data Tinkerer with just 1 more person.
There are 100+ resources to learn all things data (science, engineering, analysis). It includes videos, courses and projects, and can be filtered by tech stack (Python, SQL, Spark, etc.), skill level (Beginner, Intermediate and so on), provider name, or free/paid. So if you know other people who like staying up to date on all things data, please share Data Tinkerer with them!
With that out of the way, let’s get to today’s topic of decision making!
Last week I was re-reading The Bitter Lesson by Rich Sutton. Great read, if you haven’t read it already. And it got me thinking: are there similar patterns in the world of data? Are there cases where we overvalue human knowledge of the domain? I think there is a close analogy in decision-making.
The biggest lesson that can be read from years of judgment and decision-making research is that simple, consistent rules beat human judgment most of the time.
This can be hard to accept because it feels like an attack on expertise, but the evidence is blunt: human judgment is often noisy, inconsistent and overconfident. Paul Meehl made the argument back in 1954 that statistical rules often outperform expert clinical judgment. Later reviews found the same pattern across many fields.
We like to believe that experience gives us a special ability to see the truth of a case. We like to believe that judgment lives in the rich details, the subtle signals, the human context. But often, when we test that belief against outcomes, the details we trusted were noise, the subtle signals were distractions and the human context gave us more confidence than accuracy.
Daniel Kahneman’s work helps explain why. Intuition is pattern recognition, not magic. It works when the environment is stable and feedback is fast and clear. Chess players can build good intuition because they see repeated patterns and receive immediate feedback. Many business, hiring, policy and strategy decisions are different. Feedback is delayed, messy and often ambiguous. A bad decision can look good because of luck. A good decision can look bad because the environment changed. In that kind of setting, intuition can feel like expertise while behaving more like confidence.
Hiring is a simple example. Managers often believe they can spot talent through conversation. They notice confidence, polish, energy and ‘fit’. But unstructured interviews are noisy. Two interviewers can see the same candidate and walk away with different conclusions. The same interviewer may judge differently depending on mood, fatigue or one memorable answer. A structured process feels less impressive: define criteria, ask similar questions, score each dimension and combine the scores. But that boring process is usually fairer and more predictive.
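To make the boring process concrete, here is a minimal sketch of that mechanical combination step. The criteria, weights and ratings are hypothetical, not a validated rubric; the point is only that every interviewer scores the same dimensions and the scores are combined by a fixed rule rather than by overall impression.

```python
# Hypothetical scoring dimensions and weights (illustrative only).
WEIGHTS = {"technical": 0.4, "communication": 0.3, "problem_solving": 0.3}

def combined_score(ratings: dict) -> float:
    """Weighted average of per-dimension ratings (each on a 1-5 scale)."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

# Two interviewers rate the SAME candidate on the SAME dimensions.
interviewer_a = {"technical": 4, "communication": 3, "problem_solving": 5}
interviewer_b = {"technical": 4, "communication": 4, "problem_solving": 4}

# Combine mechanically instead of debating gut feelings.
final = (combined_score(interviewer_a) + combined_score(interviewer_b)) / 2
print(round(final, 2))
```

Notice that the two interviewers disagree on individual dimensions, but the rule absorbs that disagreement into a single, repeatable number instead of letting one memorable answer dominate the discussion.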
The same pattern appears in admissions, forecasting, insurance, performance reviews and risk assessment. Kahneman and Sunstein called this problem ‘noise’: unwanted variation in judgments that should be much closer together. In Noise, they describe an insurance company where underwriters independently priced the same fictitious cases and the median variation in premiums was 55%, far higher than executives expected.
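The measurement behind that 55% figure is simple to reproduce in spirit: for each pair of judgments of the same case, take the absolute difference as a fraction of their average, then look at the median across all pairs. The premiums below are made-up numbers for illustration, not data from the study described in Noise.

```python
import statistics
from itertools import combinations

# Hypothetical premiums quoted by five underwriters for the SAME case.
premiums = [9500, 16000, 12000, 20000, 13500]

def relative_diff(a: float, b: float) -> float:
    """Absolute difference between two judgments, relative to their average."""
    return abs(a - b) / ((a + b) / 2)

# One value per pair of underwriters; the median summarizes the noise.
pairwise = [relative_diff(a, b) for a, b in combinations(premiums, 2)]
print(f"median relative difference: {statistics.median(pairwise):.0%}")
```

Executives in the study expected a figure around 10%; a median anywhere near 50% means two randomly chosen underwriters routinely disagree by half the price of the policy.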
Bias gets most of the attention because it has a story. Noise is harder to see because there is no obvious villain. No one needs to be irrational, corrupt or incompetent. People can be smart, experienced and honest and still produce wildly different judgments.
In AI, methods that scaled with computation beat methods that tried to encode human cleverness. In decision-making, methods that scale with evidence and consistency beat methods that try to preserve human intuition.
That does not mean algorithms should replace people. Humans still define the goal, decide what data is legitimate, handle ethical trade-offs and question whether history still applies. The lesson is narrower: stop using intuition for the parts of decision-making where tested, consistent rules do better. Use people to frame the problem but use algorithms for repeated decision making scenarios.
What are your thoughts?
If you are already subscribed and enjoyed the article, please give it a like and/or share it with others, really appreciate it 🙏


