Install this specific skill from the multi-skill repository:

```bash
npx skills add defi-naly/skillbank --skill "thinking-fast-and-slow"
```
# Description
Daniel Kahneman's cognitive psychology framework for understanding decision-making, biases, and judgment.
# SKILL.md
```yaml
name: thinking-fast-and-slow
description: "Daniel Kahneman's cognitive psychology framework for understanding decision-making, biases, and judgment."
dimensions:
  domain: [decision-making, psychology, risk-assessment]
  phase: [diagnosis, evaluation, post-mortem, planning]
  problem_type: [cognitive-bias, overconfidence, prediction, judgment-under-uncertainty]
contexts:
  - situation: "making a prediction or forecast"
    use_when: "need to calibrate confidence, check for overconfidence, consider base rates"
  - situation: "evaluating a past decision"
    use_when: "identifying which biases may have influenced the outcome"
  - situation: "assessing risk"
    use_when: "checking if intuition (System 1) is reliable or if deliberate analysis (System 2) is needed"
  - situation: "someone is very confident"
    use_when: "testing whether confidence is justified by evidence or just coherent storytelling"
  - situation: "planning a project"
    use_when: "applying reference class forecasting to avoid planning fallacy"
combines_with:
  - think-again        # for updating beliefs after identifying bias
  - black-swan         # for tail risk and prediction limits
  - antifragile        # for building systems that survive prediction failures
  - skin-in-the-game   # for evaluating whose predictions to trust
contrast_with:
  - skill: think-again
    distinction: "TF&S diagnoses HOW we think wrong; Think Again focuses on CHANGING beliefs and intellectual humility"
  - skill: hidden-potential
    distinction: "TF&S is about judgment/decisions; Hidden Potential is about growth and skill development"
```
# Thinking, Fast and Slow Framework

## The Two Systems
The mind operates through two distinct modes of thinking:
| SYSTEM 1 (Fast) | SYSTEM 2 (Slow) |
|---|---|
| Automatic | Effortful |
| Unconscious | Conscious |
| Fast | Slow |
| Parallel processing | Serial processing |
| Associative | Rule-based |
| Skilled/habitual | Flexible |
| Effortless | Controlled |
| Pattern recognition | Logical reasoning |
| Always on | Lazy, avoids work |
Key insight: System 1 generates impressions, intuitions, and feelings. System 2 endorses, rationalizes, or overrides them. Most of our judgments and choices originate in System 1.
## WYSIATI: What You See Is All There Is
System 1 constructs the best possible story from available information. It doesn't flag missing information—it works with what it has.
Consequences:
- Overconfidence: Coherent story = high confidence (regardless of quality of evidence)
- Framing effects: Same information, different presentation = different conclusions
- Base-rate neglect: Vivid specifics override statistical reality
- Jumping to conclusions: Fast, but prone to error
Defense: Ask "What would I need to know to change my mind?" and "What information am I missing?"
## Cognitive Biases & Heuristics

### Anchoring
The first number you encounter heavily influences subsequent estimates.
| Scenario | Anchor Effect |
|---|---|
| Negotiation | First offer sets the range |
| Pricing | Original price makes discount seem larger |
| Estimation | Random numbers affect "rational" estimates |
Defense: Consider the opposite. Generate your own anchor first. Ask "If this anchor didn't exist, what would I estimate?"
### Availability Heuristic
Probability judgments based on how easily examples come to mind.
Leads to:
- Overweighting recent, vivid, or emotional events
- Underweighting statistical base rates
- Risk perception driven by media coverage, not actual frequency
Defense: Ask "Am I judging frequency, or just memorability?" Seek base rate data.
### Representativeness Heuristic
Judging probability by similarity to stereotypes, ignoring base rates.
Example: "Steve is meek, tidy, needs order, and has a passion for detail. Is he a librarian or a farmer?"
Most say librarian—but there are 20x more male farmers than librarians. The base rate dominates.
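A quick Bayes' rule sketch shows why. The 20:1 base rate comes from the example above; the 4:1 likelihood ratio for how well the description fits a librarian is an assumed illustration:

```python
# Posterior odds = prior odds x likelihood ratio (Bayes' rule in odds form).
# The 20:1 farmer base rate is from the example; the 4:1 likelihood ratio
# for the "meek and tidy" description is a hypothetical figure.
prior_odds = 1 / 20        # librarian : farmer base rate
likelihood_ratio = 4       # P(description | librarian) / P(description | farmer)

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)      # 0.2 -> still 5:1 odds that Steve is a farmer
```

Even a description that strongly favors "librarian" cannot overcome a 20:1 base rate.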
Defense: Always ask "What's the base rate?" before considering the specific case.
### Substitution
When facing a hard question, System 1 substitutes an easier one.
| Target Question | Substituted Question |
|---|---|
| How happy are you with life? | What's my mood right now? |
| Should I invest in this stock? | Do I like this company? |
| How likely is this project to succeed? | Can I imagine it succeeding? |
Defense: Identify the actual question. Resist the easy substitute.
## Prospect Theory
People don't evaluate outcomes absolutely—they evaluate gains and losses relative to a reference point.
```
          VALUE
            │            Gains
            │          ╱
            │       ╱
            │    ╱   (diminishing sensitivity)
            │  ╱
────────────┼────────────► OUTCOME
          ╱ │
        ╱   │
      ╱     │
    ╱       │
  ╱  Losses │
 (steeper slope = loss aversion)
```
### Key Principles
1. Reference Dependence
Outcomes are coded as gains or losses relative to a reference point (status quo, expectation, or aspiration).
2. Loss Aversion
Losses loom larger than equivalent gains. Ratio ≈ 2:1.
- Losing $100 hurts about 2x as much as gaining $100 feels good
- Explains risk aversion for gains, risk seeking for losses
- Explains endowment effect (owning something increases its value to you)
3. Diminishing Sensitivity
The difference between $100 and $200 feels larger than between $1,100 and $1,200.
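All three principles fit in a few lines. A minimal sketch of the value function, using the commonly cited parameter estimates from Tversky and Kahneman's 1992 follow-up work (exponent ≈ 0.88, loss-aversion coefficient λ ≈ 2.25; exact values vary by study):

```python
def value(outcome: float, reference: float = 0.0) -> float:
    """Prospect-theory value of an outcome relative to a reference point."""
    x = outcome - reference                # 1. reference dependence
    if x >= 0:
        return x ** 0.88                   # 3. diminishing sensitivity (concave gains)
    return -2.25 * (-x) ** 0.88            # 2. loss aversion: losses weighted ~2.25x

print(value(100))                  # ~57.5   subjective value of a $100 gain
print(value(-100))                 # ~-129.5 the equal loss hurts ~2.25x more
print(value(200) - value(100))     # ~48.4   the $100 -> $200 step...
print(value(1200) - value(1100))   # ~37.8   ...feels bigger than $1,100 -> $1,200
```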
### Four-Fold Pattern of Risk Attitudes
| Probability | GAINS | LOSSES |
|---|---|---|
| High probability | Risk AVERSE (take the sure gain) | Risk SEEKING (gamble to avoid sure loss) |
| Low probability | Risk SEEKING (lottery tickets) | Risk AVERSE (insurance) |
This explains why people buy both lottery tickets AND insurance.
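Both behaviors fall out of probability weighting: small probabilities are overweighted and large ones underweighted. A sketch using the weighting function from the same 1992 paper (γ ≈ 0.61 is the commonly cited estimate for gains):

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight assigned to a stated probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(weight(0.01))   # ~0.055: a 1% chance feels like ~5.5% (lottery tickets, insurance)
print(weight(0.99))   # ~0.91:  a 99% chance feels like ~91% (near-sure things discounted)
```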
## Overconfidence
The most significant cognitive bias. Manifests in three forms:
1. Overestimation
Thinking you're better than you are.
- 90% of drivers think they're above average
- Entrepreneurs overestimate success probability
2. Overplacement
Thinking you're better than others.
- Easy tasks: overplacement common
- Hard tasks: underplacement common (everyone assumes they're worse than others)
3. Overprecision
Excessive certainty in accuracy of beliefs.
- 90% confidence intervals contain truth ~50% of the time
- Experts often no better than novices at calibration
Defense: Use reference class forecasting. Track your predictions. Widen confidence intervals.
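"Track your predictions" can be made concrete in a few lines: log each forecast with its stated confidence, then compare stated confidence to the observed hit rate per bucket. A sketch with hypothetical placeholder data:

```python
from collections import defaultdict

# (stated confidence, came true?): replace with your own logged forecasts
predictions = [(0.9, True), (0.9, False), (0.9, True), (0.9, False),
               (0.7, True), (0.7, True), (0.7, False)]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%} over {len(outcomes)} forecasts")
# Well calibrated: stated ~= actual. Overprecise: stated >> actual.
```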
## Planning Fallacy
Systematic underestimation of time, costs, and risks; overestimation of benefits.
Why it persists:
- Focusing on the specific case instead of the reference class
- Anchoring on best-case scenarios
- WYSIATI: what could go wrong never comes to mind
Reference Class Forecasting (the cure):
1. Identify appropriate reference class
2. Obtain statistics of that class
3. Use as baseline
4. Adjust for specifics of current case (minimally!)
Example: Don't estimate "how long will MY kitchen renovation take?" Ask "How long do kitchen renovations typically take?" (Answer: almost always longer than estimated.)
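A minimal sketch of the four steps above, with made-up reference data: take the distribution of similar past cases as the baseline, then adjust only slightly.

```python
import statistics

# Steps 1-2: durations (weeks) of comparable past renovations (hypothetical data)
reference_class = [6, 8, 9, 10, 12, 14, 20]

baseline = statistics.median(reference_class)   # Step 3: outside-view baseline
adjustment = 1.1                                # Step 4: minimal tweak for specifics
print(f"forecast: {baseline * adjustment:.1f} weeks")   # 11.0 weeks

# Contrast with the inside view, which typically anchors near the best case (~6 weeks)
```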
## The Halo Effect
Global evaluation (like/dislike) influences perception of specific traits.
"I like the CEO" ──► "Their strategy must be good"
"The product succeeded" ──► "The decision was wise" (even if it was lucky)
"They're attractive" ──► "They must be competent"
Defense: Evaluate traits independently. Collect judgments from multiple people before discussion.
## Hindsight Bias
"I knew it all along" after learning the outcome.
Problems:
- Makes us overestimate predictability of events
- Prevents learning from surprises
- Unfair to decision-makers ("obvious" in hindsight)
Defense: Record predictions BEFORE outcomes. Conduct pre-mortems, not just post-mortems.
## Regression to the Mean
Extreme performance is followed by less extreme performance—not because of causation, but statistical inevitability.
Misinterpretations:
- "Punishment works better than praise" (performance naturally regresses after extremes)
- "The Sports Illustrated cover jinx"
- "Treatment worked!" (patient was measured at their worst)
Defense: Expect regression. Don't over-explain it with causal stories.
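A small simulation shows the effect with no causal story at all, assuming performance = stable skill + independent luck: select the round-1 extremes and re-measure them.

```python
import random

random.seed(0)
skill = [random.gauss(0, 1) for _ in range(10_000)]       # stable ability
round1 = [s + random.gauss(0, 1) for s in skill]          # ability + luck
round2 = [s + random.gauss(0, 1) for s in skill]          # same ability, fresh luck

# Take the top 1% of round-1 performers and re-measure them in round 2
top = sorted(range(10_000), key=round1.__getitem__)[-100:]
print(sum(round1[i] for i in top) / 100)   # extreme (about +3.8 in expectation)
print(sum(round2[i] for i in top) / 100)   # about half as extreme: skill persists, luck does not
```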
## Pre-Mortem Technique
Before starting a project, imagine it has failed. Ask: "What went wrong?"
Benefits:
- Overcomes groupthink
- Legitimizes doubt
- Surfaces risks that optimism suppresses
- Uses prospective hindsight
## Application Framework
When making decisions:
1. Identify System 1 Signals
- Strong intuition or gut feeling?
- Immediate answer that "feels right"?
- Emotional response?
2. Engage System 2 Checks
- [ ] What's the base rate for this type of outcome?
- [ ] What information am I missing? (WYSIATI)
- [ ] Am I anchored on an irrelevant number?
- [ ] Am I substituting an easier question?
- [ ] What's the reference class forecast?
- [ ] What would a pre-mortem reveal?
3. Calibrate Confidence
- [ ] Would I bet money at these odds?
- [ ] Widen your confidence interval
- [ ] Remember: coherence ≠ truth
4. Protect Against Loss Aversion
- [ ] Am I avoiding a wise risk because losses loom large?
- [ ] Would I take this deal if framed differently?
- [ ] Evaluate gains and losses by their actual size, not by how they feel
## Quick Reference: Common Biases
| Bias | Definition | Trigger Question |
|---|---|---|
| Anchoring | First number influences estimates | "What's my anchor? Ignore it." |
| Availability | Easy to recall = seems common | "Memorable or actually frequent?" |
| Representativeness | Similarity over base rates | "What's the base rate?" |
| WYSIATI | Conclusions from incomplete data | "What's missing?" |
| Loss Aversion | Losses hurt 2x gains | "Am I overweighting potential loss?" |
| Overconfidence | Excessive certainty | "What's my track record?" |
| Planning Fallacy | Underestimate time/cost | "What's the reference class?" |
| Halo Effect | Global impression bleeds to specifics | "Am I conflating traits?" |
| Hindsight | "Knew it all along" | "Could I have predicted this before?" |
| Sunk Cost | Past investment influences future | "Would I choose this if starting fresh?" |
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.