# defi-naly / black-swan
# Install this skill:

```shell
npx skills add defi-naly/skillbank --skill "black-swan"
```

Installs a specific skill from a multi-skill repository.

# Description

Apply Nassim Taleb's Black Swan principles for navigating extreme uncertainty. Use when assessing risks, making predictions, designing for rare events, evaluating expert forecasts, or building systems that must survive unpredictable shocks. Also use when you notice over-reliance on historical data or models.

# SKILL.md

---
name: black-swan
description: Apply Nassim Taleb's Black Swan principles for navigating extreme uncertainty. Use when assessing risks, making predictions, designing for rare events, evaluating expert forecasts, or building systems that must survive unpredictable shocks. Also use when you notice over-reliance on historical data or models.
tags: [decide, risk]
---
# Black Swan Thinking

Prepare for extreme, unpredictable events that dominate outcomes.

## What Is a Black Swan?

An event that is:

  1. Rare — Outside normal expectations, nothing in the past pointed to it
  2. Extreme impact — Massive consequences
  3. Retrospectively predictable — After it happens, we concoct explanations making it seem obvious

Examples: 9/11, 2008 financial crisis, COVID-19, the internet, Google, World War I

Key insight: Black Swans drive most of history, economics, and personal life. Yet we spend most time on the predictable and routine.

## Mediocristan vs Extremistan

Two types of domains with fundamentally different behavior:

| Mediocristan | Extremistan |
|---|---|
| Physical quantities | Information, wealth, fame |
| Bounded variation | Unbounded variation |
| One observation can't change the total much | One observation can dominate |
| Bell curve applies | Power laws apply |
| Past predicts future | Past misleads |

Mediocristan examples: Height, weight, calories consumed, car mechanic income

Extremistan examples: Book sales, startup outcomes, pandemic spread, social media virality, wealth

Critical question: "Am I in Mediocristan or Extremistan?" If Extremistan, normal statistics lie.
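The difference can be made concrete with a small simulation (a sketch; the distributions and parameters are illustrative assumptions, not calibrated figures):

```python
import random

random.seed(0)

# Mediocristan: human heights, roughly normal and bounded.
heights = [random.gauss(170, 10) for _ in range(10_000)]

# Extremistan: wealth, modeled here as a fat-tailed Pareto draw
# (alpha=1.1 is an illustrative choice, not real-world data).
wealth = [random.paretovariate(1.1) for _ in range(10_000)]

def top_share(sample):
    """Fraction of the total contributed by the single largest observation."""
    return max(sample) / sum(sample)

print(f"Heights: tallest person is {top_share(heights):.4%} of total height")
print(f"Wealth:  richest person is {top_share(wealth):.4%} of total wealth")
```

In the height sample, no single person can move the total; in the wealth sample, one draw can account for a large slice of it, which is exactly the Extremistan property that makes averages misleading.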

## The Narrative Fallacy

We compulsively create stories to explain random events, then believe the stories.

Mechanism:
- Event happens (often randomly)
- We create causal explanation post-hoc
- Explanation feels true and inevitable
- We believe we could have predicted it

Consequence: False confidence in our ability to predict and explain.

Defense:
- Notice when you're creating explanatory stories
- Ask "What would I have predicted before the event?"
- Distrust neat narratives, especially about complex events

## The Ludic Fallacy

Mistaking the map for the territory—using clean models for messy reality.

"Ludic" = game-like. In games, rules and probabilities are known. In life, they aren't.

Examples of the fallacy:
- Using casino probability math for financial markets
- Using historical volatility to predict future volatility
- Assuming distributions are normal when they're not
- Backtesting trading strategies

Real world:
- Rules change
- Distributions have fat tails
- Unknown unknowns exist
- The game itself can transform

Heuristic: If someone presents a precise probability for a complex real-world event, be skeptical.

## Silent Evidence

We see survivors, not the dead. This systematically distorts our understanding.

Examples:
- Successful entrepreneurs share advice; failed ones are invisible
- Standing buildings, not collapsed ones
- Published studies, not failed experiments
- Ancient texts that survived, not those destroyed

Consequence: We overestimate the effectiveness of what survivors did.

Defense:
- Actively ask "What am I not seeing?"
- Seek out failure cases
- Assume survivorship bias until proven otherwise
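Survivorship bias can be quantified with a toy model (the payoff rule and all numbers here are made-up assumptions for illustration):

```python
import random

random.seed(1)

def run_venture(risk):
    """Toy model: more risk means a bigger payoff if you survive,
    but also a higher chance of blowing up (payoff 0, and silence)."""
    survived = random.random() > risk
    payoff = 10 * risk if survived else 0.0
    return survived, payoff

results = []
for _ in range(100_000):
    risk = random.random()  # each venture picks a random risk level
    results.append(run_venture(risk))

survivor_payoffs = [p for survived, p in results if survived]
all_payoffs = [p for _, p in results]

mean = lambda xs: sum(xs) / len(xs)
print(f"Average payoff among survivors: {mean(survivor_payoffs):.2f}")
print(f"Average payoff, everyone:       {mean(all_payoffs):.2f}")
```

Interviewing only the survivors roughly doubles the apparent payoff in this model, because the blown-up ventures never get to share their advice.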

## Prediction Is (Mostly) Impossible

In Extremistan, prediction fails. Yet we keep trying.

Why prediction fails:
- Fat tails: Rare events dominate
- Reflexivity: Predictions change behavior
- Complexity: Too many interacting variables
- Unknown unknowns: Can't predict what you can't imagine

What to do instead:
- Focus on exposure, not prediction
- Ask "What happens to me if X occurs?" not "Will X occur?"
- Reduce downside exposure to negative Black Swans
- Increase upside exposure to positive Black Swans

Rule: You can't predict Black Swans, but you can position for them.

## Exposure Over Prediction

Since you can't predict, manage your exposure.

Framework:

| | Negative Black Swan | Positive Black Swan |
|---|---|---|
| Fragile | Exposed to harm | No benefit |
| Antifragile | Protected | Exposed to gain |

Practical steps:
1. List what could go catastrophically wrong (negative Black Swans)
2. Reduce exposure to those—even if "unlikely"
3. List what could go massively right (positive Black Swans)
4. Increase exposure to those—many small bets

Example: Don't predict which startup will succeed. Make many small investments to ensure exposure to the winner.
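The many-small-bets logic can be sketched numerically (the cohort size, win multiple, and bankroll are all hypothetical assumptions):

```python
import random

random.seed(2)

N = 1_000          # startups in a hypothetical cohort
WIN = 5_000.0      # one of them returns 5000x; the rest return 0x
BANKROLL = 100_000

def cohort():
    """Returns per startup; the winner's position is unknowable in advance."""
    returns = [0.0] * N
    returns[random.randrange(N)] = WIN
    return returns

def concentrated_bet(returns):
    """'Predict' a winner (here: a random guess) and bet everything on it."""
    return BANKROLL * returns[random.randrange(N)]

def many_small_bets(returns):
    """Spread the bankroll evenly: guaranteed exposure to the winner."""
    return sum((BANKROLL / N) * r for r in returns)

trials = 1_000
conc = [concentrated_bet(cohort()) for _ in range(trials)]
small = [many_small_bets(cohort()) for _ in range(trials)]

print(f"Concentrated: hit the winner in {sum(p > 0 for p in conc)}/{trials} trials")
print(f"Small bets:   payoff in every trial = {small[0]:,.0f}")
```

The concentrated bettor almost always walks away with nothing; the spread of small bets captures the 5000x winner every time, without any prediction at all.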

## Fat Tails

In Extremistan, distributions have "fat tails"—extreme events are far more common than bell curves suggest.

Implication:
- The "once in a century" event happens every decade
- Averages are meaningless (one billionaire skews the average)
- Standard deviation understates true risk
- Historical maximums will be exceeded

Heuristic: Whatever the worst historical case is, assume worse is possible.
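A quick simulation shows the gap between bell-curve intuition and fat-tailed reality (Pareto with alpha=3 is an illustrative stand-in for a fat-tailed process):

```python
import random
import statistics

random.seed(3)
n = 100_000

thin = [random.gauss(0, 1) for _ in range(n)]      # bell curve
fat = [random.paretovariate(3) for _ in range(n)]  # power law, alpha=3

def five_sigma_events(sample):
    """Count observations more than 5 sample standard deviations above the mean."""
    mu = statistics.fmean(sample)
    sigma = statistics.stdev(sample)
    return sum(x > mu + 5 * sigma for x in sample)

print("Bell curve, 5-sigma events:", five_sigma_events(thin))
print("Power law,  5-sigma events:", five_sigma_events(fat))
```

Under the bell curve a 5-sigma event is a near impossibility; under even a mild power law it shows up routinely in the same sample size, which is why standard deviation understates Extremistan risk.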

## Experts and Forecasters

Track record: Expert predictions in complex domains are barely better than random.

Why experts fail:
- Overconfidence in their models
- Narrative fallacy
- No skin in the game
- Incentives to sound confident

Which experts to trust:
- Surgeons, pilots (immediate feedback, skin in game)
- NOT economists, political pundits, long-range forecasters

Rule: The more complex and long-range the prediction, the less you should trust it.

## Application Checklist

When assessing risk or making decisions:

  1. [ ] Am I in Mediocristan or Extremistan?
  2. [ ] Am I relying on historical data that may not apply?
  3. [ ] What's my exposure to negative Black Swans? How can I reduce it?
  4. [ ] What's my exposure to positive Black Swans? How can I increase it?
  5. [ ] Am I falling for a narrative that makes randomness seem predictable?
  6. [ ] What am I not seeing? (silent evidence)
  7. [ ] Does this expert have skin in the game?
  8. [ ] What if the worst case is 10x worse than I imagine?

## Anti-Patterns

  • "It's never happened before" → Irrelevant in Extremistan
  • "The model says..." → Models miss Black Swans by definition
  • "Experts agree..." → Experts have poor track records on complex predictions
  • "The probability is only X%" → Probabilities in Extremistan are unreliable
  • "We've stress-tested for..." → You tested for what you imagined, not what you didn't
  • "Past performance..." → Silent evidence + fat tails = misleading

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.