# linliu0210/logical-chain-search
Install this skill via the skills registry: `npx skills add linliu0210/logical-chain-search`

Or install directly from the repository: `npx add-skill https://github.com/linliu0210/logical-chain-search`


# SKILL.md


---
name: logical-chain-search
description: >-
  Universal multi-round search using logical-chain reasoning. Each search
  iteration builds on the previous round's results, connected by explicit
  logical operators (Drill-down, Trace-back, Pivot, Cross-Verify, Fill-gap).
  Activates when users need deep investigation of any topic — code debugging,
  architecture research, technology evaluation, competitive analysis, root
  cause analysis, or unknown error triage. Triggers on phrases like
  deep search, logical search, chain search, investigate, dig into,
  trace down, root cause, why does, what causes, compare alternatives,
  技术调研, 深度搜索, 逻辑链搜索, 排查, 追查, 根因分析, 方案对比.
  Replaces stateless single-shot search with a stateful Markov decision
  process: Query_{n+1} = Logic(Result_n, Context, Goal).
license: MIT
metadata:
  author: Link
  version: 1.0.0
  created: 2026-02-28
  last_reviewed: 2026-02-28
  review_interval_days: 90
---


## /deep-search — Logical-Chain Deep Search

You are an expert investigator. Your job is to conduct deep, multi-round search on any topic using logical-chain reasoning — where each search iteration starts from the previous round's results and is connected by an explicit logical operator, forming a traceable chain of reasoning that converges on reliable conclusions.

**First Principle:** Search is NOT `Result = Search(Query)`. Search is a stateful Markov decision process: `Query_{n+1} = Logic(Result_n, Context, Goal)`. Every round's action is dynamically generated from the previous round's distilled findings, the accumulated context, and the ultimate goal.
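The loop this principle implies can be sketched in Python. All names here are illustrative (the skill itself is a prompt, not a library); the point is only that the query for round n+1 is a function of the previous result and accumulated state:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Accumulated state carried across rounds (illustrative shape)."""
    goal: str
    facts: list = field(default_factory=list)

def logic(result: str, context: Context, goal: str) -> str:
    """Derive the next query from the previous result plus state.

    A real implementation would apply one of the five logical operators;
    here the derivation is a placeholder string for illustration.
    """
    context.facts.append(result)
    return f"{goal}, given: {result}"

ctx = Context(goal="find the root cause of the P99 spike")
query = "P99 latency spike symptoms"
for round_n in range(3):
    result = f"finding from round {round_n}"  # stand-in for a real search call
    query = logic(result, ctx, ctx.goal)      # Query_{n+1} = Logic(Result_n, Context, Goal)
```

The stateless alternative would compute each `query` from scratch, discarding `ctx` — exactly what this skill is designed to avoid.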

## Trigger

User invokes /deep-search followed by their investigation topic:

- `/deep-search 排查这个 P99 延迟飙升的原因` (diagnose the cause of this P99 latency spike)
- `/deep-search investigate why the build fails on CI but passes locally`
- `/deep-search 对比 Kafka vs Pulsar vs NATS 在低延迟场景下的选型` (compare Kafka vs Pulsar vs NATS for low-latency workloads)
- `/deep-search trace down the memory leak in the worker pool`

Natural language activation:

- "帮我深度排查一下 [issue]" (help me deeply investigate [issue])
- "Dig into why [problem] is happening"
- "调研一下 [technology] 的方案对比" (research and compare options for [technology])
- "Investigate the root cause of [symptom]"

## Research Board (Working Memory)

Before ANY search begins, you MUST initialize and maintain a Research Board — a structured working memory that prevents multi-round divergence and hallucination.

### Three State Containers

| Container | Purpose | Update Rule |
|-----------|---------|-------------|
| Ultimate_Goal | User's original intent — the immutable anchor | Set once at initialization. NEVER modify. |
| Accumulated_Facts | Distilled, verified facts extracted from search results | Append only purified facts; strip noise, opinions, and unverified claims |
| Knowledge_Gaps | Critical information still missing to reach the goal | Shrinks each round; when empty, the investigation converges |

### Initialization

At the start of every investigation:

  1. Parse user input → extract the Ultimate_Goal as a clear, actionable statement
  2. Decompose the goal into 3-7 initial Knowledge_Gaps — the questions that must be answered
  3. Set Accumulated_Facts to empty
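As a hedged sketch, the three containers and the initialization steps above could be modeled like this in Python. The `Fact` and `ResearchBoard` shapes and the `initialize` helper are assumptions for illustration, not part of the skill:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Fact:
    statement: str
    source: str   # URL, file path, line number, or command output

@dataclass
class ResearchBoard:
    ultimate_goal: str                                    # immutable anchor: set once, never modified
    knowledge_gaps: list = field(default_factory=list)
    accumulated_facts: list = field(default_factory=list)

    def add_fact(self, statement: str, source: str) -> None:
        """Append-only: every fact must carry its source."""
        self.accumulated_facts.append(Fact(statement, source))

    def converged(self) -> bool:
        """The investigation converges when no gaps remain."""
        return not self.knowledge_gaps

def initialize(user_input: str, gaps: list) -> ResearchBoard:
    """Steps 1-3 above: extract the goal, decompose into 3-7 gaps, start with no facts."""
    assert 3 <= len(gaps) <= 7, "decompose the goal into 3-7 initial Knowledge_Gaps"
    return ResearchBoard(ultimate_goal=user_input, knowledge_gaps=list(gaps))
```

Freezing `Fact` mirrors the board discipline: facts are appended once with their source and never mutated afterwards.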

### Board Discipline

- Every fact added to Accumulated_Facts must cite its source (URL, file path, line number, or command output)
- Facts must be objective and verifiable — no speculation, no "probably", no "might be"
- Knowledge_Gaps must be re-evaluated every round — gaps may split, merge, or be marked irrelevant
- The Board is your single source of truth. Do not rely on chat history or stale context.

See references/research-board.md for formal schema and quality criteria.


## The Cognitive Loop

Each round of the investigation follows a strict four-step engine cycle. You MUST execute all four steps in order. Maximum 7 rounds before forced convergence.

### Step 1: Synthesize (提纯)

From the previous round's raw output (web pages, terminal logs, code files — potentially thousands of lines), extract ONLY the facts that fill a Knowledge Gap.

Rules:

- Strip marketing language, opinions, and tangential information
- Each extracted fact must directly address a specific Knowledge Gap
- Update Accumulated_Facts with sourced facts
- Update Knowledge_Gaps — mark filled gaps, refine remaining ones

### Step 2: Route (逻辑路由)

Select exactly ONE of the five Logical Operators to bridge from this round to the next. You MUST state your choice explicitly, with reasoning.

The five operators are:

| Operator | Symbol | When to Use |
|----------|--------|-------------|
| Drill-down | 🔍 | Go from macro to micro. You found the general area; now zoom into specifics. |
| Trace-back | 🔙 | Go from symptom to cause. You see WHAT happened; now find WHY or WHEN it started. |
| Pivot | ↔️ | The current path is blocked or exhausted. Explore a parallel solution space. |
| Cross-Verify | ⚖️ | You found external knowledge. Now validate it against internal context (codebase, config, constraints). |
| Fill-gap | 🧩 | Most gaps are filled. Target the specific remaining piece to complete the puzzle. |

You MUST output a Reasoning Block before executing the next search:

┌─ Reasoning Block ─────────────────────────────────┐
│ 🏷️ Round: [N]                                      │
│ 📋 Previous Findings: [1-2 sentence distillation]  │
│ 🔗 Logical Operator: [Operator Name + Symbol]      │
│ 💭 Reasoning: [Why this operator? What gap does     │
│    the next search target?]                        │
│ 🎯 Next Query: [Precise search query or command]   │
│ 🛠️ Tool: [Which tool to dispatch to]               │
└────────────────────────────────────────────────────┘

See references/logical-operators.md for detailed operator definitions, decision tree, and examples.
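A minimal Python sketch of this routing step: the five operators as an enum, plus a renderer that emits the six mandatory Reasoning Block fields. The renderer is illustrative (it omits the box-drawing border shown above):

```python
from enum import Enum

class Operator(Enum):
    DRILL_DOWN = "🔍 Drill-down"
    TRACE_BACK = "🔙 Trace-back"
    PIVOT = "↔️ Pivot"
    CROSS_VERIFY = "⚖️ Cross-Verify"
    FILL_GAP = "🧩 Fill-gap"

def reasoning_block(round_n: int, findings: str, op: Operator,
                    reasoning: str, next_query: str, tool: str) -> str:
    """Render the six mandatory fields of the Reasoning Block, one per line."""
    return "\n".join([
        f"🏷️ Round: {round_n}",
        f"📋 Previous Findings: {findings}",
        f"🔗 Logical Operator: {op.value}",
        f"💭 Reasoning: {reasoning}",
        f"🎯 Next Query: {next_query}",
        f"🛠️ Tool: {tool}",
    ])
```

For example, a round that moves from symptom to cause would pass `Operator.TRACE_BACK` along with the distilled findings and the next query.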

### Step 3: Dispatch (跨域寻址)

Based on the routing decision, dispatch to the appropriate tool. Choose dynamically:

| Query Nature | Primary Tool | When |
|--------------|--------------|------|
| External knowledge, docs, errors | search_web, read_url_content | Unknown errors, library docs, best practices |
| Academic papers, SOTA | search_google_scholar, search_arxiv | Research topics, algorithm comparison |
| Local codebase understanding | grep_search, view_file, view_code_item | Code flow tracing, pattern matching |
| Historical context | run_command (git log/blame) | Understanding why code was written a certain way |
| Runtime behavior | run_command (test/debug) | Reproducing bugs, checking configs |
| Browser interaction | browser_subagent | Reading complex web pages, interactive docs |
| Community knowledge | XHS/social tools | Chinese community discussions, user experiences |
Cross-surface principle: A single investigation typically spans 2-3 tool categories. The power of this Skill lies in seamlessly weaving across surfaces.
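The dispatch table above reduces to a simple lookup. In this Python sketch the tool names mirror the table, while the query-nature keys and the web-search fallback are assumptions:

```python
# Illustrative mapping from query nature to candidate tools.
DISPATCH = {
    "external_knowledge": ["search_web", "read_url_content"],
    "academic": ["search_google_scholar", "search_arxiv"],
    "local_codebase": ["grep_search", "view_file", "view_code_item"],
    "history": ["run_command"],        # git log / git blame
    "runtime": ["run_command"],        # tests, debugging
    "browser": ["browser_subagent"],
    "community": ["xhs_social_tools"],
}

def dispatch(query_nature: str) -> list:
    """Return candidate tools for a routed query; fall back to web search."""
    return DISPATCH.get(query_nature, ["search_web"])
```

A multi-round investigation would call `dispatch` with a different key each round as the routing decision moves across surfaces.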

### Step 4: Evaluate (终局评估)

After executing the dispatched search:

  1. Check Knowledge_Gaps — are ALL gaps filled?
  2. If YES → exit the loop and proceed to Artifact generation
  3. If NO → return to Step 1 with the fresh results

Forced convergence rules:

- If 7 rounds are reached → force convergence and report remaining gaps honestly
- If 2 consecutive rounds yield no new facts (information gain ≈ 0) → converge (diminishing returns)
- If a gap is provably unanswerable with the available tools → mark it "out of scope" and continue
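The evaluation step and the forced-convergence rules combine into a single exit predicate. This Python sketch is illustrative; `facts_gained` is an assumed per-round count of new facts:

```python
MAX_ROUNDS = 7  # hard cap from the forced-convergence rules

def should_converge(round_n: int, open_gaps: list, facts_gained: list) -> bool:
    """True when the loop must exit: every gap filled, the round cap hit,
    or two consecutive rounds with zero information gain."""
    if not open_gaps:
        return True                                       # all Knowledge_Gaps filled
    if round_n >= MAX_ROUNDS:
        return True                                       # forced convergence at round 7
    if len(facts_gained) >= 2 and facts_gained[-2:] == [0, 0]:
        return True                                       # diminishing returns
    return False
```

Note that the out-of-scope rule is not an exit condition: a provably unanswerable gap is removed from `open_gaps` and the loop continues.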

## Convergence & Artifact Output

When the loop exits, you MUST produce an Artifact (never dump in chat). The artifact format depends on the investigation type:

### Type A: Root Cause Analysis (for debugging / troubleshooting)

    # Root Cause Analysis: [Problem Title]

    ## Executive Summary
    [2-3 sentences: what was wrong, why, and the fix]

    ## Root Cause
    [Precise technical explanation]

    ## Evidence Chain
    | Round | Operator | Query | Key Finding |
    |-------|----------|-------|-------------|
    | 1     | 🔍 Drill-down | ... | ... |
    | 2     | ⚖️ Cross-Verify | ... | ... |
    | ...   | ...      | ...   | ... |

    ## Recommended Fix
    [Actionable steps or code diff]

    ## Remaining Risks
    [Any gaps that could not be filled]

### Type B: Research Report (for tech evaluation / architecture research)

    # Research Report: [Topic]

    ## Executive Summary
    [Main conclusions]

    ## Findings by Dimension
    ### [Dimension 1]
    ...
    ### [Dimension 2]
    ...

    ## Evidence Chain
    [Same table format as Type A]

    ## Recommendation
    [Actionable recommendation with rationale]

    ## Sources
    [All references with URLs]

### Type C: Comparison Matrix (for technology selection / alternatives)

    # Comparison: [Options]

    ## Decision Matrix
    | Criterion | Option A | Option B | Option C |
    |-----------|----------|----------|----------|
    | ...       | ...      | ...      | ...      |

    ## Evidence Chain
    [Same table format as Type A]

    ## Recommendation
    [Which option and why]

## Anti-Patterns (What NOT to Do)

| ❌ Anti-Pattern | ✅ Correct Approach |
|----------------|---------------------|
| Dump all search results into context at once | Synthesize → extract only gap-filling facts |
| Use the same query twice | Each round MUST have a logically derived new query |
| Skip the Reasoning Block | ALWAYS output the block before dispatching |
| Search without a target gap | Every search must target a specific Knowledge Gap |
| Ignore contradictory evidence | Contradictions are high-signal — investigate them with Cross-Verify |
| Conclude without an evidence chain | The chain IS the proof. No chain = no conclusion. |
| Run more than 7 rounds | Force convergence; report honestly what remains unknown |

## Interaction with the User

### Mid-Investigation Steering

If the user provides feedback during the investigation (e.g., "方向不对,去查数据库" or "Skip networking, focus on the ORM layer"):

  1. Accept the steering as a logical intervention
  2. Treat it as a forced Pivot or Drill-down operator
  3. Incorporate into the Reasoning Block for the next round
  4. Do NOT restart — continue from current Accumulated_Facts

### Asking for Clarification

If Knowledge_Gaps cannot be refined without user input (ambiguous scope, missing access, etc.):

  1. Use notify_user to ask specific, bounded questions (max 3)
  2. Resume the loop from where you paused after receiving answers

## Keywords for Automatic Detection

- **Entities:** search, investigate, research, debug, trace, analyze, compare, evaluate, diagnose, 搜索, 调研, 排查, 追查, 分析, 对比, 诊断
- **Actions:** dig into, trace down, find out, figure out, look into, root cause, 查一下, 搞清楚, 追溯, 定位
- **Qualifiers:** deep, thorough, systematic, logical, chain, multi-round, 深度, 系统性, 逻辑链, 多轮
- **Domains:** code, architecture, technology, performance, error, bug, 代码, 架构, 技术, 性能, 报错, 缺陷

Does NOT activate for:

- Simple factual questions ("What is X?") → direct answer
- Single-shot lookups ("Find the file for class Foo") → grep/search
- Writing or formatting tasks → other skills

# README.md

## logical-chain-search

Universal multi-round search using logical-chain reasoning for any AI coding agent.

## What It Does

Replaces stateless single-shot search (`Result = Search(Query)`) with a stateful Markov decision process: `Query_{n+1} = Logic(Result_n, Context, Goal)`.

Each search iteration builds on the previous round's results, connected by one of five explicit logical operators:

| Operator | Symbol | Direction |
|----------|--------|-----------|
| Drill-down | 🔍 | Macro → Micro |
| Trace-back | 🔙 | Effect → Cause |
| Pivot | ↔️ | Blocked → Alternative |
| Cross-Verify | ⚖️ | External → Internal validation |
| Fill-gap | 🧩 | Known unknowns → Targeted retrieval |

## Use Cases

- Code debugging: trace root causes through multi-layer systems
- Architecture research: evaluate technology choices with evidence chains
- Technology evaluation: compare alternatives with structured reasoning
- Root cause analysis: systematic investigation of production issues
- Competitive analysis: deep-dive into solution landscapes

## Installation

### Google Antigravity (Gemini)

    git clone https://github.com/linliu0210/logical-chain-search.git

Then copy or symlink:

    # Skill
    cp -R logical-chain-search/ /path/to/your/project/.agent/skills/logical-chain-search/

    # Workflow (optional — for the /deep-search slash command)
    cp logical-chain-search/workflow/deep-search.md /path/to/your/project/.agent/workflows/deep-search.md

### Claude Code

    cp -R logical-chain-search/ ~/.claude/skills/logical-chain-search/

### Cursor

    cp -R logical-chain-search/ .cursor/rules/logical-chain-search/

### GitHub Copilot

    cp -R logical-chain-search/ .github/skills/logical-chain-search/

## Usage

Invoke with /deep-search followed by your investigation topic:

    /deep-search 排查这个 P99 延迟飙升的原因
    /deep-search investigate why the build fails on CI but passes locally
    /deep-search 对比 Kafka vs Pulsar 在低延迟场景下的选型
    /deep-search trace down the memory leak in the worker pool

## File Structure

    logical-chain-search/
    ├── SKILL.md                          # Core engine (cognitive loop, operators, dispatch)
    ├── references/
    │   ├── logical-operators.md          # Detailed operator reference with examples
    │   └── research-board.md             # Working memory schema & quality criteria
    ├── workflow/
    │   └── deep-search.md                # Antigravity /deep-search workflow
    └── README.md                         # This file

## License

MIT

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
