Install a specific skill from a multi-skill repository:
`npx skills add YuniorGlez/gemini-elite-core --skill "prompt-pro"`
# Description
Senior Prompt Engineer & Agentic Orchestrator. Expert in Reasoning Models (o3), Tree-of-Thoughts, and Structured Thinking Protocols for 2026.
# SKILL.md
name: prompt-pro
id: prompt-pro
version: 1.1.0
description: "Senior Prompt Engineer & Agentic Orchestrator. Expert in Reasoning Models (o3), Tree-of-Thoughts, and Structured Thinking Protocols for 2026."
## Skill: Prompt Pro (v1.1.0)
### Executive Summary
The prompt-pro skill is the master of the "Linguistic Core." In 2026, prompting has evolved from simple text instructions to Architectural Orchestration. This skill focuses on optimizing for Reasoning Models (o3, Gemini 3 Pro), implementing advanced logic frameworks like Tree-of-Thoughts, and building autonomous ReAct loops that let agents act and reason in unison. We don't just "talk" to AI; we design its cognitive behavior.
### Table of Contents
- Core Prompting Philosophies
- The "Do Not" List (Anti-Patterns)
- Optimizing for Reasoning Models (o3)
- Tree-of-Thoughts (ToT) Framework
- ReAct: Autonomous Loops
- Structured Thinking Protocols
- Reference Library
### Core Prompting Philosophies
- Intent is Deterministic: If the prompt is ambiguous, the result is hallucinated. Use rigid structures.
- Objective over Instruction: Tell the model "What" to achieve, not just "How" to do it.
- Few-Shot is King: One perfect example is worth a hundred rules.
- Feedback Loops are Built-in: Design prompts that ask the model to critique its own output (see the sketch after this list).
- Token Economy: Be concise. Every extra token is latency and cost.
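To make these philosophies concrete, here is a minimal TypeScript sketch of a prompt builder that combines an explicit objective, hierarchical rules, one few-shot example, and a self-critique step. The function name, task, and wording are illustrative assumptions, not part of this skill's tooling.

```typescript
// Illustrative prompt builder: objective first, hierarchical rules, a single
// few-shot example, and a built-in self-critique instruction.
interface FewShotExample {
  input: string;
  output: string;
}

function buildPrompt(objective: string, rules: string[], example: FewShotExample): string {
  return [
    `OBJECTIVE: ${objective}`,
    "RULES (highest priority first):",
    ...rules.map((rule, i) => `${i + 1}. ${rule}`),
    "EXAMPLE:",
    `Input: ${example.input}`,
    `Output: ${example.output}`,
    "Before answering, critique your draft against the rules above and revise it once.",
  ].join("\n");
}

// Usage: a concise, objective-based prompt instead of a wall of instructions.
const prompt = buildPrompt(
  "Summarize the incident report for an executive audience.",
  ["Keep it under 120 words.", "State every assumption explicitly.", "Return plain text only."],
  {
    input: "Raw log: service X returned 500s for 12 minutes after the 14:02 deploy...",
    output: "Service X was unavailable for 12 minutes following the 14:02 deploy; rollback restored it.",
  },
);
console.log(prompt);
```

Keeping the builder this small also serves the token-economy principle: every section of the prompt has to earn its place.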
### The "Do Not" List (Anti-Patterns)
| Anti-Pattern | Why it fails in 2026 | Modern Alternative |
|---|---|---|
| Instruction Overload | Model loses track of priorities. | Use Hierarchical Rules. |
| Fixed Step-by-Step | Limits the model's reasoning power. | Use Objective-Based Prompts. |
| Ignoring Reasoning Tokens | Results in shallow, rushed answers. | Increase maxOutputTokens. |
| Implicit Assumptions | Leads to "Vibe Hallucinations." | State Assumptions Explicitly. |
| Manual Parsing | Inefficient and fragile. | Use ResponseSchema (JSON); see the sketch below. |
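As a hedged illustration of the last two rows (budgeting output tokens and structured output), the sketch below assumes the @google/genai TypeScript SDK; field names such as responseSchema and maxOutputTokens follow the Gemini API, but verify them against the SDK version and model you actually use.

```typescript
// Sketch only: structured output plus an explicit output-token budget,
// assuming the @google/genai SDK. Model id and field names should be
// checked against your SDK version.
import { GoogleGenAI, Type } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function extractActionItems(notes: string) {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-pro", // example model id; substitute your target model
    contents: `Extract the action items from these meeting notes:\n${notes}`,
    config: {
      maxOutputTokens: 4096, // leave headroom for reasoning tokens
      responseMimeType: "application/json",
      responseSchema: {
        type: Type.OBJECT,
        properties: {
          actionItems: { type: Type.ARRAY, items: { type: Type.STRING } },
        },
      },
    },
  });
  // The schema constrains the model, so no manual parsing heuristics are needed.
  return JSON.parse(response.text ?? "{}");
}
```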
### Optimizing for Reasoning Models (o3/Pro)
We leverage the model's internal "Thought Layer":
- Deep Research Triggers: Commanding exhaustive source searches.
- Verification Loops: Asking the model to find flaws in its own strategy.
- Self-Correction: Enabling autonomous backtracking if a plan fails.
See References: Reasoning Optimization for details, and the sketch below for a minimal objective-based prompt.
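For a concrete flavor, here is an objective-based prompt with a built-in verification loop. The task and wording are illustrative assumptions, not a canonical template shipped with this skill.

```typescript
// Objective-based prompt: states what to achieve plus constraints, then asks
// the model to attack its own plan before finalizing (verification loop).
const objectivePrompt = `
OBJECTIVE: Produce a zero-downtime migration plan from REST to gRPC for the billing service.
CONSTRAINTS:
- The public REST contract must remain available for 90 days.
- Every phase needs a rollback step.
DELIVERABLE: A numbered plan.

VERIFICATION: Before finalizing, list the three most likely ways this plan fails,
then revise the plan to address each one.
`.trim();
```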
### Tree-of-Thoughts (ToT) Framework
- Parallel Generation: Proposing 3+ independent strategies.
- Elimination Strategy: Removing the weakest branch via logic.
- Final Synthesis: Merging the best elements of all branches (see the sketch after this list).
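The sketch below shows one way to wire these three phases together in TypeScript. The `llm` callback is a hypothetical completion function standing in for whatever model client you use; nothing here is prescribed by the skill itself.

```typescript
// Tree-of-Thoughts sketch: generate parallel branches, eliminate the weakest,
// then synthesize the rest. `llm` is a placeholder for your model client.
type LLM = (prompt: string) => Promise<string>;

async function treeOfThoughts(llm: LLM, problem: string, branches = 3): Promise<string> {
  // 1. Parallel generation: propose independent strategies.
  const candidates = await Promise.all(
    Array.from({ length: branches }, (_, i) =>
      llm(`Strategy ${i + 1}: propose one independent approach to:\n${problem}`),
    ),
  );

  // 2. Elimination: discard the weakest branch with a stated reason.
  const surviving = await llm(
    [
      `Here are ${branches} candidate strategies:`,
      candidates.join("\n---\n"),
      "Eliminate the weakest one, explain why in one sentence,",
      "and return the remaining strategies verbatim.",
    ].join("\n"),
  );

  // 3. Synthesis: merge the best elements of the surviving branches.
  return llm(`Merge the strongest elements of these strategies into one final plan:\n${surviving}`);
}
```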
### Reference Library
Detailed deep-dives into Prompt Engineering Excellence:
- Reasoning Models (o3): Objective-based prompting.
- Tree-of-Thoughts: Exploring multiple logic branches.
- ReAct Patterns: Reasoning and acting in unison.
- Thinking Protocols: Designing cognitive behavior.
Updated: January 22, 2026 - 21:00
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.