# prompt-pro

by YuniorGlez

# Install this skill:
npx skills add YuniorGlez/gemini-elite-core --skill "prompt-pro"

This installs a single skill from a multi-skill repository.

# Description

Senior Prompt Engineer & Agentic Orchestrator. Expert in Reasoning Models (o3), Tree-of-Thoughts, and Structured Thinking Protocols for 2026.

# SKILL.md


name: prompt-pro
id: prompt-pro
version: 1.1.0
description: "Senior Prompt Engineer & Agentic Orchestrator. Expert in Reasoning Models (o3), Tree-of-Thoughts, and Structured Thinking Protocols for 2026."


## 🪄 Skill: Prompt Pro (v1.1.0)

### Executive Summary

The prompt-pro skill is the master of the "Linguistic Core." In 2026, prompting has evolved from simple text instructions to Architectural Orchestration. This skill focuses on optimizing for Reasoning Models (o3, Gemini 3 Pro), implementing advanced logic frameworks like Tree-of-Thoughts, and building autonomous ReAct loops that allow agents to act and reason in unison. We don't just "talk" to AI; we design its cognitive behavior.


### 📋 Table of Contents

  1. Core Prompting Philosophies
  2. The "Do Not" List (Anti-Patterns)
  3. Optimizing for Reasoning Models (o3)
  4. Tree-of-Thoughts (ToT) Framework
  5. ReAct: Autonomous Loops
  6. Structured Thinking Protocols
  7. Reference Library

๐Ÿ›๏ธ Core Prompting Philosophies

  1. Intent is Deterministic: If the prompt is ambiguous, the result will be hallucinated. Use rigid structures.
  2. Objective over Instruction: Tell the model "What" to achieve, not just "How" to do it.
  3. Few-Shot is King: One perfect example is worth a hundred rules.
  4. Feedback Loops are Built-in: Design prompts that ask the model to critique its own output (see the sketch after this list).
  5. Token Economy: Be concise. Every extra token is latency and cost.
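
To make philosophies 2 and 4 concrete, here is a minimal sketch of an objective-based prompt with a built-in critique pass. It is illustrative only: `call_model` is a hypothetical stand-in for whatever SDK you use, and the objective, constraints, and wording are invented for the example.

```python
# Minimal sketch: objective-based prompt with a built-in feedback loop.
# `call_model` is a hypothetical helper standing in for a real LLM SDK call.

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; echoes a placeholder here."""
    return f"[model output for a prompt of {len(prompt)} characters]"

OBJECTIVE_PROMPT = """\
OBJECTIVE: Produce a one-paragraph release note for feature X.
CONSTRAINTS:
- Audience: non-technical users.
- Maximum 80 words.
SELF-CHECK (before answering):
1. Does the draft satisfy every constraint?
2. If not, revise once, then output only the final version.
"""

draft = call_model(OBJECTIVE_PROMPT)

# Optional second pass: ask the model to critique its own output explicitly.
improved = call_model(
    "Critique the following text against the constraints above, "
    f"then return an improved version:\n\n{draft}"
)
```

The critique instruction lives inside the prompt design itself rather than in ad-hoc follow-up messages, which is the sense in which the feedback loop is "built-in."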

### 🚫 The "Do Not" List (Anti-Patterns)

| Anti-Pattern | Why it fails in 2026 | Modern Alternative |
| --- | --- | --- |
| Instruction Overload | Model loses track of priorities. | Use Hierarchical Rules. |
| Fixed Step-by-Step | Limits the model's reasoning power. | Use Objective-Based Prompts. |
| Ignoring Reasoning Tokens | Results in shallow, rushed answers. | Increase `maxOutputTokens`. |
| Implicit Assumptions | Leads to "Vibe Hallucinations." | State Assumptions Explicitly. |
| Manual Parsing | Inefficient and fragile. | Use `ResponseSchema` (JSON); see the sketch below. |
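
As a sketch of the last row (structured output instead of manual parsing), the snippet below assumes the google-genai Python SDK, where the REST field `responseSchema` is exposed as `response_schema`. The model name, schema fields, and prompt are illustrative assumptions, and parameter names may differ in other SDKs or versions.

```python
# Sketch: structured output via a response schema instead of manual parsing.
# Assumes the google-genai Python SDK and an API key in the environment.
from pydantic import BaseModel
from google import genai
from google.genai import types


class AntiPattern(BaseModel):
    name: str
    why_it_fails: str
    alternative: str


client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model name; substitute your own
    contents="List three prompt-engineering anti-patterns.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=list[AntiPattern],
    ),
)

# The reply is schema-constrained JSON, so it can be consumed directly
# instead of being scraped out of free-form prose.
anti_patterns = response.parsed  # or: json.loads(response.text)
```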

### 🧠 Optimizing for Reasoning Models (o3/Pro)

We leverage the model's internal "Thought Layer":
- Deep Research Triggers: Commanding exhaustive source searches.
- Verification Loops: Asking the model to find flaws in its own strategy.
- Self-Correction: Enabling autonomous backtracking if a plan fails.

See References: Reasoning Optimization for details. A minimal verification-loop sketch follows below.
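
The sketch below shows a verification loop with self-correction. `call_model` is a hypothetical stand-in for any reasoning-model SDK call, and the structure (plan, adversarial self-review, conditional revision) is only one way to realize the pattern.

```python
# Sketch: verification loop with self-correction.
# `call_model` is a hypothetical stand-in for a reasoning-model SDK call.

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; echoes a placeholder here."""
    return f"[model output for a prompt of {len(prompt)} characters]"

task = "Design a migration plan from REST polling to webhooks."

# Pass 1: let the model's internal thought layer produce a plan.
plan = call_model(f"TASK: {task}\nProduce a step-by-step plan.")

# Pass 2: verification loop - ask the model to attack its own strategy.
review = call_model(
    "Find flaws, missing edge cases, or unstated assumptions in this plan. "
    "If there is a blocking flaw, start your answer with REVISE; otherwise answer OK.\n\n"
    + plan
)

# Self-correction: backtrack only when the critique demands it.
if review.strip().upper().startswith("REVISE"):
    plan = call_model(
        f"Revise the plan to address this critique:\n{review}\n\nOriginal plan:\n{plan}"
    )
```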


### 🌳 Tree-of-Thoughts (ToT) Framework

  • Parallel Generation: Proposing 3+ independent strategies.
  • Elimination Strategy: Removing the weakest branch via logic.
  • Final Synthesis: Merging the best elements of all branches (see the sketch after this list).
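
The loop below sketches that three-stage flow under stated assumptions: `call_model` is a hypothetical SDK stand-in, exactly three branches are generated, the weakest is eliminated by a scoring pass, and the survivors are merged.

```python
# Sketch: a minimal Tree-of-Thoughts (ToT) orchestration loop.
# `call_model` is a hypothetical stand-in for an LLM SDK; prompts are illustrative.

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; echoes a placeholder here."""
    return f"[model output for a prompt of {len(prompt)} characters]"

problem = "Reduce p95 latency of the search endpoint by 40%."

# 1. Parallel Generation: propose three independent strategies.
branches = [
    call_model(f"PROBLEM: {problem}\nPropose strategy #{i + 1}. Do not repeat earlier ideas.")
    for i in range(3)
]

# 2. Elimination Strategy: score each branch and drop the weakest.
def score(branch: str) -> int:
    verdict = call_model(
        "Rate this strategy from 1 to 10 for feasibility and impact. "
        f"Reply with the number only.\n\n{branch}"
    )
    digits = "".join(ch for ch in verdict if ch.isdigit())
    return int(digits) if digits else 0

survivors = sorted(branches, key=score, reverse=True)[:2]

# 3. Final Synthesis: merge the best elements of the surviving branches.
final_plan = call_model(
    "Merge the strongest elements of these strategies into one coherent plan:\n\n"
    + "\n\n---\n\n".join(survivors)
)
```

In practice the scoring pass can be a separate evaluator prompt or a cheaper model, and branches can be expanded recursively rather than in the single level shown here.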

### 📖 Reference Library

Detailed deep-dives into Prompt Engineering Excellence:


Updated: January 22, 2026 - 21:00

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.