Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot. Use when "context window, token limit, context management, context engineering, long...
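A minimal sketch of the trimming strategy named above, assuming a plain chat-message list and a rough 4-characters-per-token estimate; the `estimate_tokens` and `trim_messages` helpers are illustrative assumptions, not part of any listed skill:

```python
# Hedged sketch: drop the oldest non-system messages until the estimated
# token count fits a fixed budget. The 4-chars-per-token heuristic and the
# message format are illustrative assumptions.

def estimate_tokens(text: str) -> int:
    """Rough token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

def trim_messages(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the most recent messages within budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for message in reversed(rest):          # newest first
        cost = estimate_tokens(message["content"])
        if used + cost > budget:
            break
        kept.append(message)
        used += cost
    return system + list(reversed(kept))    # restore chronological order

if __name__ == "__main__":
    history = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "First question " * 50},
        {"role": "assistant", "content": "First answer " * 50},
        {"role": "user", "content": "Latest question?"},
    ]
    print(trim_messages(history, budget=200))
```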
Use when reducing model size, improving inference speed, or deploying to edge devices. Covers quantization, pruning, knowledge distillation, ONNX export, and TensorRT optimization.
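As a small sketch of one of the techniques named above, post-training dynamic quantization of a PyTorch model's Linear layers to int8; the toy model is an assumption made for illustration, and the skill itself also covers pruning, distillation, ONNX export, and TensorRT:

```python
# Hedged sketch: dynamic int8 quantization of Linear layers in a toy model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Quantize only the Linear modules; weights become int8, activations stay float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 10])
```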
Battle-hardened NFT developer specializing in ERC-721/1155 implementations, gas-optimized minting, reveal mechanics, and marketplace integration. Has launched 50+ collections from stealth 1/1s to...
Implement comprehensive observability for LLM applications including tracing (Langfuse/Helicone), cost tracking, token optimization, RAG evaluation metrics (RAGAS), hallucination detection, and...
This skill should be used when the user asks to "integrate DSPy with Haystack", "optimize Haystack prompts using DSPy", "use DSPy to improve Haystack pipeline", mentions "Haystack pipeline...
Optimizing token usage and success rate of existing prompts.
Account assignment by revenue potential, geography, and relationship. Workload balancing, TAM/SAM calculation, and coverage models.
Enterprise context and session management with token budget optimization and state persistence
Optimizing token usage and success rate of existing prompts.
Condense messages to 160 characters without losing meaning. Remove unnecessary words while keeping tone.
Professional narrative style with line breaks, hashtag strategy, and hooks in first 2 lines to avoid truncation
Analyze matchups, injuries, weather, Vegas lines. Recommend sit/start decisions with confidence levels for NFL, NBA, MLB, NHL, soccer.
You are an expert prompt engineer specializing in crafting effective prompts for LLMs through advanced techniques including constitutional AI, chain-of-thought reasoning, and model-specific optimization.
Master of Semantic Code Intelligence and Token Optimization, specialized in Context Engineering and Automated Context Packing (ACP).
Expert in load balancing and dynamic task allocation for multi-agent systems. Specializes in optimal routing based on agent capability, availability, and cost (Token Economics).
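A minimal sketch of the cost-aware routing idea described in the last entry, picking the cheapest available agent that can handle a task; the agent names, capability tags, and per-token prices are made-up examples, not data from any listed system:

```python
# Hedged sketch: route a task to the cheapest available, capable agent.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capabilities: set[str]
    cost_per_1k_tokens: float
    busy: bool = False

def route(task_capability: str, agents: list[Agent]) -> Agent | None:
    """Return the cheapest idle agent that advertises the needed capability."""
    candidates = [
        a for a in agents
        if task_capability in a.capabilities and not a.busy
    ]
    return min(candidates, key=lambda a: a.cost_per_1k_tokens, default=None)

agents = [
    Agent("large-model", {"code", "analysis"}, cost_per_1k_tokens=0.03),
    Agent("small-model", {"summarize"}, cost_per_1k_tokens=0.001),
    Agent("mid-model", {"code", "summarize"}, cost_per_1k_tokens=0.006),
]
print(route("code", agents).name)  # mid-model: cheapest agent that can code
```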