Fine-tune LLMs using reinforcement learning with TRL - SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when need RLHF,...
Integrating local and cloud LLMs into Unreal Engine games for AI NPCs and intelligent behaviorsUse when "unreal llm, ue5 ai npc, unreal chatgpt, blueprint llm, unreal engine ai, ue5 dialogue,...
Build Retrieval-Augmented Generation (RAG) applications that combine LLM capabilities with external knowledge sources. Covers vector databases, embeddings, retrieval strategies, and response...
Integrating local LLMs into Godot games using NobodyWho and other Godot-native solutionsUse when "godot llm, nobodywho, godot ai npc, gdscript llm, godot local llm, godot chatgpt, godot 4 ai,...
Comprehensive security auditing framework for LLM applications covering OWASP Top 10 for LLMs, threat modeling, penetration testing, and compliance with NIST AI RMF and ISO 42001Use when "security...
Security vulnerabilities in AI-generated code and LLM applications, covering OWASP Top 10 for LLMs, secure coding patterns, and AI-specific threat modelsUse when "ai code security, llm...
Integrating local and cloud LLMs into Unity games for AI NPCs, dialogue, and intelligent behaviorsUse when "unity llm, llmunity, unity ai npc, unity local llm, unity sentis llm, unity chatgpt,...
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models...
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models...
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models...
Comprehensive patterns for building AI-powered code generation tools, code assistants, automated refactoring, code review, and structured output generation using LLMs with function calling and...
Use this skill when building AI features, integrating LLMs, implementing RAG, working with embeddings, deploying ML models, or doing data science. Activates on mentions of OpenAI, Anthropic,...
World-class ML engineering skill for productionizing ML models, MLOps, and building scalable ML systems. Expertise in PyTorch, TensorFlow, model deployment, feature stores, model monitoring, and...
World-class ML engineering skill for productionizing ML models, MLOps, and building scalable ML systems. Expertise in PyTorch, TensorFlow, model deployment, feature stores, model monitoring, and...
Write and optimize prompts for AI-generated outcomes across text and image models. Use when crafting prompts for LLMs (Claude, GPT, Gemini), image generators (Midjourney, DALL-E, Stable Diffusion,...
Prompt engineering and optimization for AI/LLMs. Capabilities: transform unclear prompts, reduce token usage, improve structure, add constraints, optimize for specific models, backward-compatible...
some skills that enable LLMs like Claude, Gemini to perform agentic tasks
Master fine-tuning of large language models for specific domains and tasks. Covers data preparation, training techniques, optimization strategies, and evaluation methods. Use when adapting models...
Use when creating skills for Rust crates or std library documentation. Keywords: create rust skill, create crate skill, create std skill, εε»Ί rust skill, εε»Ί crate skill, εε»Ί std skill, ε¨ζ rust...
|