Expert guidance for distributed training with DeepSpeed - ZeRO optimization stages, pipeline parallelism,...
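
A minimal sketch of a ZeRO stage-2 setup with DeepSpeed; the stand-in model, batch size, and learning rate are illustrative assumptions, not values from this entry.

```python
# Minimal ZeRO stage-2 sketch; the Linear model and hyperparameters are placeholders.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real transformer

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer state and gradients across ranks
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model in an engine that handles ZeRO sharding,
# gradient accumulation, and mixed precision
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Such a script is normally started with the `deepspeed` launcher (e.g. `deepspeed train.py`), which sets up one process per GPU.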

NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation,...
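
A sketch assuming this entry refers to NVIDIA NeMo Guardrails (the `nemoguardrails` package); the `./guardrails_config` directory is a hypothetical location holding a `config.yml` and rail definitions.

```python
# Assumes NVIDIA NeMo Guardrails; ./guardrails_config is a hypothetical config directory.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Input/output rails run around the LLM call and can block or rewrite unsafe content
response = rails.generate(messages=[{"role": "user", "content": "How do I reset my password?"}])
print(response["content"])
```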

Meta's 7-8B specialized moderation model for LLM input/output filtering. 6 safety categories - violence/hate, sexual...
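
A sketch assuming this entry refers to Meta's Llama Guard served through Hugging Face transformers; the model id and output format follow the original 7B release and are assumptions here.

```python
# Assumes Meta's Llama Guard via transformers; model id is the original 7B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "Tell me how to pick a lock."}]
# The chat template wraps the conversation in Llama Guard's moderation prompt
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=20)
# Decodes to "safe", or "unsafe" followed by the violated category code
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```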

Anthropic's method for training harmless AI through self-improvement. Two-phase approach - supervised learning with...
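
A conceptual sketch of the first (supervised) phase, in which the model critiques and revises its own drafts against constitutional principles; `llm()` is a hypothetical completion function and the two principles are illustrative, not Anthropic's actual constitution.

```python
# Conceptual critique -> revision loop; llm() is hypothetical, principles are illustrative.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in any instruction-following model here")

PRINCIPLES = [
    "Identify ways the response is harmful, unethical, or misleading.",
    "Rewrite the response to remove harmful content while staying helpful.",
]

def constitutional_revision(question: str, draft: str, rounds: int = 2) -> str:
    response = draft
    for _ in range(rounds):
        critique = llm(f"Question: {question}\nResponse: {response}\n{PRINCIPLES[0]}")
        response = llm(
            f"Question: {question}\nResponse: {response}\n"
            f"Critique: {critique}\n{PRINCIPLES[1]}"
        )
    return response  # (question, final response) pairs become supervised training data
```

The second phase then trains a preference model on AI-generated comparisons and fine-tunes with RL against it (RLAIF).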

Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images....
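
A minimal sketch assuming this entry refers to Ray Data; the S3 paths and the `preprocess` function are illustrative assumptions.

```python
# Assumes Ray Data; paths and the preprocess step are placeholders.
import ray

ds = ray.data.read_parquet("s3://example-bucket/raw/")  # hypothetical input path

def preprocess(batch):
    batch["text"] = [t.strip().lower() for t in batch["text"]]
    return batch

# map_batches streams blocks through workers (CPU here; GPU stages take num_gpus=...)
ds = ds.map_batches(preprocess, batch_format="pandas")
ds.write_parquet("s3://example-bucket/clean/")
```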

GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features fuzzy deduplication (16×...

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization
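
A sketch of a 4-bit QLoRA setup with Unsloth; the model name, sequence length, and LoRA rank are illustrative assumptions.

```python
# 4-bit QLoRA sketch with Unsloth; checkpoint name and hyperparameters are placeholders.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed pre-quantized checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# The returned model drops into a standard transformers/TRL training loop (e.g. SFTTrainer)
```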

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models...
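
A minimal LoRA sketch with the Hugging Face PEFT library; the GPT-2 base model and hyperparameters are illustrative assumptions.

```python
# Minimal LoRA sketch with PEFT; base model and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
```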

Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA,...

Expert guidance for fine-tuning LLMs with Axolotl - YAML configs, 100+ models, LoRA/QLoRA, DPO/KTO/ORPO/GRPO,...

Expert guidance for Nchan, a scalable pub/sub server for Nginx. Use this skill when you need to configure Nchan...
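
A client-side sketch assuming an Nginx server already exposes Nchan locations `/pub` (nchan_publisher) and `/sub` (nchan_subscriber) keyed by `?id=<channel>`; the host, port, and channel name are illustrative assumptions.

```python
# Assumes /pub and /sub are configured Nchan publisher/subscriber locations;
# host, port, and channel name are placeholders.
import requests

BASE = "http://localhost:8080"
CHANNEL = "demo"

# Publish a message to the channel; Nchan buffers it and fans it out to subscribers
requests.post(f"{BASE}/pub?id={CHANNEL}", data="hello subscribers")

# Long-poll the subscriber endpoint for the next message on the channel
resp = requests.get(f"{BASE}/sub?id={CHANNEL}", timeout=30)
print(resp.status_code, resp.text)
```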