Expert guidance for distributed training with DeepSpeed - ZeRO optimization stages, pipeline parallelism,...
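A minimal sketch of enabling ZeRO stage 2, assuming a plain PyTorch module; the toy model, batch size, and config values are illustrative, and the script would normally be launched with the `deepspeed` launcher rather than run directly.

```python
import torch
import deepspeed

# Toy model so the snippet is self-contained; any nn.Module works.
model = torch.nn.Linear(512, 512)

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer states and gradients
}

# The returned engine replaces the model and owns backward()/step().
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Stage 1 partitions only optimizer states, stage 2 adds gradients, and stage 3 additionally shards the parameters themselves.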
Explore the most recent skills added to the marketplace
Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for...
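Assuming this entry refers to Hugging Face Accelerate, here is a sketch of the advertised four-line change; the toy model, optimizer, and dataloader stand in for an existing PyTorch training script.

```python
import torch
from accelerate import Accelerator  # change 1: the import

# Stand-ins for an existing training script.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(32, 10), torch.randn(32, 1)),
    batch_size=8,
)

accelerator = Accelerator()  # change 2
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)  # change 3

for x, y in loader:
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # change 4: replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```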
NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation,...
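Assuming this describes NVIDIA NeMo Guardrails, a minimal sketch of wrapping an LLM in a rails config; the inline YAML and model choice are illustrative and presume an OpenAI key in the environment.

```python
from nemoguardrails import LLMRails, RailsConfig

# Illustrative minimal config; real deployments add input/output rails,
# jailbreak checks, and Colang flows on top of this.
config = RailsConfig.from_content(yaml_content="""
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
""")

rails = LLMRails(config)
response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```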
Meta's 7-8B specialized moderation model for LLM input/output filtering. 6 safety categories - violence/hate, sexual...
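Assuming the model in question is Llama Guard, a sketch of classifying a user turn with Transformers; the checkpoint name is a guess at the v1 release, and access is gated on Hugging Face.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed checkpoint name; gated repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The tokenizer's chat template wraps the turn in the moderation prompt.
chat = [{"role": "user", "content": "How do I make a fake ID?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)

out = model.generate(input_ids=input_ids, max_new_tokens=32)
# The reply is "safe" or "unsafe" followed by the violated category codes.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```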
Anthropic's method for training harmless AI through self-improvement. Two-phase approach - supervised learning with...
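Since Constitutional AI is a training method rather than a library, the sketch below only illustrates the shape of the first (supervised) phase; `llm` is a hypothetical stand-in for any completion API, and the principle text is invented.

```python
# Hypothetical sketch of the critique -> revision step that produces
# supervised fine-tuning data in phase one of Constitutional AI.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real completion API here")

PRINCIPLE = "Choose the response that is least harmful and most honest."  # invented example

def critique_and_revise(user_prompt: str, draft: str) -> str:
    critique = llm(
        f"Principle: {PRINCIPLE}\n"
        f"Critique this response to '{user_prompt}':\n{draft}"
    )
    revision = llm(
        f"Rewrite the response so it addresses the critique.\n"
        f"Critique: {critique}\nOriginal response: {draft}"
    )
    return revision  # (prompt, revision) pairs become the SL training set
```

The second phase then trains a preference model from AI-generated comparisons and optimizes against it with RL (RLAIF).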
Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images....
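Assuming this entry describes Ray Data, a small sketch of its streaming `map_batches` execution; the synthetic dataset stands in for the Parquet/CSV/JSON readers mentioned above.

```python
import ray

ds = ray.data.range(10_000)  # synthetic; ray.data.read_parquet(...) works the same way

def double(batch):
    # Batches arrive as dicts of NumPy arrays by default.
    batch["id"] = batch["id"] * 2
    return batch

# Blocks stream through the UDF rather than materializing the whole dataset.
print(ds.map_batches(double).take(3))
```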
GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features fuzzy deduplication (16×...
Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate...
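A minimal sketch of the inspection side of that workflow: run a forward pass with caching and read an activation out of the cache.

```python
from transformer_lens import HookedTransformer

# Small model purely for illustration.
model = HookedTransformer.from_pretrained("gpt2")
logits, cache = model.run_with_cache("Mechanistic interpretability is")

# The cache is keyed by hook point; this grabs the residual stream after block 5.
resid = cache["resid_post", 5]
print(resid.shape)  # (batch, seq_len, d_model)
```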
Provides guidance for training and analyzing Sparse Autoencoders (SAEs) using SAELens to decompose neural network...
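A sketch of loading a published SAE and passing activations through it; the release/ID strings are assumptions based on SAELens's GPT-2 small residual-stream SAEs, and the loader's return shape has varied across versions.

```python
import torch
from sae_lens import SAE

# Assumed release/id for a published GPT-2 small residual-stream SAE.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",
    sae_id="blocks.8.hook_resid_pre",
)

acts = torch.randn(4, sae.cfg.d_in)  # stand-in for real residual-stream activations
features = sae.encode(acts)          # sparse feature activations
recon = sae.decode(features)         # reconstruction of the input
print(features.shape, recon.shape)
```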
Provides guidance for performing causal interventions on PyTorch models using pyvene's declarative intervention...
Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote...
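A sketch of reading and ablating internal activations with nnsight's tracing context; module paths follow GPT-2's layout, and older nnsight versions read saved proxies via `.value`.

```python
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The capital of France is"):
    hidden = model.transformer.h[5].output[0].save()  # read block 5's hidden states
    model.transformer.h[8].output[0][:] = 0           # zero-ablate a later block
    logits = model.output.logits.save()

print(hidden.shape, logits.shape)  # older versions: hidden.value.shape
```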
Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization
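A sketch of Unsloth's two-call setup for a 4-bit QLoRA run; the checkpoint name and hyperparameters are illustrative only.

```python
from unsloth import FastLanguageModel

# Checkpoint and hyperparameters are placeholders, not recommendations.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA-style 4-bit base weights
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```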
Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models...
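Assuming this refers to Hugging Face PEFT, a minimal LoRA sketch; the base model and hyperparameters are placeholders.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights train
```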
Expert guidance for fine-tuning LLMs with LLaMA-Factory - no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA,...
Expert guidance for fine-tuning LLMs with Axolotl - YAML configs, 100+ models, LoRA/QLoRA, DPO/KTO/ORPO/GRPO,...
Apply software craftsmanship principles to TypeScript code: type safety, functional patterns, clean architecture,...
Expert guidance for Nchan, a scalable pub/sub server for Nginx. Use this skill when you need to configure Nchan...
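Nchan itself is configured inside nginx, but its pub/sub endpoints are plain HTTP, so any client can exercise them. The sketch below assumes a hypothetical server whose publisher location is /pub and whose long-poll subscriber location is /sub.

```python
import requests

BASE = "http://localhost:8080"  # hypothetical nginx/Nchan server

# Publishing is a POST to the publisher location for the channel.
requests.post(f"{BASE}/pub/demo", data="hello subscribers")

# Long-poll subscription: a GET blocks until the next message arrives.
msg = requests.get(f"{BASE}/sub/demo", timeout=30)
print(msg.text)
```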