Capture conversations and decisions into structured Notion pages; use when turning chats/notes into wiki entries,...
Browse top skills sorted by GitHub stars
Prepare meeting materials with Notion context and Codex research; use when gathering context, drafting...
Research across Notion and synthesize into structured documentation; use when gathering info from multiple Notion...
Turn Notion specs into implementation plans, tasks, and progress tracking; use when implementing PRDs/feature specs...
Run the Codex Readiness integration test. Use when you need an end-to-end agentic loop with build/test scoring.
Run the Codex Readiness unit test report. Use when you need deterministic checks plus in-session LLM evals for...
Create a concise plan. Use when a user explicitly asks for a plan related to a coding task.
Manage issues, projects & team workflows in Linear. Use when the user wants to read, create, or update tickets in Linear.
Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an...
Install Codex skills into $CODEX_HOME/skills from a curated list or a GitHub repo path. Use when a user asks to list...
Write publication-ready ML/AI papers for NeurIPS, ICML, ICLR, ACL, AAAI, COLM. Use when drafting papers from...
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen,...
State-space model with O(n) complexity vs Transformers' O(n²). 5× faster inference, million-token sequences, no KV...
Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for...
RNN+Transformer hybrid with O(n) inference. Linear time, infinite context, no KV cache. Train like GPT (parallel),...
Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use...
Fast tokenizers optimized for research and production. Rust-based implementation tokenizes 1GB in <20 seconds....
Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k...
Expert guidance for GRPO/RL fine-tuning with TRL for reasoning and task-specific model training.
Provides guidance for enterprise-grade RL training using miles, a production-ready fork of slime. Use when training...
High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models...
Simple Preference Optimization for LLM alignment. Reference-free alternative to DPO with better performance (+6.4...
Provides guidance for LLM post-training with RL using slime, a Megatron+SGLang framework. Use when training GLM...
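For the Simple Preference Optimization (SimPO) entry above: a minimal sketch of the reference-free preference loss that method is built on, assuming you already have the summed log-probabilities and token counts of the chosen and rejected responses under the policy model. The function name and the beta/gamma defaults are illustrative, not the skill's own configuration.

    import torch
    import torch.nn.functional as F

    def simpo_loss(chosen_logp_sum: torch.Tensor,
                   rejected_logp_sum: torch.Tensor,
                   chosen_len: torch.Tensor,
                   rejected_len: torch.Tensor,
                   beta: float = 2.0,
                   gamma: float = 1.0) -> torch.Tensor:
        # Length-normalized implicit rewards; no reference model is needed,
        # which is what makes SimPO "reference-free" relative to DPO.
        chosen_reward = beta * chosen_logp_sum / chosen_len
        rejected_reward = beta * rejected_logp_sum / rejected_len
        # Bradley-Terry style objective with a target reward margin gamma.
        return -F.logsigmoid(chosen_reward - rejected_reward - gamma).mean()

In practice the summed log-probabilities come from scoring each response with the policy being trained; beta around 2.0-2.5 and gamma around 0.5-1.6 are typical ranges reported for SimPO, but the values here are placeholders.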