Expert in using ALL search & memory MCPs together. Combines Memory, Context7, and NotebookLM for persistent knowledge. Use for storing learnings, cross-session context, and intelligent recall across...
Advanced memory operations reference. Basic patterns (profile loading, simple recall/remember) are in project instructions. Consult this skill for background writes, memory versioning, complex...
End-of-session reflection. Extracts memories, suggests updates to about-taylor.md and CLAUDE.md. Run before ending a long session or when context is getting full. Triggers on "debrief", "extract...
Guides Android performance optimization. Use when an app is slow, has memory leaks, ANR issues, battery drain, or janky UI. Covers profiling, memory management, rendering optimizations, and best practices.
Optimize AgentDB performance with quantization (4-32x memory reduction), HNSW indexing (150x faster search), caching, and batch operations. Use when optimizing memory usage, improving search...
Guide for Direct Memory Access (DMA) attack techniques using FPGA hardware. Use this skill when researching PCIe DMA attacks, pcileech, FPGA firmware development, or hardware-based memory access...
CASS Memory System - procedural memory for AI coding agents. Three-layer cognitive architecture with confidence decay, anti-pattern learning, cross-agent knowledge transfer, trauma guard safety...
AWS Bedrock AgentCore comprehensive expert for deploying and managing all AgentCore services. Use when working with Gateway, Runtime, Memory, Identity, or any AgentCore component. Covers MCP...
Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, need to fit larger models, or want faster inference. Supports INT8, NF4,...
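For context on what this entry covers, a minimal sketch of a 4-bit NF4 load via Hugging Face transformers + bitsandbytes, assuming a CUDA GPU; the model id and prompt are illustrative, not part of the skill itself.

```python
# Minimal 4-bit (NF4) quantized load with transformers + bitsandbytes.
# Assumes a CUDA GPU; the model id below is an illustrative choice only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B"  # hypothetical; any causal LM works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for accuracy
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # place layers on available GPUs
)

inputs = tokenizer("Quantization trades a little accuracy for", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

For 8-bit INT8 instead, pass load_in_8bit=True to BitsAndBytesConfig; the rest of the loading code stays the same.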
Connect your Clawdbot/Moltbot/Openclaw to your Konteks account (konteks.app) for persistent memory, task management, and context sharing. Use when you need to store agent memories, create or read...
Distributed computing for larger-than-RAM pandas/NumPy workflows. Use when you need to scale existing pandas/NumPy code beyond memory or across clusters. Best for parallel file processing,...
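The entry does not name the library, but the description matches a Dask-style API; a hedged sketch assuming dask.dataframe and a hypothetical partitioned Parquet dataset:

```python
# Larger-than-RAM groupby with dask.dataframe; the library choice and the
# "events/*.parquet" path are assumptions for illustration.
import dask.dataframe as dd

# Lazily reads many Parquet files as partitions instead of one in-memory frame.
df = dd.read_parquet("events/*.parquet")

# Same pandas-like API; nothing runs until .compute() is called.
daily_mean = (
    df[df["status"] == "ok"]
    .groupby("day")["latency_ms"]
    .mean()
)

print(daily_mean.compute())  # triggers parallel, out-of-core execution
```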
Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training/running transformers with long sequences (>512 tokens), encountering GPU memory...
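As a rough illustration of the technique, a minimal sketch using PyTorch's fused F.scaled_dot_product_attention, which can dispatch to a FlashAttention kernel on supported CUDA GPUs; shapes and dtypes are assumptions, not the skill's own code.

```python
# Fused attention via torch.nn.functional.scaled_dot_product_attention.
# Shapes are illustrative; on CUDA, cast q/k/v to float16 or bfloat16 to hit
# the FlashAttention kernel.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 2, 8, 2048, 64
device = "cuda" if torch.cuda.is_available() else "cpu"

q = torch.randn(batch, heads, seq_len, head_dim, device=device)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Avoids materializing the full (seq_len x seq_len) score matrix, which is
# where the memory savings on long sequences come from.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 2048, 64])
```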
This skill should be used at the start of any computationally intensive scientific task to detect and report available system resources (CPU cores, GPUs, memory, disk space). It creates a JSON...
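A hedged sketch of the kind of probe-and-report step this describes, assuming psutil is available and using nvidia-smi for GPU detection; the resources.json filename and report fields are illustrative.

```python
# Sketch of a resource probe that writes a JSON report. psutil and the
# "resources.json" filename are assumptions; GPU detection falls back gracefully.
import json
import os
import shutil
import subprocess

import psutil

def detect_gpus():
    """Return GPU names via nvidia-smi, or an empty list if unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return [line.strip() for line in out.stdout.splitlines() if line.strip()]
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []

report = {
    "cpu_cores": os.cpu_count(),
    "memory_total_gb": round(psutil.virtual_memory().total / 1e9, 1),
    "disk_free_gb": round(shutil.disk_usage("/").free / 1e9, 1),
    "gpus": detect_gpus(),
}

with open("resources.json", "w") as f:
    json.dump(report, f, indent=2)

print(json.dumps(report, indent=2))
```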
Improve C++ (C++11+) performance using Abseil “Performance Hints” (Jeff Dean & Sanjay Ghemawat): estimation + profiling, API/bulk design, algorithmic wins, cache-friendly memory layout, fewer...
Letta framework for building stateful AI agents with long-term memory. Use for AI agent development, memory management, tool integration, and multi-agent systems.