Install this specific skill from the multi-skill repository:

`npx skills add moltbot/moltbot --skill "openai-whisper"`
# Description
Local speech-to-text with the Whisper CLI (no API key).
# SKILL.md
name: openai-whisper
description: Local speech-to-text with the Whisper CLI (no API key).
homepage: https://openai.com/research/whisper
metadata: {"moltbot":{"emoji":"🎙️","requires":{"bins":["whisper"]},"install":[{"id":"brew","kind":"brew","formula":"openai-whisper","bins":["whisper"],"label":"Install OpenAI Whisper (brew)"}]}}
## Whisper (CLI)
Use `whisper` to transcribe audio locally.
## Quick start
- `whisper /path/audio.mp3 --model medium --output_format txt --output_dir .`
- `whisper /path/audio.m4a --task translate --output_format srt`
## Notes
- Models download to `~/.cache/whisper` on first run.
- `--model` defaults to `turbo` on this install.
- Use smaller models for speed, larger for accuracy.
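The quick-start commands above can also be driven from a script. Below is a minimal sketch that assembles the same CLI invocation and runs it only when the `whisper` binary is on `PATH`; the helper names `build_whisper_cmd` and `transcribe` are hypothetical, not part of this skill.

```python
import shutil
import subprocess

def build_whisper_cmd(audio_path, model="medium", output_format="txt", output_dir="."):
    # Mirrors the quick-start invocation:
    #   whisper /path/audio.mp3 --model medium --output_format txt --output_dir .
    return [
        "whisper", audio_path,
        "--model", model,
        "--output_format", output_format,
        "--output_dir", output_dir,
    ]

def transcribe(audio_path, **kwargs):
    # Skip gracefully if the whisper CLI is not installed (e.g. brew step not run).
    if shutil.which("whisper") is None:
        return None
    return subprocess.run(build_whisper_cmd(audio_path, **kwargs), check=True)
```

On first use this will trigger the model download to `~/.cache/whisper`, so expect the initial run to be slow.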
# Supported AI Coding Agents
This skill follows the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard to use these skills with your preferred AI coding agent.