Install a specific skill from a multi-skill repository:

```sh
npx skills add moltbot/moltbot --skill "openai-whisper-api"
```
# Description
Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
# SKILL.md
```yaml
name: openai-whisper-api
description: Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
homepage: https://platform.openai.com/docs/guides/speech-to-text
metadata: {"moltbot":{"emoji":"☁️","requires":{"bins":["curl"],"env":["OPENAI_API_KEY"]},"primaryEnv":"OPENAI_API_KEY"}}
```
## OpenAI Whisper API (curl)

Transcribe an audio file via OpenAI’s `/v1/audio/transcriptions` endpoint.
### Quick start

```sh
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```

Defaults:

- Model: `whisper-1`
- Output: `<input>.txt`
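The script wraps a plain curl call against the transcriptions endpoint. For orientation, here is a minimal hand-rolled equivalent of the default invocation; this is a sketch assuming standard OpenAI API usage, and the script's actual internals and output naming may differ:

```sh
# Minimal equivalent of the default invocation (a sketch; the
# script's actual behavior may differ). Needs OPENAI_API_KEY set.
curl -sS https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F file=@/path/to/audio.m4a \
  -F model=whisper-1 \
  -F response_format=text \
  -o /path/to/audio.txt
```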
### Useful flags

```sh
{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
```
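These flags presumably map onto form fields of the same API request. The mapping below is an assumption about the script's behavior, though the field names (`language`, `prompt`, `response_format`) are the documented OpenAI parameters:

```sh
# Assumed flag-to-field mapping (field names are from the OpenAI API docs):
#   --language en   ->  -F language=en
#   --prompt "..."  ->  -F prompt="..."
#   --json          ->  -F response_format=json
#   --out FILE      ->  write the response to FILE
curl -sS https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F file=@/path/to/audio.m4a \
  -F model=whisper-1 \
  -F language=en \
  -F prompt="Speaker names: Peter, Daniel" \
  -F response_format=json \
  -o /tmp/transcript.json
```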
### API key

Set `OPENAI_API_KEY`, or configure it in `~/.clawdbot/moltbot.json`:

```json
{
  "skills": {
    "openai-whisper-api": {
      "apiKey": "OPENAI_KEY_HERE"
    }
  }
}
```
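For the environment-variable route, export the key before invoking the script:

```sh
# Set the key for the current shell session, then run the script.
export OPENAI_API_KEY="sk-..."
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```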
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents. Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.