Install the skill from the multi-skill repository:
npx skills add openmule/mulerouter-skills --skill "mulerouter"
# Description
Generates images and videos using the MuleRouter or MuleRun multimodal APIs. Supports Text-to-Image, Image-to-Image, Text-to-Video, Image-to-Video, and video editing (VACE, keyframe interpolation). Use when the user wants to generate, edit, or transform images and videos with AI models such as Wan2.6, Veo3, Nano Banana Pro, Sora2, and Midjourney.
# SKILL.md
name: mulerouter
description: Generates images and videos using the MuleRouter or MuleRun multimodal APIs. Supports Text-to-Image, Image-to-Image, Text-to-Video, Image-to-Video, and video editing (VACE, keyframe interpolation). Use when the user wants to generate, edit, or transform images and videos with AI models such as Wan2.6, Veo3, Nano Banana Pro, Sora2, and Midjourney.
compatibility: Requires Python 3.10+, uv (or pip), and network access to api.mulerouter.ai or api.mulerun.com
## MuleRouter API
Generate images and videos using MuleRouter or MuleRun multimodal APIs.
## Configuration Check
Before running any commands, verify the environment is configured:
Step 1: Check for existing configuration
# Check environment variables
echo "MULEROUTER_SITE: $MULEROUTER_SITE"
echo "MULEROUTER_API_KEY: ${MULEROUTER_API_KEY:+[SET]}"
# Check for .env file
ls -la .env 2>/dev/null || echo "No .env file found"
Step 2: Configure if needed
Option A: Environment variables (to override defaults)
export MULEROUTER_SITE="mulerun" # or "mulerouter"
export MULEROUTER_API_KEY="your-api-key"
Option B: Create .env file
Create .env in the current working directory:
MULEROUTER_SITE=mulerun
MULEROUTER_API_KEY=your-api-key
Note: The tool only reads .env from the current directory. Run scripts from the skill root (skills/mulerouter-skills/).
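For reference, the precedence between exported variables and the .env file can be sketched in Python (a hypothetical illustration using the python-dotenv package, not the skill's actual code; the "mulerouter" fallback value is an assumption):

```python
import os
from dotenv import load_dotenv  # python-dotenv package

# Hypothetical sketch of how the skill might resolve its configuration.
# By default load_dotenv() does NOT override variables already set in the
# environment, so exported values take precedence over .env entries.
load_dotenv()

site = os.environ.get("MULEROUTER_SITE", "mulerouter")  # default is an assumption
api_key = os.environ.get("MULEROUTER_API_KEY")
if not api_key:
    raise SystemExit("MULEROUTER_API_KEY is not set (export it or add it to .env)")
```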
Step 3: Using uv to run scripts
The skill uses `uv` for dependency management and execution. Make sure `uv` is installed and available on your PATH, then run `uv sync` to install dependencies.
## Quick Start
1. List available models
uv run python scripts/list_models.py
2. Check model parameters
uv run python models/alibaba/wan2.6-t2v/generation.py --list-params
3. Generate content
Text-to-Video:
uv run python models/alibaba/wan2.6-t2v/generation.py --prompt "A cat walking through a garden"
Text-to-Image:
uv run python models/alibaba/wan2.6-t2i/generation.py --prompt "A serene mountain lake"
Image-to-Video:
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "https://example.com/photo.jpg"  # remote image URL
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "/path/to/local/image.png"  # local image path
## Image Input
For image parameters (--image, --images, etc.), prefer local file paths over base64.
# Preferred: local file path (auto-converted to base64)
--image /tmp/photo.png
--images ["/tmp/photo.png"]
The skill automatically converts local file paths to base64 before sending to the API. This avoids command-line length limits that occur with raw base64 strings.
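To illustrate what that conversion amounts to (a minimal hypothetical sketch, not the skill's actual implementation):

```python
import base64
from pathlib import Path

def image_to_base64(path: str) -> str:
    """Hypothetical helper: read a local image and return it base64-encoded.

    The skill's real handling (MIME prefixes, data URLs, size checks) may differ.
    """
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

# Even a small PNG becomes tens of kilobytes of base64, which is why passing
# the raw string on the command line runs into argument-length limits.
encoded = image_to_base64("/tmp/photo.png")
print(len(encoded))
```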
## Workflow
- Check configuration: verify `MULEROUTER_SITE` and `MULEROUTER_API_KEY` are set
- Install dependencies: run `uv sync`
- Run `uv run python scripts/list_models.py` to discover available models
- Run `uv run python models/<path>/<action>.py --list-params` to see parameters
- Execute with appropriate parameters
- Parse output URLs from the results (see the sketch below)
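If you drive the workflow from a script, result parsing might look like the following sketch. It assumes the generation CLI prints result URLs to stdout; check REFERENCE.md for the actual output format.

```python
import re
import subprocess

# Hypothetical driver for the workflow above; the model path and prompt are
# taken from the Quick Start examples.
result = subprocess.run(
    ["uv", "run", "python", "models/alibaba/wan2.6-t2i/generation.py",
     "--prompt", "A serene mountain lake"],
    capture_output=True, text=True, check=True,
)

# Grab anything URL-shaped from the output; adjust once the format is known.
urls = re.findall(r"https?://\S+", result.stdout)
print(urls)
```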
## Tips
- For image generation models, use a timeout of about 5 minutes.
- For video generation models, use a timeout of about 15 minutes (see the sketch below).
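Applied programmatically, those suggestions translate into subprocess timeouts (a sketch; the values simply mirror the tips above):

```python
import subprocess

IMAGE_TIMEOUT = 5 * 60    # 5 minutes, per the image-model tip above
VIDEO_TIMEOUT = 15 * 60   # 15 minutes, per the video-model tip above

try:
    subprocess.run(
        ["uv", "run", "python", "models/alibaba/wan2.6-t2v/generation.py",
         "--prompt", "A cat walking through a garden"],
        timeout=VIDEO_TIMEOUT, check=True,
    )
except subprocess.TimeoutExpired:
    print("Video generation exceeded the suggested 15-minute timeout.")
```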
## References
- REFERENCE.md - API configuration and CLI options
- MODELS.md - Complete model specifications
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.