# SouthpawIN/burner-phone

# Install this skill

npx skills add SouthpawIN/burner-phone

Or install directly from the repository: npx add-skill https://github.com/SouthpawIN/burner-phone

# Description

Control Android devices via ADB with vision feedback. Use this to see the screen, take screenshots, analyze UI elements, and automate phone tasks.

# SKILL.md


---
name: burner-phone
description: Control Android devices via ADB with vision feedback. Use this to see the screen, take screenshots, analyze UI elements, and automate phone tasks.
model: qwen2.5-omni:3b
keywords: android, phone, adb, screenshot, vision, screen, tap, swipe, automation
---


## Burner Phone Control

Use this skill for ANY request involving phone screens or mobile app automation.

## Vision Feedback Loop

ALWAYS follow this pattern:

  1. Screenshot: Capture the current screen
    bash(cmd="adb exec-out screencap -p > ./assets/screen.png")

  2. Analyze: Use vision model to understand the screen
    bash(cmd="python3 ./scripts/vision_helper.py ./assets/screen.png \"Describe the screen and list coordinates (x,y) for interactable elements.\"")

  3. Act: Perform the action using exact coordinates from step 2
    bash(cmd="adb shell input tap <x> <y>")

  4. Verify: Screenshot again to confirm the action worked
    bash(cmd="adb exec-out screencap -p > ./assets/screen.png")
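
Putting the loop together, one full iteration might look like this; the tap coordinates are illustrative only and must come from the vision helper's output:

bash(cmd="adb exec-out screencap -p > ./assets/screen.png")
bash(cmd="python3 ./scripts/vision_helper.py ./assets/screen.png \"Find the Settings icon and return its (x,y) center.\"")
bash(cmd="adb shell input tap 540 1200")
bash(cmd="adb exec-out screencap -p > ./assets/screen.png")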

## Available Commands

### Tapping

bash(cmd="adb shell input tap <x> <y>")

### Swiping

bash(cmd="adb shell input swipe <x1> <y1> <x2> <y2> <duration_ms>")

### Typing Text

bash(cmd="adb shell input text 'your text here'")

### Key Events

bash(cmd="adb shell input keyevent KEYCODE_HOME")
bash(cmd="adb shell input keyevent KEYCODE_BACK")
bash(cmd="adb shell input keyevent KEYCODE_ENTER")

### Launch App

bash(cmd="adb shell am start -n com.package.name/.MainActivity")

## Rules

  1. ALWAYS screenshot before acting - never guess coordinates
  2. ALWAYS use vision_helper.py to get coordinates
  3. Use coordinates provided by the vision tool EXACTLY
  4. All paths are relative to the skill root directory

# README.md

## πŸ”₯ Burner Phone Skill

Control Android devices directly via ADB commands - perfect for burner phones, testing devices, or automation tasks.

## Features

  • Vision-First: Uses AI to analyze screen content and provide exact coordinates
  • Direct Control: ADB commands for tapping, swiping, typing
  • Openskills Compatible: Works with any agent that supports the openskills format

## Quick Start

# Clone the skill to your skills directory
git clone https://github.com/SouthpawIN/burner-phone.git ~/.opencode/skills/burner-phone

# Run setup
cd ~/.opencode/skills/burner-phone
chmod +x scripts/setup.sh
./scripts/setup.sh

## Requirements

  • Python 3.8+
  • ADB (Android Debug Bridge)

## Configuration

| Variable     | Default               | Description       |
|--------------|-----------------------|-------------------|
| SENTER_URL   | http://localhost:8081 | Senter server URL |
| VISION_MODEL | qwen2.5-omni:3b       | Vision model name |
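
Assuming the helper scripts read these variables from the environment (an assumption based on the names; verify against scripts/vision_helper.py), they can be overridden per shell:

# Assumed environment overrides; check scripts/vision_helper.py for the actual lookup
export SENTER_URL=http://localhost:8081
export VISION_MODEL=qwen2.5-omni:3b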

## Usage

The skill follows a Vision Feedback Loop:

  1. Screenshot β†’ Capture current screen
  2. Analyze β†’ AI identifies UI elements and coordinates
  3. Act β†’ Execute ADB command with exact coordinates
  4. Verify β†’ Screenshot again to confirm

## Example Commands

# Take screenshot
adb exec-out screencap -p > ./assets/screen.png

# Analyze screen
python3 ./scripts/vision_helper.py ./assets/screen.png "Find the Settings icon"

# Tap at coordinates
adb shell input tap 540 1200

# Swipe up
adb shell input swipe 540 1800 540 800 300

## Directory Structure

burner-phone/
β”œβ”€β”€ SKILL.md              # Skill manifest (openskills format)
β”œβ”€β”€ README.md             # This file
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ vision_helper.py  # Vision analysis helper
β”‚   └── setup.sh          # Installation script
└── assets/
    └── screen.png        # Screenshots saved here

## License

MIT

# Supported AI Coding Agents

This skill follows the SKILL.md standard and works with AI coding agents that support it.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.