Install this specific skill from the multi-skill repository:

```bash
npx skills add polyuiislab/infiAgent --skill "mac-use"
```
# Description
Control macOS GUI apps visually – take screenshots, click, scroll, type. Use when the user asks to interact with any Mac desktop application's graphical interface.
# SKILL.md
---
name: mac-use
description: Control macOS GUI apps visually – take screenshots, click, scroll, type. Use when the user asks to interact with any Mac desktop application's graphical interface.
metadata: {"openclaw":{"emoji":"🖥️","requires":{"bins":["python3"]},"os":["darwin"],"install":[{"id":"python-brew","kind":"brew","formula":"python","bins":["python3"],"label":"Install Python 3 (brew)"}]}}
---
# Mac Use

Control any macOS GUI application through a screenshot → pick element → click → verify loop.
## Setup

Platform: macOS only (requires the Apple Vision framework for OCR).

System binaries:

- `python3` – via Homebrew (`brew install python`)
- `screencapture` – built-in macOS utility

Python packages – install from the skill directory:

```bash
pip3 install --break-system-packages -r {baseDir}/requirements.txt
```
## How It Works

The screenshot command captures a window, uses Apple Vision OCR to detect all text elements, draws numbered annotations on the image, and returns both:

1. Annotated image at `/tmp/mac_use.png` – numbered green boxes around each detected text element
2. Element list as JSON – `[{num: 1, text: "Submit", at: [500, 200]}, {num: 2, text: "Cancel", at: [600, 200]}, ...]`, where `at` is the center point `[x, y]` on the 1000x1000 canvas (origin at top-left)

You receive both by calling Bash (which returns the JSON element list) and then Read on `/tmp/mac_use.png` (which returns the visual). Always do both so you can cross-reference the numbers with what you see.
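Because the element list is plain JSON, a wrapper can locate a target by its text before deciding which number to click. A minimal sketch – the `num`/`text`/`at` field names come from the output format above, but the helper itself is illustrative, not part of the skill:

```python
import json

def find_element(elements_json: str, label: str):
    """Return the first element whose detected text contains `label`, else None."""
    for el in json.loads(elements_json):
        if label in el["text"]:
            return el
    return None

# Output in the documented shape
sample = ('[{"num": 1, "text": "Submit", "at": [500, 200]},'
          ' {"num": 2, "text": "Cancel", "at": [600, 200]}]')
print(find_element(sample, "Cancel"))  # {'num': 2, 'text': 'Cancel', 'at': [600, 200]}
```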
## Quick Reference

```bash
# List all visible windows
python3 {baseDir}/scripts/mac_use.py list

# Screenshot + annotate (returns image + numbered element list)
python3 {baseDir}/scripts/mac_use.py screenshot <app> [--id N]

# Click element by number (primary click method)
python3 {baseDir}/scripts/mac_use.py clicknum <N>

# Click at canvas coordinates (fallback for unlabeled icons)
python3 {baseDir}/scripts/mac_use.py click --app <app> [--id N] <x> <y>

# Scroll inside a window
python3 {baseDir}/scripts/mac_use.py scroll --app <app> [--id N] <direction> <amount>

# Type text (uses clipboard paste – supports all languages)
python3 {baseDir}/scripts/mac_use.py type [--app <app>] "text here"

# Press key or combo
python3 {baseDir}/scripts/mac_use.py key [--app <app>] <combo>
```
## Workflow

1. Open the target app with `open -a "App Name"` (optionally with a URL or file path)
2. Wait for it to load: `sleep 2`
3. Screenshot the app:

   ```bash
   python3 {baseDir}/scripts/mac_use.py screenshot <app> [--id N]
   ```

   This returns JSON with `file` (image path) and `elements` (numbered text list).
4. Read the annotated image at `/tmp/mac_use.png` to see the numbered elements visually
5. Decide which element to interact with:
   - Prefer `clicknum N` – pick the number of a detected text element
   - Fall back to `click --app <app> x y` only for unlabeled icons (arrows, close buttons, cart icons) that have no text and therefore no number
6. Act using `clicknum`, `type`, `key`, or `scroll`
7. Screenshot again to verify the result
8. Repeat from step 3
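The loop above can be sketched as a small driver. Everything here is illustrative: `screenshot` and `act` stand in for the Bash calls documented below, and the verify heuristic (the clicked text disappearing on the next screenshot) is just one possible check:

```python
def drive(screenshot, act, target_text, max_rounds=5):
    """Screenshot -> pick element -> act -> verify loop.

    `screenshot()` returns the current element list; `act(num)` clicks element
    `num`. Both are injected so the loop logic stays testable without a GUI.
    """
    for _ in range(max_rounds):
        elements = screenshot()
        match = next((e for e in elements if target_text in e["text"]), None)
        if match is None:
            return False                  # target never appeared
        act(match["num"])
        after = screenshot()              # re-screenshot to verify
        if all(target_text not in e["text"] for e in after):
            return True                   # target gone: the click took effect
    return False
```

In practice the agent itself plays the role of `drive`, reading `/tmp/mac_use.png` between rounds instead of comparing element lists.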
## Commands

### list

Show all visible app windows.

```bash
python3 {baseDir}/scripts/mac_use.py list
```

Returns a JSON array: `[{"app":"Google Chrome","title":"Wikipedia","id":4527,"x":120,"y":80,"w":1200,"h":800}, ...]`
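Picking a window ID out of that array is a one-liner. The substring match below approximates the fuzzy, case-insensitive matching that `<app>` arguments use elsewhere in this skill (the real matcher may differ):

```python
def pick_windows(windows, query):
    """Return IDs of windows whose app name contains `query`, case-insensitively."""
    q = query.lower()
    return [w["id"] for w in windows if q in w["app"].lower()]

# Windows in the shape returned by `list`
windows = [
    {"app": "Google Chrome", "title": "Wikipedia", "id": 4527},
    {"app": "Notes", "title": "Untitled", "id": 610},
]
print(pick_windows(windows, "chrome"))  # [4527]
```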
### screenshot

Capture a window, detect text elements via OCR, annotate with numbered markers, and return the element list. The target window is automatically raised to the top before capture, so overlapping windows are handled.

```bash
python3 {baseDir}/scripts/mac_use.py screenshot chrome
python3 {baseDir}/scripts/mac_use.py screenshot chrome --id 4527
```

- `<app>`: fuzzy, case-insensitive match (e.g. "chrome" matches "Google Chrome")
- `--id N`: target a specific window ID (required when multiple windows of the same app exist)
- Returns JSON with:
  - `file`: path to the annotated screenshot (`/tmp/mac_use.png`)
  - `id`, `app`, `title`, `scale`: window metadata
  - `elements`: array of `{num, text, at}` – the numbered clickable text elements, where `at` is the `[x, y]` center on the 1000x1000 canvas (origin at top-left)
- If multiple windows match, returns a list of windows instead – pick one and retry with `--id`
- The image is 1000x1000 pixels with green bounding boxes and blue number badges
- The element map is saved to `/tmp/mac_use_elements.json` for `clicknum`
### clicknum

Click on a numbered element from the last screenshot. This is the primary click method.

```bash
python3 {baseDir}/scripts/mac_use.py clicknum 5
python3 {baseDir}/scripts/mac_use.py clicknum 12
```

- `N`: the element number from the last `screenshot` output
- Reads the saved element map, activates the window, and clicks at the element's center
- Returns JSON with `clicked_num`, `text`, canvas coordinates, and absolute screen coordinates
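The lookup step can be pictured as below. This assumes `/tmp/mac_use_elements.json` holds the same `elements` array that the screenshot command prints; the actual on-disk format and the script's internals are not documented here, so treat this purely as a mental model:

```python
import json

def resolve_num(num, map_path="/tmp/mac_use_elements.json"):
    """Look up an element number in the saved map and return its canvas center."""
    with open(map_path) as f:
        elements = json.load(f)
    for el in elements:
        if el["num"] == num:
            return el["at"]
    raise KeyError(f"element {num} not in the last screenshot")
```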
### click

Click at a position using canvas coordinates. Fallback only – use for unlabeled icons.

```bash
python3 {baseDir}/scripts/mac_use.py click --app chrome 500 300
python3 {baseDir}/scripts/mac_use.py click --app chrome --id 4527 500 300
```

- Coordinates are canvas positions (0-1000) from the screenshot image
- x=0 is left, x=1000 is right; y=0 is top, y=1000 is bottom
- Use this only when Vision OCR didn't detect the element (icon-only buttons, images, etc.)
### scroll

Scroll inside an app window.

```bash
python3 {baseDir}/scripts/mac_use.py scroll --app chrome down 5
python3 {baseDir}/scripts/mac_use.py scroll --app notes up 10
```

- Directions: `up`, `down`, `left`, `right`
- Amount: number of scroll clicks (3-5 for moderate, 10+ for fast scrolling)
- The mouse is moved to the center of the window before scrolling
### type

Type text into the currently focused input field.

```bash
python3 {baseDir}/scripts/mac_use.py type --app chrome "hello world"
python3 {baseDir}/scripts/mac_use.py type --app chrome "你好世界"
```

- `--app`: activates the app first to ensure keystrokes go to the right window
- Uses clipboard paste (Cmd+V) for reliable Unicode/CJK support
- Always click on the target input field before typing
### key

Press a single key or key combination.

```bash
python3 {baseDir}/scripts/mac_use.py key --app chrome return
python3 {baseDir}/scripts/mac_use.py key --app chrome cmd+a
python3 {baseDir}/scripts/mac_use.py key --app chrome cmd+shift+s
```

- `--app`: activates the app first
- Common keys: `return`, `tab`, `escape`, `space`, `delete`, `backspace`, `up`, `down`, `left`, `right`
- Modifiers: `cmd`, `ctrl`, `alt`/`opt`, `shift`
## Important Rules

- Always screenshot before your first interaction with an app
- Always screenshot after an action to verify the result
- Always Read the screenshot image after running the screenshot command – you need both the element list AND the visual
- Prefer `clicknum` over `click` – only use direct coordinates for unlabeled icons
- Click before typing – ensure the correct input field has focus first
- Multiple windows: if you get a `multiple_windows` error, use `list` to see all windows, then pass `--id`
- Popup windows (like WeChat mini-program panels) are separate windows with their own IDs – use `list` to find them and `--id` to target them
- Wait after opening apps: use `sleep 2-3` after `open -a` before taking a screenshot
- Activate the app before screenshot/click: prepend `osascript -e 'tell application "AppName" to activate' && sleep 1` when the target app may be behind other windows
- Do not type passwords or secrets via this tool
## Coordinate System (for fallback click only)

Screenshots are rendered onto a 1000x1000 canvas:

- Origin (0, 0) is at the top-left corner
- x increases left to right (0 = left edge, 1000 = right edge)
- y increases top to bottom (0 = top edge, 1000 = bottom edge)
- The app window is scaled to fit (aspect ratio preserved), centered, with dark gray padding
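The inverse mapping from canvas coordinates back to window pixels follows from that description. The sketch below assumes the fit uses one uniform scale (1000 divided by the larger window dimension) with centered padding; the skill's click command performs the equivalent conversion internally, so this is only for reasoning about positions:

```python
def canvas_to_window(cx, cy, w, h, canvas=1000):
    """Map a canvas point back to window-local pixels (letterboxed, centered)."""
    s = canvas / max(w, h)     # uniform scale that fits the window in the canvas
    ox = (canvas - w * s) / 2  # horizontal padding (the dark gray bars)
    oy = (canvas - h * s) / 2  # vertical padding
    return ((cx - ox) / s, (cy - oy) / s)

# A 2000x1000 window: the canvas center maps to the window center
print(canvas_to_window(500, 500, 2000, 1000))  # (1000.0, 500.0)
```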
## Example: Order food on Meituan in WeChat

```bash
# 1. Open WeChat
open -a "WeChat"
sleep 3

# 2. List windows → find the mini-program window ID
python3 {baseDir}/scripts/mac_use.py list

# 3. Screenshot the mini program (annotated + element list)
python3 {baseDir}/scripts/mac_use.py screenshot 微信 --id 41266
# → returns: {"file": "/tmp/mac_use.png", "elements": [{num: 1, text: "搜索", at: [500, 200]}, ...]}
# → Read /tmp/mac_use.png to see the annotated image

# 4. Click "搜索" (Search, element #1)
python3 {baseDir}/scripts/mac_use.py clicknum 1

# 5. Type the search query ("fried chicken")
python3 {baseDir}/scripts/mac_use.py type --app 微信 "炸鸡"

# 6. Press Enter
python3 {baseDir}/scripts/mac_use.py key --app 微信 return
sleep 2

# 7. Screenshot to see results
python3 {baseDir}/scripts/mac_use.py screenshot 微信 --id 41266
# → Read /tmp/mac_use.png, pick a restaurant by number

# 8. Click on a restaurant (e.g. element #5)
python3 {baseDir}/scripts/mac_use.py clicknum 5
```
# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.