```bash
npx skills add QUSD-ai/ros-bridge-skill
```

Or install the specific skill:

```bash
npx add-skill https://github.com/QUSD-ai/ros-bridge-skill
```
# Description

Full robotics stack for AI agents. Connect to any ROS robot via rosbridge, control motors, read sensors, run vision models, explore autonomously, coordinate swarms, and expose the robot as an A2A agent with optional X402 payments.
# SKILL.md
```yaml
name: ros-bridge
version: 2.0.0
description: |
  Full robotics stack for AI agents. Connect to any ROS robot via rosbridge, control motors,
  read sensors, run vision models, explore autonomously, coordinate swarms, and expose as
  A2A agent with optional X402 payments. Complete robot-to-agent pipeline.
homepage: https://github.com/QUSD-ai/ros-bridge-skill
metadata:
  emoji: "🤖"
  category: robotics
  tags: ["ros", "robotics", "hardware", "rosbridge", "jetson", "ugv", "lidar", "a2a", "x402"]
```
## 🤖 ROS Bridge Skill

Complete robot-to-agent pipeline. Connect any ROS robot to AI agents with full control.
## What's Included

| Module | Description |
|---|---|
| `rosbridge-client` | WebSocket client for rosbridge |
| `ros-tools` | VoltAgent tools: move, turn, stop, lidar |
| `vision-tools` | Camera capture, 9 CV detection modes |
| `sensor-tools` | Lidar, IMU, sensor fusion, pose estimation |
| `exploration-tools` | Mapping, frontier detection, path planning |
| `swarm-tools` | Multi-robot coordination, task allocation |
| `hardware-tools` | LED control, OLED display, gimbal |
| `robot-agent` | A2A agent factory for robot discovery |
| `x402-wrapper` | Paid robot API (charge per call) |
## Quick Start

### 1. Connect to Robot

```typescript
import { RosbridgeClient } from './rosbridge-client';

const robot = new RosbridgeClient('192.168.1.100', 9090);
await robot.connect();
```
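If the robot is not yet reachable, a small retry loop avoids failing the session on the first bad handshake. A minimal sketch built only from the `connect()` method shown above (retry count and delay are illustrative):

```typescript
// Retry the rosbridge handshake a few times before giving up.
// Uses only connect() from above; attempts/delay are illustrative values.
async function connectWithRetry(robot: RosbridgeClient, attempts = 5): Promise<void> {
  for (let i = 1; i <= attempts; i++) {
    try {
      await robot.connect();
      return; // connected
    } catch (err) {
      console.warn(`connect attempt ${i}/${attempts} failed:`, err);
      await new Promise((r) => setTimeout(r, 2000)); // wait 2 s before retrying
    }
  }
  throw new Error('rosbridge unreachable after retries');
}
```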
### 2. Basic Control

```typescript
// Move forward at 0.5 m/s (the robot keeps moving until a new command arrives)
robot.publish('/cmd_vel', 'geometry_msgs/Twist', {
  linear: { x: 0.5, y: 0, z: 0 },
  angular: { x: 0, y: 0, z: 0 }
});

// Turn left at 0.3 rad/s
robot.publish('/cmd_vel', 'geometry_msgs/Twist', {
  linear: { x: 0, y: 0, z: 0 },
  angular: { x: 0, y: 0, z: 0.3 }
});

// Stop
robot.publish('/cmd_vel', 'geometry_msgs/Twist', {
  linear: { x: 0, y: 0, z: 0 },
  angular: { x: 0, y: 0, z: 0 }
});
```
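A single `/cmd_vel` publish sets a velocity but does not time it. To get "forward for 2 seconds", follow the command with a delayed stop; a sketch composed only from the publish calls above:

```typescript
// Drive at `speed` m/s for `seconds`, then send an explicit stop.
// Built entirely from the publish() calls shown above.
async function moveFor(speed: number, seconds: number): Promise<void> {
  robot.publish('/cmd_vel', 'geometry_msgs/Twist', {
    linear: { x: speed, y: 0, z: 0 },
    angular: { x: 0, y: 0, z: 0 }
  });
  await new Promise((r) => setTimeout(r, seconds * 1000));
  robot.publish('/cmd_vel', 'geometry_msgs/Twist', { // stop
    linear: { x: 0, y: 0, z: 0 },
    angular: { x: 0, y: 0, z: 0 }
  });
}

await moveFor(0.5, 2); // forward at 0.5 m/s for 2 s
```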
### 3. Read Sensors

```typescript
// Subscribe to LIDAR
robot.subscribe('/scan', 'sensor_msgs/LaserScan', (data) => {
  const minDistance = Math.min(...data.ranges.filter(r => r > 0));
  console.log(`Closest obstacle: ${minDistance.toFixed(2)}m`);
});

// Subscribe to odometry
robot.subscribe('/odom', 'nav_msgs/Odometry', (data) => {
  console.log(`Position: x=${data.pose.pose.position.x}, y=${data.pose.pose.position.y}`);
});

// Subscribe to IMU
robot.subscribe('/imu/data', 'sensor_msgs/Imu', (data) => {
  console.log(`Orientation: ${JSON.stringify(data.orientation)}`);
});
```
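The same two primitives compose into simple reactive behaviors. A minimal sketch (the 0.3 m threshold is illustrative) that halts the robot whenever the LIDAR sees a nearby obstacle:

```typescript
// Emergency stop when anything comes within 0.3 m (threshold illustrative).
// Combines the subscribe() and publish() calls shown above.
robot.subscribe('/scan', 'sensor_msgs/LaserScan', (data) => {
  const minDistance = Math.min(...data.ranges.filter((r: number) => r > 0));
  if (minDistance < 0.3) {
    robot.publish('/cmd_vel', 'geometry_msgs/Twist', {
      linear: { x: 0, y: 0, z: 0 },
      angular: { x: 0, y: 0, z: 0 }
    });
    console.log(`Stopped: obstacle at ${minDistance.toFixed(2)}m`);
  }
});
```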
## VoltAgent Integration

Turn ROS tools into LLM-callable functions:

```typescript
import { Agent } from '@voltagent/core';
import { anthropic } from '@ai-sdk/anthropic'; // model provider (assumed import path)
import { rosTools, initializeROS } from './ros-tools';
import { visionTools } from './tools/vision';
import { sensorTools } from './tools/sensors';
import { explorationTools } from './tools/exploration';

// Connect to robot
initializeROS('ws://192.168.1.100:9090');

// Create embodied agent
const agent = new Agent({
  name: 'ugv-rover-01',
  instructions: `You control a physical UGV robot. You can move, turn, stop,
  read sensors, capture images, and explore autonomously.`,
  llm: anthropic('claude-sonnet-4-20250514'),
  tools: [
    ...rosTools,         // move_forward, turn, stop, read_lidar
    ...visionTools,      // capture_image, detect_objects, detect_faces
    ...sensorTools,      // get_multi_sensor_scan, detect_obstacles_360
    ...explorationTools, // update_map, detect_frontiers, find_safe_path
  ],
});

// Chat with robot
const response = await agent.chat('Move forward slowly and tell me what you see');
```
## Available Tools (20+)

**Movement:**
- `move_forward(speed, duration)` → Move at m/s for seconds
- `turn(angular_velocity, duration)` → Rotate at rad/s
- `stop()` → Emergency stop

**Sensors:**
- `read_lidar()` → 8-direction obstacle scan
- `get_multi_sensor_scan()` → Combined LIDAR + camera + IMU
- `detect_obstacles_360()` → Full danger/safe zone map
- `estimate_robot_pose()` → Position + orientation + velocity

**Vision (9 CV Modes):**
- `capture_image()` → Raw camera frame
- `detect_objects()` → YOLO object detection
- `detect_faces()` → Face detection
- `detect_qr_codes()` → QR/barcode scanning
- `detect_color(target_color)` → Color blob tracking
- `detect_lines()` → Line following
- `get_depth_map()` → Depth estimation

**Exploration:**
- `update_map()` → Build occupancy grid
- `detect_frontiers()` → Find unexplored boundaries
- `find_safe_path(goal)` → A* path planning
- `goto(x, y)` → Navigate to coordinate

**Hardware:**
- `set_led(color)` → Control status LEDs
- `set_display(lines)` → Write to OLED
- `set_gimbal(pan, tilt)` → Control camera angle
- `play_sound(name)` → Audio feedback

**Swarm:**
- `broadcast_position()` → Share pose with swarm
- `request_assistance(task)` → Ask nearby robots for help
- `coordinate_formation(formation)` → Move in formation
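For scripting or testing, these tools can also be driven without an LLM in the loop. A sketch assuming the VoltAgent convention that each tool object carries a `name` and an `execute(args)` function (the lookup below is a hypothetical helper, not part of this skill's documented API):

```typescript
// Invoke a tool directly, bypassing the agent's LLM loop.
// Assumes each tool exposes `name` and `execute(args)` (VoltAgent convention);
// treat this lookup pattern as illustrative, not a documented API.
import { rosTools } from './ros-tools';

const moveForward = rosTools.find((t) => t.name === 'move_forward');
if (moveForward) {
  const result = await moveForward.execute({ speed: 0.3, duration: 2 }); // 0.3 m/s for 2 s
  console.log(result);
}
```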
## A2A Agent (Discoverable Robot)

Expose your robot as an A2A agent other agents can find and use:

```typescript
import { createRobotAgent } from './robot-agent';

const agent = createRobotAgent({
  name: 'warehouse-bot-01',
  rosUrl: 'ws://192.168.1.100:9090',
  port: 3001,
  capabilities: ['move', 'scan', 'capture', 'explore'],
});

await agent.start();

// Now discoverable at:
// http://192.168.1.100:3001/.well-known/agent.json
```

Other agents can now:

```typescript
import { A2AClient } from 'nanda-ts';

const robot = new A2AClient('http://192.168.1.100:3001');
await robot.sendTask({ message: 'Move to the loading dock' });
```
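Before sending a task, a caller can inspect the agent card published at the discovery URL above. A sketch using plain `fetch` (the card's exact field layout follows the A2A agent-card convention and may vary in this implementation):

```typescript
// Fetch the agent card from the well-known discovery URL shown above.
// The card's exact shape follows the A2A convention; inspect before relying on fields.
const res = await fetch('http://192.168.1.100:3001/.well-known/agent.json');
const card = await res.json();
console.log(card); // advertised name and capabilities
```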
## X402 Paid Robot Services

Charge other agents for robot API calls:

```typescript
import { createX402RobotServer } from './x402-wrapper';

const server = createX402RobotServer({
  rosUrl: 'ws://192.168.1.100:9090',
  port: 3002,
  escrowAddress: '0x...',
  pricing: {
    move: 0.01,     // 1¢ per movement
    scan: 0.005,    // 0.5¢ per scan
    capture: 0.02,  // 2¢ per image
    explore: 0.10,  // 10¢ per exploration task
  }
});

await server.start();
```

Buyers automatically pay via HTTP 402:

```typescript
// x402Client: an x402-capable HTTP client configured elsewhere
const scan = await x402Client.fetch('http://robot:3002/api/scan');
// Paid 0.005 USDC, received LIDAR data
```
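"Automatically" here means the client handles the 402 handshake: the first request is refused with `402 Payment Required` plus the payment requirements, and the client pays and retries with a payment proof attached. A rough manual probe of that handshake (how this server encodes requirements and proofs is an assumption; check the x402-wrapper docs):

```typescript
// Probe the paid endpoint without an x402 client to observe the handshake.
const probe = await fetch('http://robot:3002/api/scan');
if (probe.status === 402) {
  const requirements = await probe.json(); // price, asset, pay-to address, etc. (assumed shape)
  console.log('payment required:', requirements);
  // An x402-capable client would now construct a signed payment and retry
  // with the payment proof attached (header name per the x402 spec; assumption here).
}
```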
## Supported Robots

Works with any ROS1/ROS2 robot running rosbridge:

| Robot | Tested | Notes |
|---|---|---|
| Waveshare UGV Rover | ✅ | Full support |
| Waveshare UGV Beast | ✅ | Full support + arm |
| TurtleBot 3/4 | ✅ | Navigation stack |
| Clearpath Jackal | ✅ | Outdoor capable |
| Custom ROS robot | ✅ | Any rosbridge setup |
## Prerequisites

- Robot with ROS → ROS1 Noetic or ROS2 Humble+
- Rosbridge → `rosbridge_websocket` running
- Network access → Agent can reach `ws://ROBOT_IP:9090`

```bash
# On robot (ROS2)
ros2 launch rosbridge_server rosbridge_websocket_launch.xml

# On robot (ROS1)
roslaunch rosbridge_server rosbridge_websocket.launch
```
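To verify the network prerequisite from the agent side, open a raw WebSocket against the endpoint before wiring up the full client. A sketch assuming the `ws` npm package is installed (any WebSocket client behaves the same way):

```typescript
// Sanity-check that rosbridge is reachable before starting the agent.
// Assumes the `ws` package; swap in any WebSocket client.
import WebSocket from 'ws';

const ws = new WebSocket('ws://192.168.1.100:9090');
ws.on('open', () => { console.log('rosbridge reachable'); ws.close(); });
ws.on('error', (err) => console.error('cannot reach rosbridge:', err.message));
```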
## Example: Autonomous Exploration

```typescript
const agent = new Agent({
  name: 'explorer-01',
  instructions: `Explore unknown areas. Use LIDAR to avoid obstacles.
  Build a map as you go. Report interesting findings.`,
  tools: [...rosTools, ...sensorTools, ...explorationTools],
});

// Start autonomous exploration
await agent.chat('Explore this room and map it. Avoid obstacles.');

// Agent will:
// 1. Read LIDAR to detect obstacles
// 2. Find frontiers (unexplored areas)
// 3. Plan path to nearest frontier
// 4. Navigate while avoiding obstacles
// 5. Update map with new data
// 6. Repeat until room is mapped
```
## Links

- GitHub: https://github.com/QUSD-ai/ros-bridge-skill
# README.md
## 🤖 ROS Bridge Skill

Complete robot-to-agent pipeline. Connect any ROS robot to AI agents.
## Features

- ✅ WebSocket client for rosbridge
- ✅ 20+ VoltAgent tools (move, scan, explore, vision)
- ✅ 9 computer vision modes (YOLO, faces, QR, colors...)
- ✅ Sensor fusion (LIDAR + camera + IMU)
- ✅ Autonomous exploration & mapping
- ✅ Multi-robot swarm coordination
- ✅ A2A agent factory (discoverable robots)
- ✅ X402 payments (charge per API call)
## Quick Start

```typescript
import { createRobotAgent } from '@qusd/ros-bridge-skill';

const agent = createRobotAgent({
  name: 'my-robot',
  rosUrl: 'ws://192.168.1.100:9090',
  port: 3001,
});

await agent.start();
// Robot now discoverable at /.well-known/agent.json
```
## Manual Control

```typescript
import { RosbridgeClient } from '@qusd/ros-bridge-skill';

const robot = new RosbridgeClient('192.168.1.100', 9090);
await robot.connect();

// Move forward
robot.publish('/cmd_vel', 'geometry_msgs/Twist', {
  linear: { x: 0.5, y: 0, z: 0 },
  angular: { x: 0, y: 0, z: 0 }
});

// Read LIDAR (filter out zero/invalid ranges before taking the minimum)
robot.subscribe('/scan', 'sensor_msgs/LaserScan', (data) => {
  console.log('Closest obstacle:', Math.min(...data.ranges.filter(r => r > 0)), 'm');
});
```
## VoltAgent Integration

```typescript
import { Agent } from '@voltagent/core';
import { rosTools, visionTools, explorationTools } from '@qusd/ros-bridge-skill';

const agent = new Agent({
  name: 'explorer-bot',
  tools: [...rosTools, ...visionTools, ...explorationTools],
});

await agent.chat('Explore this room and map obstacles');
```
## Supported Robots
Works with any ROS1/ROS2 robot running rosbridge:
- Waveshare UGV (Rover, Beast, RaspRover)
- TurtleBot 3/4
- Clearpath Jackal/Husky
- Any custom ROS robot
## Documentation
See SKILL.md for complete documentation.
## Links

- GitHub: https://github.com/QUSD-ai/ros-bridge-skill
## License
MIT
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.