# EXboys/calculator
# Install this skill

```shell
npx skills add EXboys/skilllite
```

Or install a specific skill:

```shell
npx add-skill https://github.com/EXboys/skilllite/tree/main/.skills/calculator
```

# Description

A simple calculator that can add, subtract, multiply, and divide numbers. Use when the user needs to perform basic arithmetic operations.

# SKILL.md


```yaml
---
name: calculator
description: A simple calculator that can add, subtract, multiply, and divide numbers. Use when the user needs to perform basic arithmetic operations.
license: MIT
metadata:
  author: skillLite
  version: "1.0"
---
```

Calculator Skill

A simple calculator that performs basic arithmetic operations.

Usage

Provide an operation and two numbers to get the result.

Examples

  • Add: `{"operation": "add", "a": 5, "b": 3}` → 8
  • Subtract: `{"operation": "subtract", "a": 10, "b": 4}` → 6
  • Multiply: `{"operation": "multiply", "a": 6, "b": 7}` → 42
  • Divide: `{"operation": "divide", "a": 20, "b": 4}` → 5

Runtime

```yaml
input_schema:
  type: object
  properties:
    operation:
      type: string
      description: "The operation to perform: add, subtract, multiply, divide"
      enum: [add, subtract, multiply, divide]
    a:
      type: number
      description: First operand
    b:
      type: number
      description: Second operand
  required: [operation, a, b]
```
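A Skill with this schema is ultimately backed by an entry script. The sketch below shows what such a script's core logic might look like; the function names (`calculate`, `run`) and the JSON-string input convention are assumptions for illustration, not the skill's actual implementation.

```python
import json


def calculate(operation: str, a: float, b: float) -> float:
    """Dispatch one of the four supported arithmetic operations."""
    if operation == "add":
        return a + b
    if operation == "subtract":
        return a - b
    if operation == "multiply":
        return a * b
    if operation == "divide":
        if b == 0:
            raise ValueError("division by zero")
        return a / b
    raise ValueError(f"unknown operation: {operation}")


def run(raw_json: str) -> str:
    """Parse a JSON tool-call payload matching the schema above
    and return the result as a string."""
    params = json.loads(raw_json)
    return str(calculate(params["operation"], params["a"], params["b"]))
```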

# README.md

SkillLite

Chinese Documentation

The only lightweight AI Agent Skills engine with a built-in, native system-level sandbox, zero dependencies, and fully local execution. It integrates with any OpenAI-compatible LLM.

🎯 Why SkillLite?

| Feature | SkillLite | Claude Code Sandbox | LangChain Sandbox | OpenAI Plugins | Semantic Kernel |
|---|---|---|---|---|---|
| Built-in Sandbox | ✅ Rust Native | ✅ Node.js Native | ⚠️ Pyodide/Docker | ⚠️ Cloud (Closed) | ❌ None (Azure) |
| Sandbox Tech | Seatbelt + Namespace | Seatbelt + bubblewrap | WebAssembly/Docker | Cloud Isolation | - |
| Implementation | Rust (High Perf) | Node.js/TypeScript | Python | - | C# |
| Local Execution | | | | | |
| Zero Dependencies | ✅ Single Binary | ❌ Needs Node.js | ❌ Needs Runtime | | |
| Cold Start | ⚡ Milliseconds | Medium | 🐢 Seconds | - | - |
| LLM Agnostic | ✅ Any LLM | ❌ Claude Only | | ❌ OpenAI Only | |
| License | MIT | Apache 2.0 | MIT | Closed | MIT |

Comparison with Claude Code Sandbox

Claude/Anthropic released Claude Code Sandbox in October 2025, using the same underlying technology stack as SkillLite:
- macOS: Seatbelt (sandbox-exec)
- Linux: bubblewrap + namespace

Key Differences:

| Aspect | SkillLite | Claude Code Sandbox |
|---|---|---|
| Purpose | General Skills execution engine | Claude Code exclusive |
| LLM Binding | ✅ Any LLM | ❌ Claude only |
| Implementation | Rust (higher performance, smaller size) | Node.js/TypeScript |
| Deployment | Single binary, zero dependencies | Requires Node.js runtime |
| Skills Ecosystem | Independent Skills directory | Depends on MCP protocol |
| Use Case | Any Agent framework integration | Claude Code internal use |

💡 Summary: Claude Code Sandbox validates that "native system-level sandbox" is the right direction for AI Agent secure execution. SkillLite provides an LLM-agnostic, Rust-implemented, lighter-weight alternative for scenarios requiring multi-LLM integration or maximum performance.

🔐 Core Innovation: Native System-Level Security Sandbox

SkillLite uses a Rust-implemented native system-level sandbox, not Docker or WebAssembly:

  • macOS: Kernel-level isolation based on Seatbelt (sandbox-exec)
  • Linux: Container-level isolation based on Namespace + Seccomp

Fundamental Difference from Other Solutions

```
┌───────────────────────────────────────────────────────────┐
│  Other Solutions                                          │
│  ┌───────────────┐  ┌───────────────┐  ┌───────────────┐  │
│  │    Docker     │  │    Pyodide    │  │ Cloud Sandbox │  │
│  │ (Heavyweight) │  │ (WebAssembly) │  │ (Data Upload) │  │
│  └───────────────┘  └───────────────┘  └───────────────┘  │
└───────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────────┐
│  SkillLite Solution                                            │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │             Rust Native System-Level Sandbox             │  │
│  │  • Direct OS security mechanisms (Seatbelt/Namespace)    │  │
│  │  • Zero external dependencies, single binary             │  │
│  │  • Millisecond cold start, production-grade performance  │  │
│  │  • Code and data never leave your machine                │  │
│  └──────────────────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────────────────┘
```

Security Features

| Security Capability | Description |
|---|---|
| Process Isolation | Each Skill runs in an independent process |
| Filesystem Isolation | Only the Skill directory and temp directory are accessible |
| Network Isolation | Network disabled by default; can be enabled on demand |
| Resource Limits | CPU, memory, and execution time limits |
| Least Privilege | Follows the principle of least privilege |
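To make the "Resource Limits" capability concrete, here is a minimal, illustrative sketch using Python's standard `resource` module on Unix. SkillLite enforces these limits (plus filesystem and network isolation) from its Rust sandbox, not via Python; the `run_limited` helper below is our own invention, not part of the SkillLite API.

```python
import resource
import subprocess
import sys


def run_limited(cmd, cpu_seconds=5, mem_bytes=256 * 1024 * 1024):
    """Run a command with CPU-time and address-space caps set in the child.

    Illustrative only: it shows the OS-level primitive (setrlimit) that a
    sandbox can apply before executing untrusted Skill code.
    """
    def set_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)
```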

✨ Features

  • 🔒 Native Security Sandbox - Rust-implemented system-level isolation, not Docker/WebAssembly
  • ⚡ Ultra Lightweight - Single binary, millisecond cold start, zero external dependencies
  • 🏠 Data Sovereignty - Pure local execution, code and data never leave your machine
  • 🔌 Universal LLM Support - Compatible with all OpenAI API format LLM providers
  • 📦 Skills Management - Auto-discovery, registration, and management of Skills
  • 🧠 Smart Schema Inference - Automatically infer input parameter Schema from SKILL.md and script code
  • 🔧 Tool Calls Handling - Seamlessly handle LLM tool call requests
  • 📄 Rich Context Support - Support for references, assets, and other extended resources

🚀 Quick Start

1. Install Rust Sandbox Executor

This project uses an isolated sandbox written in Rust to execute Skill scripts securely. You need to install Rust and compile the sandbox first.

⚠️ Platform Support: Currently only supports macOS and Linux. Windows is not supported yet.

Install Rust (if not already installed):

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Reload environment variables after installation
source ~/.cargo/env

# Verify installation
rustc --version
cargo --version
```

Compile the Sandbox Executor:

```shell
# Enter the Rust project directory and compile
cd skillbox
cargo build --release

# Optional: install to the system path (recommended)
cargo install --path .

# Verify installation
skillbox --help
```

After compilation, the skillbox binary will be located at:
- If using cargo install: ~/.cargo/bin/skillbox
- If only cargo build: skillbox/target/release/skillbox

2. Environment Configuration

```shell
# Copy the environment variable template
cp .env.example .env
```

Edit `.env` with your API configuration:

```shell
BASE_URL=https://api.deepseek.com/v1
API_KEY=your_api_key_here
MODEL=deepseek-chat
```

3. Run Example

```shell
python3 simple_demo.py
```

📁 Project Structure

```
skillLite/
├── skillbox/              # Rust sandbox executor
├── skilllite/             # Python SDK
│   └── skilllite/
│       ├── manager.py     # SkillManager core manager
│       ├── executor.py    # Skill executor
│       ├── loops.py       # Agentic Loop implementation
│       ├── tools.py       # Tool definitions
│       └── ...
├── .skills/               # Skills directory
│   ├── calculator/        # Calculator Skill
│   ├── data-analyzer/     # Data Analysis Skill
│   ├── http-request/      # HTTP Request Skill
│   ├── text-processor/    # Text Processing Skill
│   ├── weather/           # Weather Query Skill
│   └── writing-helper/    # Writing Assistant Skill
├── simple_demo.py         # Full example
├── simple_demo_v2.py      # Simplified example
└── simple_demo_minimal.py # Minimal example
```

💡 Usage

Basic Usage

```python
from openai import OpenAI
from skilllite import SkillManager

# Initialize OpenAI-compatible client
client = OpenAI(base_url="https://api.deepseek.com/v1", api_key="your_key")

# Initialize SkillManager
manager = SkillManager(
    skills_dir="./.skills",
    llm_client=client,
    llm_model="deepseek-chat"
)

# Get tool definitions (OpenAI format)
tools = manager.get_tools()

# Call LLM
response = client.chat.completions.create(
    model="deepseek-chat",
    tools=tools,
    messages=[{"role": "user", "content": "Calculate 15 times 27"}]
)

# Handle tool calls
if response.choices[0].message.tool_calls:
    results = manager.handle_tool_calls(response)
```
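After `handle_tool_calls`, the tool results still have to be appended to the conversation and sent back so the model can produce its final natural-language answer. The helper below sketches that message assembly in the standard OpenAI chat-completions format; the assumed result shape (dicts with `tool_call_id` and `content`) may differ from what `handle_tool_calls` actually returns.

```python
def build_followup_messages(messages, assistant_message, tool_results):
    """Append the assistant's tool-call turn plus one 'tool' message per
    result, as the OpenAI chat-completions format requires."""
    out = list(messages)
    out.append(assistant_message)
    for result in tool_results:
        out.append({
            "role": "tool",
            "tool_call_id": result["tool_call_id"],
            "content": result["content"],
        })
    return out
```

A second `client.chat.completions.create` call with the returned list then yields the model's final answer.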

Supported LLM Providers

| Provider | base_url |
|---|---|
| OpenAI | https://api.openai.com/v1 |
| DeepSeek | https://api.deepseek.com/v1 |
| Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Moonshot | https://api.moonshot.cn/v1 |
| Ollama (Local) | http://localhost:11434/v1 |

🛠️ Create Custom Skill

Each Skill is a directory containing a SKILL.md:

```
my-skill/
├── SKILL.md           # Skill metadata and description (required)
├── scripts/           # Scripts directory
│   └── main.py        # Entry script
├── references/        # Reference documents (optional)
└── assets/            # Resource files (optional)
```

SKILL.md Example

```markdown
---
name: my-skill
description: My custom Skill
version: 1.0.0
entry_point: scripts/main.py
---

# My Skill

This is the detailed description of the Skill...
```
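To register a Skill, the engine has to split a file like this into its YAML frontmatter and markdown body. A toy parser for the flat `key: value` case might look like the following; SkillLite's actual loader is not shown in this README, and nested metadata would need a real YAML library.

```python
import re


def parse_skill_md(text: str):
    """Split a SKILL.md into (frontmatter dict, markdown body).

    Handles only flat `key: value` lines; nested YAML such as the
    `metadata:` block would need a parser like PyYAML.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not match:
        return {}, text
    meta = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta, match.group(2).strip()
```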

📦 Core Components

  • SkillManager - Manages Skill discovery, registration, and execution
  • SkillInfo - Single Skill information encapsulation
  • AgenticLoop - Automated Agent loop execution
  • ToolDefinition - OpenAI-compatible tool definition
  • SchemaInferrer - Smart parameter Schema inference
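The idea behind `SchemaInferrer` can be illustrated with a toy version that derives an OpenAI-style input schema from a Python function's type hints. This is a sketch of the general technique, not SkillLite's actual API; the real inferrer also reads SKILL.md and script code.

```python
import inspect

# Map Python annotations to JSON Schema type names
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}


def infer_schema(func) -> dict:
    """Build a JSON-Schema-like input description from a function's
    signature: typed parameters become properties, and parameters
    without defaults become required."""
    props, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}


def divide(a: float, b: float, precision: int = 2) -> float:
    return round(a / b, precision)
```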

📄 License

MIT

This project includes third-party dependencies with various licenses. See THIRD_PARTY_LICENSES.md for details.

📚 Documentation

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.