# apify-actor-development

```bash
# Install this skill from the apify/agent-skills multi-skill repository:
npx skills add apify/agent-skills --skill "apify-actor-development"
```

# Description

Develop, debug, and deploy Apify Actors - serverless cloud programs for web scraping, automation, and data processing. Use when creating new Actors, modifying existing ones, or troubleshooting Actor code.

# SKILL.md


---
name: apify-actor-development
description: Develop, debug, and deploy Apify Actors - serverless cloud programs for web scraping, automation, and data processing. Use when creating new Actors, modifying existing ones, or troubleshooting Actor code.
---


## Apify Actor Development

Important: Before you begin, fill in the generatedBy property in the meta section of .actor/actor.json. Replace it with the tool and model you're currently using, such as "Claude Code with Claude Sonnet 4.5". This helps Apify monitor and improve AGENTS.md for specific AI tools and models.
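For example, the relevant fields in .actor/actor.json might look like this (a minimal, illustrative sketch - name and version are placeholders and the rest of your file stays as generated):

```json
{
  "actorSpecification": 1,
  "name": "my-actor",
  "version": "0.1",
  "meta": {
    "generatedBy": "Claude Code with Claude Sonnet 4.5"
  }
}
```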

### What are Apify Actors?

Actors are serverless programs inspired by the UNIX philosophy - programs that do one thing well and can be easily combined to build complex systems. They're packaged as Docker images and run in isolated containers in the cloud.

Core Concepts:
- Accept well-defined JSON input
- Perform isolated tasks (web scraping, automation, data processing)
- Produce structured JSON output to datasets and/or store data in key-value stores
- Can run from seconds to hours or even indefinitely
- Persist state and can be restarted

### Prerequisites & Setup (MANDATORY)

Before creating or modifying Actors, verify that the Apify CLI is installed by running apify --help.

If it is not installed, you can install it with:

```bash
curl -fsSL https://apify.com/install-cli.sh | bash

# Or (Mac): brew install apify-cli
# Or (Windows): irm https://apify.com/install-cli.ps1 | iex
# Or: npm install -g apify-cli
```

Once the Apify CLI is installed, check that it is logged in:

```bash
apify info  # Should return your username
```

If it is not logged in, check whether the APIFY_TOKEN environment variable is defined (if not, ask the user to generate a token at https://console.apify.com/settings/integrations and then set APIFY_TOKEN to it).

Then run:

```bash
apify login -t $APIFY_TOKEN
```

### Template Selection

IMPORTANT: Before starting actor development, always ask the user which programming language they prefer:
- JavaScript - Use apify create <actor-name> -t project_empty
- TypeScript - Use apify create <actor-name> -t ts_empty
- Python - Use apify create <actor-name> -t python-empty

Use the appropriate CLI command based on the user's language choice. Additional packages (Crawlee, Playwright, etc.) can be installed later as needed.

### Quick Start Workflow

1. Create actor project - Run the appropriate apify create command based on the user's language preference (see Template Selection above)
2. Install dependencies
   - JavaScript/TypeScript: npm install
   - Python: pip install -r requirements.txt
3. Implement logic - Write the actor code in src/main.py, src/main.js, or src/main.ts (a minimal entry-point sketch follows this list)
4. Configure schemas - Update input/output schemas in .actor/input_schema.json, .actor/output_schema.json, .actor/dataset_schema.json
5. Configure platform settings - Update .actor/actor.json with actor metadata (see references/actor-json.md)
6. Write documentation - Create a comprehensive README.md for the marketplace
7. Test locally - Run apify run to verify functionality (see Local Testing section below)
8. Deploy - Run apify push to deploy the actor to the Apify platform (the actor name is defined in .actor/actor.json)
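A minimal JavaScript entry point illustrating step 3. This is only a sketch assuming the Apify SDK (apify package, v3) and an ES-module project; the startUrls input field is a hypothetical example - use the properties from your own input schema:

```javascript
// src/main.js - minimal Actor skeleton (illustrative sketch)
import { Actor } from 'apify';

await Actor.init();

// Read and validate input early; "startUrls" is a hypothetical field name
const input = (await Actor.getInput()) ?? {};
const { startUrls = [] } = input;
if (startUrls.length === 0) {
    throw new Error('Input must contain at least one entry in "startUrls".');
}

// ... perform the scraping / automation work here ...

// Push structured results to the default dataset
await Actor.pushData({ processedUrls: startUrls.length });

await Actor.exit();
```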

### Best Practices

✓ Do:
- Use apify run to test actors locally (configures Apify environment and storage)
- Use Apify SDK (apify) for code running ON Apify platform
- Validate input early with proper error handling and fail gracefully
- Use CheerioCrawler for static HTML (10x faster than browsers)
- Use PlaywrightCrawler only for JavaScript-heavy sites
- Use router pattern (createCheerioRouter/createPlaywrightRouter) for complex crawls (see the sketch after these lists)
- Implement retry strategies with exponential backoff
- Use proper concurrency: HTTP (10-50), Browser (1-5)
- Set sensible defaults in .actor/input_schema.json
- Define output schema in .actor/output_schema.json
- Clean and validate data before pushing to dataset
- Use semantic CSS selectors with fallback strategies
- Respect robots.txt, ToS, and implement rate limiting
- Always use apify/log package - censors sensitive data (API keys, tokens, credentials)
- Implement readiness probe handler (required if your Actor uses standby mode)

✗ Don't:
- Use npm start, npm run start, npx apify run, or similar commands to run actors (use apify run instead)
- Rely on Dataset.getInfo() for final counts on Cloud
- Use browser crawlers when HTTP/Cheerio works
- Hard-code values that should be in the input schema or environment variables
- Skip input validation or error handling
- Overload servers - use appropriate concurrency and delays
- Scrape prohibited content or ignore Terms of Service
- Store personal/sensitive data unless explicitly permitted
- Use deprecated options like requestHandlerTimeoutMillis on CheerioCrawler (v3.x)
- Use additionalHttpHeaders - use preNavigationHooks instead
- Disable standby mode without explicit permission
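To make several of the points above concrete, here is a sketch of a Cheerio-based crawl using the router pattern, early input validation, and moderate HTTP concurrency. It assumes crawlee is installed alongside apify; the input fields and CSS selectors are hypothetical placeholders:

```javascript
import { Actor } from 'apify';
import { CheerioCrawler, createCheerioRouter } from 'crawlee';

await Actor.init();

// Validate input early and fail gracefully (field names are hypothetical)
const { startUrls = [], maxRequests = 100 } = (await Actor.getInput()) ?? {};
if (startUrls.length === 0) throw new Error('Missing required input field "startUrls".');

const router = createCheerioRouter();

// Default handler: enqueue detail pages (selector is a placeholder)
router.addDefaultHandler(async ({ enqueueLinks }) => {
    await enqueueLinks({ selector: 'a.detail-link', label: 'DETAIL' });
});

// Detail handler: extract, clean, and push data to the dataset
router.addHandler('DETAIL', async ({ request, $ }) => {
    await Actor.pushData({
        url: request.loadedUrl,
        title: $('h1').first().text().trim(),
    });
});

const crawler = new CheerioCrawler({
    requestHandler: router,
    maxRequestsPerCrawl: maxRequests,
    maxConcurrency: 20, // HTTP crawling tolerates far higher concurrency than browsers
});

await crawler.run(startUrls);
await Actor.exit();
```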

### Logging

See references/logging.md for complete logging documentation including available log levels and best practices for JavaScript/TypeScript and Python.
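As a minimal JavaScript/TypeScript sketch (assuming the SDK's re-exported logger, as in Apify SDK v3; see the reference file for the authoritative API and the Python equivalent):

```javascript
import { Actor, log } from 'apify';

await Actor.init();

// Log level can be controlled via the APIFY_LOG_LEVEL environment variable
log.debug('Verbose details, hidden unless the DEBUG level is enabled');
log.info('Starting scrape', { startUrl: 'https://example.com' }); // structured data as second argument
log.warning('Something looks off, but the run can continue');
log.error('Something failed', { reason: 'illustrative message' });

await Actor.exit();
```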


### Commands

```bash
apify run          # Run Actor locally
apify login        # Authenticate account
apify push         # Deploy to Apify platform (uses name from .actor/actor.json)
apify help         # List all commands
```

IMPORTANT: Always use apify run to test actors locally. Do not use npm run start, npm start, yarn start, or other package manager commands - these will not properly configure the Apify environment and storage.

### Local Testing

When testing an actor locally with apify run, provide input data by creating a JSON file at:

storage/key_value_stores/default/INPUT.json

This file should contain the input parameters defined in your .actor/input_schema.json. The actor will read this input when running locally, mirroring how it receives input on the Apify platform.
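For example, for an actor whose input schema defines hypothetical startUrls and maxItems properties, INPUT.json could look like this (adapt the keys to your own .actor/input_schema.json):

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "maxItems": 50
}
```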

### Standby Mode

Check usesStandbyMode in .actor/actor.json - only implement standby support if it is set to true.

See references/standby-mode.md for complete standby mode documentation, including readiness probe implementation for JavaScript/TypeScript and Python.
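A rough JavaScript sketch of the overall shape of a standby Actor: a small HTTP server that answers readiness probes quickly. The environment variable names below are assumptions - confirm the exact port variable and how probe requests are identified in references/standby-mode.md:

```javascript
import { createServer } from 'node:http';
import { Actor, log } from 'apify';

await Actor.init();

// Assumption: the platform provides the standby port via an env var; verify the exact
// name (e.g. ACTOR_STANDBY_PORT vs. ACTOR_WEB_SERVER_PORT) in references/standby-mode.md.
const port = Number(process.env.ACTOR_STANDBY_PORT ?? process.env.ACTOR_WEB_SERVER_PORT ?? 3000);

const server = createServer((req, res) => {
    // Readiness probes only need a fast, successful response; distinguishing them from
    // real traffic is described in references/standby-mode.md.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('OK');
});

server.listen(port, () => log.info(`Standby server listening on port ${port}`));
```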

### Project Structure

```text
.actor/
├── actor.json           # Actor config: name, version, env vars, runtime
├── input_schema.json    # Input validation & Console form definition
└── output_schema.json   # Output storage and display templates
src/
└── main.js/ts/py        # Actor entry point
storage/                 # Local storage (mirrors Cloud)
├── datasets/            # Output items (JSON objects)
├── key_value_stores/    # Files, config, INPUT
└── request_queues/      # Pending crawl requests
Dockerfile               # Container image definition
```

### Actor Configuration

See references/actor-json.md for complete actor.json structure and configuration options.

### Input Schema

See references/input-schema.md for input schema structure and examples.

### Output Schema

See references/output-schema.md for output schema structure, examples, and template variables.

### Dataset Schema

See references/dataset-schema.md for dataset schema structure, configuration, and display properties.

### Key-Value Store Schema

See references/key-value-store-schema.md for key-value store schema structure, collections, and configuration.

### Apify MCP Tools

If the Apify MCP server is configured, use these tools for documentation:

- search-apify-docs - Search documentation
- fetch-apify-docs - Get full doc pages

Otherwise, the MCP server is available at https://mcp.apify.com/?tools=docs.

### Resources

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents. Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.