```bash
npx skills add Rami-RK/skill
```

Or install this specific skill:

```bash
npx add-skill https://github.com/Rami-RK/skill/tree/main/custom_skills/analyzing-time-series
```
# Description
Comprehensive diagnostic analysis of time series data. Use when users provide CSV time series data and want to understand its characteristics before forecasting - stationarity, seasonality, trend, forecastability, and transform recommendations.
# SKILL.md
---
name: analyzing-time-series
description: Comprehensive diagnostic analysis of time series data. Use when users provide CSV time series data and want to understand its characteristics before forecasting - stationarity, seasonality, trend, forecastability, and transform recommendations.
---
## Time Series Diagnostics

A comprehensive diagnostic toolkit for analyzing time series data characteristics before forecasting.
### Input Format

The input CSV file should have two columns:
- Date column - Timestamps or dates (e.g., `date`, `timestamp`, `time`)
- Value column - Numeric values to analyze (e.g., `value`, `sales`, `temperature`)
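To try the workflow without your own data, a synthetic CSV in this format can be generated with pandas; the column names (`date`, `value`) and the trend/seasonality pattern below are purely illustrative:

```python
# Build a small synthetic daily series with trend, weekly seasonality, and noise
import numpy as np
import pandas as pd

dates = pd.date_range("2022-01-01", periods=365, freq="D")
trend = np.linspace(100, 160, len(dates))                               # gentle upward trend
seasonal = 10 * np.sin(2 * np.pi * dates.dayofweek.to_numpy() / 7)      # weekly pattern
noise = np.random.default_rng(42).normal(0, 3, len(dates))              # random noise

pd.DataFrame({"date": dates, "value": trend + seasonal + noise}).to_csv(
    "data.csv", index=False
)
```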
### Workflow

#### Step 1: Run diagnostics

```bash
python scripts/diagnose.py data.csv --output-dir results/
```

This runs all statistical tests and analyses. It outputs `diagnostics.json` with all metrics and `summary.txt` with human-readable findings. Column names are auto-detected, or can be specified with the `--date-col` and `--value-col` options.
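If auto-detection picks the wrong columns, name them explicitly; for example (the column names `order_date` and `revenue` here are hypothetical):

```bash
python scripts/diagnose.py data.csv --date-col order_date --value-col revenue --output-dir results/
```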
#### Step 2: Generate plots (optional)

```bash
python scripts/visualize.py data.csv --output-dir results/
```

Creates diagnostic plots in `results/plots/` for visual inspection. Run it after `diagnose.py` to ensure the ACF/PACF plots are synchronized with the stationarity results. Column names are auto-detected, or can be specified with the `--date-col` and `--value-col` options.
#### Step 3: Report to user

Summarize the findings from `summary.txt` and present the relevant plots. See `references/interpretation.md` for guidance on the following questions (a sketch of the typical underlying tests follows the list):
- Is the data forecastable?
- Is it stationary? How much differencing is needed?
- Is there seasonality? What period?
- Is there a trend? In which direction?
- Is a transform needed?
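The skill's scripts answer these questions internally; as a rough illustration only (not the skill's actual implementation, and assuming `date`/`value` column names), similar checks can be run directly with statsmodels:

```python
# Illustrative stationarity and seasonality checks using statsmodels
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import adfuller, kpss

series = (
    pd.read_csv("data.csv", parse_dates=["date"])
    .set_index("date")["value"]
    .dropna()
)

# Stationarity: ADF tests the null of a unit root, KPSS the null of stationarity
adf_p = adfuller(series)[1]
kpss_p = kpss(series, regression="c", nlags="auto")[1]
print(f"ADF p-value:  {adf_p:.4f}  (< 0.05 suggests stationary)")
print(f"KPSS p-value: {kpss_p:.4f}  (< 0.05 suggests non-stationary)")

# Trend and seasonality: classical decomposition with an assumed weekly period
decomp = seasonal_decompose(series, period=7)
print("Trend range:   ", float(decomp.trend.min()), "to", float(decomp.trend.max()))
print("Seasonal range:", float(decomp.seasonal.min()), "to", float(decomp.seasonal.max()))
```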
### Script Options

Both scripts accept:
- `--date-col NAME` - Date column (auto-detected if omitted)
- `--value-col NAME` - Value column (auto-detected if omitted)
- `--output-dir PATH` - Output directory (default: `diagnostics/`)
- `--seasonal-period N` - Seasonal period (auto-detected if omitted)
### Output Files

```
results/
├── diagnostics.json          # All test results and statistics
├── summary.txt               # Human-readable findings
├── diagnostics_state.json    # Internal state for plot synchronization
└── plots/
    ├── timeseries.png
    ├── histogram.png
    ├── rolling_stats.png
    ├── box_by_dayofweek.png  # By day of week (if applicable)
    ├── box_by_month.png      # By month (if applicable)
    ├── box_by_quarter.png    # By quarter (if applicable)
    ├── acf_pacf.png
    ├── decomposition.png
    └── lag_scatter.png
```
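The JSON output can also be inspected programmatically. Since the exact field names depend on the script, this sketch just lists whatever top-level metrics were produced rather than assuming a schema:

```python
import json
from pathlib import Path

diagnostics = json.loads(Path("results/diagnostics.json").read_text())
for key in sorted(diagnostics):  # print metric names without assuming any schema
    print(key)
```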
### References

See `references/interpretation.md` for:
- Statistical test thresholds and interpretation
- Seasonal period guidelines by data frequency
- Transform recommendations
### Dependencies

pandas, numpy, matplotlib, statsmodels, scipy
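If you are running the scripts outside a managed environment, these can be installed with pip:

```bash
pip install pandas numpy matplotlib statsmodels scipy
```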
# README.md
## Skills with the Claude API

### Lesson Files
You can find the lesson's notebook and all the required input files here.
To run the notebook, you need to create a `.env` file containing an Anthropic API key (no Claude subscription is required):

```
ANTHROPIC_API_KEY="your-key"
```

You can get a key from the Claude Developer Platform.
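As a minimal sketch (assuming the `python-dotenv` and `anthropic` packages are installed), the key can be loaded from `.env` and used to create a client like this:

```python
import os

from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()  # reads ANTHROPIC_API_KEY from .env into the environment
client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
```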
About costs: Please note that running through all the notebook cells once will use approximately $0.67 in API credits.
If you'd prefer not to run the notebook, you can:
- view the notebook with pre-run outputs (exactly as shown in the video)
- check out the generated sample outputs
You can also try the same custom skills in Claude.ai.
### Notes
- Here's the list of pre-installed libraries in the sandboxed environment.
- Streaming: The lesson's notebook does not implement streaming with the Messages API, so when you run the cells you might need to wait a few minutes for the response. If you'd like to implement streaming, you can check the documentation here; a rough sketch also follows this list.
- To see more examples of how to use Agent Skills with the API (such as multi-turn conversations), make sure to check this guide.
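For reference, a streaming sketch with the Anthropic Python SDK might look like the following; the model name and prompt are placeholders, not the lesson's actual values:

```python
from anthropic import Anthropic

client = Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

# Stream the response token-by-token instead of waiting for the full message
with client.messages.stream(
    model="claude-sonnet-4-5",  # placeholder - use the model from the lesson
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the diagnostics results."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```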
### Additional References

#### 2. Create a Virtual Environment

Go inside the project folder and create a Python virtual environment:

```bash
python -m venv venv
```

Activate the environment:

- Windows:

  ```bash
  venv\Scripts\activate
  ```

- macOS / Linux:

  ```bash
  source venv/bin/activate
  ```

Install all dependencies:

```bash
pip install -r requirements.txt
```
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.