Use when you need checkpoints, safe rollback, or recoverable destructive operations, or before running a risky change you may have to undo.
npx skills add ramidamolis-alt/agent-skills-workflows --skill "rollback-engine"
Install specific skill from multi-skill repository
# Description
Transaction rollback system for safe code operations. Checkpoint creation, selective rollback, state diff analysis, recovery procedures, and Git-integrated undo. Use for safe destructive operations.
# SKILL.md
name: rollback-engine
description: Transaction rollback system for safe code operations. Checkpoint creation, selective rollback, state diff analysis, recovery procedures, and Git-integrated undo. Use for safe destructive operations.
triggers: ["rollback", "undo", "revert", "checkpoint", "restore", "ย้อนกลับ"]
⏪ Rollback Engine Skill
Expert in creating safe, recoverable operations with checkpoint and rollback capabilities.
Core Concepts
Rollback Architecture
┌─────────────────────────────────────────────────────────────┐
│                       ROLLBACK ENGINE                        │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐   │
│  │  CHECKPOINT  │───>│   EXECUTE    │───>│    VERIFY    │   │
│  │    CREATE    │    │   CHANGES    │    │   CHANGES    │   │
│  └──────────────┘    └──────┬───────┘    └──────┬───────┘   │
│         ^                   │                   │           │
│         │                   v                   v           │
│         │            ┌──────────────┐    ┌──────────────┐   │
│         │            │   FAILURE?   │    │   SUCCESS?   │   │
│         │            └──────┬───────┘    └──────┬───────┘   │
│         │                   │                   │           │
│         │                   v                   v           │
│  ┌──────┴───────┐    ┌──────────────┐    ┌──────────────┐   │
│  │   ROLLBACK   │<───│ AUTO-REVERT  │    │    COMMIT    │   │
│  │   TO LAST    │    └──────────────┘    │  CHECKPOINT  │   │
│  └──────────────┘                        └──────────────┘   │
│                                                              │
└─────────────────────────────────────────────────────────────┘
Checkpoint System
Checkpoint Types
checkpoint_types:
  file_checkpoint:
    stores:
      - file_content
      - file_path
      - permissions
      - timestamp
    use_case: "Before modifying files"
  git_checkpoint:
    stores:
      - commit_sha
      - branch_name
      - uncommitted_changes
    use_case: "Before destructive git operations"
  state_checkpoint:
    stores:
      - application_state
      - database_state
      - environment_vars
    use_case: "Before complex state changes"
  memory_checkpoint:
    stores:
      - Memory MCP entities
      - relations
    use_case: "Before knowledge graph changes"
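The YAML above is declarative. As a rough illustration only (not part of the skill's API), the same checkpoint types could be modelled in Python like this; the CheckpointType enum and Checkpoint dataclass are assumptions for the sketch:

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class CheckpointType(Enum):
    FILE = "file_checkpoint"
    GIT = "git_checkpoint"
    STATE = "state_checkpoint"
    MEMORY = "memory_checkpoint"

@dataclass
class Checkpoint:
    # Which kind of checkpoint was taken and what it stored
    type: CheckpointType
    created_at: datetime = field(default_factory=datetime.now)
    data: dict = field(default_factory=dict)  # e.g. {"commit_sha": "...", "branch_name": "..."}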
Creating Checkpoints
import os
import json
import hashlib
from datetime import datetime
from pathlib import Path


class CheckpointManager:
    def __init__(self, checkpoint_dir: str = ".checkpoints"):
        self.checkpoint_dir = Path(checkpoint_dir)
        self.checkpoint_dir.mkdir(exist_ok=True)

    def create_checkpoint(self, name: str, files: list[str]) -> str:
        """
        Create a checkpoint of specified files
        """
        checkpoint_id = f"{name}_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
        checkpoint_path = self.checkpoint_dir / checkpoint_id
        checkpoint_path.mkdir()

        manifest = {
            "id": checkpoint_id,
            "name": name,
            "timestamp": datetime.now().isoformat(),
            "files": []
        }

        for file_path in files:
            if os.path.exists(file_path):
                # Read and store file content
                with open(file_path, 'rb') as f:
                    content = f.read()

                # Calculate hash
                file_hash = hashlib.sha256(content).hexdigest()

                # Save to checkpoint
                safe_name = file_path.replace('/', '_')
                with open(checkpoint_path / safe_name, 'wb') as f:
                    f.write(content)

                manifest["files"].append({
                    "original_path": file_path,
                    "checkpoint_name": safe_name,
                    "hash": file_hash,
                    "size": len(content)
                })

        # Save manifest
        with open(checkpoint_path / "manifest.json", 'w') as f:
            json.dump(manifest, f, indent=2)

        return checkpoint_id

    def restore_checkpoint(self, checkpoint_id: str) -> dict:
        """
        Restore files from checkpoint
        """
        checkpoint_path = self.checkpoint_dir / checkpoint_id
        manifest_path = checkpoint_path / "manifest.json"

        with open(manifest_path) as f:
            manifest = json.load(f)

        restored = []
        for file_info in manifest["files"]:
            checkpoint_file = checkpoint_path / file_info["checkpoint_name"]
            original_path = file_info["original_path"]

            with open(checkpoint_file, 'rb') as f:
                content = f.read()

            # Restore file
            with open(original_path, 'wb') as f:
                f.write(content)
            restored.append(original_path)

        return {
            "checkpoint_id": checkpoint_id,
            "restored_files": restored,
            "timestamp": manifest["timestamp"]
        }
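Typical usage of CheckpointManager might look like the following sketch; the file paths and apply_risky_edit() are placeholders:

manager = CheckpointManager()

# Snapshot the files before a risky edit
checkpoint_id = manager.create_checkpoint(
    "refactor_config", ["config.yaml", "src/settings.py"]  # placeholder paths
)

try:
    apply_risky_edit()  # hypothetical operation that rewrites the files
except Exception:
    # Put every file back exactly as it was at checkpoint time
    result = manager.restore_checkpoint(checkpoint_id)
    print("Restored:", result["restored_files"])
    raise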
Git Integration
Git Checkpoint Operations
import subprocess
from datetime import datetime


class GitCheckpoint:
    def __init__(self, repo_path: str = "."):
        self.repo_path = repo_path

    def create_checkpoint(self, name: str) -> dict:
        """
        Create git-based checkpoint
        """
        # Get current state
        current_branch = subprocess.run(
            ["git", "branch", "--show-current"],
            capture_output=True, text=True, cwd=self.repo_path
        ).stdout.strip()

        current_sha = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, cwd=self.repo_path
        ).stdout.strip()

        # Check for uncommitted changes
        status = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, cwd=self.repo_path
        ).stdout

        # Stash if needed
        stash_name = None
        if status.strip():
            stash_name = f"checkpoint_{name}_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
            subprocess.run(
                ["git", "stash", "push", "-m", stash_name],
                cwd=self.repo_path
            )

        return {
            "name": name,
            "branch": current_branch,
            "sha": current_sha,
            "stash": stash_name,
            "timestamp": datetime.now().isoformat()
        }

    def restore_checkpoint(self, checkpoint: dict) -> bool:
        """
        Restore to git checkpoint
        """
        # Reset to checkpoint SHA
        subprocess.run(
            ["git", "reset", "--hard", checkpoint["sha"]],
            cwd=self.repo_path
        )

        # Pop stash if it exists
        if checkpoint.get("stash"):
            subprocess.run(
                ["git", "stash", "pop"],
                cwd=self.repo_path
            )

        return True

    def soft_rollback(self, num_commits: int = 1) -> str:
        """
        Soft rollback - keep changes staged
        """
        subprocess.run(
            ["git", "reset", "--soft", f"HEAD~{num_commits}"],
            cwd=self.repo_path
        )
        return subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, cwd=self.repo_path
        ).stdout.strip()

    def hard_rollback(self, num_commits: int = 1) -> str:
        """
        Hard rollback - discard all changes
        """
        subprocess.run(
            ["git", "reset", "--hard", f"HEAD~{num_commits}"],
            cwd=self.repo_path
        )
        return subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, cwd=self.repo_path
        ).stdout.strip()
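A usage sketch, assuming the working directory is a git repository; the checkpoint name and run_destructive_git_operation() are illustrative:

git_cp = GitCheckpoint(repo_path=".")

# Record branch, HEAD SHA, and stash any uncommitted work
checkpoint = git_cp.create_checkpoint("before_history_rewrite")

try:
    run_destructive_git_operation()  # hypothetical, e.g. a history rewrite
except Exception:
    # Hard-reset to the recorded SHA and pop the stash
    git_cp.restore_checkpoint(checkpoint)
    raise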
State Diff Analysis
Diff Generation
import difflib


class StateDiffAnalyzer:
    def compute_diff(self, before: str, after: str) -> dict:
        """
        Compute diff between two states
        """
        before_lines = before.splitlines(keepends=True)
        after_lines = after.splitlines(keepends=True)

        diff = list(difflib.unified_diff(
            before_lines,
            after_lines,
            fromfile='before',
            tofile='after'
        ))

        # Count changes
        additions = sum(1 for line in diff if line.startswith('+') and not line.startswith('+++'))
        deletions = sum(1 for line in diff if line.startswith('-') and not line.startswith('---'))

        return {
            "diff": ''.join(diff),
            "additions": additions,
            "deletions": deletions,
            "total_changes": additions + deletions
        }

    def analyze_impact(self, diff_result: dict) -> dict:
        """
        Analyze impact of changes
        """
        impact = {
            "severity": "low",
            "risk_factors": [],
            "recommendations": []
        }

        # Assess severity
        if diff_result["total_changes"] > 100:
            impact["severity"] = "high"
            impact["risk_factors"].append("Large number of changes")
        elif diff_result["total_changes"] > 20:
            impact["severity"] = "medium"

        # Check for risky patterns
        diff_text = diff_result["diff"]
        if "delete" in diff_text.lower() or "drop" in diff_text.lower():
            impact["risk_factors"].append("Destructive operations detected")
            impact["severity"] = "high"
        if "password" in diff_text.lower() or "secret" in diff_text.lower():
            impact["risk_factors"].append("Sensitive data changes")
            impact["severity"] = "high"

        # Recommendations
        if impact["severity"] == "high":
            impact["recommendations"].append("Review changes carefully before committing")
            impact["recommendations"].append("Create backup before proceeding")

        return impact
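A usage sketch tying the two methods together; config.yaml is a placeholder file:

analyzer = StateDiffAnalyzer()

with open("config.yaml") as f:   # placeholder file
    before = f.read()

# ... the file is modified here ...

with open("config.yaml") as f:
    after = f.read()

diff_result = analyzer.compute_diff(before, after)
impact = analyzer.analyze_impact(diff_result)

if impact["severity"] == "high":
    # Follow the recommendations: review carefully and create a checkpoint first
    print(impact["risk_factors"], impact["recommendations"])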
Recovery Procedures
Automatic Recovery
recovery_procedures:
  file_recovery:
    trigger: "File modification failed"
    steps:
      - Identify affected files from the manifest
      - Restore from checkpoint
      - Verify file integrity (hash check)
      - Log recovery action
  git_recovery:
    trigger: "Git operation failed"
    steps:
      - Identify current HEAD
      - Reset to checkpoint SHA
      - Pop any stashed changes
      - Verify the working tree is clean
  database_recovery:
    trigger: "Database migration failed"
    steps:
      - Identify failed migration
      - Run rollback migration
      - Verify database schema
      - Restore data from backup if needed
  graceful_degradation:
    trigger: "Partial failure"
    steps:
      - Identify successful operations
      - Determine rollback scope
      - Roll back only the failed parts
      - Resume from last good state
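These procedures can be wired up with a simple dispatch table; the handler functions below are illustrative stubs, not part of the skill:

def recover_files(context: dict): ...       # restore files via CheckpointManager
def recover_git(context: dict): ...         # reset via GitCheckpoint
def recover_database(context: dict): ...    # run the rollback migration
def recover_partial(context: dict): ...     # roll back only the failed parts

RECOVERY_HANDLERS = {
    "file_recovery": recover_files,
    "git_recovery": recover_git,
    "database_recovery": recover_database,
    "graceful_degradation": recover_partial,
}

def dispatch_recovery(procedure: str, context: dict):
    handler = RECOVERY_HANDLERS.get(procedure)
    if handler is None:
        raise ValueError(f"Unknown recovery procedure: {procedure}")
    return handler(context)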
Recovery Workflow
class RecoveryManager:
    def __init__(self, checkpoint_manager: CheckpointManager):
        self.checkpoint_manager = checkpoint_manager

    async def execute_with_recovery(self, operation, checkpoint_name: str):
        """
        Execute operation with automatic recovery on failure
        """
        # Create checkpoint before operation
        checkpoint_id = self.checkpoint_manager.create_checkpoint(
            checkpoint_name,
            operation.affected_files
        )

        try:
            # Execute operation
            result = await operation.execute()

            # Verify success
            if not await self.verify_operation(result):
                raise OperationVerificationError("Operation verification failed")

            # Log success
            await self.log_success(checkpoint_id, operation)
            return result

        except Exception as e:
            # Automatic recovery
            await self.recover_from_failure(checkpoint_id, e)
            raise

    async def recover_from_failure(self, checkpoint_id: str, error: Exception):
        """
        Recover from operation failure
        """
        # Log failure
        await self.log_failure(checkpoint_id, error)

        # Restore checkpoint
        restore_result = self.checkpoint_manager.restore_checkpoint(checkpoint_id)

        # Verify restoration
        if not await self.verify_restoration(restore_result):
            raise RecoveryError("Failed to restore checkpoint")

        # Persist recovery record to Memory MCP
        await mcp_Memory_create_entities([{
            "name": f"Recovery_{checkpoint_id}",
            "entityType": "RecoveryRecord",
            "observations": [
                f"Error: {str(error)}",
                f"Checkpoint: {checkpoint_id}",
                f"Files restored: {restore_result['restored_files']}",
                "Status: SUCCESS"
            ]
        }])
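Calling the manager might look like this sketch; FileRewriteOperation is a hypothetical operation object that exposes affected_files and an async execute() method, which is all execute_with_recovery relies on:

async def main():
    manager = RecoveryManager(CheckpointManager())
    operation = FileRewriteOperation(affected_files=["src/app.py"])  # hypothetical operation

    # A checkpoint is taken automatically; on failure the files are restored
    # and the original exception is re-raised to the caller.
    return await manager.execute_with_recovery(operation, "rewrite_app")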
MCP Integration
Checkpoint with Memory MCP
async def create_memory_checkpoint():
    """
    Create checkpoint of Memory MCP state
    """
    # Read the entire graph
    graph = await mcp_Memory_read_graph()

    # Create checkpoint entity
    await mcp_Memory_create_entities([{
        "name": f"MemoryCheckpoint_{datetime.now().isoformat()}",
        "entityType": "MemoryCheckpoint",
        "observations": [
            f"Entities: {len(graph['entities'])}",
            f"Relations: {len(graph['relations'])}",
            f"Snapshot: {json.dumps(graph)}"
        ]
    }])
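A sketch of the reverse direction, assuming the Memory MCP server also exposes a create_relations tool wrapped in the same naming style (mcp_Memory_create_relations); the parsing mirrors how the Snapshot observation is written above:

async def restore_memory_checkpoint(checkpoint_entity: dict):
    """
    Rebuild the knowledge graph from a MemoryCheckpoint entity's snapshot
    """
    # Find the "Snapshot: {...}" observation and parse the graph back out
    snapshot_obs = next(
        obs for obs in checkpoint_entity["observations"]
        if obs.startswith("Snapshot: ")
    )
    graph = json.loads(snapshot_obs[len("Snapshot: "):])

    # Re-create the recorded entities and relations
    await mcp_Memory_create_entities(graph["entities"])
    await mcp_Memory_create_relations(graph["relations"])  # assumed wrapper, see note above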
Rollback Analysis with UltraThink
async def analyze_rollback_need(current_state, target_state):
    """
    Analyze whether rollback is needed
    """
    return await mcp_UltraThink_ultrathink(
        thought=f"""
        Analyzing rollback decision:

        Current State:
        {current_state}

        Target State:
        {target_state}

        Analysis:
        1. What changes exist between states?
        2. Are changes recoverable without rollback?
        3. Risk assessment of rollback
        4. Alternative solutions

        Recommendation: ...
        """,
        total_thoughts=15
    )
Quick Reference
When to Create Checkpoints
checkpoint_triggers:
  always:
    - Before file deletions
    - Before database migrations
    - Before git force operations
    - Before bulk updates
  recommended:
    - Before complex refactoring
    - Before configuration changes
    - Before dependency updates
  optional:
    - Before minor edits
    - Before adding new files
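A minimal helper that encodes this policy; the operation labels mirror the trigger list above and are otherwise illustrative:

ALWAYS_CHECKPOINT = {"file_deletion", "database_migration", "git_force", "bulk_update"}
RECOMMENDED_CHECKPOINT = {"refactoring", "config_change", "dependency_update"}

def should_checkpoint(operation: str, cautious: bool = True) -> bool:
    # Always checkpoint destructive operations; checkpoint "recommended"
    # ones unless the caller explicitly opts out.
    if operation in ALWAYS_CHECKPOINT:
        return True
    if operation in RECOMMENDED_CHECKPOINT:
        return cautious
    return False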
Rollback Commands
# File rollback
restore_checkpoint <checkpoint_id>
# Git rollback
git reset --hard <sha>
git reset --soft HEAD~1
# Selective rollback
rollback_files <checkpoint_id> --files="file1.py,file2.py"
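The selective rollback command could be backed by a helper like the sketch below, which reuses the CheckpointManager manifest and restores only the requested files (rollback_files here is illustrative, not an existing CLI; it assumes the json import from the CheckpointManager example):

def rollback_files(checkpoint_id: str, files: list[str], manager: CheckpointManager) -> list[str]:
    """Restore only the named files from a checkpoint."""
    checkpoint_path = manager.checkpoint_dir / checkpoint_id

    with open(checkpoint_path / "manifest.json") as f:
        manifest = json.load(f)

    restored = []
    for file_info in manifest["files"]:
        if file_info["original_path"] not in files:
            continue  # skip files that were not selected for rollback
        with open(checkpoint_path / file_info["checkpoint_name"], 'rb') as f:
            content = f.read()
        with open(file_info["original_path"], 'wb') as f:
            f.write(content)
        restored.append(file_info["original_path"])
    return restored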
Related Skills
- state-machine: State transition management
- omega-agent: Complex operation orchestration
- debugger: Error analysis before rollback
- git-workflow: Git operation patterns
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.