Install a specific skill from a multi-skill repository:

```bash
npx skills add DeconvFFT/resume-crafter --skill "backend-engineering"
```
# Description
Build production-grade, scalable backends with Rust (Axum) for high-performance services and FastAPI for Python APIs. Includes ML inference serving (ONNX, vLLM, TensorRT), event-driven architecture (Kafka, RabbitMQ, Redis), Docker/Kubernetes orchestration, and AWS deployment (ECS, EKS, Lambda). Use when building APIs, microservices, real-time systems, ML serving infrastructure, or deploying containerized applications to AWS.
# SKILL.md

```yaml
---
name: backend-engineering
description: Build production-grade, scalable backends with Rust (Axum) for high-performance services and FastAPI for Python APIs. Includes ML inference serving (ONNX, vLLM, TensorRT), event-driven architecture (Kafka, RabbitMQ, Redis), Docker/Kubernetes orchestration, and AWS deployment (ECS, EKS, Lambda). Use when building APIs, microservices, real-time systems, ML serving infrastructure, or deploying containerized applications to AWS.
---
```
## Backend Engineering Skill

Build scalable, high-performance backends using the right tool for each layer.

### Language Selection Framework
| Use Case | Choose | Rationale |
|---|---|---|
| High-throughput streaming | Rust/Axum | Memory efficiency, no GC pauses |
| ML inference orchestration | FastAPI | Library ecosystem, model compatibility |
| CRUD APIs, rapid prototyping | FastAPI | Development velocity |
| Sub-millisecond latency | Rust/Axum | Predictable performance |
| Data pipelines | Hybrid | Rust for hot paths, Python for orchestration |
### Quick Start Patterns

#### FastAPI Service

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: initialize pools, connections
    yield
    # Shutdown: cleanup

app = FastAPI(lifespan=lifespan)

@app.get("/health")
async def health():
    return {"status": "ok"}
```
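The `lifespan` hook above is an async context manager: everything before the `yield` runs once at startup, everything after it runs once at shutdown. A stdlib-only sketch of that ordering (no FastAPI required; `events` and the stand-in `serve` coroutine are illustrative):

```python
import asyncio
from contextlib import asynccontextmanager

events = []  # records the startup/shutdown ordering

@asynccontextmanager
async def lifespan(app):
    # Startup: e.g. open DB pools, warm caches
    events.append("pool opened")
    yield
    # Shutdown: e.g. close pools, flush buffers
    events.append("pool closed")

async def serve():
    # FastAPI enters the context before accepting traffic and exits it
    # after the server stops; this stand-in does the same around the body.
    async with lifespan(app=None):
        events.append("handling requests")

asyncio.run(serve())
```

In a real service, the startup section would create connection pools and the shutdown section would close them, which is why this work belongs in `lifespan` rather than in module-level setup.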
#### Rust/Axum Service

```rust
use std::sync::Arc;

use axum::{routing::get, Json, Router};

#[derive(Clone)]
struct AppState { /* db pools, config */ }

async fn health() -> Json<serde_json::Value> {
    Json(serde_json::json!({"status": "ok"}))
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/health", get(health))
        .with_state(Arc::new(AppState {}));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```
### Reference Documentation
Consult these references based on task requirements:
| Task | Reference File |
|---|---|
| FastAPI patterns, async DB, testing | references/fastapi.md |
| Rust/Axum services, SQLx, error handling | references/rust.md |
| ML inference, quantization, vLLM | references/ml-serving.md |
| Kafka, RabbitMQ, Redis, event patterns | references/event-driven.md |
| Docker multi-stage builds, security | references/docker.md |
| Kubernetes production patterns | references/kubernetes.md |
| AWS ECS, EKS, Lambda, CDK | references/aws.md |
### Architecture Decision Flow

```
New Backend Service Request
                │
                ▼
┌─────────────────────────────────┐
│ Latency requirement?            │
│   < 10ms → Rust                 │
│   > 10ms → FastAPI ok           │
└─────────────────────────────────┘
                │
                ▼
┌─────────────────────────────────┐
│ ML model serving?               │
│   LLM        → vLLM             │
│   Vision/NLP → ONNX/TensorRT    │
│   None       → skip             │
└─────────────────────────────────┘
                │
                ▼
┌─────────────────────────────────┐
│ Event-driven?                   │
│   High throughput → Kafka       │
│   Complex routing → RabbitMQ    │
│   Real-time → Redis Streams     │
└─────────────────────────────────┘
                │
                ▼
┌─────────────────────────────────┐
│ Deployment target?              │
│   Simple → ECS Fargate          │
│   Complex/Multi-cloud → EKS     │
│   Event handlers → Lambda       │
└─────────────────────────────────┘
```
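The decision flow can also be expressed as a small helper function for quick sanity checks. The thresholds and stack names mirror the flow above; `choose_stack` and its argument names are illustrative, not part of the skill's API:

```python
def choose_stack(latency_ms, model=None, event_pattern=None, deployment="simple"):
    """Walk the architecture decision flow; all names are illustrative."""
    stack = {}
    # Latency requirement: sub-10ms budgets favor Rust's predictable latency
    stack["language"] = "Rust/Axum" if latency_ms < 10 else "FastAPI"
    # ML model serving: LLMs go to vLLM, vision/NLP to ONNX/TensorRT
    serving = {"llm": "vLLM", "vision": "ONNX/TensorRT", "nlp": "ONNX/TensorRT"}
    if model:
        stack["serving"] = serving[model]
    # Event-driven messaging: pick the broker by traffic pattern
    messaging = {
        "high_throughput": "Kafka",
        "complex_routing": "RabbitMQ",
        "real_time": "Redis Streams",
    }
    if event_pattern:
        stack["messaging"] = messaging[event_pattern]
    # Deployment target: start simple, reach for EKS only when needed
    targets = {"simple": "ECS Fargate", "complex": "EKS", "event_handlers": "Lambda"}
    stack["deploy"] = targets[deployment]
    return stack
```

For example, `choose_stack(5, model="llm", deployment="complex")` selects Rust/Axum, vLLM, and EKS.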
### Production Checklist
Before deploying any service:
- [ ] Health checks (liveness + readiness)
- [ ] Graceful shutdown handling
- [ ] Resource limits configured (CPU, memory)
- [ ] Connection pooling tuned
- [ ] Circuit breakers on external calls
- [ ] Structured logging (JSON)
- [ ] Distributed tracing enabled
- [ ] Secrets in Secrets Manager
- [ ] Multi-stage Docker build
- [ ] Auto-scaling configured
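To illustrate the "circuit breakers on external calls" item: a breaker fails fast once a dependency is clearly down, instead of letting every request wait on a timeout. A minimal stdlib-only sketch (thresholds and the `CircuitBreaker` class itself are illustrative; production code would add a half-open state and one breaker per dependency):

```python
import time

class CircuitBreaker:
    """Opens after `max_failures` consecutive failures, rejects calls
    while open, and allows a retry after `reset_after` seconds."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock  # injectable for testing
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Reset window elapsed: close the circuit and allow a retry
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # any success resets the count
        return result
```

Wrapping each external dependency in its own breaker keeps one failing downstream service from exhausting threads or connections across the whole API.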
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents. Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.