# tsaarni / contour

# Install this skill:
npx skills add tsaarni/agent-skills --skill "contour"

This installs only the contour skill from the tsaarni/agent-skills multi-skill repository.

# Description

Develop and test Project Contour locally on Linux and macOS. Includes kind cluster setup, running from source, VSCode debugging, custom image builds, and troubleshooting with logs, metrics, and admin APIs.

# SKILL.md


name: contour
description: Develop and test Project Contour locally on Linux and macOS. Includes kind cluster setup, running from source, VSCode debugging, custom image builds, and troubleshooting with logs, metrics, and admin APIs.


Contour Developer Guide

Prerequisites

  • Go: Version specified in go.mod file
  • Docker: For building container images
  • kind: For local Kubernetes clusters
  • kubectl: For Kubernetes cluster management
  • macOS only: colima for Docker runtime
  • Optional: jq for JSON processing, httpie for API testing
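
A quick way to confirm the tools are installed and on the PATH before creating a cluster:

# Verify tool availability and versions
go version
docker version --format '{{.Server.Version}}'
kind version
kubectl version --client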

Manage a Local Development Cluster

Create and Configure a Kind Cluster

# On macOS: Start colima if not already running
colima start

# Create a kind cluster with custom configuration
kind create cluster --config assets/kind-cluster-config.yaml --name contour
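
The cluster layout comes from assets/kind-cluster-config.yaml in this repository. As an illustration only (the actual asset may differ), a typical kind configuration for ingress testing maps the node's HTTP/HTTPS ports to the host:

# Hypothetical example of a kind cluster config; the shipped asset is authoritative
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP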

# Verify cluster is running
kubectl cluster-info --context kind-contour

# Deploy latest stable Contour release to the kind cluster
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

# Wait for Contour to be ready
kubectl -n projectcontour wait --for=condition=available --timeout=300s deployment/contour
kubectl -n projectcontour wait --for=condition=ready --timeout=300s pod -l app=envoy
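
To confirm that Contour is programming Envoy end to end, a quick smoke test with any HTTP backend works; kuard is used here purely as an example, and port-forwarding to the envoy Service avoids assumptions about how the kind cluster maps host ports:

# Deploy a sample backend and expose it as a Service
kubectl create deployment kuard --image=gcr.io/kuar-demo/kuard-amd64:blue
kubectl expose deployment kuard --port=80 --target-port=8080

# Route traffic to it through Contour with an HTTPProxy
cat <<EOF | kubectl apply -f -
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: kuard
spec:
  virtualhost:
    fqdn: kuard.local
  routes:
  - services:
    - name: kuard
      port: 80
EOF

# Send a request through Envoy using the HTTPProxy's virtual host name
kubectl -n projectcontour port-forward service/envoy 8888:80 &
curl -H "Host: kuard.local" http://localhost:8888/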

Cleanup

# Delete the kind cluster when done
kind delete cluster --name contour

# On macOS: Stop colima if no longer needed
colima stop

Running Contour from the Source Code

This workflow lets you run Contour locally on your host machine while Envoy runs in the kind cluster. It enables rapid development iteration: rebuild and restart the local process to pick up code changes, or attach a debugger.

Step 1: Configure Network Connectivity

Create an Endpoints resource so Envoy in the cluster can connect to Contour on the host.

# On Linux: Use Docker bridge gateway IP
sed "s/REPLACE_ADDRESS_HERE/$(docker network inspect kind | jq -r '.[0].IPAM.Config[0].Gateway')/" assets/contour-endpoints-dev.yaml | kubectl apply -f -

# On macOS: Use fixed colima/lima host IP (192.168.5.2)
sed "s/REPLACE_ADDRESS_HERE/192.168.5.2/" assets/contour-endpoints-dev.yaml | kubectl apply -f -

Note: On macOS with colima/lima, the host is accessible at 192.168.5.2 (see lima documentation). On Linux, the Docker bridge gateway IP is used.
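
After applying the manifest, confirm that the endpoint now points at the host address rather than at a pod:

# The contour endpoints should list the host IP (Docker bridge gateway or 192.168.5.2)
kubectl -n projectcontour get endpoints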

Step 2: Scale Down In-Cluster Contour

Stop the Contour deployment in the cluster and restart Envoy to pick up the new endpoints.

# Scale down the in-cluster Contour deployment to 0 replicas
kubectl -n projectcontour scale deployment contour --replicas=0

# Restart Envoy pods to reconnect to host-based Contour
kubectl -n projectcontour rollout restart daemonset envoy
kubectl -n projectcontour rollout status daemonset envoy

Step 3: Extract TLS Certificates

Extract the TLS certificates from the cluster so your local Contour can authenticate with Envoy.

# Extract CA certificate
kubectl -n projectcontour get secret contourcert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

# Extract server certificate
kubectl -n projectcontour get secret contourcert -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt

# Extract private key
kubectl -n projectcontour get secret contourcert -o jsonpath='{.data.tls\.key}' | base64 -d > tls.key

Note: On macOS, use base64 -D (uppercase D) instead of base64 -d if the lowercase flag is not recognized.
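
If the local Contour later fails with TLS errors, check that the extracted files are valid and form a matching pair:

# Inspect the certificate subject and expiry, and verify it against the extracted CA
openssl x509 -in tls.crt -noout -subject -enddate
openssl verify -CAfile ca.crt tls.crt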

Step 4: Run Contour from Source

Compile and run Contour directly from the repository root. This provides immediate feedback on code changes.

# Run Contour serve command with required flags
go run github.com/projectcontour/contour/cmd/contour serve \
  --xds-address=0.0.0.0 \
  --xds-port=8001 \
  --envoy-service-http-port=8080 \
  --envoy-service-https-port=8443 \
  --contour-cafile=ca.crt \
  --contour-cert-file=tls.crt \
  --contour-key-file=tls.key

Contour should now be running locally and serving configuration to Envoy in the cluster. You can make code changes and restart the process to test them.
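
A simple way to confirm the connection is to check that the Envoy pods report Ready again (they typically only become Ready after receiving configuration over xDS) and that HTTPProxy resources are being processed:

# Envoy pods should return to Ready once the local Contour is serving xDS
kubectl -n projectcontour get pods -l app=envoy

# HTTPProxy resources should report a valid status once processed
kubectl get httpproxies -A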

Option: Use VSCode Debugger

For debugging with breakpoints and variable inspection, set up VSCode launch configuration.

# Create .vscode directory if it doesn't exist
mkdir -p .vscode

# Copy launch configuration for debugging
cp assets/contour-vscode-launch.json .vscode/launch.json

# Copy workspace settings
cp assets/vscode-settings.json .vscode/settings.json

After copying the configuration, use VSCode's Run and Debug view (Cmd+Shift+D on macOS, Ctrl+Shift+D on Linux) to start Contour in debug mode.
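
The copied assets define the actual configuration; as a rough orientation only, a minimal launch.json for debugging the serve command with the VSCode Go extension could look like this (hypothetical sketch, the shipped assets/contour-vscode-launch.json may differ):

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "contour serve",
      "type": "go",
      "request": "launch",
      "mode": "debug",
      "program": "${workspaceFolder}/cmd/contour",
      "args": [
        "serve",
        "--xds-address=0.0.0.0",
        "--xds-port=8001",
        "--envoy-service-http-port=8080",
        "--envoy-service-https-port=8443",
        "--contour-cafile=ca.crt",
        "--contour-cert-file=tls.crt",
        "--contour-key-file=tls.key"
      ]
    }
  ]
}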

Compile and Deploy Custom Contour Image to Kind Cluster

This workflow builds a custom Contour container image and deploys it to the kind cluster. Use this when you need to test Contour running fully in-cluster with your changes.

Step 1: Build and Load Custom Image

# Build the Contour container image
make container VERSION=latest

# Load the image into the kind cluster's container runtime
kind load docker-image ghcr.io/projectcontour/contour:latest --name contour

# Verify the image is available in the cluster
docker exec contour-control-plane crictl images | grep contour

Step 2: Update Deployments to Use Custom Image

Patch the Contour Deployment and Envoy DaemonSet to use your custom image. The image reference must match the name loaded into kind in Step 1 (ghcr.io/projectcontour/contour:latest), and imagePullPolicy: Never ensures the node-local image is used rather than pulled from a registry:

cat <<EOF | kubectl -n projectcontour patch deployment contour --patch-file=/dev/stdin
spec:
  template:
    spec:
      containers:
      - name: contour
        image: ghcr.io/projectcontour/contour:latest
        imagePullPolicy: Never
EOF

cat <<EOF | kubectl -n projectcontour patch daemonset envoy --patch-file=/dev/stdin
spec:
  template:
    spec:
      containers:
      - name: shutdown-manager
        image: ghcr.io/projectcontour/contour:latest
        imagePullPolicy: Never
      initContainers:
      - name: envoy-initconfig
        image: ghcr.io/projectcontour/contour:latest
        imagePullPolicy: Never
EOF

Step 3: Verify Deployment

# Wait for Contour deployment to be ready
kubectl -n projectcontour rollout status deployment/contour

# Wait for Envoy daemonset to be ready
kubectl -n projectcontour rollout status daemonset/envoy

# Verify pods are running with the custom image
kubectl -n projectcontour get pods -o wide
kubectl -n projectcontour describe pod -l app=contour | grep Image:

Troubleshooting

View Logs

# Stream Contour logs
kubectl -n projectcontour logs -f deployment/contour

# Stream Envoy logs (from the envoy container)
kubectl -n projectcontour logs -f daemonset/envoy -c envoy

# Stream shutdown-manager logs
kubectl -n projectcontour logs -f daemonset/envoy -c shutdown-manager

# View logs from all Contour pods
kubectl -n projectcontour logs -l app=contour --tail=100

# View previous container logs (if pod crashed)
kubectl -n projectcontour logs deployment/contour --previous

Access Contour Metrics and Debug Endpoints

# Port-forward to Contour metrics endpoint (default port 8000)
kubectl -n projectcontour port-forward deployment/contour 8000:8000

# Get Prometheus metrics
http localhost:8000/metrics

# Port-forward to Contour debug endpoint (default port 6060)
kubectl -n projectcontour port-forward deployment/contour 6060:6060

# Get Contour debug information
http localhost:6060/debug/pprof/
http localhost:6060/debug/dag  # View internal DAG state

Access Envoy Admin API

The Envoy admin API provides extensive runtime information and configuration.

# Port-forward to Envoy admin interface
kubectl -n projectcontour port-forward daemonset/envoy 9001:9001

# View full configuration dump (includes EDS)
http http://localhost:9001/config_dump?include_eds | jq -C . | less

# View active clusters
http http://localhost:9001/config_dump | jq '.configs[].dynamic_active_clusters'

# View route configurations
http http://localhost:9001/config_dump | jq '.configs[].dynamic_route_configs'

# View cluster statistics
http http://localhost:9001/clusters

# View listener statistics
http http://localhost:9001/listeners

# View server info and stats
http http://localhost:9001/server_info
http http://localhost:9001/stats

# View help for all available endpoints
http http://localhost:9001/help
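
The admin API can also raise Envoy's log verbosity at runtime, which is often the fastest way to see why a request is rejected; /logging is a standard Envoy admin endpoint:

# Set all active loggers to debug (use level=info to revert)
http POST "http://localhost:9001/logging?level=debug"

# Raise only the http logger
http POST "http://localhost:9001/logging?http=debug"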

Running Specific Contour Versions

To test against specific Contour versions for compatibility or regression testing:

# Deploy specific stable releases
kubectl apply -f https://projectcontour.io/quickstart/v1.28.0/contour.yaml

# View available releases
# Visit: https://raw.githubusercontent.com/projectcontour/contour/refs/heads/main/versions.yaml

Creating Custom Contour Configuration

Create or update the Contour ConfigMap to customize behavior. This example shows tracing configuration:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: contour
  namespace: projectcontour
data:
  contour.yaml: |
    tracing:
      extensionService: projectcontour/otel-collector
      serviceName: test-service
      overallSampling: "50"
      clientSampling: "75"
      randomSampling: "25"
      includePodDetail: true
EOF

After updating the ConfigMap, restart Contour to apply changes:

kubectl -n projectcontour rollout restart deployment/contour
kubectl -n projectcontour rollout status deployment/contour
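
To double-check what configuration Contour is actually reading, print the ConfigMap contents directly:

# Show the contour.yaml currently stored in the ConfigMap
kubectl -n projectcontour get configmap contour -o jsonpath='{.data.contour\.yaml}'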

Additional Resources

  • Contour Documentation: https://projectcontour.io/docs/
  • API Reference: https://projectcontour.io/docs/main/api/
  • GitHub Repository: https://github.com/projectcontour/contour
  • Envoy Documentation: https://www.envoyproxy.io/docs/envoy/latest/
  • Kind Documentation: https://kind.sigs.k8s.io/

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.

Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.