# netsapiensis / rocky-virtualization
# Install this skill:
npx skills add netsapiensis/claude-code-skills --skill "rocky-virtualization"

Installs a specific skill from a multi-skill repository.

# Description

KVM/QEMU virtualization on Rocky Linux 8/9 including libvirt, virsh, virt-manager, virt-install, VM lifecycle management, storage pools, network configuration (NAT/bridged/macvtap), snapshots, cloning, live migration, GPU/PCI passthrough, resource tuning, and cloud-init. Use when creating, managing, or troubleshooting virtual machines, virtual networks, or storage pools.

# SKILL.md


---
name: rocky-virtualization
description: KVM/QEMU virtualization on Rocky Linux 8/9 including libvirt, virsh, virt-manager, virt-install, VM lifecycle management, storage pools, network configuration (NAT/bridged/macvtap), snapshots, cloning, live migration, GPU/PCI passthrough, resource tuning, and cloud-init. Use when creating, managing, or troubleshooting virtual machines, virtual networks, or storage pools.
---


Rocky Linux KVM Virtualization

KVM/QEMU with libvirt, virsh, virt-manager, virt-install, networking, storage, snapshots, migration, and passthrough for Rocky Linux 8/9.

Prerequisite: See rocky-foundation for OS detection and safety tier definitions.

Prerequisites

Verify Hardware Virtualization Support

# Check CPU supports virtualization  # [READ-ONLY]
grep -cE '(vmx|svm)' /proc/cpuinfo
# vmx = Intel VT-x, svm = AMD-V
# Output > 0 means virtualization is supported

# Check KVM module is loaded  # [READ-ONLY]
lsmod | grep kvm
# Should see kvm_intel or kvm_amd plus kvm

# Detailed capability check  # [READ-ONLY]
virt-host-validate qemu

If virtualization is disabled, enable it in BIOS/UEFI (Intel VT-x / AMD-V). This cannot be done from the OS.

Installation

# Install virtualization group  # [CONFIRM]
dnf group install -y "Virtualization Host"

# Or install individual packages  # [CONFIRM]
dnf install -y qemu-kvm libvirt virt-install virt-manager \
  libvirt-client virt-viewer guestfs-tools
# On Rocky 8 the guest-tools package is named 'libguestfs-tools'; on Rocky 9 it is 'guestfs-tools'

# Rocky 9 additional useful packages  # [CONFIRM]
dnf install -y python3-libvirt swtpm swtpm-tools   # TPM emulation for Win11

# Enable and start libvirtd  # [CONFIRM]
systemctl enable --now libvirtd

# Verify  # [READ-ONLY]
systemctl status libvirtd
virsh version
virsh nodeinfo

User Access (Non-Root)

# Add user to libvirt group for non-root VM management  # [CONFIRM]
usermod -aG libvirt <username>
# If you run this as root or via sudo, $(whoami) resolves to root -- name the target user explicitly
# Log out and back in for the group change to take effect

# Verify group membership  # [READ-ONLY]
groups
virsh -c qemu:///system list --all
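
To make plain virsh target the system instance (instead of the often-empty per-user session), set the default connection URI; a small sketch:

# Default virsh to qemu:///system for this user  # [CONFIRM]
echo 'export LIBVIRT_DEFAULT_URI=qemu:///system' >> ~/.bashrc
source ~/.bashrc
virsh list --all        # now equivalent to: virsh -c qemu:///system list --all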

Version Differences

| Feature | Rocky 8 | Rocky 9 |
|---|---|---|
| QEMU | 6.2.x | 7.2.x / 8.x |
| libvirt | 8.x | 9.x |
| virt-manager | 3.x (GUI) | 4.x (GUI) |
| Default machine type | pc (i440fx) | q35 |
| Default firmware | BIOS | UEFI available, BIOS default |
| TPM emulation | Manual setup | swtpm packaged |
| virtiofs | Experimental | Stable |
| libvirtd | Monolithic daemon | Modular daemons available (virtqemud) |
| Secure Boot | Manual | Easier with OVMF |

Rocky 9 Modular Daemons

Rocky 9 supports splitting libvirtd into modular daemons:

# Rocky 9 modular daemons (optional, monolithic still works)  # [CONFIRM]
# systemctl disable --now libvirtd
# systemctl enable --now virtqemud virtstoraged virtnetworkd
# Most setups still use monolithic libvirtd

VM Creation

virt-install (CLI)

# Create VM from ISO  # [CONFIRM]
virt-install \
  --name rocky9-vm \
  --ram 4096 \
  --vcpus 2 \
  --cpu host-passthrough \
  --os-variant rocky9.0 \
  --disk path=/var/lib/libvirt/images/rocky9-vm.qcow2,size=40,format=qcow2,bus=virtio \
  --cdrom /var/lib/libvirt/images/Rocky-9-latest-x86_64-dvd.iso \
  --network network=default,model=virtio \
  --graphics vnc,listen=0.0.0.0 \
  --noautoconsole

# Create VM with bridged networking  # [CONFIRM]
virt-install \
  --name webserver \
  --ram 8192 \
  --vcpus 4 \
  --cpu host-passthrough \
  --os-variant rocky9.0 \
  --disk path=/var/lib/libvirt/images/webserver.qcow2,size=80,format=qcow2,bus=virtio \
  --cdrom /var/lib/libvirt/images/Rocky-9-latest-x86_64-dvd.iso \
  --network bridge=br0,model=virtio \
  --graphics spice \
  --video qxl \
  --noautoconsole

# Create VM with UEFI firmware  # [CONFIRM]
virt-install \
  --name uefi-vm \
  --ram 4096 \
  --vcpus 2 \
  --os-variant rocky9.0 \
  --boot uefi \
  --disk path=/var/lib/libvirt/images/uefi-vm.qcow2,size=40,format=qcow2,bus=virtio \
  --cdrom /var/lib/libvirt/images/Rocky-9-latest-x86_64-dvd.iso \
  --network network=default,model=virtio \
  --graphics vnc \
  --noautoconsole

# Create VM from cloud image with cloud-init  # [CONFIRM]
virt-install \
  --name cloud-vm \
  --ram 2048 \
  --vcpus 2 \
  --os-variant rocky9.0 \
  --import \
  --disk path=/var/lib/libvirt/images/cloud-vm.qcow2,format=qcow2,bus=virtio \
  --cloud-init user-data=/tmp/user-data.yaml,meta-data=/tmp/meta-data.yaml \
  --network network=default,model=virtio \
  --graphics none \
  --noautoconsole

List Available OS Variants

# List supported OS variants  # [READ-ONLY]
osinfo-query os | grep -i rocky
osinfo-query os | grep -i centos
osinfo-query os | grep -i rhel

# Install osinfo-db if outdated  # [CONFIRM]
dnf install -y osinfo-db

virt-manager (GUI)

virt-manager provides a graphical interface for managing VMs.

# Launch virt-manager  # [READ-ONLY]
virt-manager

# Connect to remote host via virt-manager  # [READ-ONLY]
virt-manager -c qemu+ssh://user@remote-host/system

# Access VM console via virt-viewer  # [READ-ONLY]
virt-viewer --connect qemu:///system rocky9-vm
# Or remote:
virt-viewer --connect qemu+ssh://user@host/system rocky9-vm

virt-manager workflow:
1. File > New Virtual Machine
2. Choose installation source (ISO, network, PXE, import)
3. Set RAM and CPUs
4. Configure storage (create or select disk)
5. Set name, review, and customize (network, firmware, etc.)
6. Click "Finish" to create and start

Tips for virt-manager:
- Use "Virtual Hardware Details" to add/remove devices after creation
- Enable XML editing: Edit > Preferences > Enable XML editing
- Use "Clone" from right-click menu for quick VM duplication
- Access serial console via View > Text Consoles > Serial 1

VM Lifecycle Management (virsh)

Basic Operations

# List VMs  # [READ-ONLY]
virsh list                           # Running VMs
virsh list --all                     # All VMs (including stopped)
virsh list --state-shutoff           # Only stopped VMs

# VM info  # [READ-ONLY]
virsh dominfo rocky9-vm
virsh domblklist rocky9-vm           # List disk devices
virsh domiflist rocky9-vm            # List network interfaces
virsh domifaddr rocky9-vm            # IP addresses (if guest agent)
virsh dumpxml rocky9-vm              # Full XML definition
virsh vcpucount rocky9-vm
virsh memtune rocky9-vm

# Start VM  # [CONFIRM]
virsh start rocky9-vm

# Graceful shutdown  # [CONFIRM]
virsh shutdown rocky9-vm

# Force off (like pulling the power cord)  # [DESTRUCTIVE]
virsh destroy rocky9-vm

# Reboot  # [CONFIRM]
virsh reboot rocky9-vm

# Autostart on host boot  # [CONFIRM]
virsh autostart rocky9-vm
virsh autostart --disable rocky9-vm

# Delete VM  # [DESTRUCTIVE]
virsh undefine rocky9-vm
# With storage:
virsh undefine rocky9-vm --remove-all-storage --nvram

Console Access

# VNC/SPICE console via virt-viewer  # [READ-ONLY]
virt-viewer rocky9-vm

# Serial console (guest must have console configured)  # [READ-ONLY]
virsh console rocky9-vm
# Escape with: Ctrl+]
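
If virsh console shows a blank screen, the guest usually has no serial getty; a sketch of enabling one inside a Rocky guest:

# Inside the guest: enable a login prompt on ttyS0  # [CONFIRM]
systemctl enable --now serial-getty@ttyS0.service
# Optional: also send kernel boot messages to the serial port
grubby --update-kernel=ALL --args="console=ttyS0,115200 console=tty0"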

# Display VNC port  # [READ-ONLY]
virsh vncdisplay rocky9-vm
virsh domdisplay rocky9-vm

Resource Modification

# Change vCPUs (requires VM shutdown for max)  # [CONFIRM]
virsh setvcpus rocky9-vm 4 --config --maximum
virsh setvcpus rocky9-vm 4 --config

# Hot-add vCPUs (if VM supports it)  # [CONFIRM]
virsh setvcpus rocky9-vm 4 --live

# Change memory (requires VM shutdown for max)  # [CONFIRM]
virsh setmaxmem rocky9-vm 8G --config
virsh setmem rocky9-vm 8G --config

# Hot-add memory (if VM supports it and memory balloon driver present)  # [CONFIRM]
virsh setmem rocky9-vm 6G --live

# Add disk  # [CONFIRM]
virsh attach-disk rocky9-vm /var/lib/libvirt/images/data.qcow2 vdb \
  --driver qemu --subdriver qcow2 --persistent

# Remove disk  # [CONFIRM]
virsh detach-disk rocky9-vm vdb --persistent

# Add network interface  # [CONFIRM]
virsh attach-interface rocky9-vm network default --model virtio --persistent

# Remove network interface  # [CONFIRM]
virsh detach-interface rocky9-vm network --mac 52:54:00:xx:xx:xx --persistent

Edit VM XML

# Edit VM definition (opens in $EDITOR)  # [CONFIRM]
virsh edit rocky9-vm
# libvirt validates XML on save

# Export XML for backup  # [READ-ONLY]
virsh dumpxml rocky9-vm > /tmp/rocky9-vm.xml

# Define VM from XML  # [CONFIRM]
virsh define /tmp/rocky9-vm.xml
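
For scripted edits without an interactive editor, virt-xml (shipped with virt-install) can patch the definition in place; a small sketch, using virt-install-style options:

# Non-interactive XML edit: set memory to 8 GiB in the persistent config  # [CONFIRM]
virt-xml rocky9-vm --edit --memory 8192
# Preview a change without applying it  # [READ-ONLY]
virt-xml rocky9-vm --edit --memory 8192 --print-diff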

Storage Management

Storage Pools

# List pools  # [READ-ONLY]
virsh pool-list --all
virsh pool-info default

# Create directory-based pool  # [CONFIRM]
virsh pool-define-as mypool dir - - - - /var/lib/libvirt/images/mypool
virsh pool-build mypool
virsh pool-start mypool
virsh pool-autostart mypool

# Create LVM-based pool  # [CONFIRM]
virsh pool-define-as lvm-pool logical - - - vg_vms /dev/vg_vms
virsh pool-start lvm-pool
virsh pool-autostart lvm-pool

# Delete pool  # [DESTRUCTIVE]
virsh pool-destroy mypool
virsh pool-undefine mypool

Volume Management

# List volumes in pool  # [READ-ONLY]
virsh vol-list default
virsh vol-info /var/lib/libvirt/images/rocky9-vm.qcow2

# Create volume  # [CONFIRM]
virsh vol-create-as default data-disk.qcow2 50G --format qcow2

# Delete volume  # [DESTRUCTIVE]
virsh vol-delete /var/lib/libvirt/images/data-disk.qcow2

# Resize volume  # [CONFIRM]
virsh vol-resize /var/lib/libvirt/images/rocky9-vm.qcow2 80G
# Guest filesystem still needs to be extended (see rocky-storage)

Disk Image Operations

# Disk image info  # [READ-ONLY]
qemu-img info /var/lib/libvirt/images/rocky9-vm.qcow2

# Create disk image  # [CONFIRM]
qemu-img create -f qcow2 /var/lib/libvirt/images/new-disk.qcow2 40G

# Create with backing file (linked clone)  # [CONFIRM]
qemu-img create -f qcow2 -F qcow2 \
  -b /var/lib/libvirt/images/base-rocky9.qcow2 \
  /var/lib/libvirt/images/linked-vm.qcow2

# Convert between formats  # [CONFIRM]
qemu-img convert -f raw -O qcow2 disk.raw disk.qcow2
qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw

# Resize disk image (VM must be off)  # [CONFIRM]
qemu-img resize /var/lib/libvirt/images/rocky9-vm.qcow2 +20G
# Then grow partition and filesystem inside the guest
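
A sketch of that in-guest follow-up, assuming a single root partition on /dev/vda and an XFS root filesystem:

# Inside the guest: grow partition 1 of /dev/vda, then the filesystem  # [CONFIRM]
dnf install -y cloud-utils-growpart
growpart /dev/vda 1
xfs_growfs /                 # XFS root; for ext4 use: resize2fs /dev/vda1
df -h /                      # verify the new size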

# Check and repair  # [READ-ONLY]
qemu-img check /var/lib/libvirt/images/rocky9-vm.qcow2

Networking

Default NAT Network

libvirt creates a default NAT network (virbr0, 192.168.122.0/24):

# List networks  # [READ-ONLY]
virsh net-list --all
virsh net-info default
virsh net-dumpxml default

# Start/stop default network  # [CONFIRM]
virsh net-start default
virsh net-autostart default

# DHCP leases  # [READ-ONLY]
virsh net-dhcp-leases default

Create Custom NAT Network

# Define custom network  # [CONFIRM]
cat > /tmp/app-network.xml << 'EOF'
<network>
  <name>app-net</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='10.10.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.10.10.100' end='10.10.10.200'/>
      <host mac='52:54:00:aa:bb:01' name='web01' ip='10.10.10.10'/>
      <host mac='52:54:00:aa:bb:02' name='db01' ip='10.10.10.11'/>
    </dhcp>
  </ip>
</network>
EOF

virsh net-define /tmp/app-network.xml
virsh net-start app-net
virsh net-autostart app-net

Bridged Networking

Bridged networking puts VMs directly on the physical network: they get addresses from the physical network's DHCP server, or you assign them statically.

# Create bridge using nmcli  # [CONFIRM]
# 1. Create the bridge
nmcli connection add type bridge \
  con-name br0 \
  ifname br0 \
  ipv4.addresses "192.168.1.100/24" \
  ipv4.gateway "192.168.1.1" \
  ipv4.dns "1.1.1.1,8.8.8.8" \
  ipv4.method manual

# 2. Add physical interface as bridge slave
nmcli connection add type ethernet \
  con-name br0-port1 \
  ifname eth0 \
  master br0

# 3. Activate (WARNING: this will briefly drop connectivity)  # [CONFIRM]
nmcli connection down "System eth0"     # Drop old connection
nmcli connection up br0                  # Bring up bridge
nmcli connection up br0-port1            # Add port

# Verify  # [READ-ONLY]
nmcli connection show
bridge link show
ip addr show br0

Use bridged network in VM:

virt-install ... --network bridge=br0,model=virtio ...   # [CONFIRM]
# Or attach to existing VM:
virsh attach-interface rocky9-vm bridge br0 --model virtio --persistent  # [CONFIRM]

Macvtap (Bridgeless Direct Attach)

Lets VMs share the physical NIC without a Linux bridge. Host-to-VM traffic over that NIC is not possible by default; VMs can reach the external network and, in bridge mode, each other. (See the macvlan workaround sketch below.)

# Use macvtap in virt-install  # [CONFIRM]
virt-install ... --network type=direct,source=eth0,model=virtio ...
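
If the host must reach macvtap guests, a common workaround is a macvlan interface on the same NIC, giving the host its own endpoint on that segment (a sketch; eth0 and the address are assumptions):

# Host-side macvlan so the host can reach macvtap guests  # [CONFIRM]
ip link add macvlan0 link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/24 dev macvlan0
ip link set macvlan0 up
# Note: not persistent across reboots -- recreate at boot (e.g., NetworkManager dispatcher script)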

Isolated Network (No External Access)

# VMs can only talk to each other  # [CONFIRM]
cat > /tmp/isolated-net.xml << 'EOF'
<network>
  <name>isolated</name>
  <bridge name='virbr2' stp='on' delay='0'/>
  <ip address='172.16.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='172.16.0.100' end='172.16.0.200'/>
    </dhcp>
  </ip>
</network>
EOF

virsh net-define /tmp/isolated-net.xml
virsh net-start isolated
virsh net-autostart isolated

Snapshots

Internal Snapshots (qcow2)

# Create snapshot  # [CONFIRM]
virsh snapshot-create-as rocky9-vm \
  --name "before-upgrade" \
  --description "Pre-upgrade snapshot" \
  --atomic

# Create snapshot of running VM (includes memory state)  # [CONFIRM]
virsh snapshot-create-as rocky9-vm \
  --name "live-snap" \
  --description "Live snapshot with memory"

# Create disk-only snapshot (no memory)  # [CONFIRM]
virsh snapshot-create-as rocky9-vm \
  --name "disk-only" \
  --disk-only --atomic

# List snapshots  # [READ-ONLY]
virsh snapshot-list rocky9-vm
virsh snapshot-list rocky9-vm --tree
virsh snapshot-info rocky9-vm --snapshotname "before-upgrade"

# Revert to snapshot  # [DESTRUCTIVE]
virsh snapshot-revert rocky9-vm --snapshotname "before-upgrade"
# WARNING: Current state is lost unless also snapshotted

# Delete snapshot  # [CONFIRM]
virsh snapshot-delete rocky9-vm --snapshotname "before-upgrade"

WRONG -- relying on snapshots as backups:

# WRONG: Snapshots are NOT backups
# They live on the same storage as the VM -- if the disk fails, both are lost
# Snapshots degrade performance over time (chain of deltas)

# CORRECT: Use snapshots for short-term rollback (upgrades, testing)
# Use borg-backup or similar for actual backups

External Snapshots (Advanced)

External snapshots create separate overlay files, leaving the base image read-only:

# Create external snapshot  # [CONFIRM]
virsh snapshot-create-as rocky9-vm \
  --name "ext-snap" \
  --disk-only \
  --diskspec vda,file=/var/lib/libvirt/images/rocky9-vm-snap.qcow2

# Blockcommit to merge back (when done with snapshot)  # [CONFIRM]
virsh blockcommit rocky9-vm vda --active --pivot --verbose
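
To see the overlay relationship while the external snapshot exists, and to confirm the pivot afterwards, inspect the chain:

# Show the full backing chain of the active overlay  # [READ-ONLY]
qemu-img info --backing-chain /var/lib/libvirt/images/rocky9-vm-snap.qcow2
virsh domblklist rocky9-vm       # confirms which file vda currently points at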

Cloning

# Clone a stopped VM  # [CONFIRM]
virt-clone \
  --original rocky9-vm \
  --name rocky9-clone \
  --auto-clone

# Clone with specific disk path  # [CONFIRM]
virt-clone \
  --original rocky9-vm \
  --name rocky9-clone \
  --file /var/lib/libvirt/images/rocky9-clone.qcow2

# After cloning, fix unique identifiers inside the guest (or automate with virt-sysprep; see the sketch below):
# - Hostname
# - Machine ID: rm /etc/machine-id && systemd-machine-id-setup
# - SSH host keys: rm /etc/ssh/ssh_host_* && systemctl restart sshd
# - Network: nmcli (new MAC will be assigned by virt-clone)
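
virt-sysprep (from guestfs-tools) automates these resets on a stopped clone; a sketch:

# Reset machine identity on the stopped clone  # [CONFIRM]
virt-sysprep -d rocky9-clone --hostname rocky9-clone --selinux-relabel
# Default operations remove machine-id, SSH host keys, logs, etc.; list them with:
virt-sysprep --list-operations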

Linked Clone (Thin)

Uses a backing file -- faster to create, less disk space, but depends on base image:

# Create backing image  # [CONFIRM]
qemu-img create -f qcow2 -F qcow2 \
  -b /var/lib/libvirt/images/base-rocky9.qcow2 \
  /var/lib/libvirt/images/linked-clone.qcow2

# Then use virt-install --import with the linked image

Cloud Image Provisioning

Cloud-init on Rocky Linux cloud images is unreliable for root SSH key injection and
static network configuration. Use virt-customize instead for predictable results.

# Install guestfs-tools (Rocky 9 package name)  # [CONFIRM]
dnf install -y guestfs-tools
# NOTE: The package is 'guestfs-tools', NOT 'libguestfs-tools'

WRONG -- relying on cloud-init for SSH keys and network on Rocky cloud images:

# WRONG: cloud-init user-data with ssh_authorized_keys for root
# On Rocky 9 GenericCloud, cloud-init:
# - Defaults to 'rocky' user, root is locked
# - ssh_authorized_keys at top level only applies to default user
# - network-config (NoCloud datasource) is often ignored by NetworkManager
# - virt-install --cloud-init may not reliably pass keys to root

# CORRECT: Use virt-customize to directly modify the disk image

Full Workflow: Cloud Image with virt-customize

# 1. Download Rocky cloud image (one-time)  # [CONFIRM]
curl -Lo /var/lib/libvirt/images/Rocky-9-GenericCloud.latest.x86_64.qcow2 \
  https://download.rockylinux.org/pub/rocky/9/images/x86_64/Rocky-9-GenericCloud.latest.x86_64.qcow2

# 2. Copy and resize for new VM  # [CONFIRM]
cp /var/lib/libvirt/images/Rocky-9-GenericCloud.latest.x86_64.qcow2 \
   /var/lib/libvirt/images/myvm.qcow2
qemu-img resize /var/lib/libvirt/images/myvm.qcow2 40G

# 3. Customize disk image BEFORE first boot  # [CONFIRM]
virt-customize -a /var/lib/libvirt/images/myvm.qcow2 \
  --hostname myvm \
  --root-password password:changeme123 \
  --ssh-inject root:file:/root/.ssh/id_rsa.pub \
  --run-command 'dnf install -y qemu-guest-agent && systemctl enable qemu-guest-agent' \
  --selinux-relabel

# 4. Create VM  # [CONFIRM]
virt-install \
  --name myvm \
  --ram 4096 \
  --vcpus 2 \
  --cpu host-passthrough \
  --os-variant rocky9.0 \
  --import \
  --disk path=/var/lib/libvirt/images/myvm.qcow2,format=qcow2,bus=virtio \
  --network bridge=br0,model=virtio \
  --graphics vnc,listen=0.0.0.0 \
  --noautoconsole \
  --autostart

# 5. Wait for DHCP, then set static IP via nmcli  # [CONFIRM]
# (find IP via: virsh domifaddr myvm --source arp)
ssh root@<dhcp-ip> 'nmcli con mod "System eth0" \
  ipv4.addresses "192.168.40.100/24" \
  ipv4.gateway "192.168.40.1" \
  ipv4.dns "1.1.1.1" \
  ipv4.method manual && nmcli con up "System eth0"'
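
Step 5 can be scripted; a small polling sketch (assumes the guest answers ARP on the bridge):

# Poll until the guest's address appears  # [READ-ONLY]
until IP=$(virsh domifaddr myvm --source arp | awk '/ipv4/ {sub(/\/.*/, "", $4); print $4; exit}'); [ -n "$IP" ]; do
  sleep 5
done
echo "myvm is up at $IP"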

virt-customize Key Options

# Set hostname  # [CONFIRM]
virt-customize -a disk.qcow2 --hostname myvm

# Inject SSH key for root (from file)  # [CONFIRM]
virt-customize -a disk.qcow2 --ssh-inject root:file:/root/.ssh/id_rsa.pub
# IMPORTANT: This injects a PUBLIC key. Ensure the corresponding private key
# is on the host you'll SSH from. Check with: ssh-keygen -l -f /root/.ssh/id_rsa.pub

# Inject multiple keys (run multiple --ssh-inject flags)  # [CONFIRM]
virt-customize -a disk.qcow2 \
  --ssh-inject root:file:/root/.ssh/id_rsa.pub \
  --ssh-inject root:string:"ssh-ed25519 AAAA... user@other-host"

# Set root password  # [CONFIRM]
virt-customize -a disk.qcow2 --root-password password:changeme123

# Install packages (runs inside the image)  # [CONFIRM]
virt-customize -a disk.qcow2 \
  --run-command 'dnf install -y qemu-guest-agent lvm2 && systemctl enable qemu-guest-agent'

# Run a script  # [CONFIRM]
virt-customize -a disk.qcow2 --run /path/to/setup-script.sh

# Write a file into the image  # [CONFIRM]
virt-customize -a disk.qcow2 \
  --write '/etc/sysctl.d/99-tuning.conf:vm.max_map_count=262144'

# Copy a file into the image  # [CONFIRM]
virt-customize -a disk.qcow2 --copy-in /local/path/config.conf:/etc/myapp/

# SELinux relabel (REQUIRED after modifying files)  # [CONFIRM]
virt-customize -a disk.qcow2 --selinux-relabel

Cloud Image Pitfalls (Lessons Learned)

1. Cloud-init NoCloud network-config is unreliable on Rocky/RHEL:
- Rocky 9 GenericCloud uses NetworkManager, which may ignore the NoCloud network-config
- The VM will typically get a DHCP address regardless of what you put in network-config
- Fix: Set the static IP post-boot via nmcli (see workflow above)

2. Cloud-init SSH key injection targets the default user, not root:
- The ssh_authorized_keys top-level key in user-data applies to the default user (rocky), not root
- disable_root: false does not inject keys into root -- it only unlocks the account
- Fix: Use virt-customize --ssh-inject root:file:/path/to/key.pub

3. The injected key must match a private key you actually hold:
- --ssh-inject copies public keys into the guest's authorized_keys; injecting a whole authorized_keys file grants access to every key in it
- Make sure the hypervisor's own public key (/root/.ssh/id_rsa.pub) is among the keys injected
- Fix: Use --ssh-inject root:file:/root/.ssh/id_rsa.pub for the hypervisor's key

4. Cloud images don't include LVM tools:
- The Rocky 9 GenericCloud image does not have lvm2 installed
- Fix: Install via virt-customize --run-command 'dnf install -y lvm2' or post-boot

5. Cloud image root filesystem is XFS -- cannot shrink:
- The cloud image uses a single XFS root partition that growpart expands to fill the disk
- XFS cannot be shrunk, so you cannot repartition the boot disk for LVM while running
- Fix: Add a second virtio disk for LVM volumes, create PV/VG/LV there

6. guestfs-tools package name on Rocky 9:
- The package is guestfs-tools, NOT libguestfs-tools (which is a different/older package)
- virt-customize, virt-cat, virt-df, guestfish are all in guestfs-tools

7. SSH known_hosts conflicts when recreating VMs:
- Recreating a VM on the same IP generates new SSH host keys each time
- Fix: Run ssh-keygen -R <ip> on all hosts that previously connected before SSH'ing

Post-Boot Static IP Configuration

Since cloud-init network-config is unreliable, always set static IPs post-boot:

# Find the VM's DHCP address  # [READ-ONLY]
virsh domifaddr myvm --source arp
# Or: arp -an | grep <mac-address>

# SSH in and set static IP  # [CONFIRM]
ssh root@<dhcp-ip>
CON_NAME=$(nmcli -t -f NAME con show --active | grep -v lo | head -1)
nmcli con mod "$CON_NAME" ipv4.addresses "192.168.40.100/24"
nmcli con mod "$CON_NAME" ipv4.gateway "192.168.40.1"
nmcli con mod "$CON_NAME" ipv4.dns "1.1.1.1"
nmcli con mod "$CON_NAME" ipv4.method manual
nmcli con up "$CON_NAME"
# NOTE: SSH session will drop when IP changes. Reconnect on new IP.

LVM on Cloud Image VMs

Cloud images use a single root partition. For separate LVM volumes (e.g., /var/log,
/var/lib/opensearch), add a second disk:

# 1. Create data disk  # [CONFIRM]
qemu-img create -f qcow2 /var/lib/libvirt/images/myvm-data.qcow2 120G

# 2. Attach to running VM  # [CONFIRM]
virsh attach-disk myvm /var/lib/libvirt/images/myvm-data.qcow2 vdb \
  --driver qemu --subdriver qcow2 --persistent

# 3. Inside the VM: create LVM  # [CONFIRM]
dnf install -y lvm2          # Not included in cloud image
pvcreate /dev/vdb
vgcreate vg_data /dev/vdb
lvcreate -L 10G -n lv_varlog vg_data
lvcreate -L 100G -n lv_data vg_data

# 4. Format and mount  # [CONFIRM]
mkfs.xfs /dev/vg_data/lv_varlog
mkfs.xfs /dev/vg_data/lv_data

# 5. Migrate existing data (e.g., /var/log)  # [CONFIRM]
mkdir /mnt/tmp && mount /dev/vg_data/lv_varlog /mnt/tmp
cp -a /var/log/* /mnt/tmp/
umount /mnt/tmp && rmdir /mnt/tmp

# 6. Add to fstab and mount  # [CONFIRM]
echo '/dev/mapper/vg_data-lv_varlog  /var/log        xfs defaults 0 2' >> /etc/fstab
echo '/dev/mapper/vg_data-lv_data    /var/lib/data   xfs defaults 0 2' >> /etc/fstab
mount /var/log
mkdir -p /var/lib/data && mount /var/lib/data

Cloud-Init (Alternative -- Less Reliable)

Cloud-init can work for the default user on the default NAT network. It is unreliable
for root access and static IPs on bridged networks with Rocky cloud images.

# /tmp/user-data.yaml -- targets default 'rocky' user only
#cloud-config
hostname: cloud-vm

users:
  - name: admin
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@workstation

package_update: true
packages:
  - qemu-guest-agent
  - lvm2

runcmd:
  - systemctl enable --now qemu-guest-agent

# Create VM with cloud-init (default user only, DHCP only)  # [CONFIRM]
virt-install \
  --name cloud-vm \
  --ram 2048 \
  --vcpus 2 \
  --os-variant rocky9.0 \
  --import \
  --disk path=/var/lib/libvirt/images/cloud-vm.qcow2,format=qcow2,bus=virtio \
  --cloud-init user-data=/tmp/user-data.yaml \
  --network network=default,model=virtio \
  --graphics none \
  --noautoconsole
# Then SSH as: ssh admin@<dhcp-ip>  (NOT root)

Live Migration

Prerequisites

  • Shared storage (NFS, GlusterFS, Ceph) or same-path disk images on both hosts
  • Compatible CPUs (with --cpu host-passthrough, source and destination CPUs must be near-identical; prefer a named CPU model across mixed hardware)
  • SSH key access between hosts
  • libvirtd running on both hosts

# Verify connectivity  # [READ-ONLY]
virsh -c qemu+ssh://dest-host/system list
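
For mixed hardware, compute a CPU model every host supports and use it instead of host-passthrough; a sketch using virsh cpu-baseline (it extracts the <cpu> elements from concatenated capabilities output):

# Compute a baseline CPU model across both hosts  # [READ-ONLY]
virsh capabilities > /tmp/all-caps.xml
virsh -c qemu+ssh://dest-host/system capabilities >> /tmp/all-caps.xml
virsh cpu-baseline /tmp/all-caps.xml
# Use the reported <model> in the VM's <cpu> element (virsh edit)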

# Live migrate  # [CONFIRM]
virsh migrate --live --persistent --undefinesource \
  rocky9-vm qemu+ssh://dest-host/system

# Migrate with copy of storage (no shared storage needed, slower)  # [CONFIRM]
virsh migrate --live --persistent --copy-storage-all \
  rocky9-vm qemu+ssh://dest-host/system

# Monitor migration progress  # [READ-ONLY]
virsh domjobinfo rocky9-vm

# Cancel migration  # [CONFIRM]
virsh domjobabort rocky9-vm

PCI/GPU Passthrough

Enable IOMMU

# Edit GRUB for Intel  # [CONFIRM]
# Add to GRUB_CMDLINE_LINUX in /etc/default/grub:
# intel_iommu=on iommu=pt

# Or for AMD:
# amd_iommu=on iommu=pt

# Rebuild GRUB  # [CONFIRM]
grub2-mkconfig -o /boot/grub2/grub.cfg          # BIOS; also UEFI on Rocky 9 (the EFI grub.cfg is a stub)
grub2-mkconfig -o /boot/efi/EFI/rocky/grub.cfg  # UEFI on Rocky 8
# Version-independent alternative:
# grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"

# Reboot required  # [CONFIRM]
reboot

# Verify IOMMU groups  # [READ-ONLY]
dmesg | grep -i iommu
find /sys/kernel/iommu_groups/ -type l | sort -V
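
To see which devices share a group (a whole group must be passed through together), a small loop:

# List IOMMU groups with device names  # [READ-ONLY]
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done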

GPU Passthrough

# Identify GPU PCI address  # [READ-ONLY]
lspci -nn | grep -i vga
lspci -nn | grep -i nvidia    # Or amd

# Check IOMMU group  # [READ-ONLY]
# All devices in the same IOMMU group must be passed through together

# Bind GPU to vfio-pci driver  # [CONFIRM]
# /etc/modprobe.d/vfio.conf
echo "options vfio-pci ids=10de:xxxx,10de:yyyy" > /etc/modprobe.d/vfio.conf
# Replace with actual PCI vendor:device IDs

# Blacklist host driver  # [CONFIRM]
echo "blacklist nouveau" > /etc/modprobe.d/blacklist-gpu.conf
# Or for AMD: echo "blacklist amdgpu" > /etc/modprobe.d/blacklist-gpu.conf

# Load vfio-pci early  # [CONFIRM]
echo "vfio-pci" > /etc/modules-load.d/vfio-pci.conf

# Rebuild initramfs  # [CONFIRM]
dracut -f

# Reboot and verify  # [CONFIRM]
reboot
lspci -nnk -d 10de:xxxx     # Should show "Kernel driver in use: vfio-pci"

# Attach to VM via virsh edit or virt-manager  # [CONFIRM]
# In virt-manager: Add Hardware > PCI Host Device > Select GPU
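
The same attach can be done from the CLI with a small hostdev XML (a sketch; the 0000:01:00.0 address is illustrative, take yours from lspci):

# Attach the GPU to the VM's persistent config  # [CONFIRM]
cat > /tmp/gpu-hostdev.xml << 'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device rocky9-vm /tmp/gpu-hostdev.xml --config
# Repeat for the GPU's audio function (e.g., 0000:01:00.1) -- same IOMMU group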

Guest Agent

Install qemu-guest-agent inside guests for better host-guest communication:

# Inside the guest  # [CONFIRM]
dnf install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent

# Host can now query guest  # [READ-ONLY]
virsh domifaddr rocky9-vm --source agent
virsh domfsinfo rocky9-vm
virsh guestinfo rocky9-vm

# Guest filesystem freeze/thaw (for consistent snapshots)  # [CONFIRM]
virsh domfsfreeze rocky9-vm
# Take snapshot
virsh domfsthaw rocky9-vm

Performance Tuning

CPU Tuning

<!-- In VM XML (virsh edit) -->
<!-- Host CPU passthrough (best performance) -->
<cpu mode='host-passthrough' check='none' migratable='on'/>

<!-- CPU pinning for dedicated cores -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
  <emulatorpin cpuset='0-1'/>
</cputune>

Memory Tuning

<!-- Huge pages for better memory performance -->
<memoryBacking>
  <hugepages>
    <page size='2048' unit='KiB'/>
  </hugepages>
  <nosharepages/>
  <locked/>
</memoryBacking>

# Enable huge pages on host  # [CONFIRM]
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# Permanent: set 'vm.nr_hugepages = 2048' in /etc/sysctl.d/99-hugepages.conf

Disk I/O Tuning

<!-- Use virtio-scsi for better performance -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native' discard='unmap'/>
  <source file='/var/lib/libvirt/images/rocky9-vm.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
<controller type='scsi' model='virtio-scsi'/>

| Cache mode | Safety | Performance | Use case |
|---|---|---|---|
| none | Safe (host cache bypassed) | Good | Production (recommended) |
| writeback | Unsafe (data loss on host crash) | Best | Dev/test only |
| writethrough | Safe | Moderate | Paranoid safety |

Network Tuning

<!-- Use virtio with multiqueue -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>

SELinux for KVM

# If VM images are in non-standard location  # [CONFIRM]
semanage fcontext -a -t virt_image_t "/data/vms(/.*)?"
restorecon -Rv /data/vms/

# If using NFS storage for VMs  # [CONFIRM]
setsebool -P virt_use_nfs on

# If VMs need USB passthrough  # [CONFIRM]
setsebool -P virt_use_usb on

# Check SELinux denials related to libvirt  # [READ-ONLY]
ausearch -m AVC -c virtqemud --start today
ausearch -m AVC -c qemu --start today

Firewall for KVM

# Default NAT network usually works without extra rules
# For bridged networking, ensure bridge traffic is allowed:

# If bridge filtering is blocking traffic  # [CONFIRM]
cat > /etc/sysctl.d/99-bridge.conf << 'EOF'
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
sysctl -p /etc/sysctl.d/99-bridge.conf

# For remote VNC/SPICE access  # [CONFIRM]
firewall-cmd --add-port=5900-5999/tcp --permanent    # VNC ports
firewall-cmd --reload

# For live migration  # [CONFIRM]
firewall-cmd --add-port=49152-49215/tcp --permanent   # libvirt's default migration port range
firewall-cmd --reload

Monitoring and Troubleshooting

# VM resource usage  # [READ-ONLY]
virt-top                               # Like top, for VMs
virsh domstats rocky9-vm
virsh cpu-stats rocky9-vm
virsh domblkstat rocky9-vm vda
virsh domifstat rocky9-vm vnet0

# libvirt logs  # [READ-ONLY]
journalctl -u libvirtd --since "1 hour ago"
# Per-VM QEMU logs:
ls /var/log/libvirt/qemu/
cat /var/log/libvirt/qemu/rocky9-vm.log

# Guest filesystem operations (without logging in)  # [READ-ONLY]
virt-filesystems -d rocky9-vm -l
virt-df -d rocky9-vm                   # Disk usage inside guest (VM must be off)
virt-cat -d rocky9-vm /etc/hostname    # Read file from guest (VM must be off)

# Guest shell (VM must be off)  # [CONFIRM]
guestfish -d rocky9-vm -i

Backup VMs

# Export XML definition  # [READ-ONLY]
virsh dumpxml rocky9-vm > /backup/rocky9-vm.xml

# Backup disk image (VM must be off or use snapshot)  # [CONFIRM]
# Method 1: VM off
virsh shutdown rocky9-vm
cp /var/lib/libvirt/images/rocky9-vm.qcow2 /backup/
virsh start rocky9-vm

# Method 2: Snapshot + backup (VM stays running)
virsh snapshot-create-as rocky9-vm tmp-backup --disk-only --atomic  # [CONFIRM]
cp /var/lib/libvirt/images/rocky9-vm.qcow2 /backup/                # [CONFIRM] (base is now read-only)
virsh blockcommit rocky9-vm vda --active --pivot --verbose          # [CONFIRM] (merge and resume)
virsh snapshot-delete rocky9-vm tmp-backup --metadata               # [CONFIRM]

# Method 3: Borg backup of disk images (see rocky-borg-backup)

Checklist: New VM Deployment

  • [ ] Verify host virtualization support (virt-host-validate)
  • [ ] Install guestfs-tools on the hypervisor (for virt-customize)
  • [ ] Choose networking (NAT, bridged, isolated)
  • [ ] Prepare storage pool and disk image
  • [ ] For cloud images: customize with virt-customize BEFORE first boot
  • [ ] Set hostname (--hostname)
  • [ ] Inject SSH key (--ssh-inject root:file:/root/.ssh/id_rsa.pub)
  • [ ] Set root password (--root-password)
  • [ ] Pre-install packages (--run-command 'dnf install -y qemu-guest-agent lvm2')
  • [ ] SELinux relabel (--selinux-relabel)
  • [ ] Create VM with virt-install --import
  • [ ] Wait for DHCP, then set static IP via nmcli (not cloud-init network-config)
  • [ ] If LVM needed: add second disk, create PV/VG/LV (cloud image root is XFS, cannot shrink)
  • [ ] Clear stale SSH host keys (ssh-keygen -R <ip>) on all connecting hosts
  • [ ] Set autostart if needed (virsh autostart)
  • [ ] Configure backup for VM disk images
  • [ ] Test snapshot and restore workflow

When to Use This Skill

  • Setting up KVM virtualization on a host
  • Creating and managing virtual machines
  • Provisioning VMs from cloud images (virt-customize + virt-install)
  • Configuring virtual networking (NAT, bridged, isolated)
  • Managing VM storage pools and disk images
  • Adding LVM volumes to cloud image VMs (second disk approach)
  • Taking and reverting snapshots
  • Cloning VMs for development or testing
  • Setting up GPU/PCI passthrough
  • Live migrating VMs between hosts
  • Troubleshooting VM performance or connectivity

Related Skills

  • rocky-foundation -- OS detection, safety tiers
  • rocky-core-system -- systemd service management, user groups
  • rocky-networking -- Bridge configuration, firewall rules
  • rocky-storage -- LVM for VM storage pools
  • rocky-selinux -- SELinux contexts for VM images and devices
  • rocky-security-hardening -- Host hardening for hypervisors
  • rocky-borg-backup -- Backup VM disk images and definitions

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents.