
AWS EBS guide

This guide walks you through creating volume snapshots for a vCluster with persistent data and restoring that data from the snapshot. You'll deploy a sample application that writes data to a persistent volume, create a snapshot, simulate data loss, and restore from the snapshot using AWS EBS as the storage provider.

Supported CSI Drivers

vCluster officially supports volume snapshots with AWS EBS CSI Driver and OpenEBS. This walkthrough demonstrates the complete end-to-end process using AWS as an example. Similar steps can be adapted for other supported CSI drivers.

Prerequisites

Before starting, ensure you have:

  • An existing Amazon EKS cluster with the EBS CSI Driver installed. Follow the EKS deployment guide to set up your cluster.
  • The vCluster CLI installed
  • Completed the volume snapshots setup based on your chosen tenancy model
  • An OCI-compatible registry (such as GitHub Container Registry, Docker Hub, or AWS ECR) or an S3-compatible bucket (AWS S3 or MinIO) for storing snapshots
note

You can skip the CSI driver installation steps in the setup guide as the EBS CSI driver is already installed during EKS cluster creation.
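Before deploying anything, you can sanity-check the host cluster against the prerequisites above. The helper below is a sketch (the name `check_snapshot_prereqs` is ours, not a vCluster command); it assumes your current kubectl context is the EKS host cluster, where the AWS EBS CSI driver registers the CSIDriver object `ebs.csi.aws.com`:

```shell
# Hedged sketch: confirm snapshot prerequisites on the host cluster.
# Assumes the current kubectl context points at the EKS (host) cluster.
check_snapshot_prereqs() {
  kubectl get csidriver ebs.csi.aws.com &&  # EBS CSI driver is registered
  kubectl get volumesnapshotclass           # at least one snapshot class exists
}
```

Both commands should succeed; if `volumesnapshotclass` returns nothing, revisit the volume snapshots setup guide for your tenancy model.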

Deploy vCluster

Choose the deployment option based on your tenancy model:

Create a vCluster with default values:

Create vCluster
vcluster create myvcluster

Deploy a demo application

Deploy a sample application in the vCluster. This application appends the current date and time to a file called out.txt on a persistent volume every five seconds.

Deploy application with persistent storage
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux
      command: ["/bin/sh"]
      args: ["-c", "while true; do date -u >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim
EOF
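Instead of polling the status commands in the next section by hand, you can block until both resources are ready. This is a sketch (the helper name `wait_for_app` is ours); it assumes the default namespace and a kubectl recent enough to support `--for=jsonpath` (v1.23+):

```shell
# Hedged sketch: wait until the demo pod is Ready and the PVC is Bound.
wait_for_app() {
  kubectl wait --for=condition=Ready pod/app --timeout=120s &&
  kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/ebs-claim --timeout=120s
}
```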

Verify the application

Wait until the pod is running and the PVC is in Bound state:

Check pod status
kubectl get pods

Expected output:

NAME   READY   STATUS    RESTARTS   AGE
app    1/1     Running   0          37s
Check PVC status
kubectl get pvc

Expected output:

NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ebs-claim   Bound    pvc-4062a395-e84e-4efd-91c4-8e09cb12d3a8   4Gi        RWO                           <unset>                 42s

Verify that data is being written to the persistent volume:

View application data
kubectl exec app -- tail -n 3 /data/out.txt

Expected output:

Tue Oct 28 13:38:41 UTC 2025
Tue Oct 28 13:38:46 UTC 2025
Tue Oct 28 13:38:51 UTC 2025

Create snapshot with volumes

Create a vCluster snapshot with volume snapshots included by using the --include-volumes parameter. The vCluster CLI creates a snapshot request in the host cluster, which is then processed in the background by the vCluster snapshot controller.

Disconnect from the vCluster:

Disconnect from vCluster
vcluster disconnect

Create the snapshot:

Create snapshot with volumes
vcluster snapshot create myvcluster "oci://ghcr.io/my-user/my-repo:my-tag" --include-volumes

Expected output:

18:01:13 info Beginning snapshot creation... Check the snapshot status by running `vcluster snapshot get myvcluster oci://ghcr.io/my-user/my-repo:my-tag`
note

Replace oci://ghcr.io/my-user/my-repo:my-tag with your own OCI registry or other storage location. Ensure you have the necessary authentication configured for it.
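For GitHub Container Registry, one common approach is a Docker login, on the assumption that vCluster can reuse credentials from your local Docker credential store for OCI targets (worth verifying for your setup). `GHCR_USER` and `GHCR_TOKEN` below are placeholders for your GitHub username and a token with package write access, not vCluster settings:

```shell
# Hedged sketch: authenticate to ghcr.io before creating the snapshot.
# GHCR_USER / GHCR_TOKEN are placeholder variables for your own credentials.
registry_login() {
  echo "$GHCR_TOKEN" | docker login ghcr.io -u "$GHCR_USER" --password-stdin
}
```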

Check snapshot status

Monitor the snapshot creation progress:

Check snapshot status
vcluster snapshot get myvcluster "oci://ghcr.io/my-user/my-repo:my-tag"

Expected output:

              SNAPSHOT                 | VOLUMES | SAVED |  STATUS   |  AGE
---------------------------------------+---------+-------+-----------+--------
oci://ghcr.io/my-user/my-repo:my-tag   | 1/1     | Yes   | Completed | 2m51s

Wait until the status shows Completed and SAVED shows Yes before proceeding to the restore step.
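In scripts, poll the status rather than assuming immediate completion, since the snapshot controller works in the background. A minimal sketch (the helper name `wait_for_snapshot` is ours) that greps the STATUS column of the command above:

```shell
# Hedged sketch: poll `vcluster snapshot get` until the snapshot completes.
wait_for_snapshot() {
  local name="$1" target="$2" timeout="${3:-300}" elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if vcluster snapshot get "$name" "$target" | grep -q "Completed"; then
      echo "snapshot completed"
      return 0
    fi
    sleep 10
    elapsed=$((elapsed + 10))
  done
  echo "timed out waiting for snapshot" >&2
  return 1
}
```

Example: `wait_for_snapshot myvcluster "oci://ghcr.io/my-user/my-repo:my-tag"`.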

Simulate data loss

To demonstrate the restore capability, delete the application and its data from the virtual cluster. First, connect to the vCluster:

Connect to vCluster
vcluster connect myvcluster

Delete the application and PVC:

Delete application and PVC
cat <<EOF | kubectl delete -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux
      command: ["/bin/sh"]
      args: ["-c", "while true; do date -u >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim
EOF
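Before moving on to the restore, you can confirm the resources are actually gone (a small sketch; the helper name `confirm_deleted` is ours and assumes the default namespace):

```shell
# Hedged sketch: verify the demo pod and PVC no longer exist.
confirm_deleted() {
  ! kubectl get pod app >/dev/null 2>&1 &&
  ! kubectl get pvc ebs-claim >/dev/null 2>&1 &&
  echo "app and ebs-claim deleted"
}
```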

Restore from snapshot

Restore the vCluster from the snapshot, including the volume data. First, disconnect from the vCluster:

Disconnect from vCluster
vcluster disconnect

Run the restore command with the --restore-volumes parameter. This creates a restore request, which the restore controller processes to restore the PVC from its volume snapshot:

Restore vCluster with volumes
vcluster restore myvcluster "oci://ghcr.io/my-user/my-repo:my-tag" --restore-volumes

Expected output:

17:39:14 info Pausing vCluster myvcluster
17:39:15 info Scale down statefulSet vcluster-myvcluster/myvcluster...
17:39:17 info Starting snapshot pod for vCluster vcluster-myvcluster/myvcluster...
...
2025-10-27 12:09:35 INFO snapshot/restoreclient.go:260 Successfully restored snapshot from oci://ghcr.io/my-user/my-repo:my-tag {"component": "vcluster"}
17:39:37 info Resuming vCluster myvcluster after it was paused

Verify the restore

Once the vCluster is running again, connect to it and verify that the pod and PVC have been restored:

Connect to vCluster
vcluster connect myvcluster

Check that the pod is running:

Check pod status
kubectl get pods

Expected output:

NAME   READY   STATUS    RESTARTS   AGE
app    1/1     Running   0          12m

Check that the PVC is bound:

Check PVC status
kubectl get pvc

Expected output:

NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ebs-claim   Bound    pvc-c6ebf439-9fe5-4413-9f86-89916c1e4e49   4Gi        RWO                           <unset>                 12m

Verify that the data was successfully restored by checking the log file:

Verify restored data
kubectl exec -it app -- cat /data/out.txt

Expected output (showing both old and new timestamps):

...
Tue Oct 28 13:39:21 UTC 2025
Tue Oct 28 13:39:26 UTC 2025
Tue Oct 28 13:39:31 UTC 2025
Tue Oct 28 13:46:10 UTC 2025
Tue Oct 28 13:46:15 UTC 2025
Tue Oct 28 13:46:20 UTC 2025

Notice the gap in timestamps. The earlier timestamps (around 13:39) are from before the deletion, while the later timestamps (13:46) are after the restore. This confirms that the data was successfully recovered from the snapshot, and the application resumed writing new entries.

Cleanup

To remove the resources created in this tutorial:

Delete the vCluster:

Delete vCluster
vcluster delete myvcluster

If you created an EKS cluster specifically for this tutorial, you can delete it to avoid ongoing charges:

Delete EKS cluster
eksctl delete cluster -f cluster.yaml --disable-nodegroup-eviction