Mounting shared EBS volume/snapshot


First, thanks for this great tool!

I wonder if I can mount an existing EBS volume/snapshot to a cluster (shared across nodes)? This is doable in CfnCluster, even for multiple EBS volumes (see the AWS forum thread; you need to log in to view it).

I know S3 should be the main storage mechanism, and I am aware of Alces Flight’s docs on working with S3, including Home Directory Synchronization. But I already have data stored in EBS and just want to attach it to the cluster. I could mount it on the master node only and then transfer the data to the cluster’s shared volume, but it would be nicer to add it directly as shared storage.

Another use case for this is to temporarily increase the cluster’s disk capacity. For example, some simulations might produce 1 TB of data, and I need a huge EBS volume to temporarily hold them before uploading to S3. But most of the time I do not need such a large volume, so it is a waste to request a 1 TB home volume when creating the cluster.


Hi @JiaweiZhuang,

While this isn’t directly supported by Flight Compute Solo, you can achieve what you want manually after your cluster has started up.

  1. Add your EBS volume to the login node using the “Volumes” settings in the EC2 console.
  2. Log in to your cluster and mount the volume somewhere appropriate, e.g.:
sudo mkdir /opt/data
sudo mount /dev/xvdb1 /opt/data
  3. a. If you have existing compute nodes, you’ll need to configure the mounted volume to be exported over NFS (modify the options as you see fit, or use a standard set as follows):
sudo exportfs -o "rw,no_root_squash,no_subtree_check,sync" "*:/opt/data"
  3. b. Mount the volume on existing nodes, using pdsh:
module load services/pdsh
pdsh -g compute mkdir -p /opt/data
pdsh -g compute mount login1:/opt/data /opt/data
  4. If you’ve not started any nodes yet (or for future nodes), you can add configuration so that this process happens automatically (i.e. when new compute nodes are booted via autoscaling to meet demand). Add a new configuration file in /opt/clusterware/etc/cluster-nfs.d, e.g. /opt/clusterware/etc/cluster-nfs.d/data.rc:
cw_CLUSTER_NFS_exports="${cw_CLUSTER_NFS_exports} /opt/data"

With this configuration file in place, new nodes will mount /opt/data from the login node.

If this is something you’d like to automate on a regular basis, you could write a customization profile script that performs the following during the initialize event, which fires after the login node has started but before any compute nodes are joined to the cluster:

  • attach your EBS volume using the aws CLI tool
  • create the mountpoint and mount the volume
  • create the cluster-nfs configuration file
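Putting those three steps together, a minimal sketch of such a script might look like the following. The volume ID, device name, mountpoint, and config-file path are placeholders/assumptions for illustration; the `aws ec2 attach-volume` call also assumes the login node has an instance role permitting it:

```shell
#!/bin/bash
# Hypothetical customization profile script for the "initialize" event:
# attach a pre-existing EBS volume, mount it, and export it over cluster NFS.
# All values below are placeholders -- adjust for your own setup.
set -e

VOLUME_ID="${VOLUME_ID:-vol-0123456789abcdef0}"   # placeholder volume ID
DEVICE="${DEVICE:-/dev/xvdb1}"                     # device as seen in the instance
MOUNTPOINT="${MOUNTPOINT:-/opt/data}"
CONFIG_FILE="${CONFIG_FILE:-/opt/clusterware/etc/cluster-nfs.d/data.rc}"

attach_volume() {
  # Attach the EBS volume to this (login) instance via the aws CLI.
  # Requires an instance role allowing ec2:AttachVolume.
  local instance_id
  instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  aws ec2 attach-volume --volume-id "$VOLUME_ID" \
      --instance-id "$instance_id" --device /dev/xvdb
}

mount_volume() {
  # Create the mountpoint and mount the attached volume.
  mkdir -p "$MOUNTPOINT"
  mount "$DEVICE" "$MOUNTPOINT"
}

write_nfs_config() {
  # Add the mountpoint to the cluster NFS export list so that compute
  # nodes booted later will mount it automatically from the login node.
  cat > "$CONFIG_FILE" <<EOF
cw_CLUSTER_NFS_exports="\${cw_CLUSTER_NFS_exports} $MOUNTPOINT"
EOF
}

if [ "${1:-}" = "run" ]; then
  attach_volume
  mount_volume
  write_nfs_config
fi
```

Note that the device name inside the instance may differ from the one passed to `attach-volume` (e.g. `/dev/xvdb` vs. `/dev/sdf`), so check `lsblk` output on your instance type before hard-coding it.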

You can find out about customization in the Cluster customization section of the Flight docs.


Thanks very much! I’ll give it a try.