Enabling the AWS EFS Operator on ROSA
Important note: this is the old operator, which is no longer supported by Red Hat Support/SRE. Please use the EFS CSI Operator for new clusters. This guide is kept for posterity and to make the distinction between the two operators clear.
The Amazon Web Services Elastic File System (AWS EFS) is a Network File System (NFS) that can be provisioned on Red Hat OpenShift Service on AWS clusters. AWS also provides and supports an EFS CSI driver for Kubernetes that allows Kubernetes workloads to leverage this shared file storage.
This is a guide to quickly enable the EFS Operator on ROSA. See the official ROSA documentation for more detail.
Prerequisites
- A Red Hat OpenShift Service on AWS (ROSA) cluster
- The oc CLI
- The AWS CLI
- jq
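Before starting, it can help to confirm that the CLIs are installed and authenticated. A quick sanity check, assuming you are already logged in to your cluster and AWS account:

oc whoami                    # should print your OpenShift user
aws sts get-caller-identity  # should print your AWS account and ARN
jq --version                 # should print the jq version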
Prepare AWS Account
Get the Instance Name of one of your worker nodes
NODE=$(oc get nodes --selector=node-role.kubernetes.io/worker \
  -o jsonpath='{.items[0].metadata.name}')
Get the VPC ID of your worker nodes
VPC=$(aws ec2 describe-instances \
  --filters "Name=private-dns-name,Values=$NODE" \
  --query 'Reservations[*].Instances[*].{VpcId:VpcId}' \
  | jq -r '.[0][0].VpcId')
Get subnets in your VPC
SUBNET=$(aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=$VPC Name=tag:kubernetes.io/role/internal-elb,Values='' \
  --query 'Subnets[*].{SubnetId:SubnetId}' \
  | jq -r '.[0].SubnetId')
Get the CIDR block of your worker nodes
CIDR=$(aws ec2 describe-vpcs \
  --filters "Name=vpc-id,Values=$VPC" \
  --query 'Vpcs[*].CidrBlock' \
  | jq -r '.[0]')
Get the Security Group of your worker nodes
SG=$(aws ec2 describe-instances --filters \
  "Name=private-dns-name,Values=$NODE" \
  --query 'Reservations[*].Instances[*].{SecurityGroups:SecurityGroups}' \
  | jq -r '.[0][0].SecurityGroups[0].GroupId')
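Before moving on, it is worth checking that each lookup succeeded; an empty variable here will cause confusing failures in the later steps. A quick check:

echo "NODE:   $NODE"
echo "VPC:    $VPC"
echo "SUBNET: $SUBNET"
echo "CIDR:   $CIDR"
echo "SG:     $SG"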
Add EFS to security group
aws ec2 authorize-security-group-ingress \
  --group-id $SG \
  --protocol tcp \
  --port 2049 \
  --cidr $CIDR | jq .
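If you want to confirm the rule was added, you can inspect the security group; the NFS rule on port 2049 should appear in IpPermissions:

aws ec2 describe-security-groups --group-ids $SG \
  | jq '.SecurityGroups[0].IpPermissions'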
Create EFS File System
Note: You may want to create separate or additional access points for each application or shared volume.
EFS=$(aws efs create-file-system --creation-token efs-token-1 \
  --encrypted | jq -r '.FileSystemId')
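The file system takes a few moments to provision. You can poll its state before creating the mount target; it should report "available":

aws efs describe-file-systems --file-system-id $EFS \
  --query 'FileSystems[0].LifeCycleState' --output text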
Configure Mount Target for EFS
MOUNT_TARGET=$(aws efs create-mount-target --file-system-id $EFS \
  --subnet-id $SUBNET --security-groups $SG \
  | jq -r '.MountTargetId')
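Mount targets are also created asynchronously. If you want to wait until it is ready, poll until the state is "available":

aws efs describe-mount-targets --mount-target-id $MOUNT_TARGET \
  --query 'MountTargets[0].LifeCycleState' --output text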
Create Access Point for EFS
ACCESS_POINT=$(aws efs create-access-point --file-system-id $EFS \
  --client-token efs-token-1 \
  | jq -r '.AccessPointId')
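As with the other EFS resources, you can verify the access point is ready before using it:

aws efs describe-access-points --access-point-id $ACCESS_POINT \
  --query 'AccessPoints[0].LifeCycleState' --output text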
Deploy and test the AWS EFS Operator
Install the EFS Operator
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  labels:
    operators.coreos.com/aws-efs-operator.openshift-operators: ""
  name: aws-efs-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: aws-efs-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: aws-efs-operator.v0.0.8
EOF
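The Subscription triggers the operator install. Before creating a SharedVolume, you can verify the install finished; the CSV should eventually report a phase of Succeeded:

oc get csv -n openshift-operators aws-efs-operator.v0.0.8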
Create a namespace
oc new-project efs-demo
Create an EFS SharedVolume
cat <<EOF | oc apply -f -
apiVersion: aws-efs.managed.openshift.io/v1alpha1
kind: SharedVolume
metadata:
  name: efs-volume
  namespace: efs-demo
spec:
  accessPointID: ${ACCESS_POINT}
  fileSystemID: ${EFS}
EOF
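The operator should react to the SharedVolume by generating a PersistentVolumeClaim named pvc-efs-volume, which is the claim the test pods below reference. You can confirm it exists:

oc get sharedvolume,pvc -n efs-demo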
Create a pod to write to the EFS volume
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-efs
spec:
  volumes:
    - name: efs-storage-vol
      persistentVolumeClaim:
        claimName: pvc-efs-volume
  containers:
    - name: test-efs
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do echo 'hello efs' | tee -a /mnt/efs-data/verify-efs && sleep 5; done;" ]
      volumeMounts:
        - mountPath: "/mnt/efs-data"
          name: efs-storage-vol
EOF
Create a pod to read from the EFS volume
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-efs-read
spec:
  volumes:
    - name: efs-storage-vol
      persistentVolumeClaim:
        claimName: pvc-efs-volume
  containers:
    - name: test-efs-read
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "tail -f /mnt/efs-data/verify-efs" ]
      volumeMounts:
        - mountPath: "/mnt/efs-data"
          name: efs-storage-vol
EOF
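Before checking the logs, make sure both pods have pulled their image and report Running:

oc get pods -n efs-demo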
Verify the second pod can read from the EFS volume
oc logs test-efs-read
You should see a stream of “hello efs” messages:
hello efs
hello efs
hello efs
hello efs
hello efs
hello efs
hello efs
hello efs
hello efs
hello efs
Cleanup
Delete the Pods
oc delete pod -n efs-demo test-efs test-efs-read
Delete the Volume
oc delete -n efs-demo SharedVolume efs-volume
Delete the Namespace
oc delete project efs-demo
Delete the EFS resources via the AWS CLI
aws efs delete-mount-target --mount-target-id $MOUNT_TARGET | jq .
aws efs delete-access-point --access-point-id $ACCESS_POINT | jq .
aws efs delete-file-system --file-system-id $EFS | jq .
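Note that mount-target deletion is asynchronous, so the final delete-file-system call can fail with FileSystemInUse if it runs too soon. If that happens, a minimal retry sketch:

# retry until the mount target is gone and the delete succeeds
until aws efs delete-file-system --file-system-id $EFS 2>/dev/null; do
  echo "waiting for the mount target to be deleted..."
  sleep 10
done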