
IMPORTANT NOTE: This site is not official Red Hat documentation and is provided for informational purposes only. These guides may be experimental, proof of concept, or early adoption. Officially supported documentation is available at docs.openshift.com and access.redhat.com.

Enabling the AWS EFS Operator on ROSA

The Amazon Web Services Elastic File System (AWS EFS) is a Network File System (NFS) that can be provisioned on Red Hat OpenShift Service on AWS clusters. AWS also provides and supports a CSI EFS Driver to be used with Kubernetes that allows Kubernetes workloads to leverage this shared file storage.

This is a guide to quickly enable the AWS EFS Operator on ROSA.

See here for the official ROSA documentation.

Prerequisites

  * A ROSA cluster
  * The oc, aws, and jq CLI tools installed and configured

Prepare AWS Account

  1. Get the Instance Name of one of your worker nodes

    NODE=$(oc get nodes --selector=node-role.kubernetes.io/worker \
      -o jsonpath='{.items[0].metadata.name}')
    
  2. Get the VPC ID of your worker nodes

    VPC=$(aws ec2 describe-instances \
      --filters "Name=private-dns-name,Values=$NODE" \
      --query 'Reservations[*].Instances[*].{VpcId:VpcId}' \
      | jq -r '.[0][0].VpcId')
    
  3. Get subnets in your VPC

    SUBNET=$(aws ec2 describe-subnets \
      --filters Name=vpc-id,Values=$VPC Name=tag:kubernetes.io/role/internal-elb,Values='' \
      --query 'Subnets[*].{SubnetId:SubnetId}' \
      | jq -r '.[0].SubnetId')
    
  4. Get the CIDR block of your worker nodes

    CIDR=$(aws ec2 describe-vpcs \
      --filters "Name=vpc-id,Values=$VPC" \
      --query 'Vpcs[*].CidrBlock' \
      | jq -r '.[0]')
    
  5. Get the Security Group of your worker nodes

    SG=$(aws ec2 describe-instances --filters \
      "Name=private-dns-name,Values=$NODE" \
      --query 'Reservations[*].Instances[*].{SecurityGroups:SecurityGroups}' \
      | jq -r '.[0][0].SecurityGroups[0].GroupId')
    
  6. Add EFS to security group

    aws ec2 authorize-security-group-ingress \
      --group-id $SG \
      --protocol tcp \
      --port 2049 \
      --cidr $CIDR | jq .
    
  7. Create EFS File System

    Note: You may want to create separate or additional access points for each application or shared volume.

    EFS=$(aws efs create-file-system --creation-token efs-token-1 \
      --encrypted | jq -r '.FileSystemId')
    
  8. Configure Mount Target for EFS

    MOUNT_TARGET=$(aws efs create-mount-target --file-system-id $EFS \
      --subnet-id $SUBNET --security-groups $SG \
      | jq -r '.MountTargetId')
    
  9. Create Access Point for EFS

    ACCESS_POINT=$(aws efs create-access-point --file-system-id $EFS \
      --client-token efs-token-1 \
      | jq -r '.AccessPointId')
    

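A newly created mount target starts in a "creating" lifecycle state and takes a short time to become "available"; workloads that mount before then will fail. Below is a hypothetical polling helper (the `wait_available` name and the loop bounds are our own, not part of the official procedure) that waits for any command printing a lifecycle state:

```shell
# Hypothetical helper: poll a command that prints a lifecycle state until
# it reports "available" (the state an EFS mount target reaches when ready).
wait_available() {
  local check_cmd="$1" state i
  for i in $(seq 1 30); do          # up to 30 attempts, ~10s apart
    state=$(eval "$check_cmd")
    [ "$state" = "available" ] && return 0
    sleep 10
  done
  return 1
}

# Against the real AWS API it might be driven like this (illustrative):
# wait_available "aws efs describe-mount-targets --mount-target-id $MOUNT_TARGET \
#   --query 'MountTargets[0].LifeCycleState' --output text"
```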
Deploy and test the AWS EFS Operator

  1. Install the EFS Operator

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      labels:
        operators.coreos.com/aws-efs-operator.openshift-operators: ""
      name: aws-efs-operator
      namespace: openshift-operators
    spec:
      channel: stable
      installPlanApproval: Automatic
      name: aws-efs-operator
      source: community-operators
      sourceNamespace: openshift-marketplace
      startingCSV: aws-efs-operator.v0.0.8
    EOF
    
  2. Create a namespace

     oc new-project efs-demo
    
  3. Create an EFS Shared Volume

    cat <<EOF | oc apply -f -
    apiVersion: aws-efs.managed.openshift.io/v1alpha1
    kind: SharedVolume
    metadata:
      name: efs-volume
      namespace: efs-demo
    spec:
      accessPointID: ${ACCESS_POINT}
      fileSystemID: ${EFS}
    EOF
    
  4. Create a Pod to write to the EFS Volume

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-efs
    spec:
      volumes:
        - name: efs-storage-vol
          persistentVolumeClaim:
            claimName: pvc-efs-volume
      containers:
        - name: test-efs
          image: centos:latest
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do echo 'hello efs' | tee -a /mnt/efs-data/verify-efs && sleep 5; done;" ]
          volumeMounts:
            - mountPath: "/mnt/efs-data"
              name: efs-storage-vol
    EOF
    
  5. Create a Pod to read from the EFS Volume

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-efs-read
    spec:
      volumes:
        - name: efs-storage-vol
          persistentVolumeClaim:
            claimName: pvc-efs-volume
      containers:
        - name: test-efs-read
          image: centos:latest
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "tail -f /mnt/efs-data/verify-efs" ]
          volumeMounts:
            - mountPath: "/mnt/efs-data"
              name: efs-storage-vol
    EOF
    
  6. Verify the second Pod can read the EFS Volume

    oc logs test-efs-read
    

    You should see a stream of “hello efs”

     hello efs
     hello efs
     hello efs
     hello efs
     hello efs
     hello efs
     hello efs
     hello efs
     hello efs
     hello efs
    
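The two pods above mount the same PVC, so appends from `test-efs` are immediately visible to `test-efs-read`. That shared-storage behaviour can be sketched locally, with a temp directory standing in for the EFS mount (this simulation is our own illustration, not part of the procedure):

```shell
# A temp directory stands in for the shared EFS mount.
SHARED=$(mktemp -d)

# Writer side: append, as the test-efs pod does in its loop.
for i in 1 2 3; do
  echo 'hello efs' >> "$SHARED/verify-efs"
done

# Reader side: the same path yields the writer's data.
cat "$SHARED/verify-efs"
```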

Cleanup

  1. Delete the Pods

     oc delete pod -n efs-demo test-efs test-efs-read
    
  2. Delete the Volume

     oc delete -n efs-demo SharedVolume efs-volume
    
  3. Delete the Namespace

     oc delete project efs-demo
    
  4. Delete the EFS mount target, access point, and file system via AWS

    aws efs delete-mount-target --mount-target-id $MOUNT_TARGET | jq .
    aws efs delete-access-point --access-point-id $ACCESS_POINT | jq .
    aws efs delete-file-system --file-system-id $EFS | jq .
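AWS rejects `delete-file-system` while the file system still has mount targets, and mount-target deletion is asynchronous, so the last command can fail if run immediately after the first. A hypothetical retry wrapper (the `retry` name and timings are our own) smooths this over:

```shell
# Hypothetical retry helper: re-run a command until it succeeds or the
# attempt budget is exhausted.
retry() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 5                       # give AWS time to finish tearing down
  done
  return 1
}

# Usage (illustrative):
# retry 12 aws efs delete-file-system --file-system-id $EFS
```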