
Create Filestore Storage for OSD in GCP

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

By default, within OSD in GCP only the GCE-PD StorageClass is available in the cluster. With this StorageClass, only the ReadWriteOnce (RWO) access mode is permitted, and gcePersistentDisks can only be mounted by a single consumer in read-write mode.

Because of that, to provide storage with the shared-access ReadWriteMany (RWX) access mode to our OpenShift clusters, GCP Filestore can be used.

GCP Filestore is neither managed nor supported by Red Hat or the Red Hat SRE team.

Prerequisites

This guide uses the gcloud CLI, jq, and the OpenShift CLI (oc). The GCP Cloud Shell can be used as well and already has all of these prerequisites installed.
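
If you run the steps from a local workstation instead, a quick sanity check that the tools used throughout this guide are available:

    gcloud version
    jq --version
    oc version --client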

Steps

  1. From the CLI or GCP Cloud Shell, log in with your Google account and set your GCP project:

    gcloud auth login <google account user>
    gcloud config set project <google project name>
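
To confirm the CLI is pointing at the intended project before creating any resources:

    gcloud config get-value project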
    
  2. Create a Filestore instance in GCP:

    export ZONE_FS="us-west1-a"
    export NAME_FS="nfs-server"
    export TIER_FS="BASIC_HDD"
    export VOL_NAME_FS="osd4"
    export CAPACITY="1TB"
    export VPC_NETWORK="projects/my-project/global/networks/demo-vpc"
    
    gcloud filestore instances create $NAME_FS --zone=$ZONE_FS --tier=$TIER_FS --file-share=name="$VOL_NAME_FS",capacity=$CAPACITY --network=name="$VPC_NETWORK"
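
Instance creation can take a few minutes. If you want, you can poll the instance state until it reports READY before moving on:

    gcloud filestore instances describe $NAME_FS --zone=$ZONE_FS --format="value(state)"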
    

Because the RWX storage is provisioned statically through the creation of a PV/PVC pair, the Filestore instance needs to be created upfront.

  3. After the creation, check the Filestore instance generated in the GCP project:

    gcloud filestore instances describe $NAME_FS --zone=$ZONE_FS
    
  4. Extract the IP address (the ipAddresses field) of the NFS share to use it in the PV definition:

NFS_IP=$(gcloud filestore instances describe $NAME_FS --zone=$ZONE_FS --format=json | jq -r '.networks[0].ipAddresses[0]')
    
    echo $NFS_IP
    
  5. Log in to your OSD in GCP cluster.
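
For example, using a token from the cluster web console (the token and API URL below are placeholders for your own cluster's values):

    oc login --token=<token> --server=https://api.<cluster-domain>:6443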

  6. Create a PersistentVolume that uses the NFS_IP of the Filestore instance as the NFS server, specifying the path of the shared Filestore volume:

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 500Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: $NFS_IP
        path: "/$VOL_NAME_FS"
    EOF
    

As you can see, the PV is created with the ReadWriteMany (RWX) access mode.
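
Manually created PVs default to the Retain reclaim policy, which you will see in the output of the next step. If you prefer to set it explicitly, an optional patch such as the following works:

    oc patch pv nfs -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'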

  7. Check that the PV was created properly:

    oc get pv nfs
    NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    nfs    500Gi      RWX            Retain           Available                                   12s
    
  8. Create a PersistentVolumeClaim for this PersistentVolume:

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 500Gi
    EOF
    

Note that the storageClassName is empty because we are using static provisioning in this case.
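
With static provisioning, Kubernetes binds the claim to any available PV with a matching size and access mode. If you want to pin this claim to the nfs PV explicitly, you can add the optional volumeName field to the claim:

    # optionally add under .spec of the PersistentVolumeClaim above
    volumeName: nfs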

  9. Check that the PVC was created properly and has the Bound status:

    oc get pvc nfs
    NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    nfs    Bound    nfs      500Gi      RWX                           7s
    
  10. Deploy an example app with more than one replica sharing the same Filestore NFS volume:

    cat <<EOF | oc apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nfs-web
      name: nfs-web2
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nfs-web
      template:
        metadata:
          labels:
            app: nfs-web
        spec:
          containers:
          - image: nginxinc/nginx-unprivileged
            name: nginx-unprivileged
            ports:
              - name: web
                containerPort: 8080
            volumeMounts:
              - name: nfs
                mountPath: "/usr/share/nginx/html"
          volumes:
          - name: nfs
            persistentVolumeClaim:
              claimName: nfs
    EOF
    
  11. Check that the pods are up and running:

    oc get pod
    NAME                        READY   STATUS    RESTARTS   AGE
    nfs-web2-54f9fb5cd8-8dcgh   1/1     Running   0          118s
    nfs-web2-54f9fb5cd8-bhmkw   1/1     Running   0          118s
    
  12. Check that the pods mount the same volume provided by the Filestore NFS share:

    for i in $(oc get pod --no-headers | awk '{ print $1 }'); do echo "POD -> $i"; oc exec -ti $i -- df -h | grep nginx; echo ""; done
    
    POD -> nfs-web2-54f9fb5cd8-8dcgh
    10.124.186.98:/osd4 1007G     0  956G   0% /usr/share/nginx/html
    
    POD -> nfs-web2-54f9fb5cd8-bhmkw
    10.124.186.98:/osd4 1007G     0  956G   0% /usr/share/nginx/html
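
To confirm shared read-write access end to end, you can write a file through one pod and read it back through the other (the pod names are looked up by the app=nfs-web label used in the Deployment above):

    POD1=$(oc get pod -l app=nfs-web -o jsonpath='{.items[0].metadata.name}')
    POD2=$(oc get pod -l app=nfs-web -o jsonpath='{.items[1].metadata.name}')
    oc exec $POD1 -- sh -c 'echo "hello from $(hostname)" > /usr/share/nginx/html/index.html'
    oc exec $POD2 -- cat /usr/share/nginx/html/index.html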
    
