Cloud Experts Documentation

Configure an ARO cluster with Azure Files using a private endpoint

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Effectively securing your Azure Storage Account requires more than just basic access controls. Azure Private Endpoints provide a powerful layer of protection by establishing a direct, private connection between your virtual network and storage resources—completely bypassing the public internet. This approach not only minimizes your attack surface and the risk of data exfiltration, but also enhances performance through reduced latency, simplifies network architecture, supports compliance efforts, and enables secure hybrid connectivity. It’s a comprehensive solution for protecting your critical cloud data.

Configuring private endpoint access to an Azure Storage Account involves three key steps:

  1. Create the storage account

  2. Create the private endpoint

  3. Define a new storage class for Azure Red Hat OpenShift (ARO)

Note: In many environments, Azure administrators use automation to streamline steps 1 and 2. This typically ensures the storage account is provisioned according to organizational policies—such as encryption and security configurations—along with the automatic creation of the associated private endpoint.

Warning: This approach does not work on FIPS-enabled clusters, because the CIFS protocol is largely non-compliant with FIPS cryptographic requirements.

Prerequisites

  • An ARO cluster that you are logged in to
  • The oc CLI

Set Environment Variables

Set the following variables to match your ARO cluster and Azure storage account naming.

AZR_CLUSTER_NAME=<my-cluster-name>
AZR_RESOURCE_GROUP=<my-rg>
AZR_STORAGE_ACCOUNT_NAME=<my-storage-account> # Name of the storage account
STORAGECLASS_NAME=azure-files # Name of the OpenShift Storage Class that will be created
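Before continuing, you can optionally sanity-check that each variable is set. This is a small helper sketch (not part of the original guide); it simply reports any variables that are still empty:

```shell
# List any required variables that are still unset or empty.
# Uses bash indirect expansion (${!v}); run with bash.
missing=""
for v in AZR_CLUSTER_NAME AZR_RESOURCE_GROUP AZR_STORAGE_ACCOUNT_NAME STORAGECLASS_NAME; do
  [ -n "${!v:-}" ] || missing="${missing} ${v}"
done
if [ -n "${missing}" ]; then
  echo "Unset variables:${missing}"
else
  echo "All variables set"
fi
```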

Dynamically get the region the ARO cluster is in

export AZR_REGION=$(az aro show -n ${AZR_CLUSTER_NAME} -g ${AZR_RESOURCE_GROUP} | jq -r '.location')

The Azure private endpoint must be placed in a subnet. As a general best practice, private endpoints should live in their own subnet. Often, however, the VNet design makes this impossible, and the private endpoint must be placed in the worker node subnet.

Option 1: Retrieve the worker node subnet in which the private endpoint will be created.

SUBNET_ID=$(az aro show -n ${AZR_CLUSTER_NAME} -g ${AZR_RESOURCE_GROUP} | jq -r '.workerProfiles[0].subnetId')

AZR_VNET=$(echo ${SUBNET_ID} | awk -F'/' '{for(i=1;i<=NF;i++) if($i=="virtualNetworks") print $(i+1)}')
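To see what the awk extraction in Option 1 is doing, here is a worked example against a made-up (hypothetical) subnet resource ID in the standard Azure format:

```shell
# A sample subnet resource ID (placeholder values, illustration only).
SAMPLE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/worker-subnet"

# The VNet name is the path segment that follows "virtualNetworks".
echo ${SAMPLE_ID} | awk -F'/' '{for(i=1;i<=NF;i++) if($i=="virtualNetworks") print $(i+1)}'
# prints: my-vnet

# Likewise, the subnet name is the segment that follows "subnets".
echo ${SAMPLE_ID} | awk -F'/' '{for(i=1;i<=NF;i++) if($i=="subnets") print $(i+1)}'
# prints: worker-subnet
```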

Option 2: Manually specify the private endpoint subnet and VNet you would like to use.

SUBNET_ID=<SUBNET_ID> # The subnet ID you want to use for private endpoints
AZR_VNET=<Azure VNet> # The name of the VNet that contains the subnet you want to use for private endpoints

Self-Provision a Storage Account

  1. Create the storage account and attach the private endpoint to it
az storage account create \
    --name ${AZR_STORAGE_ACCOUNT_NAME} \
    --resource-group ${AZR_RESOURCE_GROUP} \
    --location ${AZR_REGION} \
    --sku Premium_LRS \
    --public-network-access Disabled \
    --kind FileStorage \
    --enable-large-file-share 

Create/Configure the Private Endpoint

  1. Create private endpoint
az network private-endpoint create \
  --name $AZR_CLUSTER_NAME \
  --resource-group ${AZR_RESOURCE_GROUP} \
  --vnet-name ${AZR_VNET} \
  --subnet ${SUBNET_ID} \
  --private-connection-resource-id $(az resource show -g ${AZR_RESOURCE_GROUP} -n ${AZR_STORAGE_ACCOUNT_NAME} --resource-type "Microsoft.Storage/storageAccounts" --query "id" -o tsv) \
  --location ${AZR_REGION} \
  --group-id file \
  --connection-name ${AZR_STORAGE_ACCOUNT_NAME}

DNS Resolution for Private Connection

  1. Configure the private DNS zone for the private link connection

To use the private endpoint connection, you must create a private DNS zone. If DNS is not configured correctly, the connection will resolve the public FQDN (file.core.windows.net) instead of the private one, whose domain is prefixed with ‘privatelink’.

az network private-dns zone create \
  --resource-group ${AZR_RESOURCE_GROUP} \
  --name "privatelink.file.core.windows.net"
  
az network private-dns link vnet create \
  --resource-group ${AZR_RESOURCE_GROUP} \
  --zone-name "privatelink.file.core.windows.net" \
  --name $AZR_CLUSTER_NAME \
  --virtual-network ${AZR_VNET} \
  --registration-enabled false

If you are using a custom or on-premises DNS server, clients must be able to resolve the storage account FQDN to the private endpoint IP address. Configure your DNS server to delegate the privatelink subdomain to the private DNS zone for the VNet, or host the privatelink.file.core.windows.net zone on your DNS server and add an A record for the storage account name pointing to the private endpoint IP address.
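As an illustration, on a BIND-style DNS server the record could look like the following (the zone name is real, but the account name and IP address are placeholders):

```
; Zone file fragment for privatelink.file.core.windows.net (illustrative values)
$ORIGIN privatelink.file.core.windows.net.
mystorageaccount    IN  A   10.0.2.5
```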

Note: For Microsoft Azure Government (MAG) customers, see the Azure Government documentation on private endpoint DNS and custom DNS configuration.

  2. Retrieve the private IP from the private link connection:
PRIVATE_IP=$(az resource show \
  --ids $(az network private-endpoint show --name $AZR_CLUSTER_NAME --resource-group ${AZR_RESOURCE_GROUP} --query 'networkInterfaces[0].id' -o tsv) \
  --api-version 2019-04-01 \
  -o json | jq -r '.properties.ipConfigurations[0].properties.privateIPAddress')
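If you want to sanity-check the jq filter used here, you can run it against a minimal, made-up sample of the NIC JSON (the real `az resource show` output contains many more fields):

```shell
# Minimal mock of the NIC resource JSON (illustrative IP address only).
SAMPLE_NIC='{"properties":{"ipConfigurations":[{"properties":{"privateIPAddress":"10.0.2.5"}}]}}'

# Same jq filter as above; extracts the first IP configuration's private address.
echo "${SAMPLE_NIC}" | jq -r '.properties.ipConfigurations[0].properties.privateIPAddress'
# prints: 10.0.2.5
```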
  3. Create the DNS records for the private link connection:
az network private-dns record-set a create \
  --name ${AZR_STORAGE_ACCOUNT_NAME} \
  --zone-name privatelink.file.core.windows.net \
  --resource-group ${AZR_RESOURCE_GROUP}

az network private-dns record-set a add-record \
  --record-set-name ${AZR_STORAGE_ACCOUNT_NAME} \
  --zone-name privatelink.file.core.windows.net \
  --resource-group ${AZR_RESOURCE_GROUP} \
  -a ${PRIVATE_IP}
  4. Test private endpoint connectivity
  • On a VM or OpenShift worker node:
nslookup ${AZR_STORAGE_ACCOUNT_NAME}.file.core.windows.net
  • Should return:
Server:		x.x.x.x
Address:	x.x.x.x#x

Non-authoritative answer:
<storage_account_name>.file.core.windows.net	canonical name = <storage_account_name>.privatelink.file.core.windows.net.
Name:	<storage_account_name>.privatelink.file.core.windows.net
Address: x.x.x.x

Configure ARO Storage Resources

  1. Login to your cluster

  2. Set ARO Cluster permissions

oc create clusterrole azure-secret-reader \
  --verb=create,get \
  --resource=secrets

oc adm policy add-cluster-role-to-user azure-secret-reader system:serviceaccount:kube-system:persistent-volume-binder
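For reference, the `oc create clusterrole` command above is equivalent to applying this manifest (same role name and permissions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: azure-secret-reader
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get"]
```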
  3. Create a storage class
  • Using an existing storage account
cat  <<EOF | oc apply -f -
    allowVolumeExpansion: true
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ${STORAGECLASS_NAME}
    parameters:
      resourceGroup: ${AZR_RESOURCE_GROUP}
      server: ${AZR_STORAGE_ACCOUNT_NAME}.file.core.windows.net
      secretNamespace: kube-system
      skuName: Premium_LRS
      storageAccount: ${AZR_STORAGE_ACCOUNT_NAME}
    provisioner: file.csi.azure.com
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
EOF

Test it out

  1. Create a PVC

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azure-files-volume
    spec:
      storageClassName: azure-files
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
    EOF
    
  2. Create a Pod to write to the Azure Files Volume

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
     name: test-files
    spec:
     volumes:
       - name: files-storage-vol
         persistentVolumeClaim:
           claimName: pvc-azure-files-volume
     containers:
       - name: test-files
         image: centos:latest
         command: [ "/bin/bash", "-c", "--" ]
         args: [ "while true; do echo 'hello azure files' | tee -a /mnt/files-data/verify-files && sleep 5; done;" ]
         volumeMounts:
           - mountPath: "/mnt/files-data"
             name: files-storage-vol
    EOF
    

    It may take a few minutes for the pod to be ready.

  3. Wait for the Pod to be ready

    watch oc get pod test-files
    
  4. Create a Pod to read from the Azure Files Volume

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
     name: test-files-read
    spec:
     volumes:
       - name: files-storage-vol
         persistentVolumeClaim:
           claimName: pvc-azure-files-volume
     containers:
       - name: test-files-read
         image: centos:latest
         command: [ "/bin/bash", "-c", "--" ]
         args: [ "tail -f /mnt/files-data/verify-files" ]
         volumeMounts:
           - mountPath: "/mnt/files-data"
             name: files-storage-vol
    EOF
    
  5. Verify the second Pod can read from the Azure Files Volume

    oc logs test-files-read
    

    You should see a stream of “hello azure files”

    hello azure files
    hello azure files
    hello azure files
    hello azure files
    hello azure files
    hello azure files
    hello azure files
    hello azure files
    hello azure files
    hello azure files
    

