
IMPORTANT NOTE: This site is not official Red Hat documentation and is provided for informational purposes only. These guides may be experimental, proof of concept, or early adoption. Officially supported documentation is available at docs.openshift.com and access.redhat.com.

Installing the AWS Load Balancer Controller (ALB) on ROSA

Updated: 02/22/2022

In most situations you will want to stick with the OpenShift native Ingress Controller and use the native Ingress and Route resources to provide access to your applications. However, if you specifically require an ALB or NLB based load balancer, running the AWS Load Balancer Controller may be worth looking at.

Prerequisites

  1. A ROSA cluster deployed with STS (the IAM trust policy below relies on the cluster's OIDC provider)
  2. The oc, kubectl, and rosa CLIs
  3. The aws CLI
  4. Helm 3
  5. jq

Getting Started

  1. Set some environment variables (update CLUSTER_NAME to match your cluster)

     export ALB_VERSION="v2.4.0"
     export CLUSTER_NAME="cz-demo"
     export SCRATCH_DIR="/tmp/alb-sts"
     export OIDC_PROVIDER=$(oc get authentication.config.openshift.io cluster -o json \
       | jq -r .spec.serviceAccountIssuer | sed -e "s/^https:\/\///")
     export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
     export REGION=$(rosa describe cluster -c $CLUSTER_NAME -o json | jq -r .region.id)
     export NAMESPACE="alb-controller"
     export SA="alb-controller"
     rm -rf $SCRATCH_DIR
     mkdir -p $SCRATCH_DIR
    
  2. Disable AWS CLI output paging

     export AWS_PAGER=""
    
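     Optionally, verify the derived values resolved correctly before continuing (OIDC_PROVIDER should be the issuer host/path without the https:// prefix):

     echo "${OIDC_PROVIDER} ${AWS_ACCOUNT_ID} ${REGION}"
    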

Configure IAM credentials

  1. Create AWS Policy and Service Account

     wget -O $SCRATCH_DIR/iam-policy.json \
       https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/$ALB_VERSION/docs/install/iam_policy.json
    
     POLICY_ARN=$(aws iam create-policy --policy-name  \
       "AWSLoadBalancerControllerIAMPolicy-$ALB_VERSION" \
       --policy-document file://$SCRATCH_DIR/iam-policy.json \
       --query Policy.Arn --output text)
    
     echo $POLICY_ARN
    

    If the Policy already exists, you can look up its ARN instead:

     POLICY_ARN=$(aws iam list-policies \
       --query 'Policies[?PolicyName==`AWSLoadBalancerControllerIAMPolicy-'$ALB_VERSION'`].Arn' \
       --output text)
    
     echo $POLICY_ARN
    
  2. Create a Trust Policy

     cat <<EOF > $SCRATCH_DIR/TrustPolicy.json
     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Principal": {
             "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
           },
           "Action": "sts:AssumeRoleWithWebIdentity",
           "Condition": {
             "StringEquals": {
               "${OIDC_PROVIDER}:sub": [
                 "system:serviceaccount:${NAMESPACE}:${SA}"
               ]
             }
           }
         }
       ]
     }
     EOF
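    
     This trust policy allows the controller's service account (${NAMESPACE}/${SA}) to assume the IAM role via the cluster's OIDC provider. If you want to confirm that your account ID and OIDC provider were substituted correctly, you can pretty-print the file with jq:

     jq . $SCRATCH_DIR/TrustPolicy.json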
    
  3. Create Role for ALB Controller

     ALB_ROLE=$(aws iam create-role \
       --role-name "$CLUSTER_NAME-alb-controller" \
       --assume-role-policy-document file://$SCRATCH_DIR/TrustPolicy.json \
       --query "Role.Arn" --output text)
     echo $ALB_ROLE
    
  4. Attach the Policy to the Role

     aws iam attach-role-policy \
       --role-name "$CLUSTER_NAME-alb-controller" \
       --policy-arn $POLICY_ARN
    
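     Optionally, confirm the Policy is now attached to the Role:

     aws iam list-attached-role-policies \
       --role-name "$CLUSTER_NAME-alb-controller"
    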

Configure Cluster subnets

  1. Get the Instance Name of one of your worker nodes

    NODE=$(oc get nodes --selector=node-role.kubernetes.io/worker \
      -o jsonpath='{.items[0].metadata.name}')
    echo $NODE
    
  2. Get the VPC ID of your worker nodes

    VPC=$(aws ec2 describe-instances \
      --filters "Name=private-dns-name,Values=$NODE" \
      --query 'Reservations[*].Instances[*].{VpcId:VpcId}' \
      | jq -r '.[0][0].VpcId')
    echo $VPC
    
  3. Get list of Subnets

     SUBNET_IDS=$(aws ec2 describe-subnets \
       --filters Name=tag-value,Values="${CLUSTER_NAME}-*public*" \
       --query "Subnets[].SubnetId" --output text | sed 's/\t/ /g')
     echo ${SUBNET_IDS}
    
  4. Tag those public subnets so the controller can discover them for internet-facing load balancers

     aws ec2 create-tags \
       --resources $(echo ${SUBNET_IDS}) \
       --tags Key=kubernetes.io/role/elb,Value=''
    
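     Optionally, verify the tag was applied to each subnet:

     aws ec2 describe-subnets --subnet-ids ${SUBNET_IDS} \
       --query 'Subnets[].{Id:SubnetId,Tags:Tags}'
    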
  5. Get cluster name (according to AWS Tags)

     AWS_CLUSTER=$(basename $(aws ec2 describe-subnets \
       --filters Name=tag-value,Values="${CLUSTER_NAME}-*public*" \
       --query 'Subnets[0].Tags[?Value==`shared`].Key[]' | jq -r '.[0]'))
    
     echo $AWS_CLUSTER
    

Install the AWS Load Balancer Controller

  1. Create a namespace for the controller

     oc new-project $NAMESPACE
    
  2. Apply the controller CRDs

     kubectl apply -k \
       "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
    
  3. Add the Helm repo and install the controller (install Helm 3 if you don't already have it)

     helm repo add eks https://aws.github.io/eks-charts
     helm repo update
     helm upgrade alb-controller eks/aws-load-balancer-controller -i \
       -n $NAMESPACE \
       --set serviceAccount.name=$SA \
       --set "vpcId=$VPC" \
       --set "region=$REGION" \
       --set serviceAccount.annotations.'eks\.amazonaws\.com/role-arn'=$ALB_ROLE \
       --set "clusterName=$AWS_CLUSTER" \
       --set "image.repository=amazon/aws-alb-ingress-controller" \
       --set "image.tag=$ALB_VERSION" --version 1.4.0
    
  4. Update the SCC to allow the Deployment to set fsGroup

     oc adm policy add-scc-to-user anyuid -z $SA -n $NAMESPACE
    
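  5. Verify the controller Deployment rolls out (the Deployment name comes from the Helm release name used above)

     kubectl -n $NAMESPACE rollout status \
       deployment/alb-controller-aws-load-balancer-controller
    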

Deploy Sample Application

  1. Create a new application in OpenShift and expose its Service as a NodePort (required for the instance target type used below)

     oc new-project demo
     oc new-app https://github.com/sclorg/django-ex.git
     kubectl -n demo patch service django-ex -p '{"spec":{"type":"NodePort"}}'
    
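     Optionally, confirm the Service is now of type NodePort:

     kubectl -n demo get service django-ex -o jsonpath='{.spec.type}'
    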
  2. Create an Ingress to trigger an ALB

    Note: Setting the alb.ingress.kubernetes.io/group.name annotation lets multiple Ingresses share a single ALB, which can help reduce your AWS costs. A sketch of a second Ingress joining the same group follows the manifest below.

     cat << EOF | kubectl apply -f -
     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: django-ex
       namespace: demo
       annotations:
         kubernetes.io/ingress.class: alb
         alb.ingress.kubernetes.io/scheme: internet-facing
         alb.ingress.kubernetes.io/target-type: instance
         alb.ingress.kubernetes.io/group.name: "demo"
       labels:
         app: django-ex
     spec:
       rules:
         - host: foo.bar
           http:
             paths:
               - pathType: Prefix
                 path: /
                 backend:
                   service:
                     name: django-ex
                     port:
                       number: 8080
     EOF
    
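     As a sketch of the group.name behaviour noted above, a second Ingress that sets the same group would be served by the existing ALB rather than provisioning a new one. The Service name my-other-app and host other.foo.bar are placeholders for another NodePort Service you have deployed in the demo namespace:

     cat << EOF | kubectl apply -f -
     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: my-other-app
       namespace: demo
       annotations:
         kubernetes.io/ingress.class: alb
         alb.ingress.kubernetes.io/scheme: internet-facing
         alb.ingress.kubernetes.io/target-type: instance
         # same group as the django-ex Ingress, so this shares its ALB
         alb.ingress.kubernetes.io/group.name: "demo"
     spec:
       rules:
         - host: other.foo.bar
           http:
             paths:
               - pathType: Prefix
                 path: /
                 backend:
                   service:
                     name: my-other-app
                     port:
                       number: 8080
     EOF
    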
  3. Check the logs of the ALB controller

     kubectl -n $NAMESPACE logs -f \
       deployment/alb-controller-aws-load-balancer-controller
    
  4. Save the Ingress address and test the application

     Note: It can take a minute or two for the ALB to be provisioned and its DNS name to become resolvable before the curl below succeeds.

     URL=$(kubectl -n demo get ingress django-ex \
       -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    
     curl -s --header "Host: foo.bar" $URL | head
    
     <!doctype html>
     <html lang="en">
     <head>
       <meta charset="utf-8">
       <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
       <title>Welcome to OpenShift</title>
    

Cleanup

  1. Delete the demo app

     kubectl delete ns demo
    
  2. Uninstall the ALB Controller

     helm delete -n $NAMESPACE alb-controller
    
  3. Get the Policy ARN

    POLICY_ARN=$(aws iam list-policies \
      --query 'Policies[?PolicyName==`AWSLoadBalancerControllerIAMPolicy-'$ALB_VERSION'`].Arn' \
      --output text)
    
  4. Detach the Policy from the Role

     aws iam detach-role-policy \
       --role-name "$CLUSTER_NAME-alb-controller" \
       --policy-arn $POLICY_ARN
    
  5. Delete Role for ALB Controller

     aws iam delete-role \
       --role-name "$CLUSTER_NAME-alb-controller"
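    
  6. (Optional) Delete the IAM Policy

     If no other cluster is still using the policy created earlier, you can also delete it:

     aws iam delete-policy --policy-arn $POLICY_ARN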