
AWS Load Balancer Operator On ROSA

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

AWS Load Balancer Controller is a controller that helps manage Elastic Load Balancers for a Kubernetes cluster.

Compared with the default AWS in-tree provider, this controller is actively developed and supports advanced annotations for both ALBs and NLBs. Some advanced use cases (one is sketched after this list) are:

  • Using native Kubernetes Ingress resources with an ALB
  • Integrating an ALB with AWS WAF
  • Specifying NLB source IP ranges
  • Specifying an NLB internal IP address
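
For example, a native Kubernetes Ingress that the controller turns into an ALB with an AWS WAFv2 Web ACL attached might look like the following. This is a minimal sketch: the name, namespace, backend Service, and Web ACL ARN are placeholders, and it assumes the controller's default "alb" IngressClass. NLB-oriented annotations (source IP ranges, internal scheme) are sketched after the Echo Server validation section further below.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app                    # hypothetical application name
      namespace: my-namespace         # hypothetical namespace
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: instance
        # Attach an existing WAFv2 Web ACL to the ALB (placeholder ARN)
        alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-1:111122223333:regional/webacl/my-web-acl/00000000-0000-0000-0000-000000000000
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app      # hypothetical backend Service
                    port:
                      number: 80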

AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in an OpenShift cluster.

Prerequisites

Environment

  1. Prepare the environment variables

    export AWS_PAGER=""
    export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}"  | sed 's/-[a-z0-9]\{5\}$//')
    export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
    export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed  's|^https://||')
    export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    export SCRATCH="/tmp/${ROSA_CLUSTER_NAME}/alb-operator"
    mkdir -p ${SCRATCH}
    echo "Cluster: ${ROSA_CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
    

AWS VPC / Subnets

Note: This section only applies to BYO VPC clusters. If you let ROSA create your VPC, you can skip to the Installation section below.

  1. Set variables describing your VPC and subnets. If you are unsure which subnet IDs to use, see the lookup sketch after this list:

    export VPC_ID=<vpc-id>
    export PUBLIC_SUBNET_IDS=<public-subnets>
    export PRIVATE_SUBNET_IDS=<private-subnets>
    export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
    
  2. Tag VPC with the cluster name

    aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
    
  3. Add tags to Public Subnets

    aws ec2 create-tags \
      --resources ${PUBLIC_SUBNET_IDS} \
      --tags Key=kubernetes.io/role/elb,Value='' \
      --region ${REGION}
    
  4. Add tags to Private Subnets

    aws ec2 create-tags \
      --resources "${PRIVATE_SUBNET_IDS}" \
      --tags Key=kubernetes.io/role/internal-elb,Value='' \
      --region ${REGION}
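
If you are unsure which subnet IDs belong to your cluster's VPC (referenced in step 1 above), a listing like the following can help. It does not by itself distinguish public from private subnets; check the route tables (public subnets route 0.0.0.0/0 to an internet gateway) before tagging.

    aws ec2 describe-subnets --region ${REGION} \
      --filters "Name=vpc-id,Values=${VPC_ID}" \
      --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}' \
      --output table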
    

Installation

  1. Create an IAM policy for the AWS Load Balancer Controller

    Note: The policy is taken from the upstream AWS Load Balancer Controller policy, plus the subnet create-tags permission required by the operator.

    oc new-project aws-load-balancer-operator
    POLICY_ARN=$(aws iam list-policies --query \
      "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \
      --output text)
    if [[ -z "${POLICY_ARN}" ]]; then
      wget -O "${SCRATCH}/load-balancer-operator-policy.json" \
        https://raw.githubusercontent.com/rh-mobb/documentation/main/content/rosa/aws-load-balancer-operator/load-balancer-operator-policy.json
      POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \
      --output text iam create-policy \
      --policy-name aws-load-balancer-operator-policy \
      --policy-document "file://${SCRATCH}/load-balancer-operator-policy.json")
    fi
    echo $POLICY_ARN
    
  2. Create trust policy for ALB Operator

    cat <<EOF > "${SCRATCH}/trust-policy.json"
    {
      "Version": "2012-10-17",
      "Statement": [
      {
      "Effect": "Allow",
      "Condition": {
        "StringEquals" : {
          "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"]
        }
      },
      "Principal": {
        "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
      }
      ]
    }
    EOF
    
  3. Create Role for ALB Operator

    ROLE_ARN=$(aws iam create-role --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
    --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \
    --query Role.Arn --output text)
    echo $ROLE_ARN
    
    aws iam attach-role-policy --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
      --policy-arn $POLICY_ARN
    
  4. Create secret for ALB Operator

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    stringData:
      credentials: |
        [default]
        role_arn = $ROLE_ARN
        web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
    EOF
    
  5. Install Red Hat AWS Load Balancer Operator

    cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      upgradeStrategy: Default
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      channel: stable-v1.0
      installPlanApproval: Automatic
      name: aws-load-balancer-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: aws-load-balancer-operator.v1.0.0
    EOF
    
  6. Install Red Hat AWS Load Balancer Controller

    Note: If you get an error here, wait a minute and try again; it likely means the Operator has not finished installing yet. A status check is sketched after step 7.

    cat << EOF | oc apply -f -
    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController
    metadata:
      name: cluster
    spec:
      credentials:
        name: aws-load-balancer-operator
    EOF
    
  7. Check that the Operator and Controller pods are both running

    oc -n aws-load-balancer-operator get pods
    

    You should see output like the following; if not, wait a moment and retry.

    NAME                                                             READY   STATUS    RESTARTS   AGE
    aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
    aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s
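
If the controller pod never appears, the Operator may still be installing, or the controller may be failing to start. Two optional checks (the deployment name below is inferred from the pod listing above):

    # The Operator's CSV should eventually report Succeeded
    oc -n aws-load-balancer-operator get csv
    # Controller logs explain why a load balancer was or was not provisioned
    oc -n aws-load-balancer-operator logs deployment/aws-load-balancer-controller-cluster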
    

Validate the deployment with the Echo Server application

  1. Deploy the Echo Server application with an ALB Ingress

    oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-namespace.yaml
    oc adm policy add-scc-to-user anyuid system:serviceaccount:echoserver:default
    oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-deployment.yaml
    oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-service.yaml
    oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-ingress.yaml
    
  2. Curl the ALB ingress endpoint to verify the echoserver pod is accessible

    INGRESS=$(oc -n echoserver get ingress echoserver \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    curl -sH "Host: echoserver.example.com" \
      "http://${INGRESS}" | grep Hostname
    
    Hostname: echoserver-7757d5ff4d-ftvf2
    
  3. Deploy Echo Server NLB Load Balancer

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver-nlb
      namespace: echoserver
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      type: LoadBalancer
      selector:
        app: echoserver
    EOF
    
  4. Test the NLB endpoint

    NLB=$(oc -n echoserver get service echoserver-nlb \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    curl -s "http://${NLB}" | grep Hostname
    
    Hostname: echoserver-7757d5ff4d-ftvf2
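
The NLB use cases listed in the introduction (restricting source IP ranges, running an internal NLB) are driven by extra Service annotations. A hedged variant of the Service from step 3, using a placeholder CIDR and a hypothetical name so it does not clash with the existing Service:

    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver-nlb-internal      # hypothetical name
      namespace: echoserver
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
        # Internal rather than internet-facing NLB
        service.beta.kubernetes.io/aws-load-balancer-scheme: internal
        # Restrict which client CIDRs may reach the NLB (placeholder range)
        service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/16
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      type: LoadBalancer
      selector:
        app: echoserver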
    

Clean Up
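
If you deployed the Echo Server examples above, delete them first so the controller can deprovision the ALB and NLB it created before the Operator itself is removed (a suggested order, using the names from the validation steps):

    oc delete ingress echoserver -n echoserver
    oc delete service echoserver-nlb -n echoserver
    oc delete project echoserver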

  1. Delete the Operator and the AWS IAM role

    oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
    aws iam detach-role-policy \
      --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
      --policy-arn $POLICY_ARN
    aws iam delete-role \
      --role-name "${ROSA_CLUSTER_NAME}-alb-operator"
    
  2. If you wish to delete the IAM policy as well, run:

    aws iam delete-policy --policy-arn $POLICY_ARN
    
