IMPORTANT NOTE: This site is not official Red Hat documentation and is provided for informational purposes only. These guides may be experimental, proof of concept, or early adoption. Officially supported documentation is available through Red Hat's official documentation channels.


Authors: Connor Wooley
Last Editor: Dustin Scott
Published Date: 17 June 2021
Modified Date: 25 May 2023

Note: It is recommended that you use the CloudFront-based guide unless you absolutely must use an ALB-based solution.

Here's a good overview of AWS load balancer types and what they support.

Problem Statement

  1. The operator requires a WAF (Web Application Firewall) in front of their workloads running on OpenShift (ROSA)

  2. The operator does not want the WAF running on OpenShift, so that OpenShift resources are not exposed to denial of service from handling WAF traffic

Proposed Solution

Loosely based on the EKS instructions here -

  1. Deploy secondary Ingress solution (+TLS +DNS) that uses an AWS ALB

    • TODO: Configure TLS + DNS for that Ingress (Let's Encrypt + wildcard DNS)

Prerequisites

  • A ROSA / OSD on AWS cluster
  • Helm 3 CLI
  • oc / kubectl
  • AWS CLI
  1. Disable AWS CLI output paging

    export AWS_PAGER=""
  2. Set the ALB Controller version

    export ALB_VERSION="v2.2.0"
  3. Set the name of your cluster for lookup

    export CLUSTER_NAME="waf-demo"


Create a new public, multi-AZ ROSA cluster called waf-demo, or set the CLUSTER_NAME variable above to the name of your own cluster.
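If you are creating the cluster with the rosa CLI, a multi-AZ public cluster can be requested at creation time. A minimal sketch (the region is illustrative; pick your own, and rosa will prompt for any details not supplied on the command line):

```shell
# Sketch: create a public, multi-AZ ROSA cluster whose name matches CLUSTER_NAME
rosa create cluster \
  --cluster-name waf-demo \
  --multi-az \
  --region us-east-2
```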

AWS Load Balancer Controller

The AWS Load Balancer Controller manages the following AWS resources:

  • Application Load Balancers, to satisfy Kubernetes Ingress objects
  • Network Load Balancers in IP mode, to satisfy Kubernetes Service objects of type LoadBalancer with the NLB IP mode annotation
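As a hedged illustration of the second case, a Service of type LoadBalancer opts into NLB IP mode via an annotation; the service name, selector, and ports below are made up for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # hypothetical service name
  annotations:
    # Tells the AWS Load Balancer Controller to provision an NLB in IP mode
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
```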

  1. Create the AWS IAM policy for the controller

    curl -so iam-policy.json${ALB_VERSION}/docs/install/iam_policy.json
    POLICY_ARN=$(aws iam create-policy --policy-name "AWSLoadBalancerControllerIAMPolicy" --policy-document file://iam-policy.json --query Policy.Arn --output text)
    echo $POLICY_ARN
  2. Create an IAM user for the controller

    aws iam create-user --user-name aws-lb-controller  \
      --query User.Arn --output text
  3. Attach policy to user

    aws iam attach-user-policy --user-name aws-lb-controller \
      --policy-arn ${POLICY_ARN}
  4. Create an access key and save the output (export the AccessKeyId and SecretAccessKey as shown)

    aws iam create-access-key --user-name aws-lb-controller
    export AWS_ID=<from above>
    export AWS_KEY=<from above>
  5. Look up the VPC ID for your cluster (the CLUSTER_NAME variable set earlier is used in the filter):

    VPC_ID=$(aws ec2 describe-vpcs --output json --filters \
      Name=tag-value,Values="${CLUSTER_NAME}*" \
      --query "Vpcs[].VpcId" --output text)
    echo ${VPC_ID}
  6. Look up the public subnet IDs for your cluster (these are used in the Ingress subnets annotation later; the CLUSTER_NAME variable set earlier is used in the filter):

    SUBNET_IDS=$(aws ec2 describe-subnets --output json \
      --filters Name=tag-value,Values="${CLUSTER_NAME}-*public*" \
      --query "Subnets[].SubnetId" --output text | sed 's/\t/ /g')
    echo ${SUBNET_IDS}
  7. Tag those subnets so the controller can discover them for internet-facing load balancers

    aws ec2 create-tags \
      --resources $(echo ${SUBNET_IDS}) \
      --tags Key=kubernetes.io/role/elb,Value='1'
  8. Create a namespace for the controller

    kubectl create ns aws-load-balancer-controller
  9. Apply CRDs

    kubectl apply -k ""
  10. Add the helm repo and install the controller (install Helm 3 first if you haven't already)

    helm repo add eks
    helm install -n aws-load-balancer-controller \
      aws-load-balancer-controller eks/aws-load-balancer-controller \
      --set "env.AWS_ACCESS_KEY_ID=${AWS_ID}" \
      --set "env.AWS_SECRET_ACCESS_KEY=${AWS_KEY}" \
      --set "vpcID=${VPC_ID}" \
      --set "clusterName=${CLUSTER_NAME}" \
      --set "image.tag=${ALB_VERSION}"
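As an alternative to copy/pasting the keys in step 4, the create-access-key JSON can be parsed with jq (already used elsewhere in this guide). This sketch uses a hard-coded JSON document in the same shape as the AWS CLI output so it can run standalone; in practice you would set `CREDS=$(aws iam create-access-key --user-name aws-lb-controller)` instead:

```shell
# Sample JSON in the shape returned by `aws iam create-access-key`
# (in real use: CREDS=$(aws iam create-access-key --user-name aws-lb-controller))
CREDS='{"AccessKey":{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"secretExample"}}'

# Extract the two fields instead of pasting them by hand
export AWS_ID=$(echo "${CREDS}" | jq -r '.AccessKey.AccessKeyId')
export AWS_KEY=$(echo "${CREDS}" | jq -r '.AccessKey.SecretAccessKey')

echo "${AWS_ID}"
```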

Deploy Sample Application

  1. Create a new application in OpenShift

    oc new-project demo
    oc new-app
    kubectl -n demo patch service django-ex -p '{"spec":{"type":"NodePort"}}'
  2. Create an Ingress to trigger an ALB

    cat << EOF | kubectl apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: django-ex
      namespace: demo
      labels:
        app: django-ex
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: instance
        # replace with your subnet ids from the earlier lookup
        alb.ingress.kubernetes.io/subnets: subnet-0982bb73ca67d61de,subnet-0aa9967e8767d792f,subnet-0fd57669a80eb7596
        # wafv2 arn to use (uncomment and set your own ARN in the WAF section below)
        # alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-2:660250927410:regional/webacl/waf-demo/6565d2a1-6d26-4b6b-b56f-1e996c7e9e8f
    spec:
      rules:
        - http:
            paths:
              - pathType: Prefix
                path: /*
                backend:
                  service:
                    name: django-ex
                    port:
                      number: 8080
    EOF
  3. Check the logs of the ALB controller

    kubectl -n aws-load-balancer-controller logs -f deployment/aws-load-balancer-controller
  4. Use the address shown on the Ingress to browse to the app

    kubectl -n demo get ingress
    curl -s --header "Host:" | head
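If you prefer to script the lookup instead of copying the address from the kubectl output above, the ALB hostname can be read from the Ingress status with jsonpath; a sketch (assumes the ALB has finished provisioning):

```shell
# Sketch: read the ALB DNS name from the Ingress status
ALB_HOST=$(kubectl -n demo get ingress django-ex \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "${ALB_HOST}"
```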

WAF time

  1. Create a WAF Web ACL here and use the Core and SQL injection managed rules (make sure the region matches us-east-2)

  2. View your WAF

    aws wafv2 list-web-acls --scope REGIONAL --region us-east-2 | jq .
  3. Set the WAF annotation to match the ARN shown above (and uncomment it), then re-apply the Ingress

    kubectl apply -f ingress.yaml
  4. Test that the app still works

    curl -s --header "Host:" --location ""
  5. Test that the WAF denies a bad request; you should get a 403 Forbidden error

    curl -X POST -F "user='<script><alert>Hello></alert></script>'"
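The 403 check can also be scripted. This is a hedged sketch where ALB_HOST is a hypothetical variable holding the ALB hostname reported by `kubectl -n demo get ingress`, and the query string is shaped to trip the SQL injection rules:

```shell
# ALB_HOST is assumed to hold the ALB hostname (hypothetical placeholder)
# Print only the HTTP status code; the WAF should answer 403 for this request
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://${ALB_HOST}/?id=1%20OR%201=1"
```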