IMPORTANT NOTE: This site is not official Red Hat documentation and is provided for informational purposes only. These guides may be experimental, proof of concept, or early adoption. Officially supported documentation is available at docs.openshift.com and access.redhat.com.

Adding an additional ingress controller to an ARO cluster


Authors: Paul Czarkowski, Stuart Kirk, Anton Nesterov, Connor Wooley
Last Editor: Dustin Scott
Published Date: 19 March 2022
Modified Date: 25 May 2023


Prerequisites

  • an Azure Red Hat OpenShift cluster
  • a DNS zone that you can easily modify
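
The steps below use the oc and certbot command-line tools, and the optional Azure checks use the az CLI. A minimal sketch to confirm cluster access before starting (the cluster name and resource group are placeholders, not values defined in this guide):

    # Confirm you are logged in to the cluster
    oc whoami

    # Confirm the ARO cluster is reachable via the az CLI (placeholders)
    az aro show --name <cluster-name> --resource-group <resource-group> -o table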

Get Started

  1. Create some environment variables

    # Wildcard domain that the new ingress controller will serve
    DOMAIN=custom.azure.mobb.ninja
    # Email address used for the Let's Encrypt registration
    EMAIL=example@email.com
    # Working directory for certbot configuration, state, and logs
    SCRATCH_DIR=/tmp/aro
    
  2. Create a certificate for the ingress controller

    certbot certonly --manual \
      --preferred-challenges=dns \
      --email $EMAIL \
      --server https://acme-v02.api.letsencrypt.org/directory \
      --agree-tos \
      --manual-public-ip-logging-ok \
      -d "*.$DOMAIN" \
      --config-dir "$SCRATCH_DIR/config" \
      --work-dir "$SCRATCH_DIR/work" \
      --logs-dir "$SCRATCH_DIR/logs"
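
    NOTE: certbot will pause and ask you to publish a DNS TXT record under _acme-challenge.$DOMAIN to prove you control the domain. If your zone happens to be hosted in Azure DNS, the record can be added as in the sketch below ($DNS_RESOURCEGROUP, the zone name, and the record value are assumptions for illustration; use the value certbot prints):

    # Hypothetical example: publish the DNS-01 challenge value for *.custom.azure.mobb.ninja
    az network dns record-set txt add-record \
      --resource-group $DNS_RESOURCEGROUP \
      --zone-name azure.mobb.ninja \
      --record-set-name "_acme-challenge.custom" \
      --value "<challenge value printed by certbot>"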
    
  3. Create a secret for the certificate

    oc create secret tls custom-tls \
      -n openshift-ingress \
      --cert=$SCRATCH_DIR/config/live/$DOMAIN/fullchain.pem \
      --key=$SCRATCH_DIR/config/live/$DOMAIN/privkey.pem
    
  4. Create an ingress controller

    cat <<EOF | oc apply -f -
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: custom
      namespace: openshift-ingress-operator
    spec:
      domain: $DOMAIN
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""
      routeSelector:
        matchLabels:
          type: custom
      defaultCertificate:
        name: custom-tls
      httpEmptyRequestsPolicy: Respond
      httpErrorCodePages:
        name: ""
      replicas: 3
    EOF
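
    NOTE: The routeSelector above means this ingress controller only admits routes that carry the label type: custom; the Route created in step 11 sets that label.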
    

    NOTE: By default the ingress controller is created with external scope, meaning the corresponding Azure Load Balancer will have a public frontend IP. If you wish to deploy a private (internally scoped) ingress controller, add the following lines to the spec:

    spec:
      ...
      endpointPublishingStrategy:
        loadBalancer:
          scope: Internal
        type: LoadBalancerService
      ...
    
  5. Wait a few moments, then get the EXTERNAL-IP of the new ingress controller

    oc get -n openshift-ingress svc router-custom
    

    For an externally (publicly) scoped ingress controller, the output should look like this:

     NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
     router-custom   LoadBalancer   172.30.90.84   20.120.48.78   80:32160/TCP,443:32511/TCP   49s
    

    For an internally (privately) scoped one:

     NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
     router-custom   LoadBalancer   172.30.55.36   10.0.2.4      80:30475/TCP,443:30249/TCP   10s
    
  6. Optionally, verify in the Azure portal or with the Azure CLI that the Azure Load Balancer backing the service has received a new frontend IP and two load-balancing rules, one for port 80 and one for port 443. For an internally scoped ingress controller, the changes appear on the load balancer with the -internal suffix.
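
    A minimal sketch using the az CLI (assuming $CLUSTER_RESOURCEGROUP is the cluster's managed resource group, which contains the load balancers; the load balancer name is a placeholder):

    # List the load balancers in the cluster's managed resource group
    az network lb list -g $CLUSTER_RESOURCEGROUP -o table

    # List the load-balancing rules; expect new rules for ports 80 and 443
    az network lb rule list -g $CLUSTER_RESOURCEGROUP --lb-name <lb-name> -o table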

  7. Create a wildcard DNS record for *.$DOMAIN pointing at the EXTERNAL-IP from the previous step
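
    If your DNS zone is hosted in Azure DNS, the record can be created with the az CLI as in the sketch below ($DNS_RESOURCEGROUP, the zone name, and $EXTERNAL_IP are assumptions; use the EXTERNAL-IP from the previous step):

    # Hypothetical example: wildcard record *.custom.azure.mobb.ninja -> new router IP
    az network dns record-set a add-record \
      --resource-group $DNS_RESOURCEGROUP \
      --zone-name azure.mobb.ninja \
      --record-set-name "*.custom" \
      --ipv4-address $EXTERNAL_IP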

  8. Test that the Ingress is working

    NOTE: For the internal ingress controller, make sure that the host you test from can reach the cluster's VNet/subnet and can resolve the custom DNS records.

    curl -s https://test.$DOMAIN | head
    
     <html>
       <head>
         <meta name="viewport" content="width=device-width, initial-scale=1">
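
    The HTML returned here should be the router's default "Application is not available" page, which is expected at this point: no route exists yet for test.$DOMAIN, but a successful TLS response confirms that DNS and the new ingress controller are working.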
    
  9. Create a new project to deploy an application to

    oc new-project demo
    
  10. Create a new application

    oc new-app --docker-image=docker.io/openshift/hello-openshift
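
    Optionally confirm that the pod and service were created (a quick check; the service's 8080-tcp port is referenced by the route in the next step):

    oc get pods,svc -n demo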
    
  11. Expose the application with a route on the custom domain

    cat << EOF | oc apply -f -
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        app: hello-openshift
        app.kubernetes.io/component: hello-openshift
        app.kubernetes.io/instance: hello-openshift
        type: custom
      name: hello-openshift-tls
    spec:
      host: hello.$DOMAIN
      port:
        targetPort: 8080-tcp
      tls:
        termination: edge
        insecureEdgeTerminationPolicy: Redirect
      to:
        kind: Service
        name: hello-openshift
    EOF
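
    Because the route carries the type: custom label, it matches the routeSelector of the custom ingress controller and should be admitted by it. You can confirm which routers admitted the route by inspecting status.ingress in its YAML:

    oc get route hello-openshift-tls -n demo -o yaml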
    
  12. Verify it works

    curl https://hello.custom.azure.mobb.ninja
    
     Hello OpenShift!