OpenShift Logging

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

A guide to shipping logs and metrics on OpenShift

Prerequisites

  1. OpenShift CLI (oc)
  2. Rights to install operators on the cluster
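As a quick sanity check before starting, you can confirm both prerequisites from the command line. This is a sketch: `oc auth can-i` on Subscription objects only approximates the full set of rights needed to install operators.

```shell
# Confirm the oc CLI is installed and report its version
oc version --client

# Approximate check for operator-install rights: can we create
# Subscription objects (the OLM resource used in the steps below)?
oc auth can-i create subscriptions.operators.coreos.com --all-namespaces
```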

Set up OpenShift Logging

This guide covers setting up centralized logging on OpenShift using the Elasticsearch OSS edition. It largely follows the process outlined in the OpenShift documentation. Retention and storage considerations are reviewed in Red Hat’s primary source documentation.

This setup is primarily concerned with simplicity and basic log searching. Consequently, it is not suitable for long-lived retention or for advanced visualization of logs. For more advanced observability setups, see Forwarding Logs to Third Party Systems.
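For reference, log forwarding is configured through a ClusterLogForwarder resource. A minimal sketch (the output name and the URL below are placeholders for your own destination) looks like:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: remote-elasticsearch        # placeholder name
      type: elasticsearch
      url: https://elasticsearch.example.com:9200   # placeholder URL
  pipelines:
    - name: forward-app-logs
      inputRefs:
        - application
      outputRefs:
        - remote-elasticsearch
```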

  1. Create a namespace for the OpenShift Elasticsearch Operator.

    This is necessary to avoid potential conflicts with community operators that could send similarly named metrics/logs into the stack.

    oc create -f - <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-operators-redhat
      annotations:
        openshift.io/node-selector: ""
      labels:
        openshift.io/cluster-monitoring: "true"
    EOF
    
  2. Create a namespace for the OpenShift Logging Operator

    oc create -f - <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-logging
      annotations:
        openshift.io/node-selector: ""
      labels:
        openshift.io/cluster-monitoring: "true"
    EOF
    
  3. Install the OpenShift Elasticsearch Operator by creating the following objects:

    1. Operator Group for OpenShift Elasticsearch Operator

      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-operators-redhat
        namespace: openshift-operators-redhat
      spec: {}
      EOF
      
    2. Subscription object to subscribe a Namespace to the OpenShift Elasticsearch Operator

      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: "elasticsearch-operator"
        namespace: "openshift-operators-redhat"
      spec:
        channel: "stable"
        installPlanApproval: "Automatic"
        source: "redhat-operators"
        sourceNamespace: "openshift-marketplace"
        name: "elasticsearch-operator"
      EOF
      
    3. Verify Operator Installation

      oc get csv --all-namespaces
      

      Example Output

      NAMESPACE                                               NAME                                            DISPLAY                  VERSION               REPLACES   PHASE
      default                                                 elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      kube-node-lease                                         elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      kube-public                                             elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      kube-system                                             elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      openshift-apiserver-operator                            elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      openshift-apiserver                                     elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      openshift-authentication-operator                       elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      openshift-authentication                                elasticsearch-operator.5.0.0-202007012112.p0    OpenShift Elasticsearch Operator   5.0.0-202007012112.p0               Succeeded
      ...
      
  4. Install the Red Hat OpenShift Logging Operator by creating the following objects:

    1. The Cluster Logging OperatorGroup

      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: cluster-logging
        namespace: openshift-logging
      spec:
        targetNamespaces:
        - openshift-logging
      EOF
      
    2. Subscription Object to subscribe a Namespace to the Red Hat OpenShift Logging Operator

      oc create -f - <<EOF
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: cluster-logging
        namespace: openshift-logging
      spec:
        channel: "stable"
        name: cluster-logging
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
      
    3. Verify the Operator installation; the PHASE should be Succeeded

    oc get csv -n openshift-logging
    

    Example Output

    NAME                              DISPLAY                            VERSION    REPLACES   PHASE
    cluster-logging.5.0.5-11          Red Hat OpenShift Logging          5.0.5-11              Succeeded
    elasticsearch-operator.5.0.5-11   OpenShift Elasticsearch Operator   5.0.5-11              Succeeded
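If you prefer a scripted check over scanning the table, a jsonpath query (a sketch) prints each CSV name together with its phase:

```shell
# Print each ClusterServiceVersion with its install phase; both rows
# should report Succeeded before continuing.
oc get csv -n openshift-logging \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
```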
    
  5. Create an OpenShift Logging instance:

    NOTE: You will need to adjust the storageClassName below for the platform on which you’re running OpenShift. The managed-premium class shown is for Azure Red Hat OpenShift (ARO). You can verify your available storage classes with oc get storageclasses.

    oc create -f - <<EOF
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        retentionPolicy:
          application:
            maxAge: 1d
          infra:
            maxAge: 7d
          audit:
            maxAge: 7d
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName: "managed-premium"
            size: 200G
          resources:
            requests:
              memory: "8Gi"
          proxy:
            resources:
              limits:
                memory: 256Mi
              requests:
                memory: 256Mi
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          replicas: 1
      curation:
        type: "curator"
        curator:
          schedule: "30 3 * * *"
      collection:
        logs:
          type: "fluentd"
          fluentd: {}
    EOF
    
  6. It will take a few minutes for everything to start up. You can monitor this progress by watching the pods.

    watch oc get pods -n openshift-logging
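Alternatively, you can block until the Elasticsearch pods report Ready. This is a sketch assuming the component=elasticsearch label the operator applies by default:

```shell
# Wait up to 10 minutes for the Elasticsearch pods to become Ready
oc wait --for=condition=Ready pod \
  -l component=elasticsearch \
  -n openshift-logging --timeout=600s
```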
    
  7. Your logging instances are now configured and receiving logs. To view them, you will need to log in to your Kibana instance and create the appropriate index patterns. For more information on index patterns, see the Kibana documentation.

    NOTE: The following restrictions and notes apply to index patterns:

    • All users can view the app-* logs for namespaces to which they have access
    • Only users with cluster-admin privileges can view the infra-* and audit-* logs
    • For best accuracy, use the @timestamp field to determine chronology
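To confirm from the command line that indices are actually being created before you define index patterns in Kibana, you can run the indices helper inside an Elasticsearch pod. This is a sketch, again assuming the operator's default component=elasticsearch label:

```shell
# Pick one Elasticsearch pod and list its indices; app-*, infra-*,
# and audit-* entries should appear once logs are flowing.
POD=$(oc get pods -n openshift-logging -l component=elasticsearch \
  -o jsonpath='{.items[0].metadata.name}')
oc exec -n openshift-logging -c elasticsearch "$POD" -- indices
```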
