Cloud Experts Documentation

Installing the Open Data Hub Operator

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

The Open Data Hub operator is available for deployment from the OpenShift OperatorHub as a Community Operator. You can install it from the OpenShift web console:

  1. From the OpenShift web console, log in as a user with cluster-admin privileges. For a developer installation from try.openshift.com (including AWS and CRC), the kubeadmin user will work.
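If you prefer to verify later steps from the CLI, you can also log in with the `oc` client; the API URL and credentials below are placeholders for your own cluster:

```shell
# Log in to the cluster as a cluster-admin user.
# Replace the API URL and password with your cluster's values.
oc login https://api.cluster.example.com:6443 -u kubeadmin -p <password>

# Confirm the logged-in user has cluster-admin rights.
oc auth can-i '*' '*' --all-namespaces
```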

  2. Create a new project named 'jph-demo' for your installation of Open Data Hub.
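The same project can be created from the CLI, assuming the `oc` client is already logged in to the cluster:

```shell
# Create the project that will hold the Open Data Hub deployment.
oc new-project jph-demo
```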

  3. Find Open Data Hub in the OperatorHub catalog.

    • Select the new namespace if not already selected.
    • Under Operators, select OperatorHub for a list of operators available for deployment.
    • Filter for Open Data Hub or look under Big Data for the icon for Open Data Hub.
  1. Click the Install button and follow the installation instructions to install the Open Data Hub operator (skip this step if the operator is already installed).

  2. The subscription creation view offers a few options, including the Update Channel; keep the rolling channel selected.

  3. To view the status of the Open Data Hub operator installation, find the Open Data Hub Operator under Operators -> Installed Operators (inside the project you created earlier). Once the STATUS field displays InstallSucceeded, you can proceed to create a new Open Data Hub deployment.

  4. Find the Open Data Hub Operator under Installed Operators (inside the project you created earlier).

  5. Click on the Open Data Hub Operator to bring up the details for the version that is currently installed.

  6. Click Create Instance to create a new deployment.
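The operator installation status can also be watched from the CLI; the exact CSV name varies by operator version:

```shell
# List ClusterServiceVersions in the project; the PHASE column
# should eventually show Succeeded for the Open Data Hub operator.
oc get csv -n jph-demo

# Watch the operator pod come up.
oc get pods -n jph-demo -w
```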

  1. Select the YAML View radio button to be presented with a YAML file for customizing your deployment. Most of the components available in ODH have been removed from this example; only the components required for JupyterHub remain.
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: opendatahub
  namespace: jph-demo
spec:
  applications:
    - kustomizeConfig:
        repoRef:
          name: manifests
          path: odh-common
      name: odh-common
    - kustomizeConfig:
        parameters:
          - name: s3_endpoint_url
            value: s3.odh.com
        repoRef:
          name: manifests
          path: jupyterhub/jupyterhub
      name: jupyterhub
    - kustomizeConfig:
        overlays:
          - additional
        repoRef:
          name: manifests
          path: jupyterhub/notebook-images
      name: notebook-images
  repos:
    - name: kf-manifests
      uri: >-
        https://github.com/opendatahub-io/manifests/tarball/v1.4.0-rc.2-openshift
    - name: manifests
      uri: 'https://github.com/opendatahub-io/odh-manifests/tarball/v1.2'
  1. Update the spec of the resource to match the above and click Create. If you accepted the default name, this will trigger the creation of an Open Data Hub deployment named opendatahub with JupyterHub.
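Equivalently, the KfDef manifest can be saved to a file and applied from the CLI (this assumes the Open Data Hub operator has already registered the KfDef custom resource):

```shell
# Save the KfDef manifest shown above as kfdef.yaml, then apply it.
oc apply -f kfdef.yaml -n jph-demo

# Confirm the KfDef resource was created.
oc get kfdef -n jph-demo
```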

  2. Verify the installation by viewing the project workload: JupyterHub and traefik-proxy should be running.
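The same check from the CLI looks like the following; exact pod names vary by deployment:

```shell
# JupyterHub and traefik-proxy pods should reach the Running state.
oc get pods -n jph-demo
```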

  3. Click Routes under Networking to find the URL created for launching JupyterHub.
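The route can also be retrieved from the CLI; look for the JupyterHub entry in the HOST/PORT column:

```shell
# List routes in the project and note the JupyterHub host.
oc get routes -n jph-demo
```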

  4. Open JupyterHub in a web browser.

  5. Configure the GPU and start the server.

  6. Check for the GPU in a notebook.
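One simple check, run from a notebook terminal or a notebook cell prefixed with !, is to ask the NVIDIA driver directly (this assumes the notebook image includes the NVIDIA utilities):

```shell
# Show the GPUs visible to this notebook pod.
nvidia-smi -L
```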

Reference: see the blog post Using the NVIDIA GPU Operator to Run Distributed TensorFlow 2.4 GPU Benchmarks in OpenShift 4.

