
Creating an OSD Cluster in GCP with Existing VPCs

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Tip: The official documentation for installing an OSD cluster in GCP can be found here.

To deploy an OSD cluster in GCP using an existing Virtual Private Cloud (VPC), you must create several prerequisite resources before starting the OpenShift Dedicated installation through OCM.

Prerequisites

NOTE: The Google Cloud Shell can also be used; it has the gcloud CLI, among other tools, preinstalled.
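If you are working from a local terminal instead of the Cloud Shell, a quick sanity check (a minimal sketch; output varies by gcloud version) confirms the CLI is installed and authenticated:

    # Confirm the gcloud CLI is installed and which account is active
    gcloud version
    gcloud auth list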

Generate GCP VPC and Subnets

This diagram shows the GCP infrastructure prerequisites needed for the OSD installation:

GCP Prereqs

To deploy the GCP VPC and subnets, among other prerequisites for installing OSD in GCP using preexisting VPCs, you have two options:

  • Option 1 - GCloud CLI
  • Option 2 - Terraform Automation

Please select one of these two options and proceed with the OSD install steps.

Option 1 - Generate OSD VPC and Subnets using GCloud CLI

As mentioned before, to deploy OSD in GCP using an existing GCP VPC, you must create a GCP VPC and two subnets beforehand (one for the master nodes and another for the worker nodes).

  1. Login and configure the proper GCP project where the OSD will be deployed:

    export PROJECT_NAME=<google project name>
    gcloud auth list
    gcloud config set project $PROJECT_NAME
    gcloud config list project
    
  2. Export the names of the VPC and subnets:

    export REGION=<region name>
    export OSD_VPC=<vpc name>
    export MASTER_SUBNET=<master subnet name>
    export WORKER_SUBNET=<worker subnet name>
    
  3. Create a custom mode VPC network:

    gcloud compute networks create $OSD_VPC --subnet-mode=custom
    gcloud compute networks describe $OSD_VPC
    

NOTE: The VPC network must be created in custom mode, because auto mode automatically generates subnets with IPv4 ranges drawn from a predetermined set of ranges.
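Because the VPC was created in custom mode, it should contain no subnets yet. You can confirm this before creating them (a quick check; the list should come back empty at this point):

    # List subnets attached to the new VPC; a custom mode VPC starts with none
    gcloud compute networks subnets list --filter="network:$OSD_VPC"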

  4. This example uses the standard configuration for these two subnets:

    master-subnet - CIDR 10.0.0.0/17   - Gateway 10.0.0.1
    worker-subnet - CIDR 10.0.128.0/17 - Gateway 10.0.128.1
    
  5. Create the GCP subnets for the masters and workers within the previously created GCP VPC network:

    gcloud compute networks subnets create $MASTER_SUBNET \
    --network=$OSD_VPC --range=10.0.0.0/17 --region=$REGION
    
    gcloud compute networks subnets create $WORKER_SUBNET \
    --network=$OSD_VPC --range=10.0.128.0/17 --region=$REGION
    
    GCP VPC and Subnets
  6. Once the VPC and the two subnets are in place, you need to create a GCP Cloud Router:

    export OSD_ROUTER=<router name>
    
    gcloud compute routers create $OSD_ROUTER \
    --project=$PROJECT_NAME --network=$OSD_VPC --region=$REGION
    
    GCP Routers
  7. Then, deploy two GCP Cloud NATs and attach them to the GCP Cloud Router:

    • Generate the GCP Cloud NAT for the master subnet:
    export NAT_MASTER=<master nat name>
    
    gcloud compute routers nats create $NAT_MASTER \
    --region=$REGION                               \
    --router=$OSD_ROUTER                           \
    --auto-allocate-nat-external-ips               \
    --nat-custom-subnet-ip-ranges=$MASTER_SUBNET
    
    GCP Nat Master
    • Generate the GCP Cloud NAT for the worker subnet:
    export NAT_WORKER=<worker nat name>
    
    gcloud compute routers nats create $NAT_WORKER \
       --region=$REGION                           \
       --router=$OSD_ROUTER                       \
       --auto-allocate-nat-external-ips           \
       --nat-custom-subnet-ip-ranges=$WORKER_SUBNET
    
    GCP Nat Worker
  8. You can now verify that the Cloud NAT gateways are attached to the Cloud Router:

    GCP Nat Master
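You can also verify the same from the CLI. A short sketch (using the variables exported above) that lists everything Option 1 created:

    # Subnets attached to the VPC (expect the master and worker subnets)
    gcloud compute networks subnets list --filter="network:$OSD_VPC"
    
    # The Cloud Router, and the NAT gateways attached to it
    gcloud compute routers describe $OSD_ROUTER --region=$REGION
    gcloud compute routers nats list --router=$OSD_ROUTER --region=$REGION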

Option 2 - Deploy OSD VPC and Subnets using Terraform

You can also use Terraform automation code to deploy all of the GCP infrastructure required to install OSD into preexisting VPCs.

  • Clone the tf-osd-gcp repository:
git clone https://github.com/rh-mobb/tf-osd-gcp.git
cd tf-osd-gcp
  • Copy and modify the tfvars file to customize it for your scenario:
cp -pr terraform.tfvars.example terraform.tfvars
  • Deploy the network infrastructure in GCP needed to deploy the OSD cluster:
make all
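The Makefile target wraps the standard Terraform workflow. If you prefer to run Terraform directly (an assumption about what make all does; check the repository's Makefile for the exact steps), the equivalent is roughly:

    # Initialize providers and modules, preview the changes, then apply them
    terraform init
    terraform plan -out=tf.plan
    terraform apply tf.plan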

Install the OSD cluster using pre-existing VPCs

These steps are based on the official OSD installation documentation.

  1. Log in to OpenShift Cluster Manager and click Create cluster.

  2. In the Cloud tab, click Create cluster in the Red Hat OpenShift Dedicated row.

  3. Under Billing model, configure the subscription type and infrastructure type: OSD Install

  4. Select Run on Google Cloud Platform.

  5. Click Prerequisites to review the prerequisites for installing OpenShift Dedicated on GCP with CCS.

  6. Provide your GCP service account private key in JSON format. You can either click Browse to locate and attach a JSON file or add the details in the Service account JSON field. OSD Install

  7. Validate your cloud provider account and then click Next. On the Cluster details page, provide a name for your cluster and specify the cluster details: OSD Install

NOTE: The region used for the installation must be the same region where the VPC and subnets were deployed in the earlier steps.

  8. On the Default machine pool page, select a Compute node instance type and a Compute node count: OSD Install

  9. In the Cluster privacy section, select Public endpoints and application routes for your cluster.

  10. Select Install into an existing VPC to install the cluster in an existing GCP Virtual Private Cloud (VPC): OSD Install

  11. Provide the Virtual Private Cloud (VPC) subnet settings that you deployed as prerequisites in the previous section: OSD Install

  12. In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided: OSD Install

  13. On the Cluster update strategy page, configure your update preferences.

  14. Review the summary of your selections and click Create cluster to start the cluster installation. Check that Install into Existing VPC is enabled and that the VPC and subnets are properly selected and defined: OSD Install
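If you have the ocm CLI installed and logged in, you can also follow the installation progress from the command line (optional; the OCM console shows the same status):

    # The STATE column moves from "installing" to "ready" when the cluster is up
    ocm list clusters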

Cleanup

Deleting an OSD cluster consists of two parts:

  1. Deleting the OSD cluster, which can be done through the OCM console as described in the official OSD docs.

  2. Deleting the GCP infrastructure resources (VPC, subnets, Cloud NAT gateways, Cloud Router). Depending on which option you selected, perform one of the following:

  • Option 1: Delete the GCP resources using the GCloud CLI:

    gcloud compute routers nats delete $NAT_WORKER \
    --region=$REGION --router=$OSD_ROUTER --quiet
    
    gcloud compute routers nats delete $NAT_MASTER \
    --region=$REGION --router=$OSD_ROUTER --quiet
    
    gcloud compute routers delete $OSD_ROUTER --region=$REGION --quiet
    
    gcloud compute networks subnets delete $MASTER_SUBNET --region=$REGION --quiet
    gcloud compute networks subnets delete $WORKER_SUBNET --region=$REGION --quiet
    
    gcloud compute networks delete $OSD_VPC --quiet
    
  • Option 2: Delete the GCP resources using Terraform:

    make destroy
    
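Whichever option you used, you can confirm the network resources are gone by listing what remains in the project (a final check; neither list should still show the OSD entries):

    # Verify the VPC and Cloud Router have been removed
    gcloud compute networks list
    gcloud compute routers list --regions=$REGION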
