OpenG2P K8s Cluster Setup
OpenG2P modules and components are recommended to be run on Kubernetes (K8s), because of the ease of use, manageability, and security features that K8s provides.
This document provides instructions to set up a K8s Cluster on which OpenG2P Modules and other components can be installed.
The following tools are installed on all the nodes and the client machine: wget, curl, kubectl, istioctl, helm, and jq.
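As a quick sanity check (a small sketch, not part of the official setup), you can verify that all of the required tools are available on a machine before proceeding:

```shell
# Check that each required CLI tool is on PATH (prints one line per tool)
for tool in wget curl kubectl istioctl helm jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```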
Set up firewall rules on each node according to the following table. The exact method to set up the firewall rules varies from cloud to cloud and for on-prem setups. (For example, on AWS, EC2 security groups can be used; for an on-prem cluster, ufw can be used.)
| Protocol | Port(s) | Nodes | Description |
| -------- | ------- | ----- | ----------- |
| TCP | 22 | | SSH |
| TCP | 80 | | HTTP |
| TCP | 443 | | HTTPS |
| TCP | 5432 | | Postgres port |
| TCP | 9345 | RKE2 agent nodes | Kubernetes API |
| TCP | 6443 | RKE2 agent nodes | Kubernetes API |
| UDP | 8472 | RKE2 server and agent nodes | Required only for Flannel VXLAN |
| TCP | 10250 | RKE2 server and agent nodes | kubelet |
| TCP | 2379 | RKE2 server nodes | etcd client port |
| TCP | 2380 | RKE2 server nodes | etcd peer port |
| TCP | 30000:32767 | RKE2 server and agent nodes | NodePort port range |
For example, this is how you can use ufw to set up the firewall on each cluster node:
SSH into the node, and change to the superuser.
Run the corresponding ufw allow command for each rule in the above table.
Enable ufw.
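The per-rule commands can be generated directly from the table above; this sketch prints one ufw command per rule (run the printed commands as the superuser on each node, then run ufw enable):

```shell
# Print a ufw command for every rule in the firewall table above.
# Run the printed commands as root on each node, then run: ufw enable
while read -r proto port; do
  echo "ufw allow proto $proto from any to any port $port"
done <<'EOF'
tcp 22
tcp 80
tcp 443
tcp 5432
tcp 9345
tcp 6443
udp 8472
tcp 10250
tcp 2379
tcp 2380
tcp 30000:32767
EOF
```

Note that ufw requires the protocol to be specified when a port range (such as 30000:32767) is used.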
Decide the number of K8s control-plane nodes (server nodes) and worker nodes (agent nodes).
Choose an odd number of control-plane nodes, since etcd requires a majority (quorum) of server nodes to remain available. For example, for a 3-node K8s cluster, choose 1 control-plane node and 2 worker nodes. For a 7-node K8s cluster, choose 3 control-plane nodes and 4 worker nodes.
The following setup has to be done on each node of the cluster:
SSH into the node.
Create the rke2 config directory (/etc/rancher/rke2).
Create a config.yaml file in the above directory, using one of the following config file templates.
Edit the above config.yaml file with the appropriate names, IPs, and tokens.
Run this to download rke2.
Run this to start rke2:
On the control-plane node, run:
On the worker node, run:
To export KUBECONFIG, run (only on control-plane nodes):
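Since the command snippets for these steps are referenced above, here is a hedged sketch of the whole per-node sequence, following the standard RKE2 quick-start pattern (the token and hostname below are placeholders, and the exact config templates come from the deployment repository; verify details against the RKE2 documentation for your version):

```shell
# 1. Create the rke2 config directory and a minimal config.yaml
#    (run as root; shown here for the FIRST server node — placeholder values).
RKE2_DIR="${RKE2_DIR:-/etc/rancher/rke2}"
mkdir -p "$RKE2_DIR"
cat > "$RKE2_DIR/config.yaml" <<'EOF'
token: my-shared-cluster-token   # placeholder: shared cluster secret
tls-san:
  - openg2p.example.org          # placeholder: cluster hostname or IP
EOF
# Subsequent server nodes and agent nodes additionally point at the first
# server via a line like:  server: https://<first-server-ip>:9345

# 2. Download and install rke2 (set INSTALL_RKE2_TYPE="agent" for worker nodes):
curl -sfL https://get.rke2.io | sh -

# 3. Start rke2:
systemctl enable --now rke2-server    # on a control-plane (server) node
# systemctl enable --now rke2-agent   # on a worker (agent) node

# 4. On control-plane nodes only, export KUBECONFIG to use kubectl:
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH="$PATH:/var/lib/rancher/rke2/bin"
```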
Navigate to Cluster Management section in Rancher
Click on Import Existing cluster, and follow the steps to import the new OpenG2P cluster.
After importing, download the kubeconfig for the new cluster from Rancher (top right on the main page) to access the cluster through kubectl from the user's (client) machine, without SSH.
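Once downloaded, the kubeconfig can be used from the client machine like this (the file path below is a placeholder for wherever you saved the downloaded file):

```shell
# Point kubectl at the kubeconfig downloaded from Rancher (placeholder path)
export KUBECONFIG="$HOME/Downloads/openg2p-cluster.yaml"
echo "Using kubeconfig: $KUBECONFIG"
# Verify access (requires kubectl and network reachability to the cluster):
#   kubectl get nodes
```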
This installation only applies if Longhorn is used as storage. This may be skipped if you are using NFS.
The following setup can be done from the client machine. This installs the Istio Operator, Istio Service Mesh, and Istio Ingressgateway components.
If an external load balancer is being used, then use the istio-operator-external-lb.yaml file.
Configure the operator.yaml with any further configuration needed.
Gather the wildcard TLS certificate and key, and run:
Create the Istio gateway for all hosts using this command:
If using an external load balancer / external TLS termination, use the istio-gateway-no-tls.yaml file.
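One way to carry out these steps is sketched below. The file names come from the section above; the namespace and secret name are assumptions, so adjust them to your deployment repository's conventions:

```shell
# Install Istio components from the configured operator file
# (or istio-operator-external-lb.yaml when using an external load balancer):
istioctl install -f istio-operator.yaml

# Create the wildcard TLS secret used by the ingress gateway
# (secret name "wildcard-tls" and the cert/key paths are placeholders):
kubectl -n istio-system create secret tls wildcard-tls \
  --cert=/path/to/fullchain.pem --key=/path/to/privkey.pem

# Create the Istio gateway for all hosts
# (or istio-gateway-no-tls.yaml for external TLS termination):
kubectl apply -f istio-gateway.yaml
```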
Configure the config.yaml with relevant values.
Run this to download rke2.
Run this to start the rke2 node:
Additional Reference:
If you are using AWS only to get EC2 nodes, and you want to set up the K8s cluster manually, move to the .
The following section uses to set up the K8s cluster.
For the first control-plane node, use
For subsequent control-plane nodes, use
For worker nodes, use
Additional Reference:
This section assumes a Rancher server has already been set up and is operational; set one up first in case not already done.
This section assumes an NFS server has already been set up and operational, meeting the requirements as given in . This NFS server is used to provide persistent storage volumes to this K8s cluster.
From the directory, configure the istio-operator.yaml, and run:
From the directory, take either the rke2-server.conf.subsequent.template or the rke2-agent.conf.template, based on whether the new node is a control-plane node or a worker node. Copy this file to /etc/rancher/rke2/config.yaml on the new node.