Infrastructure Setup
This document describes how to set up the Kubernetes infrastructure for OpenG2P.
Base infrastructure setup
To set up the base infrastructure, log in to the machine and install the following. Make sure to follow each verification step to ensure that everything is installed correctly and the setup is progressing smoothly.
1. Tools setup
Install the following tools. After installation, verify the version of each tool to confirm that they have been installed correctly.
Tools: wget, curl, kubectl, istioctl, helm, jq
🔍 Verification Checkpoint: Run the following commands and verify that each returns version information without errors.
wget --version
curl --version
kubectl version --client
istioctl version
helm version
jq --version
2. Firewall setup
Follow the link below to set up the firewall rules required for the deployment: 🔒 Set up Firewall rules
🔍 Verification Checkpoint:
If you are using on-premises or self-managed native server nodes, run iptables -L or ufw status to ensure the rules are active. If you are deploying on AWS cloud infrastructure, verify or configure the necessary firewall rules within the Security Groups associated with your instances.
3. Kubernetes cluster installation
Follow the steps below to set up the Kubernetes cluster (RKE2 server) as the root user.
Create the rke2 config directory:
mkdir -p /etc/rancher/rke2
Create a config.yaml file in the above directory, using the following config file template: rke2-server.conf.primary.template. The token can be any arbitrary string.
Edit the above config.yaml file with the appropriate names, IPs, and tokens.
Run the following commands to set the RKE2 version, download it, and start the RKE2 server:
export INSTALL_RKE2_VERSION="v1.28.9+rke2r1"
curl -sfL https://get.rke2.io | sh -
systemctl enable rke2-server
systemctl start rke2-server
Export KUBECONFIG:
echo -e 'export PATH="$PATH:/var/lib/rancher/rke2/bin"\nexport KUBECONFIG="/etc/rancher/rke2/rke2.yaml"' >> ~/.bashrc
source ~/.bashrc
kubectl get nodes
Download the kubeconfig file rke2.yaml (found at /etc/rancher/rke2/rke2.yaml) and store it securely. (This is important!)
🔍 Verification Checkpoint: Check the status of the rke2-server service as shown in the screenshot below.

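For example, a quick check from the server (assuming the default rke2-server systemd unit):
# The service should be active (running)
systemctl status rke2-server
# The node should report Ready status
kubectl get nodes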
4. Wireguard installation
Install the WireGuard bastion server for secure VPN access:
Clone the openg2p-deployment repo and navigate to the kubernetes/wireguard directory.
Run this command as the root user to install the WireGuard server/channel:
WG_MODE=k8s ./wg.sh <name for this wireguard server> <client ips subnet mask> <port> <no of peers> <subnet mask of the cluster nodes & lbs>
For example:
WG_MODE=k8s ./wg.sh wireguard_app_users 10.15.0.0/16 51820 254 172.16.0.0/24
Check the logs of the servers and wait for all servers to finish startup. Example:
kubectl -n wireguard-system logs -f wireguard-app-users
Once it finishes, navigate to /etc/wireguard-app-users. You will find multiple peer configuration files; cd into the peer1 folder and copy peer1.conf to your notepad.
Follow the link below to set up WireGuard on your system: Install WireGuard Client on Desktop
🔍 Verification Checkpoint: Make sure the WireGuard service is running on the k8s cluster and the WireGuard setup is completed on your machine. On the k8s cluster:

On your machine:

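As a rough check, the commands below verify both sides; the Linux wg tool is an assumption here (on Windows/macOS, check the tunnel status in the WireGuard client UI instead):
# On the k8s cluster:
kubectl -n wireguard-system get pods
# On your local machine (Linux only):
sudo wg show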
After installing WireGuard on the cluster and configuring it on your local machine, you can install and configure kubectl using the RKE2 kubeconfig file generated during the Kubernetes cluster setup on the server. This allows you to access the cluster from your local command line.
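A minimal sketch of that local setup, assuming you saved the rke2.yaml file downloaded earlier (by default its server field points at https://127.0.0.1:6443, which must be changed to the node's internal IP reachable over WireGuard):
# On your local machine:
mkdir -p ~/.kube
cp rke2.yaml ~/.kube/config
# Edit ~/.kube/config: replace 127.0.0.1 with the node's internal IP
kubectl get nodes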
5. NFS Server installation
Install the NFS server to provide persistent storage volumes to the Kubernetes cluster:
From the openg2p-deployment repository, under the openg2p-deployment/nfs-server directory, install the NFS server by running the following command as the root user:
./install-nfs-server.sh
For every sandbox/namespace, create a new folder in the /srv/nfs folder on the server node. Suggested folder structure: /srv/nfs/<cluster name>. Example:
sudo mkdir /srv/nfs/global
Run this command to provide full access to the nfs folder:
sudo chmod -R 777 /srv/nfs
🔍 Verification Checkpoint: Make sure the NFS server is running and the setup is completed on the server node.

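For example, on Ubuntu/Debian (assuming the install script sets up the standard nfs-kernel-server service):
systemctl status nfs-kernel-server
# The output should list the exported NFS directories
showmount -e localhost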
Install the Kubernetes NFS CSI driver and the NFS client provisioner on the cluster.
From the openg2p-deployment repo kubernetes/nfs-client directory, run the following. (Make sure to replace the <Node Internal IP> and <cluster name> parameters appropriately.)
NFS_SERVER=<Node Internal IP> \
NFS_PATH=/srv/nfs/<cluster_name> \
./install-nfs-csi-driver.sh
🔍 Verification Checkpoint: Make sure the NFS CSI driver and client provisioner are running and the setup is completed on the server node.

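For example (the exact pod names and namespace depend on the install script, so treat this as a rough check):
kubectl get pods -A | grep -i nfs
kubectl get storageclass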
6. Istio installation
To set up Istio, run the commands below from the kubernetes/istio directory to install the Istio Operator, Istio Service Mesh, and Istio Ingress Gateway components. Wait for the istiod and ingressgateway pods to start in the istio-system namespace.
istioctl install -f istio-operator.yaml
kubectl apply -f istio-ef-spdy-upgrade.yaml
🔍 Verification Checkpoint: Check whether all the Istio pods have come up.

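For example:
# All pods, including istiod and the ingress gateway, should be Running
kubectl -n istio-system get pods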
7. Setting up nginx load balancer
Follow the document here to set up nginx.
| Purpose | Domain Example | Description |
| --- | --- | --- |
| Rancher UI | rancher.example.com | Used to access the Rancher web interface |
| Keycloak Authentication | keycloak.example.com | Used for authentication via Keycloak |
| Sandbox Environment | sandbox.example.com | Main entry point for the sandbox environment |
| Wildcard for Sandbox | *.sandbox.example.com | Covers subdomains like app.sandbox.example.com, etc. |
🔍 Verification Checkpoint: After creating the certificates, verify that they are present in the /etc/letsencrypt/live/ directory.

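For example, assuming the certificates were issued with certbot:
sudo ls /etc/letsencrypt/live/
sudo certbot certificates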
8. Mapping domains to cluster IP
Set up DNS records for the Rancher, Keycloak, and OpenG2P-Sandbox hostnames so that they resolve to the private IP address of the node where the services are exposed, using a public DNS provider (e.g., AWS Route 53, Cloudflare, GoDaddy) or a provider of your choice.
Create A records (or CNAMEs, if appropriate) for the fully qualified domain names (FQDNs) you plan to use for Rancher, Keycloak, and OpenG2P-Sandbox (e.g., rancher.example.com, keycloak.example.com, dev.example.com, and *.dev.example.com).
Point these records to the internal IP address of the node.
🔍 Verification Checkpoint: The screenshot below is an example of DNS mapping using AWS Route 53. You can use any DNS provider as per your requirements, and the domain mapping should be similar to what is shown in the screenshot.

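For example, you can confirm resolution from a machine connected to the WireGuard VPN (replace the hostname with your own):
# The returned address should be the internal IP of the node
nslookup rancher.example.com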
9. Rancher installation
Install Rancher from the kubernetes/rancher directory (edit the hostname):
RANCHER_HOSTNAME=rancher.example.com \
NS=cattle-system \
./install.sh --set replicas=1 --version 2.12.3
Log in to Rancher using the above hostname and bootstrap the admin user according to the instructions. After successfully logging in to Rancher as admin, save the new admin user password in the local cluster, in the cattle-system namespace, under rancher-secret, with the key adminPassword.
🔍 Verification Checkpoint: Verify that all Rancher pods are running properly in the cattle-system namespace, and Rancher is accessible from your browser.


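For example:
kubectl -n cattle-system get pods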
10. Keycloak installation
Install Keycloak from the kubernetes/keycloak directory (edit the hostname):
KEYCLOAK_HOSTNAME=keycloak.example.com \
NS=keycloak-system \
./install.sh --set replicaCount=1
Log in to Keycloak using the admin credentials from the Keycloak namespace secrets in the Rancher UI.
🔍 Verification Checkpoint: Verify that the Keycloak pods in the keycloak-system namespace are running and that Keycloak is accessible in your browser.


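For example:
kubectl -n keycloak-system get pods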
11. Integrating Rancher with Keycloak
Integrating Rancher with Keycloak enables centralized authentication and user management using Keycloak as the IdP.
🔍 Verification Checkpoint: Once you attempt to log in using rancher.hostname.org, you will be redirected to authenticate via Keycloak. Log in using your Keycloak credentials. In Rancher, your user status should appear as "Active," as shown in the screenshot.

This completes the base infrastructure setup for OpenG2P. You can now begin installing the OpenG2P modules by following the steps below.
12. Creating a project and namespace
Continue to use the same cluster (local cluster) for installing the OpenG2P modules.
In Rancher, create a project and namespace, on which the OpenG2P modules will be installed.
In Rancher, make sure that Istio auto-injection for the dev namespace is disabled.
🔍 Verification Checkpoint: Verify that your project name and namespace appear under the project/namespace section.

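You can also confirm this from the command line (assuming the namespace is named dev); the labels should not include istio-injection=enabled:
kubectl get namespace dev --show-labels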
13. Istio gateway setup
Set up an Istio gateway in the dev namespace.
Provide your hostname and run this to define the variables:
export NS=dev
export HOSTNAME='dev.your.org'
export WILDCARD_HOSTNAME='*.dev.your.org'
Go to the kubernetes/istio directory and run this to apply the gateway:
envsubst < istio-gateway.yaml | kubectl apply -f -
🔍 Verification Checkpoint: Once created, the gateway will appear in Rancher UI under Istio > Gateway in the dev namespace.

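You can also confirm the gateway from the command line:
kubectl -n dev get gateway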
14. Cluster Monitoring installation
Install Prometheus and enable cluster monitoring directly from the Rancher UI.
🔍 Verification Checkpoint: Once monitoring is installed in Rancher, navigate to the Monitoring section where you'll see options for Alertmanager and Grafana. You can click on these to access their respective dashboards.

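You can also verify the pods from the command line (Rancher Monitoring installs into the cattle-monitoring-system namespace):
kubectl -n cattle-monitoring-system get pods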
15. Cluster Logging installation
Install the logging stack; Fluentd is used to collect and parse logs generated by applications within the Kubernetes cluster. Run the commands below to install logging:
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-logging-crd rancher-charts/rancher-logging-crd --version 102.0.0+up3.17.10 --namespace cattle-logging-system --create-namespace
helm install rancher-logging rancher-charts/rancher-logging --version 102.0.0+up3.17.10 --namespace cattle-logging-system --set global.cattle.psp.enabled=false --set psp.enabled=false
🔍 Verification Checkpoint: Once logging is installed, verify that all pods in the cattle-logging-system namespace are up and running, and ensure that logs are being collected for each service.
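For example:
kubectl -n cattle-logging-system get pods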
This completes the OpenG2P cluster setup, and you can now proceed with installing the OpenG2P modules.