OpenG2P In a Box
Getting started with OpenG2P
This document describes a deployment model in which the infrastructure and components required by OpenG2P modules are set up on a single node/VM/machine. It helps you get started with OpenG2P and experience the functionality without having to meet the full resource requirements of a production-grade setup. The deployment is a compact version of the V4 architecture; the essence of V4 is preserved so that upgrading the infrastructure is easier when more hardware resources become available.
Deployment architecture

Do NOT use this deployment model for production/pilots.
Prerequisites
Hardware requirements
OpenG2P in-a-box minimally requires access to a machine (virtual machine) with the following configuration. Make sure this machine is available with the operating system installed as mentioned below. You must have "root" access to the machine:
16vCPU / 64 GB RAM / 256 GB storage
Operating System: Ubuntu 22.04
DNS Requirements for Certificate Generation
A valid domain with DNS management access is required. You may use AWS Route53 or any other DNS provider. The DNS access must allow you to:
- Create and delete TXT records (for the DNS ACME challenge).
- Manage A records (for pointing domains to the IP/Ingress).
- Create CNAME records (if needed for subdomain routing).
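Once these records exist, you can spot-check them from any machine with `dig` (a sketch; the hostnames below are placeholders for your own domain):

```shell
# Check that an A record resolves to the expected IP (placeholder hostname):
dig +short A rancher.example.com

# During certificate generation, check that the ACME TXT record has propagated:
dig +short TXT _acme-challenge.rancher.example.com
```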
Base infrastructure setup
To set up the base infrastructure, log in to the machine and install the following. Make sure to follow each verification step to ensure that everything is installed correctly and the setup is progressing smoothly.
1. Tools setup
Install the following tools. After installation, verify the version of each tool to confirm that they have been installed correctly.
Tools: wget, curl, kubectl, istioctl, helm, jq
🔍 Verification Checkpoint: Run the following commands and verify that each returns version information without errors.
wget --version
curl --version
kubectl version --client
istioctl version
helm version
jq --version
2. Firewall setup
Follow the link below to set up the firewall rules required for the deployment. 🔒Set up Firewall rules
🔍 Verification Checkpoint:
Run iptables -L or ufw status to ensure the rules are active if you are using on-premises or self-managed native server nodes. If you are deploying on AWS cloud infrastructure, verify or configure the necessary firewall rules within the Security Groups associated with your instances.
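As an illustration only (the linked firewall guide is authoritative for the exact port list), a ufw-based setup for a single-node RKE2 server would open at least ports like these:

```shell
# Illustrative sketch -- consult the firewall guide above for the full rule set.
sudo ufw allow 22/tcp      # SSH access
sudo ufw allow 80/tcp      # HTTP ingress
sudo ufw allow 443/tcp     # HTTPS ingress
sudo ufw allow 6443/tcp    # Kubernetes API server
sudo ufw allow 9345/tcp    # RKE2 supervisor API
sudo ufw allow 51820/udp   # WireGuard port used later in this guide
sudo ufw enable
sudo ufw status verbose
```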
3. Kubernetes cluster installation
Follow the steps below to set up the Kubernetes cluster (RKE2 server) as the root user.
1. Create the rke2 config directory:
mkdir -p /etc/rancher/rke2
2. Create a config.yaml file in the above directory, using the rke2-server.conf.primary.template config file template. The token can be any arbitrary string.
3. Edit the above config.yaml file with the appropriate names, IPs, and tokens.
4. Run the following commands to set the RKE2 version, download it, and start the RKE2 server:
export INSTALL_RKE2_VERSION="v1.28.9+rke2r1"
curl -sfL https://get.rke2.io | sh -
systemctl enable rke2-server
systemctl start rke2-server
5. Export KUBECONFIG:
echo -e 'export PATH="$PATH:/var/lib/rancher/rke2/bin"\nexport KUBECONFIG="/etc/rancher/rke2/rke2.yaml"' >> ~/.bashrc
source ~/.bashrc
kubectl get nodes
6. Download the kubeconfig file rke2.yaml and keep it securely. (This is important!)
🔍 Verification Checkpoint: Check the status of rke2 server as shown in the screenshot below.

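If you prefer the command line over the screenshot, the same checkpoint can be expressed as:

```shell
# The RKE2 service should be active (running):
systemctl status rke2-server --no-pager

# The node should eventually report a Ready status:
kubectl get nodes -o wide
```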
4. Wireguard installation
Install Wireguard Bastion server for secure VPN access:
Clone the openg2p-deployment repo and navigate to the kubernetes/wireguard directory
Run the following command as the root user to install the WireGuard server/channel:
WG_MODE=k8s ./wg.sh <name for this wireguard server> <client ips subnet mask> <port> <no of peers> <subnet mask of the cluster nodes & lbs>
For example:
WG_MODE=k8s ./wg.sh wireguard-app-users 10.15.0.0/16 51820 254 172.16.0.0/24
Check logs of the servers and wait for all servers to finish startup. Example:
kubectl -n wireguard-system logs -f wireguard-app-users
Once it finishes, navigate to /etc/wireguard-app-users. You will find multiple peer configuration files. Change into the peer1 folder and copy peer1.conf to your notepad.
Follow the link provided below to set up WireGuard on your system. Install WireGuard Client on Desktop
🔍 Verification Checkpoint: Make sure the WireGuard service is running on k8s cluster and the Wireguard setup is completed on your machine. On k8s cluster:

On your machine:

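Equivalently, from the command line (a sketch; replace the placeholder IP):

```shell
# On the k8s cluster: the WireGuard pod should be Running.
kubectl -n wireguard-system get pods

# On your machine: after activating the peer1.conf tunnel,
# the node's internal IP should be reachable:
ping -c 3 <node internal IP>
```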
After installing WireGuard on the cluster and configuring it on your local machine, you can install and configure kubectl
using the RKE2 kubeconfig file generated during the Kubernetes cluster setup on the server. This allows you to access the cluster from your local command line.
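A minimal sketch of that local kubectl setup, assuming you have copied rke2.yaml to your working directory (the server address in the file defaults to 127.0.0.1 and must be pointed at the node's internal IP reachable over WireGuard):

```shell
mkdir -p ~/.kube
cp rke2.yaml ~/.kube/config
chmod 600 ~/.kube/config

# Replace the default loopback address with the node's internal IP:
sed -i 's/127\.0\.0\.1/<node internal IP>/' ~/.kube/config

# Confirm access from your local machine:
kubectl get nodes
```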
5. NFS Server installation
Install NFS Server to provide persistent storage volumes to kubernetes cluster:
In the openg2p-deployment repository, go to the nfs-server directory and install the NFS server by running the following command as the root user.
./install-nfs-server.sh
For every sandbox/namespace, create a new folder in the /srv/nfs folder on the server node. Suggested folder structure: /srv/nfs/<cluster name>. Example:
sudo mkdir /srv/nfs/rancher
sudo mkdir /srv/nfs/openg2p
Run this command to provide full access to the nfs folder:
sudo chmod -R 777 /srv/nfs
🔍 Verification Checkpoint: Make sure the NFS server is running and the setup is completed on server node.

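From the command line, the same checkpoint looks roughly like this (nfs-kernel-server is the service name on Ubuntu):

```shell
# The NFS server service should be active:
systemctl status nfs-kernel-server --no-pager

# The exported directories should include /srv/nfs:
showmount -e localhost
```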
Install the Kubernetes NFS CSI driver and the NFS client provisioner on the cluster.
From the openg2p-deployment repo kubernetes/nfs-client directory, run the following (make sure to replace the <Node Internal IP> and <cluster name> parameters appropriately):
NFS_SERVER=<Node Internal IP> \
NFS_PATH=/srv/nfs/<cluster_name> \
./install-nfs-csi-driver.sh
🔍 Verification Checkpoint: Make sure the NFS CSI driver and client provisioner are running and the setup is completed on the server node.

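A command-line sketch of the same checkpoint (the namespace of the CSI pods depends on the install script):

```shell
# CSI driver and provisioner pods should be Running:
kubectl get pods -A | grep -i nfs

# A StorageClass backed by the NFS CSI driver should be present:
kubectl get storageclass
```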
6. Istio installation
To set up Istio, run the commands below from the kubernetes/istio directory to install the Istio Operator, Istio Service Mesh, and Istio Ingress Gateway components. Wait for the istiod and ingressgateway pods to start in the istio-system namespace.
istioctl install -f istio-operator-no-external-lb.yaml
kubectl apply -f istio-ef-spdy-upgrade.yaml
🔍 Verification Checkpoint: Check whether all the Istio pods have come up.

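For example:

```shell
kubectl -n istio-system get pods
# Expect istiod and the ingress gateway pods to be Running before proceeding.
```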
7. Setting up TLS certificates for domain
Set up TLS/SSL certificates for your domain (e.g., sandbox.<your-domain>) to enable secure, encrypted communication between services. Ensure certificates are created for the following four domains to enable HTTPS in the environment:
| Purpose | Domain Example | Description |
| --- | --- | --- |
| Rancher UI | rancher.example.com | Used to access the Rancher web interface |
| Keycloak Authentication | keycloak.example.com | Used for authentication via Keycloak |
| Sandbox Environment | sandbox.example.com | Main entry point for the sandbox environment |
| Wildcard for Sandbox | *.sandbox.example.com | Covers subdomains like app.sandbox.example.com, etc. |
Follow the steps below to generate SSL certificates for each domain.
Install Let's Encrypt certbot using the command below:
sudo apt install certbot
Since the preferred challenge is the DNS type, the command below asks for an _acme-challenge TXT record. Create the _acme-challenge TXT DNS record accordingly using a public DNS provider (e.g., AWS Route 53, Cloudflare, GoDaddy), map the value in the DNS provider, and continue with the prompt to generate the certificates.
Create an SSL certificate using Let's Encrypt for rancher by editing the hostname below:
certbot certonly --agree-tos --manual \
--preferred-challenges=dns \
-d rancher.example.com
Create the Rancher TLS secret using the command below (edit certificate paths):
kubectl -n istio-system create secret tls tls-rancher-ingress \
--cert /etc/letsencrypt/live/rancher.example.com/fullchain.pem \
--key /etc/letsencrypt/live/rancher.example.com/privkey.pem
Screenshot for TXT record mapping:
Create an SSL certificate using Let's Encrypt for keycloak by editing the hostname below:
certbot certonly --agree-tos --manual \
--preferred-challenges=dns \
-d keycloak.example.com
Create the Keycloak TLS secret using the command below (edit certificate paths):
kubectl -n istio-system create secret tls tls-keycloak-ingress \
--cert /etc/letsencrypt/live/keycloak.example.com/fullchain.pem \
--key /etc/letsencrypt/live/keycloak.example.com/privkey.pem
Screenshot for TXT record mapping:
Create an SSL certificate using Let's Encrypt for the Sandbox Environment and the Wildcard for Sandbox at the same time by editing the hostnames below, and keep it ready for future use:
certbot certonly --agree-tos --manual \
--preferred-challenges=dns \
-d dev.example.com \
-d *.dev.example.com
Create the OpenG2P-Sandbox environment TLS secret using the command below (edit certificate paths):
kubectl -n istio-system create secret tls tls-openg2p-$NS-ingress \
--cert /etc/letsencrypt/live/dev.example.com/fullchain.pem \
--key /etc/letsencrypt/live/dev.example.com/privkey.pem
Screenshot for TXT record mapping:
Note: You can name your sandbox anything, e.g., dev, qa, or test. Make sure to note it down for future use, as you'll use the same name for the project and namespace when creating them in Rancher.
🔍 Verification Checkpoint: After creating the certificates, verify that they are present in the /etc/letsencrypt/live/ directory and have been uploaded to the istio-system namespace as a Kubernetes secret.

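A command-line sketch of this checkpoint (edit the hostname path to match your certificates):

```shell
# Certificates present on disk:
sudo ls /etc/letsencrypt/live/

# TLS secrets uploaded to the istio-system namespace:
kubectl -n istio-system get secrets | grep tls

# Optionally inspect a certificate's validity window:
sudo openssl x509 -noout -dates \
  -in /etc/letsencrypt/live/rancher.example.com/fullchain.pem
```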
8. Mapping domains to cluster IP
Set up DNS records for the Rancher, Keycloak, and OpenG2P-Sandbox hostnames so that they resolve to the private IP address of the node where the services are exposed, using a public DNS provider (e.g., AWS Route 53, Cloudflare, GoDaddy) or a provider of your choice.
Create A records (or CNAMEs, if appropriate) for the fully qualified domain names (FQDNs) you plan to use for Rancher, Keycloak, and OpenG2P-Sandbox (e.g., rancher.example.com, keycloak.example.com, dev.example.com, *.dev.example.com).
Point these records to the internal IP address of the node.
🔍 Verification Checkpoint: The screenshot below is an example of DNS mapping using AWS Route 53. You can use any DNS provider as per your requirements, and the domain mapping should be similar to what is shown in the screenshot.

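You can also verify the mappings from any machine on the WireGuard network with dig (placeholder hostnames):

```shell
dig +short rancher.example.com
dig +short keycloak.example.com
dig +short dev.example.com
dig +short app.dev.example.com   # exercises the wildcard record
# Each lookup should return the node's internal IP.
```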
9. Rancher installation
Install Rancher from the kubernetes/rancher directory (edit hostname):
RANCHER_HOSTNAME=rancher.example.com \
TLS=true \
./install.sh --set replicas=1 --version 2.9.3
Log in to Rancher using the above hostname and bootstrap the admin user according to the instructions. After successfully logging in to Rancher as admin, save the new admin user password in the local cluster, in the cattle-system namespace, under rancher-secret, with key adminPassword.
🔍 Verification Checkpoint: Verify that all Rancher pods are running properly in the cattle-system namespace, and Rancher is accessible from your browser.


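From the command line, the same checkpoint is:

```shell
# All Rancher pods in cattle-system should be Running/Completed:
kubectl -n cattle-system get pods
```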
10. Keycloak installation
Install Keycloak from the kubernetes/keycloak directory (edit hostname):
KEYCLOAK_HOSTNAME=keycloak.example.com \
TLS=true \
./install.sh --set replicaCount=1
Log in to Keycloak using admin credentials from the Keycloak namespace secrets in Rancher UI.
🔍 Verification Checkpoint:
Verify the Keycloak pods in the keycloak-system namespace and ensure Keycloak is accessible in your browser.


11. Integrating Rancher with Keycloak
Integrating Rancher with Keycloak enables centralized authentication and user management using Keycloak as the IdP.
🔍 Verification Checkpoint: Once you attempt to log in using your Rancher hostname, you will be redirected to authenticate via Keycloak. Log in using your Keycloak credentials. In Rancher, your user status should appear as "Active," as shown in the screenshot.

This completes the base infrastructure setup for OpenG2P. You can now begin installing the OpenG2P modules by following the steps below.
12. Creating a project and namespace
Continue to use the same cluster (the local cluster) for the OpenG2P modules installation.
In Rancher, create a project and namespace, on which the OpenG2P modules will be installed.
In the Rancher -> Namespaces menu, enable Istio Auto Injection for the dev namespace.
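Enabling Istio Auto Injection in the Rancher UI is equivalent to labelling the namespace, so a command-line sketch (assuming the namespace is named dev) is:

```shell
kubectl label namespace dev istio-injection=enabled

# Verify the label:
kubectl get namespace dev --show-labels
```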
🔍 Verification Checkpoint: Verify Istio injection is enabled for the dev namespace in the DEV project.

13. Istio gateway setup
Set up an Istio gateway in the dev namespace.
Provide your hostname and run the following to define the variables:
export NS=dev
export WILDCARD_HOSTNAME='*.dev.example.com'
Go to the kubernetes/istio directory and run the following to apply the gateway:
envsubst < istio-gateway-tls.yaml | kubectl apply -f -
🔍 Verification Checkpoint: Once created, the gateway will appear in Rancher UI under Istio > Gateway in the dev namespace.

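The gateway can also be confirmed from the command line:

```shell
kubectl -n dev get gateway
```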
14. Cluster Monitoring installation
Install the Rancher Monitoring app (Prometheus) to enable cluster monitoring directly from the Rancher UI.
🔍 Verification Checkpoint: Once monitoring is installed in Rancher, navigate to the Monitoring section where you'll see options for Alertmanager and Grafana. You can click on these to access their respective dashboards.

15. Cluster Logging installation
Install the Rancher Logging app. Fluentd is used to collect and parse logs generated by applications within the Kubernetes cluster.
🔍 Verification Checkpoint: Once logging is installed, verify that all pods in the cattle-logging-system namespace are up and running, and ensure that logs are being collected for each service.
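A command-line sketch of this checkpoint:

```shell
# Logging pods (Fluentd/fluent-bit) should be Running:
kubectl -n cattle-logging-system get pods
```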
This completes the OpenG2P cluster setup and you can now proceed with installing the OpenG2P modules.
OpenG2P modules installation
Follow the links below to install the OpenG2P modules via the Rancher UI.
Install SocialRegistry Module.
Install PBMS Module.
Install SPAR Module.
Install OpenG2P Landing Page.
🔍 Verification Checkpoint: Once you deploy any of the modules mentioned above, you can also deploy the OpenG2P Landing Page. All services should be accessible from the landing page.

FAQ