OpenG2P In a Box
Getting started with OpenG2P
This document describes a deployment model wherein the infrastructure and components required by OpenG2P modules are set up on a single node/VM/machine. This helps you get started with OpenG2P and experience its functionality without having to meet all the resource requirements of a production-grade setup. It is based on the V4 deployment architecture, but is a compact version of it. The essence of V4 is preserved so that upgrading the infrastructure is easier when more hardware resources become available.
Do NOT use this deployment model for production/pilots.
OpenG2P in-a-box minimally requires access to a machine (virtual machine) with the following configuration. Please make sure this machine is available with OS installed as mentioned below. You must have "root" access to the machine:
16vCPU / 64 GB RAM / 256 GB storage
Operating System: Ubuntu 22.04
A valid domain with DNS management access is required. You may use AWS Route 53 or any other DNS provider. The DNS access must allow you to:
- Create and delete TXT records (for the DNS-based ACME challenge).
- Manage A records (for pointing domains to an IP/ingress).
- Create CNAME records (if needed for subdomain routing).
Concepts
Before proceeding with the deployment, read up on the following topics to better understand each infrastructure component required for a successful setup:
To set up the base infrastructure, log in to the machine and install the following. Make sure to follow each verification step to ensure that everything is installed correctly and the setup is progressing smoothly.
Tools: Install the following tools: wget, curl, kubectl, istioctl, helm, and jq. After installation, verify the version of each tool to confirm that it has been installed correctly.
🔍 Verification Checkpoint:
Run the following commands and verify that each returns the version information:
✅ You should see version details for each tool without any errors.
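The verification commands were not preserved on this page; the following is a minimal sketch of the version checks for the tools listed above:

```shell
# Each command should print version details without errors
wget --version | head -n1
curl --version | head -n1
kubectl version --client
istioctl version --remote=false
helm version
jq --version
```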
Kubernetes cluster: Follow the steps below to set up the Kubernetes cluster (RKE2 server) as the root user.
Create the RKE2 config directory: mkdir -p /etc/rancher/rke2
Edit the above config.yaml file with the appropriate names, IPs, and tokens.
Run the following commands to set the RKE2 version, download it, and start the RKE2 server:
Export KUBECONFIG:
Download the kubeconfig file rke2.yaml and keep it secure. (This is important!)
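The original commands are not shown on this page; the following is a sketch of a typical RKE2 server installation, where the version string is an assumption (pick a supported release):

```shell
# Pin an RKE2 version (assumed value; choose a supported release)
export INSTALL_RKE2_VERSION="v1.28.9+rke2r1"

# Download and install the RKE2 server
curl -sfL https://get.rke2.io | sh -

# Enable and start the server
systemctl enable rke2-server
systemctl start rke2-server

# Point kubectl at the cluster and keep a copy of the kubeconfig
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
cp /etc/rancher/rke2/rke2.yaml ~/rke2.yaml   # store this file securely
```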
🔍 Verification Checkpoint:
Check the status of rke2 server as shown in the screenshot below.
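The screenshot is not reproduced here; assuming a systemd-based install as above, the status can be checked with:

```shell
# The rke2-server service should be active (running)
systemctl status rke2-server --no-pager

# The node should also be registered and Ready
kubectl get nodes
```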
Wireguard: Install the WireGuard bastion server for secure VPN access:
Run this command to install the WireGuard server/channel as the root user:
For example:
Check the logs of the servers and wait for all of them to finish startup. Example:
Once it finishes, navigate to /etc/wireguard-app-users. You will find multiple peer configuration files; cd into the peer1 folder and copy the contents of peer1.conf to your notepad.
On your machine:
Once WireGuard is running and configured on your machine, you can easily set up kubectl and access the cluster from your machine (optional).
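A sketch of accessing the cluster from your machine once WireGuard is up, assuming you saved the rke2.yaml kubeconfig earlier (the IP below is a placeholder):

```shell
# Copy the kubeconfig downloaded earlier to the default location
mkdir -p ~/.kube
cp rke2.yaml ~/.kube/config

# Replace the local address with the node's WireGuard-reachable internal IP (placeholder)
sed -i 's/127\.0\.0\.1/<node-internal-ip>/' ~/.kube/config

# The node list should now be reachable over the VPN
kubectl get nodes
```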
NFS Server: Install the NFS server to provide persistent storage volumes to the Kubernetes cluster:
To install the NFS server, run the following command as the root user:
For every sandbox/namespace, create a new folder in the /srv/nfs folder on the server node. Suggested folder structure: /srv/nfs/<cluster name>.
Example:
Run this command to provide full access to the nfs folder: sudo chmod -R 777 /srv/nfs
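For instance, assuming the cluster name is openg2p (an assumed name; use your own), the suggested structure amounts to:

```shell
# Create the per-cluster export folder ("openg2p" is an assumed cluster name)
mkdir -p /srv/nfs/openg2p
```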
🔍 Verification Checkpoint:
Make sure the NFS server is running and the setup is complete on the server node.
Install the Kubernetes NFS CSI driver and the NFS client provisioner on the cluster as follows:
🔍 Verification Checkpoint: Make sure the NFS CSI driver and the client provisioner are running and the setup is complete on the cluster.
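The install commands are not preserved here; a sketch using the upstream csi-driver-nfs Helm chart (the namespace is an assumption):

```shell
# Add the upstream kubernetes-csi NFS driver chart repo and install it
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs -n kube-system

# Verify the driver pods are running
kubectl get pods -n kube-system | grep csi-nfs
```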
Wait for the istiod and ingressgateway pods to start in the istio-system namespace.
🔍 Verification Checkpoint:
Check whether all the Istio pods have come up.
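The check amounts to listing the pods in the istio-system namespace:

```shell
# istiod and the ingress gateway pods should be in Running state
kubectl get pods -n istio-system
```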
TLS: Set up Transport Layer Security (TLS) for secure communication by following the steps. This will ensure that data transmitted between services is encrypted and protected from unauthorized access:
Install Let's Encrypt certbot using the below command:
Since the preferred challenge is of DNS type, the below command asks for the _acme-challenge record. Create the _acme-challenge TXT DNS record accordingly using a public DNS provider (e.g., AWS Route 53, Cloudflare, GoDaddy), and continue with the prompt to generate the certificates.
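The certbot invocation itself is not shown on this page; a typical manual DNS-01 run looks like this (the hostname is a placeholder):

```shell
# Install certbot on Ubuntu 22.04
apt install -y certbot

# Manual DNS-01 challenge; certbot prints the _acme-challenge TXT value to create
certbot certonly --manual --preferred-challenges dns -d <your-hostname>
```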
Create an SSL certificate using Let's Encrypt for Rancher by editing the hostname below:
Create the Rancher TLS secret using the below command (edit the certificate paths):
Create an SSL certificate using Let's Encrypt for Keycloak by editing the hostname below:
Create the Keycloak TLS secret using the below command (edit the certificate paths):
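A sketch of creating both secrets, assuming the certificate paths produced by certbot; the hostnames and secret names are illustrative (the names expected by the charts may differ):

```shell
# TLS secret for Rancher (paths and names are placeholders)
kubectl create secret tls tls-rancher-ingress -n istio-system \
  --cert=/etc/letsencrypt/live/rancher.example.org/fullchain.pem \
  --key=/etc/letsencrypt/live/rancher.example.org/privkey.pem

# TLS secret for Keycloak (paths and names are placeholders)
kubectl create secret tls tls-keycloak-ingress -n istio-system \
  --cert=/etc/letsencrypt/live/keycloak.example.org/fullchain.pem \
  --key=/etc/letsencrypt/live/keycloak.example.org/privkey.pem
```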
🔍 Verification Checkpoint: After creating the certificates, verify that they are present in the /etc/letsencrypt/live/ directory and have been uploaded to the istio-system namespace as a Kubernetes secret.
DNS: Set up DNS records for the Rancher and Keycloak hostnames so that they resolve to the public (or private, depending on your setup) IP address of the node where the services are exposed. This can be achieved in the following way:
Using a Public DNS Provider (e.g., AWS Route 53, Cloudflare, GoDaddy):
Create A records (or CNAMEs, if appropriate) for the fully qualified domain names (FQDNs) you plan to use for Rancher and Keycloak (e.g., rancher.example.com and keycloak.example.com).
Point these records to the internal IP address of the node. 🔍 Verification Checkpoint: The screenshot below is an example of DNS mapping using AWS Route 53. You can use any DNS provider as per your requirements; the domain mapping should be similar to what is shown in the screenshot.
Login to Rancher using the above hostname and bootstrap the admin user according to the instructions. After successfully logging in to Rancher as admin, save the new admin user password in the local cluster, in the cattle-system namespace, under rancher-secret, with the key adminPassword.
🔍 Verification Checkpoint:
Verify that all Rancher pods are running properly in the cattle-system namespace, and Rancher is accessible from your browser.
Log in to Keycloak using admin credentials from the Keycloak namespace secrets in Rancher.
🔍 Verification Checkpoint:
Verify the Keycloak pods in the keycloak-system namespace and ensure Keycloak is accessible in your browser.
Note: This completes the base infrastructure setup for OpenG2P. You can now begin installing the OpenG2P modules by following the steps below.
Creating a Project and Namespace: Continue to use the same cluster (the local cluster) for OpenG2P module installation.
In Rancher, create a project and a namespace in which the OpenG2P modules will be installed. The rest of this guide assumes the namespace is dev.
In the Rancher -> Namespaces menu, enable Istio Auto Injection for the dev namespace.
🔍 Verification Checkpoint:
Verify Istio injection is enabled for the dev namespace in the dev project.
Istio: Set up an Istio gateway in the dev namespace.
Provide your hostname and run this to define the variables:
Create an SSL certificate using Let's Encrypt for the wildcard hostname used above. Example usage (provide your hostname):
Create the OpenG2P TLS secret using the below command (edit the certificate paths):
Follow step 9 for DNS mapping. 🔍 Verification Checkpoint: Once created, the gateway will appear in the Rancher UI under Istio > Gateway in the dev namespace.
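The commands for this step are not preserved on this page; a sketch assuming a wildcard hostname and an illustrative secret name:

```shell
# Namespace and wildcard hostname (placeholders; use your own domain)
export NS=dev
export WILDCARD_HOSTNAME='*.dev.example.org'

# Wildcard certificate via manual DNS-01 challenge
certbot certonly --manual --preferred-challenges dns -d "$WILDCARD_HOSTNAME"

# TLS secret in the dev namespace (secret name is an assumption)
kubectl create secret tls tls-openg2p-ingress -n "$NS" \
  --cert=/etc/letsencrypt/live/dev.example.org/fullchain.pem \
  --key=/etc/letsencrypt/live/dev.example.org/privkey.pem
```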
Cluster Logging: Install cluster logging with Fluentd.
Fluentd is used to collect and parse logs generated by applications within the Kubernetes cluster. Only one Fluentd installation is required per Kubernetes cluster. Follow the steps below from the Rancher UI.
Navigate to Apps & Marketplace → Charts.
Search for and select the Logging chart.
Install it using the default values.
When prompted, select Project: System to ensure Fluentd runs in the appropriate system namespace.
🔍 Verification Checkpoint:
Once logging is installed, verify that all pods in the cattle-logging-system namespace are up and running, and ensure that logs are being collected for each service.
You can follow the links below to install the OpenG2P modules via the Rancher UI.
Firewall: Follow the document linked below to set up the firewall rules required for the deployment.
🔍 Verification Checkpoint:
Run iptables -L or ufw status to ensure the rules are active if you're using on-premises or self-managed native server nodes. If you're deploying on AWS cloud infrastructure, verify or configure the necessary firewall rules in the Security Groups associated with your instances.
Create a config.yaml file in the above directory using the following config file template. The token can be any arbitrary string.
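The template itself is not reproduced on this page; a minimal sketch of writing an RKE2 config.yaml, where every value is a placeholder:

```shell
# Write a minimal RKE2 config (all values are placeholders; adjust to your setup)
mkdir -p /etc/rancher/rke2
cat > /etc/rancher/rke2/config.yaml <<'EOF'
token: <any-arbitrary-string>
tls-san:
  - <node-internal-ip>
  - rancher.example.org
EOF
```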
Clone the repo and navigate to the directory
Follow the link provided below to set up WireGuard on your system. 🔍 Verification Checkpoint: Make sure the WireGuard server is running on the server node and the WireGuard setup is complete on your machine. On the server node:
Download/copy the install script from the link provided below into the server node.
From the openg2p-deployment repo directory, run the following (make sure to replace the <Node Internal IP> and <cluster name> parameters appropriately):
Istio: To set up Istio, run the commands below from the relevant directory to install the Istio Operator, Istio Service Mesh, and Istio Ingress Gateway components.
Rancher: Install Rancher from the relevant directory (edit the hostname):
Keycloak: Install Keycloak from the relevant directory (edit the hostname):
Integrating Rancher with Keycloak enables centralized authentication and user management using Keycloak as the IdP. 🔍 Verification Checkpoint: When you attempt to log in at rancher.hostname.org, you will be redirected to authenticate via Keycloak. Log in using your Keycloak credentials. In Rancher, your user status should appear as "Active", as shown in the screenshot.
Go to the relevant directory and run this to apply the gateway.
Cluster Monitoring: Enable cluster monitoring directly from the Rancher UI. 🔍 Verification Checkpoint: Once monitoring is installed in Rancher, navigate to the Monitoring section, where you'll see options for Alertmanager and Grafana. You can click on these to access their respective dashboards.
Install .
Install Module.
Install Module.
Install Module. 🔍 Verification Checkpoint: Once you deploy any of the modules mentioned above, you can also deploy the OpenG2P Landing Page. All services should be accessible from the landing page.
How is "In a Box" different from the V4 architecture? Why should this not be used for production?
In-a-box does not use the Nginx load balancer; HTTPS traffic terminates directly on the Istio gateway via WireGuard. However, Nginx is required in production.
A single private channel is enabled (via WireGuard). In production, you will typically need several channels for access control.