OpenG2P In a Box

Getting started with OpenG2P

This document describes a deployment model in which the infrastructure and components required by OpenG2P modules are set up on a single node/VM/machine. It helps you get started with OpenG2P and experience the functionality without having to meet the full resource requirements of a production-grade setup. The deployment is a compact version of the V4 architecture; the essence of V4 is preserved so that upgrading the infrastructure is easier when more hardware resources become available.

Deployment architecture

OpenG2P In a Box

Prerequisites

Hardware requirements

OpenG2P In a Box minimally requires access to a machine (physical or virtual) with the following configuration. Make sure the machine is available with the operating system mentioned below installed, and that you have "root" access to it:

  • 16vCPU / 64 GB RAM / 256 GB storage

  • Operating System: Ubuntu 22.04

Note down the internal IP address of your server machine (node); you will need it later for DNS mappings.

DNS Requirements for Certificate Generation

A valid domain with DNS management access is required. You may use AWS Route53 or any other DNS provider. The DNS access must allow you to:

  • Create and delete TXT records (for the ACME DNS-01 challenge).

  • Manage A records (for pointing domains to IP/Ingress).

  • Create CNAME records (if needed for subdomain routing).

Base infrastructure setup

To set up the base infrastructure, log in to the machine and install the following. Make sure to follow each verification step to ensure that everything is installed correctly and the setup is progressing smoothly.

1. Tools setup

Install the following tools. After installation, verify the version of each tool to confirm that they have been installed correctly. Tools: wget, curl, kubectl, istioctl, helm, jq
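
If the tools are not already installed, the following is one illustrative way to set them up on Ubuntu 22.04. This is a sketch based on the tools' generally documented install methods, not an OpenG2P-specific procedure; versions and methods may differ in your environment.

# Base utilities from the Ubuntu repositories
sudo apt update && sudo apt install -y wget curl jq

# kubectl (latest stable release from the official Kubernetes download site)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# helm (official install script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# istioctl (official download script; copy the binary onto the PATH)
curl -L https://istio.io/downloadIstio | sh -
sudo cp istio-*/bin/istioctl /usr/local/bin/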

🔍 Verification Checkpoint: Run the following commands and verify that each returns version information without errors.

wget --version
curl --version
kubectl version --client
istioctl version
helm version
jq --version

2. Firewall setup

Follow the link below to set up the firewall rules required for the deployment. 🔒Set up Firewall rules

🔍 Verification Checkpoint: If you are using on-premises or self-managed server nodes, run iptables -L or ufw status to ensure the rules are active. If you are deploying on AWS cloud infrastructure, verify or configure the necessary firewall rules in the Security Groups associated with your instances.
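
For illustration, on a self-managed Ubuntu node using ufw, the rules typically cover ports along these lines. This is only a sketch; the linked firewall page above is the source of truth for the exact rules.

sudo ufw allow 22/tcp      # SSH
sudo ufw allow 80/tcp      # HTTP
sudo ufw allow 443/tcp     # HTTPS
sudo ufw allow 6443/tcp    # Kubernetes API server
sudo ufw allow 9345/tcp    # RKE2 supervisor API
sudo ufw allow 51820/udp   # WireGuard port used later in this guide
sudo ufw enable
sudo ufw status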

3. Kubernetes cluster installation

Follow the steps below to set up the Kubernetes cluster (RKE2 server) as the root user.

  1. Create the rke2 config directory - mkdir -p /etc/rancher/rke2

  2. Create a config.yaml file in the above directory using the config file template rke2-server.conf.primary.template. The token can be any arbitrary string.

  3. Edit the above config.yaml file with the appropriate names, IPs, and token (an illustrative config sketch is shown after this list).

  4. Run the following commands to set the RKE2 version, download it, and start the RKE2 server:

    export INSTALL_RKE2_VERSION="v1.28.9+rke2r1"
    curl -sfL https://get.rke2.io | sh - 
    systemctl enable rke2-server
    systemctl start rke2-server
  5. Export KUBECONFIG:

    echo -e 'export PATH="$PATH:/var/lib/rancher/rke2/bin"\nexport KUBECONFIG="/etc/rancher/rke2/rke2.yaml"' >> ~/.bashrc
    source ~/.bashrc
    kubectl get nodes 
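
As referenced in step 3, the following is an illustrative sketch of /etc/rancher/rke2/config.yaml using standard RKE2 options. The rke2-server.conf.primary.template file in the repository is the source of truth; all values below are placeholders.

# Placeholder values; replace with your own token, node name, and internal IP
cat <<'EOF' > /etc/rancher/rke2/config.yaml
token: my-arbitrary-cluster-token
node-name: openg2p-node-1
tls-san:
  - <node internal IP>
EOF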

🔍 Verification Checkpoint: Check the status of rke2 server as shown in the screenshot below.
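
For example, a quick check from the node itself:

systemctl status rke2-server --no-pager
kubectl get nodes -o wide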

4. Wireguard installation

Install the WireGuard bastion server for secure VPN access:

  1. Clone the openg2p-deployment repo and navigate to the kubernetes/wireguard directory

  2. Run this command to install wireguard server/channel with root user:

    WG_MODE=k8s ./wg.sh <name for this wireguard server> <client ips subnet mask> <port> <no of peers> <subnet mask of the cluster nodes & lbs>

    For example:

    WG_MODE=k8s ./wg.sh wireguard_app_users 10.15.0.0/16 51820 254 172.16.0.0/24
  3. Check logs of the servers and wait for all servers to finish startup. Example:

    kubectl -n wireguard-system logs -f wireguard-app-users
  4. Once it finishes, navigate to /etc/wireguard-app-users. You will find multiple peer configuration files; cd into the peer1 folder and copy peer1.conf to your notepad.

  5. Follow the link below to set up the WireGuard client on your system: Install WireGuard Client on Desktop

🔍 Verification Checkpoint: Make sure the WireGuard service is running on the k8s cluster and the WireGuard client setup is completed on your machine. On k8s cluster:

On your machine:
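
For example, assuming the server was named wireguard_app_users as in the command above:

# On the k8s cluster: the WireGuard pod should be in Running state
kubectl -n wireguard-system get pods

# On your machine: after activating the peer1.conf tunnel in the WireGuard client,
# the node's internal IP should be reachable over the VPN
ping <node internal IP>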

5. NFS Server installation

Install the NFS server to provide persistent storage volumes to the Kubernetes cluster:

  1. From the openg2p-deployment repository, go to the openg2p-deployment/nfs-server directory to install the NFS server. Run the following command as the root user.

    ./install-nfs-server.sh
  2. For every sandbox/namespace, create a new folder in /srv/nfs folder on the server node. Suggested folder structure: /srv/nfs/<cluster name>. Example:

    sudo mkdir /srv/nfs/rancher
    sudo mkdir /srv/nfs/openg2p

    Run this command to provide full access to the NFS folder: sudo chmod -R 777 /srv/nfs

🔍 Verification Checkpoint: Make sure the NFS server is running and the setup is completed on the server node.
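
A minimal check on the server node, assuming the install script sets up the standard Ubuntu nfs-kernel-server service:

systemctl status nfs-kernel-server --no-pager
showmount -e localhost   # should list the exported /srv/nfs path
ls /srv/nfs              # should show the folders created above (e.g., rancher, openg2p)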

  1. Install the Kubernetes NFS CSI driver and the NFS client provisioner on the cluster.

  2. From the openg2p-deployment repo's kubernetes/nfs-client directory, run the following (make sure to replace the <Node Internal IP> and <cluster name> parameters appropriately):

    NFS_SERVER=<Node Internal IP> \
    NFS_PATH=/srv/nfs/<cluster_name> \
        ./install-nfs-csi-driver.sh

🔍 Verification Checkpoint: Make sure the NFS CSI driver and client provisioner are running and the setup is completed on the server node.
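
For example (the namespace of the CSI driver pods may vary depending on the install script):

# NFS CSI driver and provisioner pods (commonly in kube-system)
kubectl -n kube-system get pods | grep -i nfs

# A StorageClass backed by the NFS server should now exist
kubectl get storageclass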

6. Istio installation

To set up Istio, from the kubernetes/istio directory run the commands below to install the Istio Operator, Istio service mesh, and Istio Ingress Gateway components. Wait for the istiod and ingressgateway pods to start in the istio-system namespace.

istioctl install -f istio-operator-no-external-lb.yaml
kubectl apply -f istio-ef-spdy-upgrade.yaml

🔍 Verification Checkpoint: Check whether all the Istio pods have come up.
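
For example:

kubectl -n istio-system get pods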

7. Setting up TLS certificates for domain

Set up TLS/SSL certificates for your domain (e.g., sandbox.<your-domain>) to enable secure, encrypted communication between services. Ensure certificates are created for the following four domains to enable HTTPS in the environment:

  • Rancher UI (rancher.example.com): Used to access the Rancher web interface.

  • Keycloak Authentication (keycloak.example.com): Used for authentication via Keycloak.

  • Sandbox Environment (sandbox.example.com): Main entry point for the sandbox environment.

  • Wildcard for Sandbox (*.sandbox.example.com): Covers subdomains like app.sandbox.example.com, etc.

Follow the steps below to generate SSL certificates for each domain.

  1. Install certbot (the Let's Encrypt client) using the command below:

    sudo apt install certbot
  2. Since the preferred challenge is of DNS type, the commands below ask for an _acme-challenge value. Create the corresponding _acme-challenge TXT DNS record using a public DNS provider (e.g., AWS Route 53, Cloudflare, GoDaddy), map the value in the DNS provider, and continue with the prompt to generate the certificates.

  3. Create an SSL certificate using Let's Encrypt for Rancher by editing the hostname below:

    certbot certonly --agree-tos --manual \
        --preferred-challenges=dns \
        -d rancher.example.com

    Create the Rancher TLS secret using the command below (edit the certificate paths):

    kubectl -n istio-system create secret tls tls-rancher-ingress \
        --cert /etc/letsencrypt/live/rancher.example.com/fullchain.pem \
        --key /etc/letsencrypt/live/rancher.example.com/privkey.pem

    Screenshot for TXT record mapping:

  4. Create an SSL certificate using Let's Encrypt for Keycloak by editing the hostname below:

    certbot certonly --agree-tos --manual \
        --preferred-challenges=dns \
        -d keycloak.example.com

    Create the Keycloak TLS secret using the command below (edit the certificate paths):

    kubectl -n istio-system create secret tls tls-keycloak-ingress \
        --cert /etc/letsencrypt/live/keycloak.example.com/fullchain.pem \
        --key /etc/letsencrypt/live/keycloak.example.com/privkey.pem

    Screenshot for TXT record mapping:

  5. Create an SSL certificate using Let's Encrypt for the sandbox environment and the sandbox wildcard at the same time by editing the hostnames below, and keep it ready for future use:

    certbot certonly --agree-tos --manual \
        --preferred-challenges=dns \
        -d dev.example.com \
        -d '*.dev.example.com'

    Create the OpenG2P sandbox environment TLS secret using the command below, where $NS is your sandbox/namespace name, e.g., dev (edit the certificate paths):

    kubectl -n istio-system create secret tls tls-openg2p-$NS-ingress \
        --cert /etc/letsencrypt/live/dev.example.com/fullchain.pem \
        --key /etc/letsencrypt/live/dev.example.com/privkey.pem

    Screenshot for TXT record mapping:

    Note: You can name your sandbox anything, e.g., dev, qa, or test. Make sure to note it down for future use, as you’ll use the same name for the project and namespace when creating them in Rancher.

🔍 Verification Checkpoint: After creating the certificates, verify that they are present in the /etc/letsencrypt/live/ directory and have been uploaded to the istio-system namespace as a Kubernetes secret.
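
For example:

ls /etc/letsencrypt/live/
kubectl -n istio-system get secrets | grep ingress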

8. Mapping domains to cluster IP

Set up DNS records for the Rancher, Keycloak, and OpenG2P sandbox hostnames so that they resolve to the private IP address of the node where the services are exposed, using a public DNS provider (e.g., AWS Route 53, Cloudflare, GoDaddy) or a provider of your choice.

Create A records (or CNAMEs, if appropriate) for the fully qualified domain names (FQDNs) you plan to use for Rancher, Keycloak, and the OpenG2P sandbox (e.g., rancher.example.com, keycloak.example.com, dev.example.com, and *.dev.example.com).
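
For illustration, assuming a sandbox named dev and a node internal IP of 10.0.0.10 (both placeholders), the records would look like the sketch below; you can confirm resolution with dig or nslookup once the records propagate.

# Example A records (values are placeholders)
# rancher.example.com     A    10.0.0.10
# keycloak.example.com    A    10.0.0.10
# dev.example.com         A    10.0.0.10
# *.dev.example.com       A    10.0.0.10

dig +short rancher.example.com
dig +short app.dev.example.com   # should resolve via the wildcard record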

🔍 Verification Checkpoint: The screenshot below is an example of DNS mapping using AWS Route 53. You can use any DNS provider as per your requirements, and the domain mapping should be similar to what is shown in the screenshot.

9. Rancher installation

Install Rancher from the kubernetes/rancher directory (edit the hostname):

RANCHER_HOSTNAME=rancher.example.com \
TLS=true \
./install.sh --set replicas=1 --version 2.9.3

Log in to Rancher using the above hostname and bootstrap the admin user according to the instructions. After successfully logging in to Rancher as admin, save the new admin user password in the local cluster, in the cattle-system namespace, in a secret named rancher-secret with the key adminPassword.
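
One way to store the password as described, sketched with kubectl (the secret can also be created from the Rancher UI; replace the placeholder value):

kubectl -n cattle-system create secret generic rancher-secret \
    --from-literal=adminPassword='<new admin password>'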

🔍 Verification Checkpoint: Verify that all Rancher pods are running properly in the cattle-system namespace, and Rancher is accessible from your browser.

10. Keycloak installation

Install Keycloak from the kubernetes/keycloak directory (edit the hostname):

KEYCLOAK_HOSTNAME=keycloak.example.com \
TLS=true \
./install.sh --set replicaCount=1

Log in to Keycloak using the admin credentials found in the keycloak-system namespace secrets (viewable in the Rancher UI).

🔍 Verification Checkpoint: Verify Keycloak pods in the keycloak-system namespace and ensure it's accessible in your browser.
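
For example:

kubectl -n keycloak-system get pods
kubectl -n keycloak-system get secrets   # the admin credentials referenced above are stored here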

11. Integrating Rancher with Keycloak

Integrating Rancher with Keycloak enables centralized authentication and user management using Keycloak as the IdP.

🔍 Verification Checkpoint: Once you attempt to log in via your Rancher hostname (e.g., rancher.example.com), you will be redirected to authenticate via Keycloak. Log in using your Keycloak credentials. In Rancher, your user status should appear as "Active," as shown in the screenshot.

12. Creating a project and namespace

Continue to use the same cluster (local cluster) for OpenG2P modules installation.

In Rancher, create a project and a namespace in which the OpenG2P modules will be installed.

The rest of this guide assumes the namespace to be dev, as the TLS certificates were created for the domain dev.example.com during the certificate setup.

In the Rancher -> Namespaces menu, enable Istio Auto Injection for the dev namespace.
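
Equivalently, if you prefer the command line over the Rancher UI, Istio auto-injection can be enabled by labelling the namespace (a sketch; the Rancher toggle sets the equivalent namespace label):

kubectl label namespace dev istio-injection=enabled
kubectl get namespace dev --show-labels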

🔍 Verification Checkpoint: Verify Istio injection is enabled for the dev namespace in the DEV project.

13. Istio gateway setup

Set up an Istio gateway in the dev namespace.

  1. Provide your hostname and run this to define the variables:

    export NS=dev
    export WILDCARD_HOSTNAME='*.dev.example.com'
  2. Go to the kubernetes/istio directory and run this to apply the gateway:

    envsubst < istio-gateway-tls.yaml | kubectl apply -f -

🔍 Verification Checkpoint: Once created, the gateway will appear in Rancher UI under Istio > Gateway in the dev namespace.
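
Equivalently, from the command line:

kubectl -n dev get gateway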

14. Cluster Monitoring installation

Install the Monitoring app (Prometheus) to enable cluster monitoring directly from the Rancher UI.

🔍 Verification Checkpoint: Once monitoring is installed in Rancher, navigate to the Monitoring section where you'll see options for Alertmanager and Grafana. You can click on these to access their respective dashboards.
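
You can also confirm from the command line that the monitoring workloads are up (Rancher Monitoring typically installs into the cattle-monitoring-system namespace):

kubectl -n cattle-monitoring-system get pods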

15. Cluster Logging installation

Install the Logging app directly from the Rancher UI. Fluentd is used to collect and parse logs generated by applications within the Kubernetes cluster.

🔍 Verification Checkpoint: Once logging is installed, verify that all pods in the cattle-logging-system namespace are up and running, and ensure that logs are being collected for each service.
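
For example:

kubectl -n cattle-logging-system get pods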

OpenG2P modules installation

You can follow the links below to install OpenG2P modules via the Rancher UI.

  1. Install SocialRegistry Module.

  2. Install PBMS Module.

  3. Install SPAR Module.

🔍 Verification Checkpoint: Once you deploy any of the modules mentioned above, you can also deploy the OpenG2P Landing Page. All services should be accessible from the landing page.

FAQ

How is "In a Box" different from V4? Why should this not be used for production?

  • In a Box does not use the Nginx load balancer. HTTPS traffic terminates directly on the Istio gateway via WireGuard. However, Nginx is required in production as described here.

  • The SSL certificates are loaded on the Istio gateway while in V4 the certificates are loaded on the Nginx server.

  • The Wireguard bastion runs inside the Kubernetes cluster itself as a pod. This is not recommended in production where Wireguard must run on a separate node.

  • A single private access channel is enabled (via Wireguard). In production, you will typically need several channels for access control.

  • In-a-box does not offer high availability as the node is a single point of failure.

  • NFS runs inside the box. In production, NFS must run on a separate node with its own access control, allocated resources, and backups.
