OpenG2P Deployment Model

OpenG2P’s deployment model offers a production-grade, Kubernetes-based platform designed to deliver secure, scalable, and reliable deployments of OpenG2P modules. Built on a robust Kubernetes orchestration framework, it supports multiple isolated environments—such as Development, QA, and Demo sandboxes—within a single organisational setup, enabling seamless management across the entire software lifecycle.

The OpenG2P deployment model is inspired by the V4 deployment architecture developed by the OpenG2P team. Considering OpenG2P's use cases, the resource availability within country departments, and ease of deployment, we have adapted the V4 architecture to be deployed in a "single box": the entire installation runs on one sufficiently sized virtual machine or bare-metal server.

This deployment model ensures secure access for internal development teams and has been rigorously tested: it earned an A+ rating in third-party penetration testing, underscoring its strong security posture. Because the same deployment model is used for development and production, the transition from development to production environments is easy and efficient, significantly reducing complexity and risk.

For System Integrators, the OpenG2P deployment model represents a substantial time and resource saver by eliminating the need to build production-grade deployment setups from scratch. This turnkey solution accelerates implementation while maintaining enterprise-level security and operational excellence, making it the ideal foundation for organisations aiming to deploy OpenG2P at scale with confidence.

The deployment is offered as a set of instructions, scripts, Helm charts, utilities and guidelines.
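Since the deployment is delivered partly as Helm charts, installing a module into an environment follows the usual Helm workflow. The sketch below only illustrates the shape of the commands; the chart repository URL, chart name and namespace are placeholders, not actual OpenG2P names, and the commands are echoed so the sketch runs without a cluster:

```shell
# Illustrative Helm workflow for installing a module into one environment's
# namespace. Replace the placeholders with the actual OpenG2P chart details.
ns="dev"
echo "helm repo add openg2p <chart-repo-url>"
echo "helm install spar openg2p/spar --namespace $ns"
```

On a live cluster the commands would be run directly instead of echoed, once the chart repository URL from the OpenG2P deployment guides is known.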

Key concepts

  • Each environment, such as 'dev', 'qa', 'staging' or 'production', is installed in a separate Kubernetes namespace on the same cluster.

  • Access to each environment (namespace) can be controlled via private access channels.

  • Firewall configuration is outside the purview of this deployment.

  • The Git repo and Docker registry are assumed to be externally hosted (public or private). For production deployments, these should be hosted within a private network.

  • As this deployment is based on Kubernetes, the system can be easily scaled up by adding more nodes (machines).
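The namespace-per-environment layout above can be sketched with kubectl. The environment names below are examples, and the commands are printed rather than executed so the sketch is runnable anywhere:

```shell
# One Kubernetes namespace per environment on the same cluster (names are examples).
namespaces="dev qa staging production"
for ns in $namespaces; do
  # Against a live cluster, run the command directly instead of echoing it.
  echo "kubectl create namespace $ns"
done
```

Each environment's workloads would then be installed into its namespace (e.g. via Helm's `--namespace` flag), keeping environments isolated from one another.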

Role of various components

The deployment utilizes several open-source third-party components. The concept and role of these components are given below:

Component
Description

Wireguard

Wireguard is a fast, secure, open-source VPN with peer-to-peer traffic encryption that can enable secure (non-public) access to resources. A combination of Wireguard, Nginx and the Istio gateway is used to enable fine-grained access control to the environments. See Private Access Channels.

Note that the terms Wireguard, Wireguard Bastion and Wireguard Server are used interchangeably in this document.
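As an illustration, a minimal WireGuard client configuration for one such private access channel might look as follows. All keys, addresses, the endpoint and the routed subnet are placeholders, not values from this deployment:

```shell
# Write a sketch of a WireGuard client config (placeholder keys and addresses).
cat > wg0.conf <<'EOF'
[Interface]
# The client's own private key and tunnel IP (assigned by the bastion admin).
PrivateKey = <client-private-key>
Address = 10.15.0.2/32

[Peer]
# The Wireguard bastion's public key and reachable endpoint.
PublicKey = <bastion-public-key>
Endpoint = <bastion-host-or-ip>:51820
# Only the private subnet of the environment is routed through the tunnel.
AllowedIPs = 10.15.0.0/16
EOF
```

Key pairs for both sides would be generated with the standard `wg genkey` and `wg pubkey` tools from wireguard-tools.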

Nginx

Nginx acts as a reverse proxy for incoming external (public) traffic. It terminates HTTPS and, together with Wireguard and the Istio Gateway, can be used to create private access channels. Nginx isolates the internal network so that external traffic does not land directly on the Istio Gateway of the Kubernetes cluster.

Ingress Gateway

The ingress gateway is the entry point for traffic into the Kubernetes cluster. In this deployment the Istio Ingress Gateway routes incoming requests to the services in each environment's namespace.

Rancher

Rancher is an open-source Kubernetes management platform. It provides a web console for administering the cluster, managing users and roles, and inspecting workloads across namespaces.

Keycloak

Keycloak is an open-source identity and access management (IAM) solution that provides single sign-on using standard protocols such as OpenID Connect, OAuth 2.0 and SAML.

Istio

Istio is a service mesh for Kubernetes that provides traffic routing, mutual TLS between services, and observability. Its ingress gateway, together with Wireguard and Nginx, forms part of the private access channel setup described above.

NFS

NFS (Network File System) provides shared storage for the cluster. Persistent volumes for the cluster's workloads can be provisioned on an NFS server so that data persists across pod restarts and is accessible from all nodes.

Prometheus & Grafana

Prometheus collects and stores metrics from the cluster and its workloads; Grafana visualises those metrics as dashboards. Together they provide monitoring and alerting for the deployment.

FluentD

Fluentd is an open-source log collector that gathers, parses and forwards logs from the pods in the cluster. See Logging and Fluentd.

OpenSearch

OpenSearch is an open-source search and analytics engine. Logs collected by Fluentd are indexed in OpenSearch, where they can be searched and visualised.

Resource requirements

For a full deployment you need the following:

  1. Hardware as per the requirements mentioned below.

  2. A public IP assigned to the machine if public access is enabled (for public-facing portals and apps).

Hardware requirements

Domain names

To access resources on the cluster, domain names and mappings are required. The suggested domain name convention is as follows:

<module>.<environment>.<organisation>.<tld>

Example:

  • spar.dev.openg2p.org

  • socialregistry.uat.openg2p.org

Domain mapping

Domain mapping to sandbox

  • dev.openg2p.net

  • uat.openg2p.net

  • staging.openg2p.org

Mapped to: "A" record pointing to the Load Balancer IP. (For a sandbox where an LB is not used, this can be mapped directly to the nodes of the K8s cluster, at least 3 nodes.)

Wildcard mapping to modules

  • *.dev.openg2p.net

  • *.uat.openg2p.net

  • *.staging.openg2p.org

Mapped to: "CNAME" record pointing to the domain of the above "A" record. (This is a wildcard DNS mapping.)

The domain name mapping needs to be done on your domain service provider. For example, on AWS this is configured on Route 53.
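Once configured at your provider, the records can be spot-checked from any machine with the standard `dig` utility. The domain below is one of the examples above, and the commands are echoed so the sketch runs without network access:

```shell
# Verify the A record and the wildcard CNAME after configuring DNS.
base="dev.openg2p.net"
echo "dig +short A $base"       # expect the load balancer (or node) IP(s)
echo "dig +short spar.$base"    # any subdomain should resolve via the wildcard CNAME
```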

Certificates

At least one wildcard certificate is required, depending on the domain names used above. This can also be generated using Let's Encrypt. See guide here.
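As one way of doing this, a wildcard certificate can be obtained with certbot's manual DNS-01 challenge. The domain and email below are placeholders, and the command is echoed here rather than executed:

```shell
# Let's Encrypt wildcard issuance requires a DNS-01 challenge: certbot prompts
# for a TXT record to add at your DNS provider. Domain and email are placeholders.
domain='*.dev.openg2p.net'
email='admin@example.org'
echo "certbot certonly --manual --preferred-challenges dns -d '$domain' -m $email --agree-tos"
```

Certificates issued this way expire after 90 days, so in practice renewal is usually automated (for example with a DNS plugin for certbot, or cert-manager inside the cluster).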

Deployment instructions

CONCEPTS: Before proceeding with the deployment, read up on the following topics to better understand each infrastructure component required for a successful setup:

  1. 🧑‍💻 Rancher

  2. 📝 Logging and Fluentd
