Automation
Single-node deployment automation
The entire deployment process for a single-node setup has been automated and is available as shell scripts. This is very useful for bringing up an OpenG2P sandbox on your own machine. See the detailed manual on this automation below:
Overview
Automated single-node deployment of the complete OpenG2P platform — from bare Ubuntu to running modules. Two scripts handle the entire lifecycle:
| Script | Scope | Run frequency |
| --- | --- | --- |
| `openg2p-infra.sh` | Base infrastructure (K8s, Istio, Rancher, Keycloak, monitoring, SSO) | Once per machine |
| `openg2p-environment.sh` | Environment + modules (namespace, commons, Registry, PBMS, etc.) | Once per environment |
The source code for all automation scripts lives in the openg2p-deployment repository under automation/.
Domain Modes
The infrastructure script supports two modes — set domain_mode in your config:
| Mode | Use cases | Requirements | DNS | TLS |
| --- | --- | --- | --- | --- |
| `local` | Sandboxes, demos, pilots, air-gapped | Just a VM + its IP | dnsmasq (auto-installed) | Local CA + self-signed certs |
| `custom` (default) | Production, public-facing | Domain names + DNS records | Your DNS provider | Let's Encrypt (DNS-01) |
Local mode
Designed for getting OpenG2P running the same day, with zero external dependencies.
Installs:

- dnsmasq on the VM to resolve `*.openg2p.test` to the VM's IP
- A local Certificate Authority that generates self-signed certs
- Wireguard VPN with split tunnel (only cluster traffic routed through the VPN)
Hostnames are auto-derived:
`rancher.openg2p.test`, `keycloak.openg2p.test`, etc.
Can be migrated to custom mode later when real domain names are available.
Custom mode
For production deployments with proper domain names. Requires DNS A records pointing to the VM and uses Let's Encrypt for trusted TLS certificates.
| Method | Mode value | Notes |
| --- | --- | --- |
| Manual DNS (default) | `dns` | Script pauses, shows the TXT record to create, waits for confirmation |
| Cloudflare automated | `dns-cloudflare` | Fully automated via Cloudflare API token |
| Route53 automated | `dns-route53` | Fully automated via AWS credentials |
| HTTP challenge | `http` | Requires port 80 open to the internet |
Prerequisites
| Prerequisite | Local mode | Custom mode |
| --- | --- | --- |
| VM | Ubuntu 24.04 LTS, 16 vCPU, 64 GB RAM, 128 GB SSD | Same |
| Access | Root/sudo on the VM | Same |
| Internet | Required for downloading packages and Helm charts | Same |
| DNS | Not needed (dnsmasq handles it) | A records for Rancher + Keycloak hostnames |
| TLS | Not needed (local CA handles it) | DNS access for TXT records (Let's Encrypt) |
Quick Start
Step 1: Infrastructure Setup
SSH into the VM as root:
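For example (IP address is a placeholder, and the clone URL/branch for the openg2p-deployment repository may differ in your setup):

```shell
# SSH into the VM as root (replace with your VM's IP)
ssh root@203.0.113.10

# Fetch the automation scripts; the scripts live under automation/
git clone https://github.com/OpenG2P/openg2p-deployment.git
cd openg2p-deployment/automation
```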
Only node_ip is required — everything else has defaults:
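A minimal local-mode sketch — the config file name, key names, and `--config` flag are illustrative assumptions; check the scripts' own documentation for the exact format:

```shell
# config.env — illustrative; actual variable names may differ
domain_mode=local
node_ip=203.0.113.10    # the only required setting in local mode

sudo bash openg2p-infra.sh --config config.env
```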
Requires domain names and DNS records:
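A custom-mode sketch under the same caveat (key names are assumptions; the challenge values are the ones listed above):

```shell
# config.env — illustrative custom-mode settings
domain_mode=custom
node_ip=203.0.113.10
base_domain=openg2p.example.org
acme_challenge=dns      # or dns-cloudflare / dns-route53 / http

sudo bash openg2p-infra.sh --config config.env
```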
For AWS, also set the public IP for Wireguard:
Before running the script, create the security group:
After, attach it and disable source/dest check:
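The AWS steps above could look like the following AWS CLI sketch. Group names, VPC/instance IDs, and the config key for the public IP are placeholders; 51820/udp is Wireguard's default port:

```shell
# Also set the public IP in the config (illustrative key name)
# public_ip=198.51.100.7

# Create a security group allowing HTTPS and Wireguard
aws ec2 create-security-group --group-name openg2p-sg \
  --description "OpenG2P single node" --vpc-id vpc-0abc123
aws ec2 authorize-security-group-ingress --group-name openg2p-sg \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name openg2p-sg \
  --protocol udp --port 51820 --cidr 0.0.0.0/0   # Wireguard default port

# Attach the group, then disable source/dest check (needed for VPN routing)
aws ec2 modify-instance-attribute --instance-id i-0abc123 --groups sg-0abc123
aws ec2 modify-instance-attribute --instance-id i-0abc123 --no-source-dest-check
```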
Takes ~15–25 minutes. The script is idempotent — re-run it on failure and completed steps are skipped.
Step 2: Environment Setup
After infrastructure is ready, create environments:
Everything is auto-derived from the infra config:
This creates namespace dev with domain dev.openg2p.test.
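In local mode, the default invocation can be sketched as (the `--config` flag is illustrative; with no overrides everything comes from the infra config):

```shell
# Creates namespace "dev" with domain dev.openg2p.test
sudo bash openg2p-environment.sh
```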
Set base_domain and Keycloak credentials explicitly:
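A custom-mode sketch — key names are assumptions, and the client-manager password comes from the infra script's final output:

```shell
# env-config.env — illustrative
base_domain=openg2p.example.org
keycloak_client_manager_user=client-manager@openg2p.example.org
keycloak_client_manager_password=<from infra script output>

sudo bash openg2p-environment.sh --config env-config.env
```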
Run the script multiple times with different configs:
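For example (illustrative config file names):

```shell
# One run per environment, each with its own config
sudo bash openg2p-environment.sh --config dev.env       # dev.openg2p.test
sudo bash openg2p-environment.sh --config staging.env   # staging.openg2p.test
```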
Takes ~15-20 minutes per environment.
Post-Infrastructure Steps (on your laptop)
After the infra script completes, follow these steps to access the cluster.
1. Wireguard VPN
Import peer1.conf into the Wireguard client app and activate the tunnel.
The default is split tunnel — only Wireguard subnet + VPC traffic routes through the VPN. Your internet stays direct and fast.
2. DNS Resolution (local mode only)
Note:
`dig` bypasses the macOS resolver system. Use `dscacheutil -q host -a name rancher.openg2p.test` or `ping` to verify.
Run in PowerShell as Administrator:
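On Windows, a per-domain rule can be created with `Add-DnsClientNrptRule -Namespace ".openg2p.test" -NameServers <vm-ip>` in an elevated PowerShell. On macOS, an equivalent per-domain resolver can be sketched as follows (the VM IP is a placeholder — use the address reachable over the VPN):

```shell
# macOS: send only *.openg2p.test queries to the VM's dnsmasq (VPN must be up)
sudo mkdir -p /etc/resolver
echo "nameserver 203.0.113.10" | sudo tee /etc/resolver/openg2p.test

# Verify — dig bypasses the macOS resolver, so use dscacheutil or ping
dscacheutil -q host -a name rancher.openg2p.test
```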
3. CA Certificate (local mode only)
Copy /etc/openg2p/ca/ca.crt from the VM to your laptop, then install:
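The standard command-line installs on macOS and Ubuntu look like this:

```shell
# macOS: trust the CA system-wide
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain ca.crt

# Ubuntu/Debian: add to the system trust store
sudo cp ca.crt /usr/local/share/ca-certificates/openg2p-ca.crt
sudo update-ca-certificates
```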
Or double-click ca.crt → System Settings → General → Profiles.
Double-click ca.crt → Install Certificate → Local Machine → "Trusted Root Certification Authorities"
4. kubectl / helm Access
Requires Wireguard VPN to be active — the K8s API is on the private IP.
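A sketch of fetching the kubeconfig, assuming the standard RKE2 path on the VM (IPs are placeholders — use the VM's VPN/private IP):

```shell
# Copy the RKE2 kubeconfig from the VM
scp root@203.0.113.10:/etc/rancher/rke2/rke2.yaml ~/.kube/openg2p.yaml

# Point it at the private IP, reachable only over the Wireguard tunnel
sed -i.bak 's/127.0.0.1/10.0.0.1/' ~/.kube/openg2p.yaml
export KUBECONFIG=~/.kube/openg2p.yaml

kubectl get nodes
helm list -A
```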
5. Login to Rancher
Open Rancher at https://rancher.<domain> — you should see a "Login with Keycloak" button.
Keycloak login (recommended): Use the email address configured in keycloak.admin_email (default: [email protected]). Retrieve the Keycloak admin password:
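An illustrative retrieval — the secret name and namespace depend on how the Keycloak chart was installed, so `keycloak`/`admin-password` here are assumptions:

```shell
kubectl -n keycloak get secret keycloak \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo
```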
Local admin login: Username admin. Retrieve the password:
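Rancher installed via its Helm chart stores the bootstrap admin password in a well-known secret:

```shell
kubectl -n cattle-system get secret bootstrap-secret \
  -o jsonpath='{.data.bootstrapPassword}' | base64 -d; echo
```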
User Access & Roles
Rancher ships with built-in project roles, but all include full Secrets access. The automation script creates two additional custom roles that exclude secrets:
| Role | Origin | Secrets access | Description |
| --- | --- | --- | --- |
| Project Owner | Rancher built-in | Full | Full control of the project |
| Project Member | Rancher built-in | Full | CRUD on workloads, services, configs, secrets |
| Project Member (No Secrets) | Created by automation | None | Same as Project Member, minus secrets |
| Project Read-Only (No Secrets) | Created by automation | None | View-only, no secrets |
To give a user access to an environment:
1. Create the user in Keycloak (Admin Console → Users → Add user). Use their email as username.
2. In Rancher, go to Project (environment) → Members → Add Member.
3. Search by email and assign a role.
The Rancher admin global role (super admin) has access to everything. The initial admin user configured during setup already has this role.
Client-Manager Credentials
The infra script automatically creates a client-manager user in Keycloak's master realm. This service account is required by openg2p-environment.sh to create Keycloak clients for each environment.
- Username: `client-manager@<your-domain>` (e.g., `[email protected]`)
- Password: Auto-generated and displayed in the script's final output
- Roles: `manage-clients`, `query-clients`, `view-clients`
The password is saved on the VM at /var/lib/openg2p/deploy-state/client-manager-password.
Note down the client-manager password from the script output — you'll need it when running openg2p-environment.sh.
Environment Setup Details
Phase 1: Environment Infrastructure
| Step | Action | Details |
| --- | --- | --- |
| E1.1 | Validate prerequisites | Infra completed, kubeconfig works, credentials available |
| E1.2 | TLS certificate | Local: wildcard cert signed by CA. Custom: Let's Encrypt wildcard |
| E1.3 | Nginx server block | `*.dev.openg2p.test` → Istio ingress |
| E1.4 | K8s namespace | Creates the namespace |
| E1.5 | Rancher Project | Creates project and moves namespace into it (RBAC) |
| E1.6 | Istio Gateway | Gateway resource for hostname routing |
| E1.7 | Keycloak secret | `keycloak-client-manager` K8s secret in namespace |
Phase 2: Module Installation
openg2p-commons is split into two Helm charts installed sequentially:
| Step | Chart / modules | Components / notes |
| --- | --- | --- |
| E2.1 | `openg2p-commons-base` | PostgreSQL, Kafka, MinIO, OpenSearch, Redis, SoftHSM, keycloak-init |
| E2.2 | `openg2p-commons-services` | eSignet, KeyManager, Superset, ODK, master-data, reporting |
| (future) | Registry, PBMS, SPAR, G2P Bridge | Will be added as separate Helm installs |
The services chart automatically connects to base infrastructure via release name references (commons-postgresql, commons-redis, etc.).
Command Reference
Infrastructure
Environment
Uninstalling
Remove a single environment (keeps infrastructure intact):
Remove the entire infrastructure (destroys everything):
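An illustrative sketch — the actual uninstall flags are defined by the scripts themselves, so `--uninstall` and `--config` are assumptions:

```shell
# Remove one environment, keeping infrastructure intact
sudo bash openg2p-environment.sh --uninstall --config dev.env

# Remove everything (prompts you to type "DELETE EVERYTHING")
sudo bash openg2p-infra.sh --uninstall
```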
Infrastructure uninstall requires typing DELETE EVERYTHING to confirm. Removes: RKE2 cluster, Wireguard VPN, dnsmasq, Nginx, NFS, TLS certificates, and all state. The VM is left clean for a fresh installation.
File Structure
Troubleshooting
Script failed? Re-run it. Completed steps are skipped. Error messages include diagnostic commands.
Local DNS not resolving on your laptop? Ensure Wireguard VPN is connected. Configure per-domain DNS on your laptop (see Step 2 above). On macOS, dig bypasses the resolver system — use ping or dscacheutil to test.
Browser shows certificate warning in local mode? Install the CA certificate on your laptop (see Step 3 above).
Check cluster status:
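Standard checks with kubectl (run from your laptop with the VPN up, or on the VM):

```shell
kubectl get nodes -o wide
kubectl get pods -A | grep -v Running    # show anything not yet healthy
kubectl get events -A --sort-by=.lastTimestamp | tail -20
```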
This automation does not replace the Rancher UI. Your existing umbrella Helm charts with questions.yml continue to work for manual installs via the Rancher App Catalog.