Automation

Single-node deployment automation

The entire deployment process for a single-node setup has been automated and is available as shell scripts. This is very useful for bringing up an OpenG2P sandbox on your own machine. The detailed manual below covers this automation:

Overview

Automated single-node deployment of the complete OpenG2P platform — from bare Ubuntu to running modules. Two scripts handle the entire lifecycle:

| Script | Purpose | Run when |
| --- | --- | --- |
| openg2p-infra.sh | Base infrastructure (K8s, Istio, Rancher, Keycloak, monitoring, SSO) | Once per machine |
| openg2p-environment.sh | Environment + modules (namespace, commons, Registry, PBMS, etc.) | Once per environment |


The source code for all automation scripts lives in the openg2p-deployment repository under automation/.

Domain Modes

The infrastructure script supports two modes — set domain_mode in your config:

| Mode | When to use | What you need | DNS | TLS |
| --- | --- | --- | --- | --- |
| local | Sandboxes, demos, pilots, air-gapped | Just a VM + its IP | dnsmasq (auto-installed) | Local CA + self-signed certs |
| custom (default) | Production, public-facing | Domain names + DNS records | Your DNS provider | Let's Encrypt (DNS-01) |

Local mode

Designed for getting OpenG2P running the same day, with zero external dependencies.

  • Installs dnsmasq on the VM to resolve *.openg2p.test to the VM's IP

  • Generates a local Certificate Authority with self-signed certs

  • Configures Wireguard VPN with split tunnel (only cluster traffic routed through VPN)

  • Hostnames are auto-derived: rancher.openg2p.test, keycloak.openg2p.test, etc.
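Under the hood, the dnsmasq piece amounts to a single wildcard rule. A minimal sketch, assuming the VM's IP is 10.0.0.5 and a drop-in file name of openg2p.conf (the script may use different values):

```shell
# Hypothetical dnsmasq drop-in: resolve *.openg2p.test (and the apex) to the VM
echo 'address=/openg2p.test/10.0.0.5' | sudo tee /etc/dnsmasq.d/openg2p.conf
sudo systemctl restart dnsmasq
```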


Custom mode

For production deployments with proper domain names. Requires DNS A records pointing to the VM and uses Let's Encrypt for trusted TLS certificates.

| Challenge method | Config value | How it works |
| --- | --- | --- |
| Manual DNS (default) | dns | Script pauses, shows the TXT record to create, waits for confirmation |
| Cloudflare automated | dns-cloudflare | Fully automated via Cloudflare API token |
| Route53 automated | dns-route53 | Fully automated via AWS credentials |
| HTTP challenge | http | Requires port 80 open to the internet |
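In config terms, custom mode might look like the sketch below. domain_mode is the documented key; the challenge and domain key names shown here are assumptions:

```shell
# Hypothetical config excerpt for custom mode (key names other than
# domain_mode are assumptions)
domain_mode=custom
base_domain=example.org        # assumed key: your public domain
tls_challenge=dns-cloudflare   # assumed key: dns | dns-cloudflare | dns-route53 | http
```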

Prerequisites

| Requirement | Local mode | Custom mode |
| --- | --- | --- |
| VM | Ubuntu 24.04 LTS, 16 vCPU, 64 GB RAM, 128 GB SSD | Same |
| Access | Root/sudo on the VM | Same |
| Internet | Required for downloading packages and Helm charts | Same |
| DNS | Not needed (dnsmasq handles it) | A records for Rancher + Keycloak hostnames |
| TLS | Not needed (local CA handles it) | DNS access for TXT records (Let's Encrypt) |

Quick Start

Step 1: Infrastructure Setup

SSH into the VM as root:
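The exact commands were not captured on this page; a plausible sketch, assuming the repository URL and that the scripts live under automation/ as noted above:

```shell
# Repository URL is an assumption; the automation/ path is documented above
ssh root@<vm-ip>
git clone https://github.com/OpenG2P/openg2p-deployment.git
cd openg2p-deployment/automation
```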

Only node_ip is required — everything else has defaults:
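A hedged sketch of the invocation; the config file name and argument syntax are assumptions:

```shell
# Hypothetical: write a minimal config (only node_ip is required) and run the script
cat > infra.conf <<'EOF'
node_ip=10.0.0.5
EOF
./openg2p-infra.sh infra.conf   # argument syntax is an assumption
```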

Takes ~15-25 minutes. Idempotent — re-run on failure.

Step 2: Environment Setup

After infrastructure is ready, create environments:

Everything is auto-derived from the infra config:
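A plausible invocation sketch (only the script name is documented; the argument syntax is an assumption):

```shell
# Hypothetical: create an environment named "dev"
./openg2p-environment.sh dev
```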

This creates namespace dev with domain dev.openg2p.test.
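The naming convention can be illustrated as a simple derivation of <env>.<base-domain>:

```shell
# Illustration of the documented naming: <env>.<base-domain>
env_name="dev"
base_domain="openg2p.test"
env_domain="${env_name}.${base_domain}"
echo "$env_domain"
```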

Takes ~15-20 minutes per environment.

Post-Infrastructure Steps (on your laptop)

After the infra script completes, follow these steps to access the cluster.

1. Wireguard VPN

Import peer1.conf into the Wireguard client app and activate the tunnel.


The default is split tunnel — only Wireguard subnet + VPC traffic routes through the VPN. Your internet stays direct and fast.

2. DNS Resolution (local mode only)

Note: dig bypasses the macOS resolver system. Use dscacheutil -q host -a name rancher.openg2p.test or ping to verify.
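The per-domain DNS setup itself was not captured on this page. On macOS, one common approach is a scoped resolver file, sketched below under the assumption that the VM (reachable over the VPN) answers DNS for openg2p.test:

```shell
# Hypothetical macOS scoped resolver: send *.openg2p.test queries to the
# VM's dnsmasq (VM IP is an assumption)
sudo mkdir -p /etc/resolver
echo 'nameserver 10.0.0.5' | sudo tee /etc/resolver/openg2p.test
```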

3. CA Certificate (local mode only)

Copy /etc/openg2p/ca/ca.crt from the VM to your laptop, then install:
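On macOS, the trust-store install can be done from the command line, assuming ca.crt is in the current directory:

```shell
# Add the local CA to the macOS system keychain and trust it as a root
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain ca.crt
```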

Or double-click ca.crt → System Settings → General → Profiles.

4. kubectl / helm Access
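The original instructions for this step were not captured. A common pattern is to copy the cluster's kubeconfig from the VM and point kubectl at it; the kubeconfig path below is an assumption (it depends on the Kubernetes distribution the script installs):

```shell
# Hypothetical: fetch kubeconfig from the VM and use it locally (path is assumed)
scp root@<vm-ip>:/etc/rancher/rke2/rke2.yaml ~/.kube/openg2p.yaml
# Edit the server field to use the VM's hostname/IP instead of 127.0.0.1, then:
export KUBECONFIG=~/.kube/openg2p.yaml
kubectl get nodes
```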


5. Login to Rancher

Open Rancher at https://rancher.<domain> — you should see a "Login with Keycloak" button.

Keycloak login (recommended): Use the email address configured in keycloak.admin_email (default: [email protected]). Retrieve the Keycloak admin password:
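The retrieval command was not captured here; a typical pattern for a Helm-deployed Keycloak is reading the admin secret, sketched with assumed namespace, secret, and key names:

```shell
# Hypothetical: namespace, secret name, and key are assumptions
kubectl -n keycloak get secret keycloak \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo
```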

Local admin login: Username admin. Retrieve the password:
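For a Helm-installed Rancher, the bootstrap password is conventionally stored in the cattle-system namespace; a sketch (the secret and key names may differ in this setup):

```shell
# Hypothetical: standard Rancher bootstrap secret; may differ here
kubectl -n cattle-system get secret bootstrap-secret \
  -o jsonpath='{.data.bootstrapPassword}' | base64 -d; echo
```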

User Access & Roles

Rancher ships with built-in project roles, all of which include full Secrets access. The automation script creates two additional custom roles that exclude secrets:

| Role | Source | Secrets access | Permissions |
| --- | --- | --- | --- |
| Project Owner | Rancher built-in | Full | Full control of the project |
| Project Member | Rancher built-in | Full | CRUD on workloads, services, configs, secrets |
| Project Member (No Secrets) | Created by automation | None | Same as Project Member, minus secrets |
| Project Read-Only (No Secrets) | Created by automation | None | View-only, no secrets |

To give a user access to an environment:

  1. Create the user in Keycloak (Admin Console → Users → Add user). Use their email as username.

  2. In Rancher, go to Project (environment) → Members → Add Member.

  3. Search by email and assign a role.


The Rancher admin global role (super admin) has access to everything. The initial admin user configured during setup already has this role.

Client-Manager Credentials

The infra script automatically creates a client-manager user in Keycloak's master realm. This service account is required by openg2p-environment.sh to create Keycloak clients for each environment.

  • Username: client-manager@<your-domain> (e.g., [email protected])

  • Password: Auto-generated and displayed in the script's final output

  • Roles: manage-clients, query-clients, view-clients

The password is saved on the VM at /var/lib/openg2p/deploy-state/client-manager-password.
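Reading it back is a one-liner on the VM:

```shell
# Path documented above
cat /var/lib/openg2p/deploy-state/client-manager-password
```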


Environment Setup Details

Phase 1: Environment Infrastructure

| Step | What | Details |
| --- | --- | --- |
| E1.1 | Validate prerequisites | Infra completed, kubeconfig works, credentials available |
| E1.2 | TLS certificate | Local: wildcard cert signed by CA. Custom: Let's Encrypt wildcard |
| E1.3 | Nginx server block | *.dev.openg2p.test → Istio ingress |
| E1.4 | K8s namespace | Creates the namespace |
| E1.5 | Rancher project | Creates project and moves namespace into it (RBAC) |
| E1.6 | Istio Gateway | Gateway resource for hostname routing |
| E1.7 | Keycloak secret | keycloak-client-manager K8s secret in namespace |

Phase 2: Module Installation

openg2p-commons is split into two Helm charts installed sequentially:

| Step | Chart | Details |
| --- | --- | --- |
| E2.1 | openg2p-commons-base | PostgreSQL, Kafka, MinIO, OpenSearch, Redis, SoftHSM, keycloak-init |
| E2.2 | openg2p-commons-services | eSignet, KeyManager, Superset, ODK, master-data, reporting |
| (future) | Registry, PBMS, SPAR, G2P Bridge | Will be added as separate Helm installs |


The services chart automatically connects to base infrastructure via release name references (commons-postgresql, commons-redis, etc.).

Command Reference

Infrastructure

Environment

Uninstalling

Remove a single environment (keeps infrastructure intact):
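The exact flag was not captured here; a hedged sketch assuming an uninstall flag:

```shell
# Hypothetical flag syntax: remove only the "dev" environment
./openg2p-environment.sh --uninstall dev
```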

Remove the entire infrastructure (destroys everything):
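The exact flag was not captured here; a hedged sketch assuming a matching uninstall flag:

```shell
# Hypothetical flag syntax: tear down the whole node's infrastructure
./openg2p-infra.sh --uninstall
```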


File Structure
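The original listing was not captured on this page; the on-VM paths that are documented elsewhere in this guide are:

```shell
# Paths stated in this guide; the rest of the layout is not reproduced here
ls /etc/openg2p/ca/ca.crt                                  # local CA certificate (local mode)
ls /var/lib/openg2p/deploy-state/client-manager-password   # client-manager credential
```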

Troubleshooting


Script failed? Re-run it. Completed steps are skipped. Error messages include diagnostic commands.

Local DNS not resolving on your laptop? Ensure Wireguard VPN is connected. Configure per-domain DNS on your laptop (see Step 2 above). On macOS, dig bypasses the resolver system — use ping or dscacheutil to test.

Browser shows certificate warning in local mode? Install the CA certificate on your laptop (see Step 3 above).

Check cluster status:
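Typical commands (standard kubectl, not specific to these scripts):

```shell
# Overall node and workload health
kubectl get nodes
kubectl get pods -A | grep -v Running   # surface anything not healthy
```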


This automation does not replace the Rancher UI. Your existing umbrella Helm charts with questions.yml continue to work for manual installs via the Rancher App Catalog.
