Work in progress
Purpose | vCPUs | RAM | Storage (SSD) | Number of Virtual Machines* | Preferred Operating System |
---|---|---|---|---|---|
Cluster nodes | 8 | 32 GB | 128 GB | 3 | Ubuntu Server 20.04 |
Wireguard | 4 | 16 GB | 64 GB | 1 | Ubuntu Server 20.04 |

Purpose | vCPUs | RAM | Storage (SSD) | Number of Virtual Machines* | Preferred Operating System |
---|---|---|---|---|---|
Cluster nodes | 8 | 32 GB | 128 GB | 3 | Ubuntu Server 20.04 |
Wireguard | 4 | 16 GB | 64 GB | 1 | Ubuntu Server 20.04 |
Backup | 4 | 16 GB | 512 GB | 1 | Ubuntu Server 20.04 |

TBD

All the machines should be in the same network.
A public IP should be assigned to the Wireguard machine.
The following domain names and mappings will be required. Examples:

Domain Name (examples) | Mapped to |
---|---|
openg2p.<your domain> <br> uat.<your domain> <br> pilot.openg2p.<your domain> | "A" record mapped to the Load Balancer IP, or to at least 3 nodes of the K8s cluster |
*.openg2p.<your domain> <br> *.uat.<your domain> <br> *.pilot.openg2p.<your domain> | "CNAME" record mapped to the above domain (this is a wildcard DNS mapping) |

At least one wildcard certificate is required, depending on the above domain names used. This can also be generated using letsencrypt.
Work in progress
Install letsencrypt and certbot.
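For illustration, certbot can be installed on Ubuntu via apt (other install methods, such as snap, also work):

```bash
# Install certbot, the Let's Encrypt client (Ubuntu/apt shown here).
sudo apt update
sudo apt install -y certbot
```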
Generate Certificate.
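A typical command for a wildcard certificate using the manual DNS-01 challenge is sketched below; the domain names are placeholders and should match the domains chosen above:

```bash
# Request a wildcard certificate using the manual DNS challenge.
# Add/adjust -d flags to match your chosen domain names.
sudo certbot certonly --manual --preferred-challenges dns \
    -d "openg2p.<your domain>" \
    -d "*.openg2p.<your domain>"
```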
The above command will ask for an _acme-challenge record, since the chosen challenge is of type DNS. Create the _acme-challenge TXT DNS record accordingly, and continue with the prompt to complete certificate generation.
The generated certs should be present in the /etc/letsencrypt directory.
Run the same certificate generation command to renew the certs.
The above command will generate a new pair of certificates. The DNS challenge needs to be performed again, as prompted.
Run the following to upload the new certs back to the Kubernetes cluster. Adjust the certs path in the below command.
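A sketch of the upload, assuming the ingress gateway references a TLS secret named tls-openg2p in the istio-system namespace (the secret name, namespace, and cert paths are assumptions to be adjusted):

```bash
# Recreate the TLS secret from the renewed certificates and apply it.
kubectl -n istio-system create secret tls tls-openg2p \
    --cert=/etc/letsencrypt/live/openg2p.<your domain>/fullchain.pem \
    --key=/etc/letsencrypt/live/openg2p.<your domain>/privkey.pem \
    --dry-run=client -o yaml | kubectl apply -f -
```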
Work in progress
Rancher is used to manage multiple clusters. Being a critical component of cluster administration, it is highly recommended that Rancher itself runs on a Kubernetes cluster with sufficient replication for high availability, avoiding a single point of failure.
Set up a new RKE2 cluster. Refer to the K8s Cluster Setup guide.
Do not remove the stock ingress controller in the server config.
No need to install Istio.
It is recommended to set up a two-node cluster for high availability. However, for non-production environments, you may create a single-node cluster to conserve resources.
To install Rancher use this (hostname to be edited in the below command):
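A sketch based on Rancher's standard Helm installation; edit the hostname, and note that ingress.tls.source=secret expects the TLS secret created in the next step:

```bash
# Install Rancher from its helm repo into the cattle-system namespace.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
    --namespace cattle-system \
    --set hostname=rancher.<your domain> \
    --set ingress.tls.source=secret
```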
Configure/Create TLS secret accordingly.
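For example, when ingress.tls.source=secret is used, Rancher expects a secret named tls-rancher-ingress in the cattle-system namespace (the cert/key paths below are placeholders):

```bash
# Create the TLS secret used by the Rancher ingress.
kubectl -n cattle-system create secret tls tls-rancher-ingress \
    --cert=fullchain.pem \
    --key=privkey.pem
```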
Install Longhorn as a Rancher App.
From infra folder, run the following to install Keycloak (hostname to be edited in the below command).
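The actual chart and values file live in the infra folder; purely as an illustration, an ingress-enabled Keycloak install with the Bitnami chart would look like this (the chart, release name, namespace, and hostname are assumptions):

```bash
# Illustrative only: the actual chart/values in the infra folder may differ.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install keycloak bitnami/keycloak \
    --namespace keycloak-system --create-namespace \
    --set ingress.enabled=true \
    --set ingress.hostname=keycloak.<your domain>
```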
Integrate Rancher and Keycloak using the Rancher Auth - Keycloak (SAML) guide.
Work in progress
The following guide uses RKE2 to set up the Kubernetes (K8s) cluster.
Ensure the requirements for setting up the cluster are met, as given in the requirements section.
The following tools should be installed on all the nodes and the client machine: ufw, wget, curl, kubectl, istioctl, helm, jq.
Set up firewall rules on each node. The following uses ufw to set up the firewall.
SSH into each node, and change to superuser.
Run the following command for each rule in the table below; an example follows the table.

Protocol | Port | Should be accessible by only | Description |
---|---|---|---|
TCP | 22 | | SSH |
TCP | 80 | | HTTP |
TCP | 443 | | HTTPS |
TCP | 5432:5434 | | Postgres ports |
TCP | 9345 | RKE2 agent nodes | Kubernetes API |
TCP | 6443 | RKE2 agent nodes | Kubernetes API |
UDP | 8472 | RKE2 server and agent nodes | Required only for Flannel VXLAN |
TCP | 10250 | RKE2 server and agent nodes | kubelet |
TCP | 2379 | RKE2 server nodes | etcd client port |
TCP | 2380 | RKE2 server nodes | etcd peer port |
TCP | 30000:32767 | RKE2 server and agent nodes | NodePort port range |
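For example, to allow the Kubernetes API port only from the nodes' subnet (the subnet below is a placeholder):

```bash
# Allow TCP 6443 only from the cluster's internal subnet (example range).
ufw allow from 10.0.0.0/24 to any proto tcp port 6443
```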
Enable ufw.
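For example:

```bash
# Turn on the firewall and verify the configured rules.
ufw enable
ufw status
```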
Additional Reference:
The following setup has to be done for each cluster node.
Choose an odd number of server nodes. For example, if there are 3 nodes, choose 1 server node and 2 agent nodes; if there are 7 nodes, choose 3 server nodes and 4 agent nodes.
For the first server node:
Configure rke2-server.conf.primary.template.
SSH into the node and place the file at this path: /etc/rancher/rke2/config.yaml. Create the directory if it is not already present: mkdir -p /etc/rancher/rke2.
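For instance (the template's source path is illustrative):

```bash
# Create the rke2 config directory and place the configured template as config.yaml.
mkdir -p /etc/rancher/rke2
cp rke2-server.conf.primary.template /etc/rancher/rke2/config.yaml
```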
Run this to download rke2.
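A typical install using the RKE2 install script:

```bash
# Download and install rke2 (defaults to the server type).
curl -sfL https://get.rke2.io | sh -
```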
Run this to start rke2 server:
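For example:

```bash
# Enable and start the rke2 server service.
systemctl enable rke2-server.service
systemctl start rke2-server.service
```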
For subsequent server and agent nodes:
Configure rke2-server.conf.subsequent.template or rke2-agent.conf.template, with relevant IPs for each node.
SSH into each node and place the relevant file at this path: /etc/rancher/rke2/config.yaml, based on whether it is a worker node or a control-plane node (if worker, use the agent file; if control-plane, use the server file).
Run this to download rke2.
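Same install script as before; on agent (worker) nodes set the install type to agent:

```bash
# Install rke2; INSTALL_RKE2_TYPE="agent" is only needed on worker nodes.
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
```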
To start rke2, use the rke2-server or rke2-agent service, based on whether the node is a server or an agent.
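For example:

```bash
# On server (control-plane) nodes:
systemctl enable rke2-server.service
systemctl start rke2-server.service

# On agent (worker) nodes:
systemctl enable rke2-agent.service
systemctl start rke2-agent.service
```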
Execute these commands on a server node.
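The original commands are not shown here; a plausible sketch, assuming the intent is to access the cluster with kubectl using the kubeconfig generated by rke2:

```bash
# Point kubectl at rke2's generated kubeconfig and verify the nodes.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH=$PATH:/var/lib/rancher/rke2/bin
kubectl get nodes
```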
Navigate to Cluster Management section in Rancher.
Click on Import Existing cluster, and follow the steps to import the newly created cluster.
After importing into Rancher, do not use the kubeconfig from the server anymore; use only the kubeconfig downloaded from Rancher.
The following setup can be done from the client machine. This installs the Istio Operator, Istio service mesh, and Istio Ingressgateway components.
Gather the wildcard TLS certificate and key, and run:
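A sketch, assuming the certificates are loaded into a TLS secret in the istio-system namespace (the secret name must match the gateway's credentialName below):

```bash
# Create the wildcard TLS secret used by the Istio ingress gateway.
kubectl -n istio-system create secret tls tls-openg2p \
    --cert=fullchain.pem \
    --key=privkey.pem
```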
Create istio gateway for all hosts using this command:
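A sketch of such a gateway; the resource name, namespace, and credentialName are assumptions and should match your setup:

```bash
# Apply an Istio Gateway that accepts all hosts on the ingress gateway.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: openg2p-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: tls-openg2p
      hosts:
        - "*"
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
EOF
```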
Configure the config.yaml with relevant values.
Run this to download rke2.
Run this to start rke2 node:
Clone the repository and go to the relevant directory.
Additional Reference:
This section assumes a Rancher server has already been set up and is operational. Set up Rancher in case this is not already done.
Use this to install Longhorn.
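For example, per the Longhorn docs (it can equivalently be installed as a Rancher App, as mentioned earlier):

```bash
# Install Longhorn via helm.
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
    --namespace longhorn-system --create-namespace
```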
From the relevant directory, configure the istio-operator.yaml, and run:
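A sketch, assuming the Istio operator workflow (istioctl operator init is available in Istio versions that ship the operator):

```bash
# Initialize the Istio operator, then apply the configured IstioOperator resource.
istioctl operator init
kubectl apply -f istio-operator.yaml
```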
From the directory, take either the rke2-server.conf.subsequent.template or the rke2-agent.conf.template, based on whether the new node is a control-plane node or a worker node. Copy this file to /etc/rancher/rke2/config.yaml on the new node.
Work in progress
The guide here provides instructions to deploy OpenG2P on a Kubernetes (K8s) cluster.
The K8s cluster is set up as given here.
This section assumes the OpenG2P docker image is already packaged. See the Packaging Instructions.
Clone the https://github.com/OpenG2P/openg2p-packaging repository and go to the charts/openg2p directory.
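For example:

```bash
git clone https://github.com/OpenG2P/openg2p-packaging
cd openg2p-packaging/charts/openg2p
```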
Run the following (this installs the ref-impl dockers):
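A sketch of installing the local chart; the release name, namespace, and value names are assumptions and may differ from the actual chart:

```bash
# Install the OpenG2P chart from the current directory (names are illustrative).
helm dependency update
helm install openg2p . \
    --namespace openg2p --create-namespace \
    --set hostname=openg2p.<your domain>
```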
To use a different docker image or tag, use:
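Assuming the chart exposes image repository and tag values (the value names may differ):

```bash
# Override the docker image and tag (value names are illustrative).
helm install openg2p . \
    --namespace openg2p --create-namespace \
    --set image.repository=<your docker repo> \
    --set image.tag=<your image tag>
```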
From the charts/odk-central directory, run the following to install the ODK Central helm chart:
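A sketch, with an assumed release name and namespace:

```bash
# Install the ODK Central chart from the current directory.
helm install odk-central . --namespace odk --create-namespace
```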
Note: The above helm chart uses docker images built from https://github.com/getodk/central/tree/v2023.1.0, since ODK Central doesn't provide pre-built docker images for these.
Post installation:
Exec into the service pod, and create a user (and promote if required).
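A sketch using ODK Central's odk-cmd admin CLI; the namespace and deployment name are assumptions:

```bash
# Create a web user and promote it to administrator from inside the service pod.
kubectl -n odk exec -it deploy/odk-central-service -- \
    odk-cmd --email user@example.com user-create
kubectl -n odk exec -it deploy/odk-central-service -- \
    odk-cmd --email user@example.com user-promote
```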
Uninstallation:
To uninstall, just delete the helm installation of odk-central. Example:
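For example (the namespace is an assumption):

```bash
# Remove the odk-central helm release.
helm delete odk-central --namespace odk
```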