Work in progress
Purpose | vCPUs | RAM | Storage (SSD) | Number of Virtual Machines* | Preferred Operating System |
---|---|---|---|---|---|
Cluster nodes | 8 | 32 GB | 128 GB | 3 | Ubuntu Server 20.04 |
Wireguard | 4 | 16 GB | 64 GB | 1 | Ubuntu Server 20.04 |
Backup | 4 | 16 GB | 512 GB | 1 | Ubuntu Server 20.04 |

TBD

All the machines are in the same network.

A public IP is assigned to the Wireguard machine.

The following domain names and mappings will be required. Examples:

At least one wildcard certificate is required, depending on the above domain names used. This can also be generated using Letsencrypt.

Domain Name (examples) | Mapped to |
---|---|
openg2p.<your domain>, uat.<your domain>, pilot.openg2p.<your domain> | "A" Record mapped to Load Balancer IP or at least 3 nodes of the K8s Cluster |
*.openg2p.<your domain>, *.uat.<your domain>, *.pilot.openg2p.<your domain> | "CNAME" Record mapped to the above domain. (This is a wildcard DNS mapping) |
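Once the records are created, the mappings above can be quickly verified with `dig` (the domain below is a placeholder; requires network access and the `dnsutils`/`bind-tools` package):

```shell
# Placeholders: substitute your actual domain names.
dig +short openg2p.example.org A      # should return the Load Balancer IP (or node IPs)
dig +short test.openg2p.example.org   # the wildcard CNAME should resolve via the record above
```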
Work in progress
Install letsencrypt and certbot.
Generate Certificate.
The above command will prompt for an `_acme-challenge` value, since the chosen challenge is of type DNS. Create the corresponding `_acme-challenge` TXT DNS record, and continue with the prompt to complete certificate generation.
The generated certs should be present in the `/etc/letsencrypt` directory.
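The generation step above can be sketched with certbot's manual DNS-01 challenge (domains are placeholders; adjust to your wildcard domains):

```shell
# Manual DNS-01 challenge; certbot will prompt with the _acme-challenge
# TXT record value to be created. Domains below are placeholders.
sudo certbot certonly --manual --preferred-challenges dns \
  -d "openg2p.example.org" -d "*.openg2p.example.org"
```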
Run the same certificate generation command to renew certs. It will generate a new pair of certificates; the DNS challenge needs to be performed again, as prompted.
Run the following to upload new certs back to Kubernetes Cluster. Adjust the certs path in the below command.
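Uploading the renewed certs can look like the following sketch (the secret name, namespace, and live-directory path are assumptions; use the values your ingress/gateway actually references):

```shell
# Paths and names are placeholders; adjust to your domain and cluster.
kubectl -n istio-system create secret tls wildcard-tls-secret \
  --cert=/etc/letsencrypt/live/openg2p.example.org/fullchain.pem \
  --key=/etc/letsencrypt/live/openg2p.example.org/privkey.pem \
  --dry-run=client -o yaml | kubectl apply -f -
```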
Work in progress
To deploy OpenG2P for sandbox, staging and production environments refer to the Deployment on Kubernetes guide.
To install OpenG2P on your work machine for development refer to the Getting Started guide in the Developer Zone.
Work in progress
Once the Odoo server is up, log in as admin, and enter debug mode in Odoo.
Go to Settings -> Technical -> System Parameters.
Configure `web.base.url` to your required base URL.
Create another system parameter with the name `web.base.url.freeze` and value `True`.
Create another system parameter with the name `auth_oauth.authorization_header` and value `True`.
Go to the Apps section in the UI, and click the Update Apps List action at the top.
Search through and install required G2P Apps & Modules.
After all apps are installed, proceed to create users and assign roles.
Do not use the `admin` user after this step. Log back in as a regular user.
Configure ID Types under Registry -> Configuration.
WIP.
Work in progress
Rancher is used to manage multiple clusters. Being a critical component of cluster administration, it is highly recommended that Rancher itself run on a Kubernetes cluster with sufficient replication for high availability, avoiding a single point of failure.
Set up a new RKE2 cluster. Refer to the guide.
Do not remove the stock ingress controller in the server config.
No need to install Istio.
It is recommended to set up a double-node cluster for high availability. However, for non-production environments, you may create a single-node cluster to conserve resources.
To install Rancher use this (hostname to be edited in the below command):
Configure/Create TLS secret accordingly.
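For reference, a typical Rancher install via its Helm chart looks like the following sketch (the hostname is a placeholder; `ingress.tls.source=secret` assumes the TLS secret created in the previous step):

```shell
# Hostname is a placeholder; replace with your Rancher domain.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.org \
  --set ingress.tls.source=secret
```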
Work in progress
This page contains steps for packaging different components and addons of OpenG2P (and similar) into a Docker image.
Clone the repository and go to the relevant directory.
Create a text file, for example `my-package.txt`. This signifies a package. The file should list all OpenG2P modules (repositories) to be packaged into the new Docker image. Each line describes one repository to include, and the structure of each line looks like this.
Any underscore in the repository name will be converted to a hyphen during installation. For example, `repo_name` is internally converted to `repo-name`.
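The conversion can be illustrated with a plain bash substitution (the repository name here is only an example):

```shell
# Bash parameter expansion replacing every underscore with a hyphen,
# mirroring the conversion described above.
repo="openg2p_registry"    # example repository name
converted="${repo//_/-}"
echo "$converted"          # prints openg2p-registry
```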
The above configuration can also be made via environment variables. Any variable with the prefix `G2P_PACKAGE_my_package_` will be considered a repository to install, i.e. `G2P_PACKAGE_<package_name>_<repo_name>`. For example;
These env variables can be added to a `.env` file (in the same folder). The `.env` file will automatically be considered.
If the same package is available in `my-package.txt`, `.env`, and environment variables, then this is the preference order in which they are considered (highest to lowest):

1. `.env` file
2. Environment variable
3. `my-package.txt`

Use the `.env` file to override packages from `my-package.txt`.
Run the following to download all packages:
After downloading the packages, run the following to build the Docker image:
Then push the image.
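The build-and-push flow typically looks like this sketch (the registry, image name, and tag are placeholders; the actual build script and its arguments live in the packaging repo):

```shell
# Placeholders throughout; substitute your registry, image name, and tag.
docker build -t registry.example.org/openg2p/odoo:develop .
docker push registry.example.org/openg2p/odoo:develop
```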
Notes:
The above uses Bitnami's Odoo image as the base.
This script also pulls in any OCA dependencies configured in `oca_dependencies.txt` inside each package. Use the env variable `OCA_DEPENDENCY_VERSION` to change the version of OCA dependencies to be pulled (defaults to `15.0`).
This also installs any Python requirements configured in `requirements.txt` inside each package.
Reference packages can be found in the packages directory inside the packaging directory.
The table below enumerates various admin/user access to the entire deployment. This includes access to machines, Rancher, Kubernetes cluster as well as OpenG2P application.
Resource | Role | Password/key | Access method | Providing further access |
---|---|---|---|---|
The guide below provides steps to give Wireguard access to users' devices (called peers). Note that access must be provided to each unique device (desktop, laptop, mobile phone, etc.). Multiple logins with the same conf file are not possible.
The Wireguard conf file MUST NOT be shared with any other users for security reasons.
Log in to the Wireguard node via SSH.
Navigate to the Wireguard conf folder.
You will see several pre-created peer config files. You may assign any one file (not previously assigned) to a new peer/user.
Edit the `assigned.txt` file to assign a new peer (client/user). Make sure each conf file is assigned to a unique user; an already-assigned file must never be re-assigned to another user.
Add the peers with name as mentioned below. Example:
Share the conf file with the peer/user securely. Example: peer1/peer1.conf
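On a typical dockerized Wireguard setup, the flow can be sketched as follows (the conf path is an assumption; `assigned.txt` and `peer1/peer1.conf` come from the steps above):

```shell
cd /etc/wireguard   # assumed conf folder; adjust to your installation
ls                  # pre-created peer dirs, e.g. peer1 peer2 ..., plus assigned.txt
# Record the assignment (entry format is illustrative):
echo "peer1 : Jane Doe <jane@example.org>" >> assigned.txt
# Then share peer1/peer1.conf with that user over a secure channel only.
```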
Work in progress
The guide here provides instructions to deploy OpenG2P on Kubernetes (K8s) cluster.
K8s cluster is set up as given .
This section assumes the OpenG2P docker is already packaged. See Packaging Instructions.
Clone the repository and go to the relevant directory.
Run the following (this installs the ref-impl dockers):
To use a different docker image or tag, use:
Post installation:
Exec into the service pod, and create user (and promote if required).
Uninstallation:
To uninstall, just delete the helm installation of odk-central. Example:
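For illustration, the uninstall could look like the following (the release name and namespace are assumptions; use the values from your install):

```shell
# Assumed release name and namespace; replace with your actual values.
helm delete odk-central -n odk-central
```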
Work in progress
The following guide uses RKE2 to set up the Kubernetes (K8s) cluster.
The requirements for setting up the cluster are met as given .
Ensure the following tools are installed on all the nodes and the client machine: `ufw`, `wget`, `curl`, `kubectl`, `istioctl`, `helm`, `jq`.
Set up firewall rules on each node. The following uses `ufw` to set up the firewall.
SSH into each node, and change to superuser.
Run the following command for each rule in the following table
Example
Enable ufw.
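Based on the ports table referenced above, the per-rule commands and the final enable can be sketched as follows (the source CIDR is a placeholder; restrict it to your actual node subnet):

```shell
# Placeholder: 10.0.0.0/24 stands for your cluster-node subnet.
ufw allow 22/tcp                                         # SSH
ufw allow from 10.0.0.0/24 to any port 9345 proto tcp    # RKE2 agent -> server
ufw allow from 10.0.0.0/24 to any port 6443 proto tcp    # Kubernetes API
ufw allow from 10.0.0.0/24 to any port 8472 proto udp    # Flannel VXLAN
ufw allow from 10.0.0.0/24 to any port 10250 proto tcp   # kubelet
ufw enable
```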
Additional Reference:
The following setup has to be done for each cluster node.
Choose an odd number of server nodes. For example, if there are 3 nodes, choose 1 server node and 2 agent nodes. If there are 7 nodes, choose 3 server nodes and 4 agent nodes.
For the first server node:
Configure `rke2-server.conf.primary.template`.
SSH into the node. Place the file at this path: `/etc/rancher/rke2/config.yaml`. Create the directory if not already present: `mkdir -p /etc/rancher/rke2`.
Run this to download rke2.
Run this to start rke2 server:
For subsequent server and agent nodes:
Configure `rke2-server.conf.subsequent.template` or `rke2-agent.conf.template`, with the relevant IPs for each node.
SSH into each node and place the relevant file at this path: `/etc/rancher/rke2/config.yaml`, based on whether it is a worker node or a control-plane node (if worker, use the agent file; if control-plane, use the server file).
Run this to download rke2.
To start rke2, use the corresponding command, based on server or agent.
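The download and start steps above generally correspond to the standard RKE2 commands (version pinning is optional):

```shell
# Standard RKE2 installation script (from get.rke2.io).
curl -sfL https://get.rke2.io | sh -
# On server (control-plane) nodes:
systemctl enable --now rke2-server
# On agent (worker) nodes, install with INSTALL_RKE2_TYPE="agent", then:
# systemctl enable --now rke2-agent
```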
Execute these commands on a server node.
Navigate to Cluster Management section in Rancher.
Click on Import Existing cluster, and follow the steps to import the newly created cluster.
After the Rancher import, do not use the kubeconfig from the server anymore. Use only the kubeconfig downloaded from Rancher.
The following setup can be done from the client machine. This installs the Istio Operator, Istio Service Mesh, and Istio Ingressgateway components.
Gather the wildcard TLS certificate and key, and run:
Create istio gateway for all hosts using this command:
Configure the config.yaml with relevant values.
Run this to download rke2.
Run this to start rke2 node:
Install.
From the folder, run the following to install Keycloak (hostname to be edited in the below command).
Integrate Rancher and Keycloak using guide.
Follow the guide .
From the directory, run the following to install ODK helm chart.
Note: The above helm chart uses the following docker images built from , since ODK Central doesn't provide pre-built docker images for these.
Protocol | Port | Should be accessible by only | Description |
---|---|---|---|
Clone the repository and go to the directory.
Additional Reference:
This section assumes a Rancher server has already been set up and is operational. Set one up first in case not already done.
Use this to install longhorn.
From the directory, configure the istio-operator.yaml, and run:
From the directory, take either `rke2-server.conf.subsequent.template` or `rke2-agent.conf.template`, based on whether the new node is a control-plane node or a worker node. Copy this file to `/etc/rancher/rke2/config.yaml` on the new node.
TCP | 22 | | SSH |
TCP | 80 | | HTTP |
TCP | 443 | | HTTPS |
TCP | 5432:5434 | | Postgres ports |
TCP | 9345 | RKE2 agent nodes | Kubernetes API |
TCP | 6443 | RKE2 agent nodes | Kubernetes API |
UDP | 8472 | RKE2 server and agent nodes | Required only for Flannel VXLAN |
TCP | 10250 | RKE2 server and agent nodes | kubelet |
TCP | 2379 | RKE2 server nodes | etcd client port |
TCP | 2380 | RKE2 server nodes | etcd peer port |
TCP | 30000:32767 | RKE2 server and agent nodes | NodePort port range |
Compute nodes | DevOps Super Admin | SSH Key | SSH into the node via private IP (via Wireguard) with the root user using SSH key | Users generate their own SSH Keys whose public keys are added to the nodes. |
Wireguard node | DevOps Super Admin | SSH Key | SSH into the node via public IP with the root user using SSH key |
Rancher (global) | Rancher Super Admin | Password | Open Rancher URL on browser and login via password | Individual cluster administrators can be created from Rancher UI. |
Rancher (cluster) | Cluster Admin | Password | Open Rancher URL on browser and login via password | Users can be added and provided RBAC by Cluster Administrator using Rancher UI. |
OpenG2P Application | Odoo Super Admin | Password | Open OpenG2P URL on browser and login via password | Users can be created and assigned fine-grained roles. |
To provide Wireguard access to users/clients, refer to the guide below.