HA (High Availability) Deploy
Using this installation method, you can deploy the OMniLeads Suite as a High Availability cluster, grouping containers according to the following scheme:
To do this, four Linux instances (any modern operating system) with Internet access are required. Since Ansible connects to each instance over SSH (secure shell) to run its playbook, the SSH public key and the known_hosts file must be configured appropriately on each host.
Below is a generic inventory file for a typical deployment in the HA cluster scheme. Its first section lists the hosts by tenant and by the type of deployment to be executed (ha_instances):
The second section of the inventory file parameterizes the environment variables needed for the action. Note: by default, all of them affect ALL declared instances, unless a variable (or group of variables) is overridden in the section of the host (or group of hosts) in question. Finally, the last section groups the hosts according to the selected architecture. In our case, under the labels omnileads_aio and ha_omnileads_sql we list the hosts of the instance(s) to be deployed ("_aio" refers to HA application nodes A and B, while "_sql" refers to the PostgreSQL read-only and read-write nodes).
Below is an example:
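Since the example file itself is not reproduced here, the following is a minimal sketch of the three sections just described, using the group labels mentioned on this page (ha_instances, omnileads_aio, ha_omnileads_sql); the authoritative version is the example inventory shipped in the repository.

```ini
# Sketch of the inventory layout; values are placeholders.
[ha_instances]
# one entry per host: alias, reachable IP, and its tenant
# <host-alias> ansible_host=<IP> tenant_id=<tenant>

[all:vars]
# variables here affect ALL declared instances unless
# overridden per host or per group
infra_env=cloud

[omnileads_aio]
# HA application nodes A and B

[ha_omnileads_sql]
# PostgreSQL read-write and read-only nodes
```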
Now, let's get to work!
As a first step, we create the instances folder in the repository root. Inside it, we create a subfolder to host the example inventory file provided by the repository. Note: although we are inside a versioned repository, the name "instances" is reserved and is ignored via the .gitignore file.
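A minimal sketch of these steps from the repository root, assuming the example inventory ships at the top level of the repository (the actual file name may differ):

```bash
# From the root of the cloned repository:
mkdir -p instances/eucalipto       # reserved folder, ignored via .gitignore
cp inventory instances/eucalipto/  # copy the repository's example inventory
```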
Based on the inventory file sections described above, we declare our future HA OMniLeads instance in the ha_instances section. In our case, we will use the example name "eucalipto" to define the tenant:
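A sketch of what that declaration might look like; the host aliases and IP addresses are illustrative:

```ini
[ha_instances]
# four nodes for the "eucalipto" tenant: two App (AIO) and two PostgreSQL
eucalipto-aio-1 ansible_host=192.168.10.11 tenant_id=eucalipto
eucalipto-aio-2 ansible_host=192.168.10.12 tenant_id=eucalipto
eucalipto-sql-1 ansible_host=192.168.10.21 tenant_id=eucalipto
eucalipto-sql-2 ansible_host=192.168.10.22 tenant_id=eucalipto
```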
It is important to specify the scenario in which you will work: if you will use a VPS, the environment to configure is "cloud"; if a virtual machine on a LAN is used, it is "lan". To do this, define the infra_env variable accordingly: "cloud" (the default) or "lan". The variables tenant_id (tenant name) and ansible_host (the IP address Ansible must reach to run the playbook) are mandatory to specify the tenant. In addition, the following set of variables is required for the correct functioning of the HA cluster (see the example after the list):
Parameters
bucket_url: The URL of the external bucket (object storage).
postgres_ro_vip: IP address of the read-only PostgreSQL node.
postgres_rw_vip: IP address of the read-write PostgreSQL node.
omnileads_vip: Virtual IP address for HTTPS access of the HA cluster.
aio_2: IP address of App node 2.
aio_1: IP address of App node 1.
postgres_2: IP address of PostgreSQL node 2.
postgres_1: IP address of PostgreSQL node 1.
netaddr and netprefix: Parameters describing the network address and prefix (netmask) of the environment.
omnileads_ha: This parameter instructs Ansible to run certain playbook tasks related to HA configuration.
ha_vip_nic: The name of the NIC on which the cluster's virtual IP (VIP) will be established. In a high availability environment, we also need to specify the initial condition of each cluster node via ha_rol.
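Putting the variables above together, here is a hedged example of the HA block expressed as group variables; all addresses and the bucket URL are placeholders, and your environment will differ:

```ini
[ha_instances:vars]
infra_env=lan                 # "cloud" (default) for a VPS, "lan" for a VM
omnileads_ha=true             # run the HA-related playbook tasks
ha_vip_nic=eth0               # NIC on which the VIP is established
netaddr=192.168.10.0          # network address of the environment
netprefix=24                  # network prefix (netmask)
omnileads_vip=192.168.10.100  # virtual IP for HTTPS access to the cluster
postgres_rw_vip=192.168.10.101
postgres_ro_vip=192.168.10.102
aio_1=192.168.10.11
aio_2=192.168.10.12
postgres_1=192.168.10.21
postgres_2=192.168.10.22
bucket_url=https://objects.example.com/omnileads
```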
Finally, we must ensure that the last section contains the cluster hosts corresponding to the HA tenant (in both the _aio and _sql groups). The rest of the parameters can be customized as desired. Below is an example with our tenant "eucalipto":
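Assuming the group labels follow the _aio/_sql convention described above, the last section could look like this:

```ini
[omnileads_aio]
# HA application nodes A and B of the "eucalipto" tenant
eucalipto-aio-1
eucalipto-aio-2

[ha_omnileads_sql]
# PostgreSQL read-write and read-only nodes
eucalipto-sql-1
eucalipto-sql-2
```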
With the inventory file configured, we proceed to run the installation action for the new tenant:
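The exact entry point is defined by the repository; as a sketch, a standard Ansible invocation against the tenant's inventory would look like this (the playbook name here is hypothetical):

```bash
# Hypothetical playbook name; check the repository for the real entry point.
ansible-playbook -i instances/eucalipto/inventory site.yml
```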
In the First Login section, you can review the steps needed to gain first access to the UI with the Administrator user. For more information, we suggest visiting the documentation in the official project repository.