Deploy using Podman
git clone https://gitlab.com/omnileads/omldeploytool.git
cd omldeploytool/systemd
It is possible to manage hundreds of OMniLeads instances with Ansible inventories.
Then, for each running instance, a collection of components invoked as systemd services (or via docker-compose) implements the OMniLeads functionality on the Linux instance (or set of instances).
Each OMniLeads instance involves a collection of components, each of which runs in a container. These containers can be grouped on a single Linux instance or distributed horizontally across a cluster of instances.
Note: if working on a VPS with a public IP address, it is mandatory that the VPS also has a network interface to which a private IP address can be assigned.
An instance of OMniLeads is launched on a Linux server (using Systemd and Podman) by running a bash script (deploy.sh) along with its input parameters and a set of Ansible files (Playbooks + Templates) that are invoked by the script.
This executable script triggers the deploy actions: it receives the action to execute and the tenant on which to run it.
The script searches for the inventory file of the tenant on which it needs to operate and then launches the root Ansible playbook (matrix.yml) through ansible-playbook with the corresponding tags to respond to the request made.
./deploy.sh --help
To run an install, upgrade, backup, or restore deployment, two parameters must be passed:
- --action=
- --tenant=
For example:
./deploy.sh --action=install --tenant=tenant_folder_name
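Under the hood, deploy.sh resolves the tenant's inventory file and calls ansible-playbook against matrix.yml. A rough sketch of the equivalent manual invocation (the inventory path and the --tags value are assumptions here; inspect deploy.sh for the exact call):
# hypothetical equivalent of "./deploy.sh --action=install --tenant=tenant_folder_name"
ansible-playbook matrix.yml \
  -i instances/tenant_folder_name/inventory.yml \
  --tags install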
Ansible is structured in an inventory file, a root playbook (matrix.yml), and a series of playbooks that implement base actions on the VM or group of VMs, as well as specific tasks that deploy each of the OMniLeads components.
The inventory file stores the type of OMniLeads deployment to generate (all in one, all in three, or high availability), along with configuration parameters such as connection data for PostgreSQL, Asterisk, Redis, object storage, etc.
There are three types of inventory files for Ansible:
- inventory_aio.yml: It should be invoked when deploying OMniLeads all in one. That is, when deploying all App components on a single Linux instance.
- inventory_ait.yml: It should be invoked when deploying OMniLeads all in three, that is, when deploying all App components on a cluster of three Linux instances (data, voice, & web).
- inventory_ha.yml: It should be invoked when deploying OMniLeads under an On-Premise High Availability scheme, based on two physical servers (hypervisors) with 8 VMs on which the components are distributed.
Each file is composed of a section where the hosts to operate on are declared along with their local variables. Depending on the format to be deployed (AIO, AIT or HA), it can be one or several hosts. For example:
---
all:
  hosts:
    omnileads_aio:
      omlaio: true
      ansible_host: X.X.X.X
      omni_ip_lan: Z.Z.Z.Z
      ansible_ssh_port: 22
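For an all in three deployment, the hosts section instead declares one entry per instance. The host names and variables below are only an illustrative sketch; the authoritative layout is the inventory_ait.yml shipped in the repository:
---
all:
  hosts:
    omnileads_data:      # hypothetical name for the data instance (postgres, redis, object storage)
      ansible_host: X.X.X.X
    omnileads_voice:     # hypothetical name for the voice instance (asterisk, kamailio, rtpengine)
      ansible_host: Y.Y.Y.Y
    omnileads_web:       # hypothetical name for the web instance (nginx, django)
      ansible_host: Z.Z.Z.Z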
Then come the tenant variables, grouped and indented under vars:. Here we find all the parameters that can be adjusted when deploying an instance; each one is described by a # --- comment preceding it.
vars:
  # --- ansible user auth connection
  ansible_user: root
  # --- Activate the OMniLeads Enterprise Edition - with "AAAA" licensed.
  # --- otherwise you will deploy the OMniLeads OSS Edition, licensed under GPLv3.
  enterprise_edition: true
  # --- versions of each image to deploy
  omnileads_version: 1.26.0
  websockets_version: 230204.01
  nginx_version: 230215.01
  kamailio_version: 230204.01
  asterisk_version: 230204.01
  rtpengine_version: 230204.01
  postgres_version: 230204.01
  # --- "cloud" instance (access through public IP)
  # --- or "lan" instance (access through private IP)
  # --- in order to set NAT or public ADDR for RTP voice packets
  infra_env: cloud
  # --- If you have DNS FQDN resolution, uncomment and set this param
  # --- otherwise leave it commented to access through an IP address
  #fqdn: fidelio.sephir.tech
Then, once OMniLeads is deployed on the corresponding instance(s), each container running a component can be managed as a systemd service.
systemctl start component
systemctl restart component
systemctl stop component
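For instance, to restart the Nginx component and check that its container came back up (the unit and container names below match the nginx example shown later on this page):
systemctl restart nginx
systemctl status nginx
# each unit wraps a Podman container, which can also be listed directly:
podman ps --filter name=oml-nginx-server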
Behind every action triggered by the systemctl command, there is actually a Podman container that is launched, stopped, or restarted. This container is the result of the image invoked along with the environment variables.
For example, let's look at the systemd unit file of the Nginx component.
/etc/systemd/system/nginx.service looks like:
[Unit]
Description=Podman container-oml-nginx-server.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
    --cidfile=%t/%n.ctr-id \
    --cgroups=no-conmon \
    --sdnotify=conmon \
    --replace \
    --detach \
    --network=host \
    --env-file=/etc/default/nginx.env \
    --name=oml-nginx-server \
    --volume=/etc/omnileads/certs:/etc/omnileads/certs \
    --volume=django_static:/opt/omnileads/static \
    --volume=django_callrec_zip:/opt/omnileads/asterisk/var/spool/asterisk/monitor \
    --volume=nginx_logs:/var/log/nginx/ \
    --rm \
    docker.io/omnileads/nginx:230215.01
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
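As the Documentation= line suggests, a unit of this shape can be regenerated with podman generate systemd, assuming the oml-nginx-server container already exists on the host (in this deployment the Ansible playbooks lay these files down for you):
# print a unit equivalent to the one above; "--new" makes the unit create and remove the container itself
podman generate systemd --new --name oml-nginx-server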
/etc/default/nginx.env looks like:
DJANGO_HOSTNAME=172.16.101.221
DAPHNE_HOSTNAME=172.16.101.221
KAMAILIO_HOSTNAME=localhost
WEBSOCKETS_HOSTNAME=172.16.101.221
ENV=prodenv
S3_ENDPOINT=http://172.16.101.221:9000
This is the standard for all components.
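Since a container only reads its environment at start time, a change in the .env file takes effect after restarting the corresponding service. For example, after editing /etc/default/nginx.env:
systemctl restart nginx
# confirm the running container picked up the new values
podman inspect oml-nginx-server --format '{{ .Config.Env }}'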
In order to manage multiple OMniLeads instances from this deployment tool, you must create a folder called instances at the root of this directory. The name instances is reserved because that string is listed in the repository's .gitignore.
The idea is that this folder works as a separate Git repository, which makes it possible to keep a complete backup of every tenant's configuration while the SRE or IT staff lean on Git for version control.
cd omldeploytool/systemd
git clone your_tenants_config_repo instances
Then, for each instance to be managed, a sub-folder must be created within instances. For example:
mkdir instances/Subscriber_A
Once the tenant folder is generated, place there a copy of the inventory file available in the root of this repository, in order to customize it and track it inside the private Git repository.
cp inventory_aio.yml instances/Subscriber_A/inventory.yml
git add instances/Subscriber_A
git commit -m 'my new Subscriber A'
git push origin main
Then, once we have adjusted the inventory.yml file inside the tenant's folder, we can trigger its deployment.
./deploy.sh --action=install --tenant=Subscriber_A
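The same invocation pattern applies to the other actions; the action keywords below are an assumption based on the list above, and ./deploy.sh --help prints the authoritative set:
./deploy.sh --action=upgrade --tenant=Subscriber_A
./deploy.sh --action=backup --tenant=Subscriber_A
./deploy.sh --action=restore --tenant=Subscriber_A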