Architecture and components

OMniLeads is an application made up of multiple components, each living in its own GitLab repository where its source and/or configuration code, build and deploy scripts, and CI/CD pipelines are stored.

Although the components of a running OMniLeads instance interact as a single unit over TCP/IP connections, each one is in fact an independent entity with its own GitLab repository and DevOps cycle.

At the build level, each component is distributed as RPM packages (for installations on Linux) and Docker images (for installations on Docker Engine). Regarding deployment, each component ships with a first_boot_installer.tpl script, which can be invoked as a provisioner to deploy the component on a Linux host in an automated manner, or executed manually after editing the variables in the script.

We can think of each component as a piece of a puzzle, each with its own attributes.

Description of each component

Each component is described below:

  • OMLApp (https://gitlab.com/omnileads/ominicontacto): The web application (Python/Django) is contained in OMLApp. Nginx is the web server that receives HTTPS requests and redirects them to OMLApp (Django/uWSGI). OMLApp interacts with several components, whether to store and provision configuration, to generate calls, or to return report views and agent/campaign monitoring views. OMLApp uses PostgreSQL as its SQL engine, and Redis both as a cache and to provision the Asterisk configuration, either through .conf files or by generating certain key/value structures that Asterisk consults in real time when processing campaign calls. OMLApp connects to the Asterisk AMI interface to generate calls and reload certain configuration, and it also connects to the WombatDialer API when campaigns with predictive dialing have to be generated. (A minimal sketch of the Redis provisioning and AMI interaction appears after this list.)

  • Asterisk (https://gitlab.com/omnileads/omlacd): OMniLeads is built on the Asterisk framework as the basis of its ACD (Automatic Call Distributor). Asterisk is in charge of implementing the business logic (telephone campaigns, recordings, reports and metrics of the telephone channel). At the network level, Asterisk receives AMI requests from OMLApp and WombatDialer, while it needs to connect to PostgreSQL to write logs, to Redis to query campaign parameters provisioned from OMLApp, and to Nginx to establish the websocket used to fetch the content of the configuration files hosted in Asterisk (etc/asterisk) and generated from OMLApp.

  • Kamailio (https://gitlab.com/omnileads/omlkamailio): This component is used together with RTPEngine (the WebRTC bridge) to manage WebRTC communications (SIP over WSS) towards agent users, while maintaining sessions (SIP over UDP) towards Asterisk. Kamailio receives the REGISTER requests generated by the agents' webphone (JSSIP), so it is in charge of registering and locating users, using Redis to store the network address of each user. From Asterisk's point of view all agents are reachable through the Kamailio URI, so Kamailio receives INVITEs (UDP 5060) from Asterisk whenever it needs to locate an agent to connect a call. Finally, it is worth mentioning that Kamailio opens connections to RTPEngine (TCP 22222) to request SDP when establishing SIP sessions between the VoIP side (Asterisk) and the WebRTC agents.

  • RTPEngine (https://gitlab.com/omnileads/omlrtpengine): OMniLeads relies on RTPEngine for transcoding and bridging, at the audio level, between WebRTC technology and VoIP technology. On one side the component maintains SRTP (WebRTC) audio channels with the user agents, while on the other it establishes RTP (VoIP) channels with Asterisk. RTPEngine receives connections from Kamailio on port 22222.

  • Nginx (https://gitlab.com/omnileads/omlnginx): Nginx is the project's web server. Its task is to receive requests on TCP 443 from users, as well as from some components such as Asterisk. Nginx is invoked every time a user accesses the URL of the deployed environment. If the user request is meant to render a view of the Django web application, Nginx redirects it to uWSGI, whereas if the request is the REGISTER of the user's JSSIP webphone, Nginx redirects it to Kamailio (to establish a SIP websocket). Nginx is also invoked by Asterisk when establishing the websocket against the OMniLeads Python websocket component, through which the configuration generated by OMLApp is provisioned.

  • Python websocket (https://gitlab.com/omnileads/omnileads-websockets): OMniLeads relies on a websockets server (based on Python) that is used to run background tasks (reports and CSV generation) and to receive an asynchronous notification when each task completes, which optimizes application performance. It is also used as a bridge between OMLApp and Asterisk for provisioning the .conf configuration files (etc/asterisk): when Asterisk starts, a process is launched that establishes a websocket against this component and, from then on, receives notifications whenever configuration changes are provisioned. In its default configuration it listens on TCP port 8000, and incoming connections are always proxied through Nginx (see the sketch after this list).

  • Redis (https://gitlab.com/omnileads/omlredis): Redis is used for three specific purposes. First, as a cache to store the results of the recurring queries involved in the campaign and agent supervision views; second, as the DB for user presence and location; and third, for storing the Asterisk configuration (etc/asterisk/) as well as the configuration parameters involved in each module (campaigns, trunks, routes, IVR, etc.), replacing the native Asterisk alternative (AstDB).

  • PostgreSQL (https://gitlab.com/omnileads/omlpgsql): PostgreSQL is the SQL database engine used by OMniLeads. All of the system's reports and metrics are materialized from it, and it stores all the configuration information that must persist over time. It receives connections on TCP port 5432 from OMLApp (read/write) and from Asterisk (log writing).

  • WombatDialer (https://docs.loway.ch/WombatDialer/index.html): For predictive-dialing campaigns OMniLeads uses this third-party software, WombatDialer. The dialer exposes an API on TCP 8080, over which OMLApp opens connections to provision campaigns and contacts, while WombatDialer uses the AMI interface of the Asterisk component to generate automatic outgoing calls and to check the status of each campaign's agents. This component uses its own MySQL engine to operate.
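To make some of these interactions more concrete, here is a minimal Python sketch of the OMLApp side, assuming hypothetical hostnames, credentials, Redis key names and dialplan values (none of them taken from the OMniLeads code base). It provisions campaign parameters as a Redis hash (the kind of key/value structure Asterisk consults instead of AstDB) and then originates a call through Asterisk's AMI port (TCP 5038):

    import os
    import socket

    import redis

    # Hypothetical addresses; in a real deployment they come from environment variables.
    REDIS_HOST = os.environ.get("REDIS_HOST", "127.0.0.1")
    REDIS_PORT = int(os.environ.get("REDIS_PORT", 6379))
    AMI_HOST, AMI_PORT = os.environ.get("ASTERISK_HOST", "127.0.0.1"), 5038

    # 1) Provision campaign parameters in Redis (the key layout is illustrative only).
    r = redis.Redis(host=REDIS_HOST, port=REDIS_PORT, decode_responses=True)
    r.hset("OML:CAMPAIGN:1", mapping={
        "name": "inbound-support", "queue": "support", "ringtime": "25",
    })

    def ami_send(sock, headers):
        """Serialize an AMI action (a dict of headers) and send it over the socket."""
        sock.sendall(("".join(f"{k}: {v}\r\n" for k, v in headers.items()) + "\r\n").encode())

    # 2) Originate a call through the Asterisk Manager Interface.
    with socket.create_connection((AMI_HOST, AMI_PORT)) as ami:
        ami_send(ami, {"Action": "Login", "Username": "omnileads", "Secret": "changeme"})
        ami_send(ami, {
            "Action": "Originate",
            "Channel": "PJSIP/1001",    # hypothetical agent endpoint
            "Context": "campaign-out",  # hypothetical dialplan context
            "Exten": "3515555555",
            "Priority": "1",
            "Async": "true",
        })
        print(ami.recv(4096).decode())  # AMI banner plus whatever responses arrived so far

Along the same lines, the asynchronous notification idea behind the Python websocket component can be sketched with the third-party websockets package (recent versions). This illustrates the pattern only; it is not the omnileads-websockets code:

    import asyncio

    import websockets

    CONNECTED = set()  # sessions subscribed to task notifications

    async def handler(ws):
        """Fan out every received message (e.g. a 'csv_report done' event) to the other peers."""
        CONNECTED.add(ws)
        try:
            async for message in ws:
                await asyncio.gather(*(peer.send(message) for peer in CONNECTED if peer is not ws))
        finally:
            CONNECTED.discard(ws)

    async def main():
        # Port 8000 mirrors the default mentioned above; Nginx would proxy the wss:// traffic here.
        async with websockets.serve(handler, "0.0.0.0", 8000):
            await asyncio.Future()  # run forever

    if __name__ == "__main__":
        asyncio.run(main())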

Deploy and environment variables

Having covered the function of each component and their interactions at the networking level, we now move on to deployment. Each component has a bash script and an Ansible playbook that materialize the component, either on a dedicated Linux host or coexisting with other components on the same host. This is possible because the Ansible playbook can be invoked from the bash script called first_boot_installer.tpl when that script is used as a provisioner for a dedicated Linux host hosting the component within a cluster, or imported by the Ansible playbook of the OMLApp component when several components are deployed on the same host where the OMLApp application runs.

Therefore, each component can either live on a standalone host or coexist with OMLApp on the same host. Both possibilities are covered by the installation method, which is entirely based on environment variables generated during the deploy; among other things, these variables hold the network addresses and ports of every component needed for the interaction. That is, the configuration files of every OMniLeads component find their peers by reading OS environment variables. For example, the Asterisk component points its AGIs to the environment variables $REDIS_HOST and $REDIS_PORT when it needs to open a connection to Redis. Thanks to the environment variables, compatibility between the bare-metal and Docker-container approaches is achieved: we can deploy OMniLeads by installing all the components on one host, distributing them across several hosts, or directly on Docker containers.
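As a small illustration of this pattern (only the REDIS_HOST/REDIS_PORT names come from the text above; every other component or variable name here is an assumption), a component written in Python could resolve its peers like this, regardless of whether it runs on bare metal or inside a container:

    import os

    def peer(component, default_host, default_port):
        """Resolve a peer component's address from environment variables."""
        host = os.environ.get(f"{component}_HOST", default_host)
        port = int(os.environ.get(f"{component}_PORT", default_port))
        return host, port

    # An Asterisk AGI would locate Redis this way; the PGSQL variable name is hypothetical.
    print(peer("REDIS", "127.0.0.1", 6379))
    print(peer("PGSQL", "127.0.0.1", 5432))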

Because the configuration parameters are provisioned via environment variables, and because the application can always be deployed keeping the data that must persist (call recordings and the PostgreSQL DB) on resources mounted on the file system of the Linux host running each component, we can treat immutable infrastructure as an option if we wish. Each component can easily be destroyed and recreated without losing important data when it is resized or when updates are planned: we simply discard the host where one version runs and deploy a new one with the latest update. This gives us the potential of the infrastructure-as-code or immutable-infrastructure paradigm, as practiced by the new IT generations working within the DevOps culture. The approach is entirely optional, since updates can also be handled in the traditional way, without destroying the instance that hosts the component.

The potential of cloud-init as a provisioner

Cloud-init is a software package that automates the initialization of cloud instances during system boot. It can be configured to perform a variety of tasks, for example:

  • Configuring a hostname.

  • Installing packages on an instance.

  • Running provisioning scripts.

  • Overriding the virtual machine's default behavior.

As of OMniLeads 1.16, each component includes a script called first_boot_installer.tpl. This script can be invoked at the cloud-init level, so that a fresh installation of the component is launched on the first boot of the operating system.

As mentioned, the script can be invoked when creating a cloud VM.

Another option is to render it as a Terraform template, to be used as the provisioning step of each instance created from Terraform.

Regardless of the component in question, the purpose of first_boot_installer.tpl is to:

  • Install some packages.

  • Adjust some other configuration of the virtual machine.

  • Determine network parameters of the new Linux instance.

  • Run the Ansible playbook that installs the component on the operating system.

The first three steps are skipped when the component is installed from the OMLApp installer, that is, when it shares the host with OMLApp.
