Architecture and components

OMniLeads is an application built from multiple components, each residing in its own GitLab repository that holds its source and/or configuration code, build scripts, deploy scripts, and CI/CD pipelines. We can think of each component as a puzzle piece with its own attributes: at the build level, each component is distributed as a container image, and although at runtime an OMniLeads instance behaves as a unit whose components interact over TCP/IP connections, each component remains an independent entity with its own GitLab repository and DevOps lifecycle.

Description of each component

Below is a description of each component:

  • OMLApp (https://gitlab.com/omnileads/ominicontacto): The web application (Python/Django) is contained within OMLApp. OMLApp uses PostgreSQL as its SQL engine and Redis as a cache, and it provisions Asterisk configuration through .conf files and key/value structures that Asterisk queries in real time for call processing in campaigns. OMLApp connects to Asterisk's AMI interface to generate calls and reload configurations, and it also connects to the WombatDialer API for predictive dialing campaigns. Nginx handles HTTPS requests and redirects them to OMLApp (Django/UWSGI). In short, OMLApp interacts with multiple components to store/provision configuration, generate calls, and provide report views and agent/campaign monitoring.

  • Asterisk (https://gitlab.com/omnileads/omlacd): OMniLeads uses the Asterisk framework as the foundation of its ACD (Automatic Call Distributor). It implements the business logic of phone campaigns: recordings, reports, and telephone channel metrics. At the networking level, Asterisk receives AMI requests from OMLApp and WombatDialer, while it connects to PostgreSQL to log data and to Redis to query campaign parameters provisioned from OMLApp (a brief sketch of this key/value flow follows the component list). It also needs to reach Nginx to establish the WebSocket used to fetch the configuration files contained in Asterisk (etc/asterisk) and generated from OMLApp.

[Figure: _images/arq_omlacd.png]

  • Kamailio (https://gitlab.com/omnileads/omlkamailio): This component works together with RTPEngine (the WebRTC bridge) to manage WebRTC communications (SIP over WSS) with agent users, while maintaining sessions (SIP over UDP) with Asterisk. Kamailio receives the REGISTER requests generated by the agents' webphone (JsSIP), so it handles registration and location of users, using Redis to store each user's network address. From Asterisk's point of view, every agent is reachable at a Kamailio URI, so Kamailio receives INVITEs (UDP 5060) from Asterisk whenever it needs to locate an agent to connect a call. Finally, it is worth noting that Kamailio connects to RTPEngine (TCP 22222) to request an SDP when establishing SIP sessions between Asterisk (VoIP) and WebRTC agents.

  • RTPEngine (https://gitlab.com/omnileads/omlrtpengine): OMniLeads relies on RTPEngine for transcoding and bridging audio between WebRTC and VoIP technology. The component maintains SRTP WebRTC audio channels with the agent users on one side, while on the other it establishes RTP VoIP channels with Asterisk. RTPEngine receives connections from Kamailio on port 22222.

  • Nginx (https://gitlab.com/omnileads/omlnginx): The project's web server is Nginx, responsible for receiving TCP 443 requests from users and from components such as Asterisk. Nginx is invoked whenever a user accesses the deployed environment's URL. If the request aims to render a view of the Django web application, Nginx redirects it to UWSGI; if it is intended for the REGISTER of the user's JsSIP webphone, Nginx redirects it to Kamailio (to establish the SIP websocket). Nginx is also invoked by Asterisk when establishing the websocket against OMniLeads' Python websocket component, which provisions the configuration generated by OMLApp.

  • Python websocket (https://gitlab.com/omnileads/omnileads-websockets): OMniLeads relies on a websocket server (based on Python) used to run background tasks (reports and CSV generation) and to deliver asynchronous notifications when a task is completed, optimizing the application's performance. It is also used as a bridge between OMLApp and Asterisk for provisioning the configuration files (.conf) located in etc/asterisk: when Asterisk starts, it launches a process that establishes a websocket with this component and from there receives notifications whenever configuration changes are provisioned. In its default configuration it listens on TCP port 8000, and incoming connections are always redirected from Nginx.

  • Redis (https://gitlab.com/omnileads/omlredis): Redis is used for three specific purposes. Firstly, as a cache to store results of recurring queries involved in the supervision views of campaigns and agents. Secondly, it is used as a DB for user presence and location. Lastly, it is used to store the campaign configuration provisioned from OMLApp and queried by Asterisk in real time.

[Figure: _images/arq_omlredis.png]

  • PostgreSQL (https://gitlab.com/omnileads/omlpgsql): PostgreSQL is the SQL database engine used by OMniLeads. All reports and metrics in the system are generated from it, and it stores all the configuration information that needs to persist over time. It accepts connections on TCP port 5432 from OMLApp (read/write) and from Asterisk (log writing).
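
To make the Redis key/value provisioning flow described above more tangible, here is a minimal Python sketch (not the actual OMniLeads code). It assumes the redis-py client; the key name OML:CAMP:<id> and the hash fields are hypothetical, chosen only to illustrate how OMLApp could write campaign parameters that Asterisk-side AGI logic later reads in real time through the $REDIS_HOST/$REDIS_PORT endpoint.

    """
    Hypothetical sketch of the Redis key/value provisioning flow:
    OMLApp writes campaign parameters as a hash, and an Asterisk-side
    AGI/dialplan helper reads them in real time. Key names and fields
    are illustrative assumptions, not the real OMniLeads schema.
    """
    import os
    import redis

    # Peer discovery via environment variables, as used throughout OMniLeads.
    REDIS_HOST = os.environ.get("REDIS_HOST", "127.0.0.1")
    REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))

    r = redis.Redis(host=REDIS_HOST, port=REDIS_PORT, decode_responses=True)

    def provision_campaign(campaign_id: int, params: dict) -> None:
        """OMLApp side: store campaign parameters as a Redis hash."""
        r.hset(f"OML:CAMP:{campaign_id}", mapping=params)

    def read_campaign(campaign_id: int) -> dict:
        """Asterisk side (e.g. from an AGI): fetch the parameters for a call."""
        return r.hgetall(f"OML:CAMP:{campaign_id}")

    if __name__ == "__main__":
        provision_campaign(7, {"name": "inbound_support", "ringtime": "25"})
        print(read_campaign(7))  # {'name': 'inbound_support', 'ringtime': '25'}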

Deploy environment variables

Having covered the function of each component and its interactions in terms of networking, we move on to the question of deployment. Each component has a bash script and an Ansible playbook that materialize the component, either on a dedicated Linux host or coexisting with other components on the same host. This is possible because the Ansible playbook can be invoked from the bash script first_boot_installer.tpl when that script is used to provision a dedicated Linux host for the component within a cluster, and it can also be imported by the Ansible playbook of the OMLApp component when several components are deployed on the same host where the OMLApp application runs.

Therefore, each component can either live on a standalone host or coexist with OMLApp on the same host; both possibilities are covered by the installation method. The installation method is based entirely on environment variables generated at deploy time, whose purpose, among other things, is to hold the network addresses and ports of every component needed for this interaction. In other words, every configuration file of every OMniLeads component locates its peers by reading OS environment variables. For example, the Asterisk component points its AGIs at the $REDIS_HOST and $REDIS_PORT environment variables when opening a connection to Redis (see the sketch below). Thanks to environment variables, the same approach works for bare-metal installs and Docker containers alike: OMniLeads can be deployed with all components on one host, distributed across several hosts, or directly on Docker containers.
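
As an illustration of this environment-variable wiring, the sketch below resolves peer addresses from the environment before opening connections. Only REDIS_HOST and REDIS_PORT are named in this documentation; the remaining variable names (PGHOST, PGPORT, PGDATABASE, PGUSER, PGPASSWORD, ASTERISK_HOST, AMI_PORT) and the defaults are assumptions made for the example, not the project's actual settings.

    """
    Hypothetical sketch: the same code runs whether all components share
    one host, are split across several hosts, or run in Docker containers,
    because only the environment changes.
    """
    import os
    import psycopg2  # only needed for the PostgreSQL example below

    def endpoint(host_var: str, port_var: str, default_host: str, default_port: int):
        """Resolve a peer component's address from the environment."""
        return (
            os.environ.get(host_var, default_host),
            int(os.environ.get(port_var, str(default_port))),
        )

    REDIS_ADDR = endpoint("REDIS_HOST", "REDIS_PORT", "127.0.0.1", 6379)
    PGSQL_ADDR = endpoint("PGHOST", "PGPORT", "127.0.0.1", 5432)            # assumed names
    ASTERISK_AMI = endpoint("ASTERISK_HOST", "AMI_PORT", "127.0.0.1", 5038)  # assumed names

    # Example: open the PostgreSQL connection used for reports; credentials
    # would also come from the environment in a real deployment.
    conn = psycopg2.connect(
        host=PGSQL_ADDR[0],
        port=PGSQL_ADDR[1],
        dbname=os.environ.get("PGDATABASE", "omnileads"),
        user=os.environ.get("PGUSER", "omnileads"),
        password=os.environ.get("PGPASSWORD", ""),
    )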

Because configuration parameters are provisioned via environment variables, and because the application can always be deployed with its persistent data (call recordings and the PostgreSQL DB) on resources mounted on the file system of the Linux host running each component, working with immutable infrastructure becomes an option. Each component can easily be destroyed and recreated without losing important data when resizing it or applying updates: we can simply discard the host where a version is running and deploy a new one with the latest update. This gives us the potential of the infrastructure-as-code or immutable-infrastructure paradigm embraced by the new generations of IT teams working within the DevOps culture. The approach is optional, since updates can also be handled in the more traditional way without destroying the instance that hosts the component.
