Intro to Cisco CCE – Part 3 – Architecture
In the last two posts I outlined what CCE is and roughly how it works together with other pieces to handle calls. In this post, I’ll take the two boxes from the last diagram labeled “CCE” and “CVP” and blow them up to show you all the parts that come together to make them work. If you are used to CCX, I will warn you that it is a tad more complicated. Where CCX does everything on a single monolithic server, CCE breaks its functionality out across multiple processes and servers. All in all it makes the product significantly more difficult to set up and maintain, but the end result is significantly more scalability and architectural flexibility. Let’s dig in.
There are actually two different CCE product lines, Unified Contact Center Enterprise (UCCE) and Packaged Contact Center Enterprise (PCCE). PCCE was introduced relatively recently as a less expensive, streamlined version targeted towards medium-sized call centers. UCCE (so named because adding the word “Unified” to everything was what all the cool communications companies were doing in 2006) is the original, more flexible and scalable, yet more complex product suite. Ultimately, the core software is the same. The only real change is how the components are deployed (and even that isn’t really true anymore as of v11.5), so this discussion applies equally to both PCCE and UCCE. When they differ, I will point it out.
There are many ways to divide up the functions and services that make up CCE, and countless optional components and add-ons, not to mention Cisco’s penchant for constantly changing the names of things. I, however, consider the following seven components to be the most fundamental and critical to understanding how to configure and support CCE:
- Router
- Logger
- Peripheral Gateway (PG)
- CTI Server
- CTIOS / Finesse / CAD
- Administrative Workstation (AW)
- Historical Data Server (HDS)
Let’s go through what each of these does:
The Router is perhaps poorly named; it does route calls, but it does much more than that. It is what you might call the “brains” of the entire CCE system, determining which scripts each call should execute, processing and executing those scripts, tracking system-wide real-time stats, and managing communications with virtually every other component that makes up CCE. It’s kind of a big deal.
If you guessed that the Logger “logs,” then good guess, but wrong. The Logger is not actually responsible for “logging” in the usual software sense; its purpose is to read and write data to CCE’s core database. When the Router needs to load the current configuration on startup, it doesn’t access the database directly. Instead it passes this request to the Logger, which handles the low-level database connectivity. The same goes for any configuration changes that need to be written back to the database. The Logger is the only component that talks directly to the core CCE database. This has architectural advantages: other components do not need to worry about database-specific code, and can instead focus on their respective functions.
A Peripheral Gateway, or PG for short, is responsible for communicating with devices external to the core CCE software. These external devices are referred to as “peripherals”. For example, both CVP and UCM are considered external peripherals from the perspective of CCE, even though they are often treated as part of the overall UCCE suite. As such, all communication to and from both CVP and UCM is handled by a respective PG. The advantage to this approach is similar to the advantages of using the Logger to exclusively talk to the database. With PGs, the CCE Router does not need to worry about speaking the language of every peripheral that CCE may connect to. Instead, PGs handle the low-level communication protocols of their respective peripherals, while translating the messages to a standard proprietary Router protocol consistent to all PGs. With this approach, a change in the protocols used by a peripheral, or the introduction of a new peripheral type need not affect the core CCE Router code. It also has an advantage when upgrading large complex systems, since you can often upgrade the PGs separate from the Router and Logger, allowing you to upgrade in phases.
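The pattern here is essentially the classic adapter: each PG speaks its peripheral’s native protocol on one side and a single normalized format on the Router side. Here is a rough conceptual sketch of that idea; the class names and message shapes are hypothetical illustrations, not actual CCE internals:

```python
# Conceptual sketch of the PG adapter pattern. All names and message
# formats here are hypothetical illustrations, not real CCE code.

class PeripheralGateway:
    """Translates a peripheral's native events into one normalized
    message format that the Router understands."""
    def to_router_message(self, native_event):
        raise NotImplementedError

class UCMGateway(PeripheralGateway):
    # Pretend UCM reports events as JTAPI-style dicts
    def to_router_message(self, native_event):
        return {"type": "CALL_EVENT",
                "call_id": native_event["callID"],
                "state": native_event["jtapiState"]}

class CVPGateway(PeripheralGateway):
    # Pretend CVP reports events as (session_id, state) tuples
    def to_router_message(self, native_event):
        session_id, state = native_event
        return {"type": "CALL_EVENT", "call_id": session_id, "state": state}

# The Router only ever sees the one normalized format, so adding a new
# peripheral type means adding a new gateway, not changing the Router:
ucm_msg = UCMGateway().to_router_message({"callID": 101, "jtapiState": "RINGING"})
cvp_msg = CVPGateway().to_router_message(("CVP-42", "QUEUED"))
assert ucm_msg["type"] == cvp_msg["type"] == "CALL_EVENT"
```

The payoff is exactly what the paragraph above describes: a protocol change on the UCM or CVP side only touches the corresponding gateway class.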
With a typical CCE install, you will find a single PG known as a “Generic PG”. This special PG type is able to connect to both UCM and an IVR like CVP, since this combination is common to all CCE installs. Other peripherals, like Cisco’s auto-dialer, usually require their own separate PGs. I will discuss some of these later in this post.
The CTI Server is a service installed alongside any PG that manages agent phones, such as a UCM PG or the Generic PG mentioned above. Remember from my last post that CCE’s agent monitoring capabilities come from two fronts: 1) monitoring the status of the agent’s phone and 2) the agent software running on the agent’s PC. The CTI Server’s job is to combine information from these two sources to track and maintain an overall state for the agent.
CTIOS / Finesse / CAD
The agent’s software client connects directly to either CTIOS, Finesse, or CAD. CTIOS is Cisco’s older agent desktop platform, typically using a Windows thick-client app for the agent-facing software. Finesse is the latest and greatest web-based agent software offered by Cisco. Cisco Agent Desktop (CAD) is another older thick client, more fully featured out of the box, but unlike CTIOS it does not come with source code. It is actually produced by a third party and resold by Cisco. I will speak of it no further.
Architecturally, the server-side components for each of these perform a similar function. They are responsible for providing APIs for the agent software running on agents’ PCs or browsers, and for communicating with CTI Server to update and maintain agent state.
The Administrative Workstation, or AW, provides a user interface for administrators to configure the system. Here is a screenshot for reference:
Oh sorry that’s Windows 3.1, here’s the AW:
As a thick client running on a server, it is a bit behind the times. Only one user can access each AW at a time, which can be limiting, but larger environments typically use multiple AWs. PCCE systems also have a pretty slick new web-based interface for nearly all configuration, though a few things are still configured via the AW. This same interface is gradually being extended to UCCE deployments, and in the latest versions (11 and 11.5 as of this writing) you can use it to manage agents, skills, and a few other items. Here is what it looks like in UCCE:
The AW also hosts a database that stores a local copy of configuration data and several real-time reporting tables.
Historical Data Server
The Historical Data Server, or HDS, stores all the long-term historical reporting data for CCE. I say long-term because the Logger also caches this data in its database, usually for about two weeks, and pushes it down to the HDS. This allows the Logger to restore any historical data in case the HDS goes down, assuming it is brought back up within the two-week window.
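As a rough mental model of that flow (illustrative only; the actual replication mechanism is internal to CCE, and the two-week window is the typical default rather than a hard rule):

```python
# Toy sketch of the Logger's short-term cache feeding the HDS.
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)  # typical Logger retention window

class Logger:
    def __init__(self):
        self.cache = []  # list of (timestamp, record)
    def write(self, ts, record):
        self.cache.append((ts, record))
        # purge anything older than the retention window
        cutoff = ts - RETENTION
        self.cache = [(t, r) for t, r in self.cache if t >= cutoff]

class HDS:
    def __init__(self):
        self.history = []
    def restore_from(self, logger, since):
        # after an outage, pull anything newer than what we last stored
        self.history += [r for t, r in logger.cache if t > since]

logger, hds = Logger(), HDS()
now = datetime(2016, 6, 1)
logger.write(now - timedelta(days=20), "old call")  # ages out of the cache
logger.write(now, "recent call")
hds.restore_from(logger, since=now - timedelta(days=1))
assert hds.history == ["recent call"]
```

The key point the sketch captures: an HDS outage shorter than the retention window loses nothing, because the Logger still has the data to backfill.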
You will nearly always see an HDS installed on the same server as an AW. In the latest versions, you will also see a component referred to as a “DDS.” This stands for “Detailed Data Server” and is just an HDS with call detail data. When you install all of these together, you will see it referred to as an “AW-HDS-DDS.” You generally will not see these components split out except in the largest environments.
The above seven are core to the product suite, and will generally be found in every CCE install, but there are a few other common components. I’ll cover them briefly here.
Though technically optional, Cisco Unified Intelligence Suite, or CUIC, is also generally found in every CCE install. CUIC is an out-of-box web-based reporting app for CCE and CVP. From it users can run both historical and real-time reports, create dashboards, and a few other nice report-y features.
The Cisco Dialer can automatically dial lists of customers at high volume and transfer these calls either to an agent or an automated IVR script. Uses include: making collections calls, sending automated notifications, conducting automated polls, or scamming the elderly. You know those telemarketing calls you hate? This does that. You are welcome.
The Dialer communicates with CCE via a special type of PG known as a “Media Routing” or MR PG.
Email Interaction Manager and Web Interaction Manager give agents the ability to interact with customers via email and web chat, respectively. Both products are resold under a Cisco label but are actually developed by a third party. Depending on requirements, you may have either the Cisco-branded version or the full standalone product.
Like the Dialer, EIM and WIM integrate with CCE via an MR PG.
Enterprise Chat and Email (ECE)
Same concept as EIM/WIM above, and in fact primarily developed by the same third party. It was newly released with version 11.5 of CCE and is intended to replace EIM/WIM, which are now considered End-of-Life.
SocialMiner is a relatively new entry in the CCE lineup. In principle, it acts as a social aggregator and allows agents to interact with customers via social media, Twitter and Facebook specifically. In practice, the social media features are not often used. It also includes a newer feature allowing customers to ask to be called back via the web or a mobile app (SocialMiner itself provides a web API for developers to build this on). It works great but is relatively new and not that well known, so you don’t see it out in the wild too much yet. Another even newer feature is the Task Routing API, which allows you to route arbitrary tasks (like chats, incident tickets, or whatever) to agents via UCCE. Finally, SocialMiner provides email and chat features for UCCX, but that doesn’t affect us in the CCE world, so we won’t talk about it.
For the new callback and Task Routing features, SocialMiner integrates with CCE via an MR PG. The other features are standalone and do not directly tie in to the CCE system at all.
Ok that’s quite a few things, but we’re not done yet (sorry). We still have to add our IVR: CVP.
I will break it down into six components:
The Call Server is the heart of CVP. It is responsible for routing calls, sending requests to CCE, and running simple IVR scripts.
Complex IVR scripts built with Cisco’s Call Studio tool (covered below) run on the VXML Server. Think of this as a web server that serves IVR applications instead of web apps.
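To make “web server that serves IVR applications” concrete, here is a minimal VoiceXML document of the kind a VXML Server returns to the voice browser. This is a generic illustration based on the VoiceXML 2.0 standard, not output from an actual Call Studio project, and the URL is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="main_menu">
    <field name="choice" type="digits?length=1">
      <prompt>For sales, press 1. For support, press 2.</prompt>
      <filled>
        <!-- The next document is fetched over HTTP, just like a web app -->
        <submit next="http://vxml-server.example.com/menu_result"
                namelist="choice"/>
      </filled>
    </field>
  </form>
</vxml>
```

The request/response loop is exactly the web model: the voice browser plays the prompt, collects a digit, and submits it back over HTTP for the next document.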
The Media Server is simply a web server hosting the IVR’s audio files. This is not really a specific piece of software to install. By default, Cisco has you manually load your audio files into IIS (Microsoft’s web server that’s included in Windows Server) on the CVP servers. Optionally, you can use any other existing web server.
The Reporting Server is an optional component that stores IVR-specific historical data, like which options each caller selected in a given CVP script. It is also required if you are using the Courtesy Callback feature, which allows callers to hang up and be called back automatically when they are close to being routed to an agent. In that case, the Reporting Server stores the list of calls that are queued for callbacks.
Call Studio is a standalone piece of software used to develop and test complex IVR scripts. Unlike the other components, Call Studio is not a server component. Instead, it is intended to be installed on script developers’ workstations (Windows only). That said, you will often see at least one copy installed on a shared server so that it can be accessed by multiple developers.
The Operations Console is a web application used to configure and administer the CVP servers. Any configuration applied to the Call, VXML, or Reporting Servers has to be done through the Operations Console web app. The Media Server is managed through Microsoft’s IIS administration tools if you’re using IIS, or product-specific tools if you’re using a different web server to host your audio. If you’re using Call Studio to develop IVR scripts, you can also use the Operations Console to upload your scripts to the system.
Almost every component listed above is redundant, providing “High Availability” (HA). That means there are at least two instances; if one instance fails, the other takes its place with minimal or no service interruption. Typically the primary instance is referred to as “Side A” and the secondary as “Side B”. When referring to a specific instance of a component, you will typically see the component name and letter; for example, the Side A instance of the Router is usually written as “Router A.” This varies a bit depending on which component you’re dealing with. CUIC and the AW-HDS, for example, are typically numbered (i.e. “CUIC 1”, “CUIC 2”, etc.), since you can have more than just two of each.
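From a client’s point of view, the A/B arrangement behaves like a simple ordered failover: prefer Side A, fall back to Side B. Here is a toy sketch of that effect; CCE’s real duplexed design is far more sophisticated (both sides run in lockstep), and the names below are hypothetical:

```python
# Toy illustration of A/B failover. Not how CCE actually synchronizes
# its sides, but the client-facing effect is similar: if one side
# fails, the other keeps serving.

class Side:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} handled {request}"

def route(request, sides):
    """Try each side in order, returning the first successful result."""
    for side in sides:
        try:
            return side.handle(request)
        except ConnectionError:
            continue
    raise RuntimeError("all sides down")

router_a = Side("Router A", healthy=False)  # simulate a Side A failure
router_b = Side("Router B")
assert route("call 1001", [router_a, router_b]) == "Router B handled call 1001"
```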
The majority of UCCE’s components are redundant out-of-box, but there are a couple exceptions.
While the Dialer, the component that actually places calls, is redundant, the process that feeds records to the Dialer is not. This process runs only on the Side A Logger and is referred to as the “Campaign Manager.” If Logger A fails, dialing across the system will stop. There are solutions to mitigate this by maintaining a backup of either the database or the Logger A VM and restoring Logger A on failure, but both require manual intervention and a fair amount of administrative overhead.
On the CVP side, the Reporting Server and Operations Console are not redundant. With the Operations Console this is not much of a concern, since it isn’t involved in active call processing. Its primary job is to push core configuration changes to other CVP components, so in the case of a failure you will not be able to push changes while it is down, but calls will not be affected.
The Reporting Server provides a backup mechanism but does not provide true HA (a second server that automatically takes over if the first fails). You may sometimes see two Reporting Servers installed, but in this case each one receives events from separate Call Servers. For example, with two Reporting Servers and two Call Servers, Reporting Server 1 will only receive data from Call Server 1, and Reporting Server 2 will only receive data from Call Server 2. If Reporting Server 1 dies, historical data from Call Server 1 is lost for as long as it is down, while data from Call Server 2 continues to be written to Reporting Server 2. Only one Reporting Server can handle the Courtesy Callback feature, so if that one dies, Courtesy Callback stops working.
Finally, SocialMiner is standalone and does not offer any built-in redundancy.
Most of the components run on Microsoft Windows, with the exception of CUIC, Finesse, and SocialMiner, which run on a Cisco flavor of Linux. For the most part, the components above are each installed as a single Windows service, which you can see if you look at the standard Windows Services console:
The one named “CG” is the CTI Server, and “Distributor” is the single service installed for both Administrative Workstations and Historical Data Servers.
While each of these components can be installed separately on its own server, depending on product line and sizing needs, you will typically see some of them combined onto single servers. Let us take a look at the common architectures by product line. With modern versions of UCCE, each server is a virtual machine; bare-metal installs are not supported. Note that we are including UCM (introduced in the previous post in this series) for completeness even though we do not talk about it much in this post.
Your typical UCCE install looks something like this:
In legacy installs, you may have CTIOS and CAD services (both A and B) installed on the same VMs as the Agent PG instead of Finesse, or you could have these services in addition to Finesse if you’re still transitioning to Finesse.
In earlier versions, PCCE had a slightly different standardized way to organize services compared to the above, so you may see that out in the wild, but as of v11.5 the above is also the standard design for PCCE.
One more thing and we’re done, promise! This bit is slightly more technical so if networking isn’t relevant to you then you can safely skip it.
The CCE components require at minimum two separate networks to connect to one another: the “Private” network and the “Public” or “Visible” network. If you know a bit about computer networking, do not get thrown off by the names; they are just a CCE convention with no relation to the common terms “private network” and “public network.”
The Public network simply refers to the standard network your servers communicate on within your data center. Technically speaking this will still be a “private” network in the sense that it will use your internal private IP address space. It will NOT be publicly accessible, unless your network engineers just want to watch the world burn. You will also commonly see this referred to as the “Visible” network, though that term is also a bit misleading, albeit less so than “Public.”
The Private network is a separate network intended only for highly available CCE components to communicate and stay synchronized with their partner (i.e. Router A synchronizing with Router B and vice versa). The reason for having this separate network is that this traffic is high priority and latency sensitive. Putting it on a separate network isolates it from broadcast storms and other traffic spikes. It also ensures that there isn’t a single point of failure for communication between Sides A and B.
The only CCE components that communicate over the Private network are the PGs, Routers, and Loggers. The other components either don’t actively synchronize with Side B at all (they just act as a generic pool of resources that other devices can distribute calls to, like CVP), or they synchronize over the standard Public network (like UCM, CUIC, and Finesse).
Here is our schematic updated with network connectivity:
Wrapping Up and Next Steps
Well, that wraps up our introduction to CCE architecture. If you are new to CCE I know it is a lot to take in, but hopefully the past three posts have given you a solid foundation for understanding what CCE is about and how it looks. There is certainly a whole lot more to cover to really understand everything in depth, and we will be looking to continue this series with the next post likely to focus on scripting and configuration. We will also be publishing several other UCCE/CVP related posts in the meantime. If you’re interested, sign up for email updates below and we’ll notify you whenever we post something new.