Contact centers often struggle to deliver the best possible customer experiences and meet customer demands.
Without the insights provided by data analytics, contact centers are unable to understand the customer journey, behavior, and preferences, and are therefore limited in their ability to improve operations and drive business growth.
By using data analytics, contact centers can gain valuable insights into their customers and use this information to deliver better customer experiences and meet customer demands. This can lead to improved operations, increased customer satisfaction, and ultimately, business growth.

Data analytics has become an increasingly important tool for contact centers looking to improve their operations and provide better customer experiences. By analyzing large amounts of data collected from various sources, contact centers can gain insights into customer behavior, preferences, and journeys, and use this information to deliver more personalized and effective customer service.

One of the key benefits of data analytics in contact centers is the ability to track and analyze customer journeys. By collecting data on customer interactions with the contact center, such as phone calls, emails, and chat conversations, contact centers can get a better understanding of how customers are interacting with their organization and where they may be experiencing issues or frustrations. This data can be used to spot patterns and trends, and to pinpoint areas where the customer journey could be improved.
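
As an illustration, a first pass at journey analysis can be done directly on an interaction export. The sketch below uses Python/pandas; the file name and columns (customer_id, ticket_id, channel, timestamp) are assumptions about what such an export might look like, not a prescribed schema.

```python
import pandas as pd

# Minimal sketch: the file name and column names are assumptions about
# a hypothetical export of contact-center interactions.
interactions = pd.read_csv("interactions.csv", parse_dates=["timestamp"])

# Order each customer's touchpoints to reconstruct their journey.
journeys = interactions.sort_values(["customer_id", "timestamp"])

# Tickets that needed many contacts are candidates for journey friction.
touchpoints = journeys.groupby("ticket_id").size().rename("contacts")
high_effort = touchpoints[touchpoints > 3].sort_values(ascending=False)

# Channel switches (e.g. chat -> phone) often signal that the first
# channel failed to resolve the issue.
journeys["prev_channel"] = journeys.groupby("customer_id")["channel"].shift()
switches = journeys.dropna(subset=["prev_channel"])
switches = switches[switches["channel"] != switches["prev_channel"]]
switch_counts = switches.groupby(["prev_channel", "channel"]).size().sort_values(ascending=False)

print(high_effort.head(10))
print(switch_counts.head(10))
```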

Another important use of data analytics in contact centers is to understand customer behavior and preferences. By analyzing data on customer interactions, contact centers can gain insights into what types of products and services customers are most interested in, what channels they prefer to use for communication, and what factors influence their decision-making process. This information can be used to tailor customer interactions and experiences to better meet the needs and preferences of individual customers.
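
As a concrete example of the preference analysis described above, the sketch below computes each segment's channel mix from the same hypothetical interaction export; the segment column is an assumption, and any customer attribute such as department or country would work the same way.

```python
import pandas as pd

# Minimal sketch: column names are assumptions about a hypothetical export.
interactions = pd.read_csv("interactions.csv", parse_dates=["timestamp"])

# Share of contacts per channel within each customer segment:
# rows sum to 1.0, making preferred channels easy to compare.
channel_mix = pd.crosstab(interactions["segment"], interactions["channel"], normalize="index")
print(channel_mix.round(2))
```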

In addition to understanding customer behavior and preferences, data analytics can also help contact centers deliver on customer demands. By analyzing data on customer interactions, contact centers can identify areas where customers are not being serviced adequately or where there are delays in service. This information can be used to improve processes and systems to better meet customer demands and expectations.
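
One concrete way to surface such gaps is to measure response times against target thresholds. The sketch below is a minimal illustration; the ticket export, its columns, and the per-priority targets are all assumptions for the example rather than fixed values.

```python
import pandas as pd

# Minimal sketch: file name, columns, and targets are assumptions.
tickets = pd.read_csv("tickets.csv", parse_dates=["created", "first_response", "resolved"])

tickets["response_hours"] = (tickets["first_response"] - tickets["created"]).dt.total_seconds() / 3600

# Hypothetical response-time targets (hours) per priority.
targets = {"P1": 1, "P2": 4, "P3": 8, "P4": 24}
tickets["target_hours"] = tickets["priority"].map(targets)
tickets["breached"] = tickets["response_hours"] > tickets["target_hours"]

# Where are customers waiting longer than they should?
breach_rate = tickets.groupby(["priority", "channel"])["breached"].mean().sort_values(ascending=False)
print(breach_rate.head(10))
```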

Let's take a look at some areas where analytics can be applied:

  1. What are the top ticket drivers?
  2. What is the time series PBA Volume view of the tickets – Year/Month/Week/Day/Hour?
  3. What is the time series PBA Response Time view of the tickets – Year/Month/Week/Day/Hour?
  4. What is the time series PBA Resolution Time view of the tickets – Year/Month/Week/Day/Hour? (See the first sketch after this list for items 1-4.)
  5. Are there any observable PBA patterns in the Volume/Response Time/Resolve Time data?
  6. Are high Volume/Response/Resolve Times due to agent, requestor, KB, Infra, App, etc. issues?
  7. Are high Volume/Response/Resolve times related to the same call driver? The same agent? The same requestor?
  8. What Users/Country/Department are generating the tickets?
  9. Who is resolving these tickets (Resolver Name/Group)?
  10. What is the priority of these tickets?
  11. How were the tickets generated (Email, Phone, Chat, Web, Fax)?
  12. What is the status of the tickets?
  13. How long are the open tickets aging?
  14. Are they repeat tickets for the same issue (Agents not resolving or HW/SW/NW Bugs)? (See the aging and repeat-ticket sketch after this list for items 13-14.)
  15. Does MTTR vary between Agents, Time of Day/Week/Month, Application, Region, Business Unit, etc.? Are there any observable SPC patterns in the MTTR data? (See the MTTR sketch after this list.)
  16. Does quality of the documentation vary between Agents, Time of Day/Week/Month, etc.?
  17. Do the negative CSAT surveys map to Agents, Time of Day/Week/Month, Application, Resolver Group, Business Unit, Country, etc.?
  18. Is any training for the agents or requestors needed based on quality of the tickets, MTTR reviews, Resolution Codes, and L2 groups tickets are sent to?
  19. Is any update to the KB documentation needed based on quality of the tickets, MTTR reviews, Resolution Codes, and L2 groups tickets are sent to?
  20. Does the Self-Help Portal need to be updated based on quality of the tickets, MTTR reviews, Resolution Codes, and L2 groups tickets are sent to?
  21. Do we need to distribute e-newsletters and FAQs, or hold DSUG meetings, based on quality of the tickets, MTTR reviews, Resolution Codes, and L2 groups tickets are sent to?
  22. Are Tickets being flagged in a way to exclude them from SLA calculations and CSAT surveys?
  23. Are Agents prolonging tickets at certain periods of the day to avoid taking additional calls?
  24. Are Agents offloading tickets at certain periods of the day (Breaks, Lunches, Dinner, Shift End)?
  25. What is the case-to-call ratio?
  26. Is ticket documentation quality varying based on agents, time of day, etc.?
  27. Are Agents closing tickets prematurely to inflate FCR?
  28. Are Agents referring a higher number of tickets at certain times of the day?
  29. Are Customers requesting L1.5/DSS teams versus working with SD?
  30. Are repeat calls confined to a certain group of agents?
  31. Were the Business requirements unclear for the implementation, ultimately leading to an outage immediately after implementation or shortly afterwards (i.e. a few business days or a week later)?
  32. Were the Business requirements clear but not implemented properly? What was the cause (procedural, tools, human error, etc.)?
  33. Did we make changes without consulting the business for any incidents?
  34. Are the incidents (or what percentage of them) the result of failed Technical or Business PIV, or both?
  35. Was PIV performed by both areas after implementation? If not, why (i.e. not required, oversight, Business not available, Technical resource shortages, etc.)?
  36. How many incidents were related to the TEST/UAT environment not being like-for-like with Production, thus incomplete testing?
  37. Is the ratio of incidents in Application or Infrastructure higher than the other LOB applications?
  38. Is the largest percentage of Application outages isolated to a finite group of applications? If so, what is that telling us?
  39. Is the ratio of Changes larger against a specific set of Applications/Infrastructure versus the remainder of Applications? Why?
  40. Are Applications being downgraded in Priority levels as a result of incorrect batch flow automation or human error when creating the ticket? What is the ratio of each type?
  41. Is it the same suite of Applications being incorrectly categorized with respect to the priority being downgraded?
  42. Are there any incidents being reported under different Priority ratings (i.e. similar impact though reported as P1, P2, P3, P4 at different times) thus not consistent?
  43. How many incidents are related to changes that were implemented to correct other errors?
  44. How many changes have been implemented to correct improperly applied changes?
  45. How many and which Applications/Infrastructure have a higher ratio of incidents related to Single Points of Failure, Technology Currency, Patching and/or Resiliency exposures?
  46. Do we have a proportionate number of incidents related to Software versus Hardware issues?
  47. Can we ascertain how many incidents are repeats and/or could have been avoided if we properly executed on the Problem Management Process to determine & execute on Root Cause/Avoidance at the first incident?
  48. For the incidents identified as repeat, was the problem management process performed with the first incident or not and why?
  49. If the Problem Management process was executed and the RCA clearly identified, why were you not able to avert the subsequent outage?
  50. How long after a Change has been implemented did you suffer the first outage (i.e. when a new feature is first utilized, etc.)?
  51. What are the common incident outage themes (i.e. People, Process, Documentation, Technical, Tools) across both Application/Infrastructure?
  52. On the Infrastructure side, can you ascertain outage ratios against Database-Middleware, Mainframe, Mid-Range, Windows, Virtual, Network, Storage, Unix, etc. to identify a theme? We can then dig further into this output.
  53. For incidents – how many do you learn from the client (i.e. to Service Desk) first before you know and can react on the Application-Infrastructure side via alerting/monitoring?
  54. What is the typical time gap when this occurs?
  55. Where can you implement synthetic testing of application functionality or transactions to simulate a flow and put up an alert if a certain condition is met before a client calls (i.e. similar to some Online Banking test scripts)?
  56. For alerting/monitoring – if you have a reaction time gap, is this because of short staffing on the Application/Infrastructure side to deal with the volume?
  57. For alerting/monitoring – if you have a reaction time gap, could it be because you are reliant on a non-Command Centre 7/24 structure (eyes on glass) or Follow the Sun model to react in real time, as opposed to the alert being sent to an offsite L2/L3 support person who must log in to validate the error and then take action, resulting in delays?
  58. For alerting/monitoring – are you seeing a negative trend in either Infrastructure or Application alerting and reaction times?
  59. How many incidents do not have alerting/monitoring defined, either due to oversight or because you cannot monitor a service, function, alert, etc.?
  60. What is the average time lag between Alert -> Business Impact -> Incident Ticket Open -> Resolved -> Closed -> MTTR -> Problem Ticket Open/Close?
  61. What are the trend Root Cause Codes (Infrastructure/Application)?
  62. What is the time delay to Root Cause identification for Application/Infrastructure/3rd Party-Vendor?
  63. Are the Incident trends more prevalent during the Monday-Friday time period versus Saturday-Sunday?
  64. Are the Incident trends more prevalent during the 8AM-4PM, 4PM-Midnight, or Midnight-8AM time periods, on Monday-Friday versus Saturday-Sunday?
  65. Are the Incident trends more prevalent during the Month End, First Business Day, Mid-Month, Quarter End time frames?
  66. Are the incident trends more prevalent during short staffing periods or compressed timelines for projects/processing times etc. thus resulting in a higher ratio of incidents (i.e. Rushed Work is not Quality Work)?
  67. As per industry trends, are you worse than, equal to, or better than a potential peer group?
  68. What is the competition doing that you are not, if your Application/Infrastructure architecture-configuration, size, interfaces, and dependencies are equal and you have a higher ratio of outages?
  69. How much data or tickets did you discard during this exercise? Was it material that could have altered the outcome of this report?
  70. Did you surface trends by users/groups?
  71. The automated alerting that was reported – was it more prevalent in one Application or a portfolio of applications?
  72. Were there specific trends on a day of the week?
  73. Do you have more details on repeat trends?
  74. Were you able to report on trends relative to alert/outage/ticket open-response times and the gaps within?
  75. You need to create a Service Management roadshow that includes a Contact Center/Application support Incident engagement flow in order to educate the field.
  76. Are tickets being addressed at the appropriate layers (Service Desk, Tier 2, Management, etc.)?
  77. Proactive Trend Analysis needs to be done consistently at the Application level. How will this be introduced?
  78. Are the trends/spikes in line with the interfacing apps which feed the highlight applications in this report?
  79. Alert Settings – are the Performance & Capacity Management settings being reviewed with the Application space with respect to Trends/Insights?
  80. Do you have more details around Change Related Event-Incident Trends?
  81. Do you have more details around Vendor related incidents to extract trends?
  82. How can you expand on the inbound quality issues (i.e. feeder applications)?
  83. What are you learning or missing in the P3-P4 trends?
  84. Why are certain Service Request volumes higher across the portfolio of applications?
  85. Did you see behaviors across the Applications that are consistent within a specific dept?
  86. Did you extract any Infrastructure related Alert-Incident data to match the themes as part of the overall exercise?
  87. Are there recommendations that support the establishment of an Application Command Centre Model (i.e. 7/24 eyes on glass support)?
  88. Who is receiving reporting on these negative trends or addressing tickets in their queues?
  89. Who will review the Alert to Incident variables to ensure a sanity check has been done?
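
Several of these questions can be answered directly from a ticket export. The first sketch below, in the same hypothetical Python/pandas setup as the earlier examples, covers items 1-4: top ticket drivers plus volume, response time, and resolution time over time. The category, priority, and timestamp column names are assumptions, not a prescribed schema.

```python
import pandas as pd

# Minimal sketch for items 1-4: file name and column names are assumptions.
tickets = pd.read_csv("tickets.csv", parse_dates=["created", "first_response", "resolved"])
tickets["response_hours"] = (tickets["first_response"] - tickets["created"]).dt.total_seconds() / 3600
tickets["resolve_hours"] = (tickets["resolved"] - tickets["created"]).dt.total_seconds() / 3600

# 1. Top ticket drivers.
top_drivers = tickets["category"].value_counts().head(10)

# 2-4. Volume, response time, and resolution time by month; the same
# pattern works for week, day, or hour by changing the grouping key.
by_month = tickets.groupby(tickets["created"].dt.to_period("M")).agg(
    volume=("category", "size"),
    avg_response_hours=("response_hours", "mean"),
    avg_resolve_hours=("resolve_hours", "mean"),
)
by_hour = tickets.groupby(tickets["created"].dt.hour)["response_hours"].mean()

print(top_drivers)
print(by_month.tail(12))
print(by_hour)
```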
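
The MTTR sketch referenced in item 15 applies simple control limits (mean plus or minus three standard deviations) to daily MTTR to flag SPC-style outliers, and breaks MTTR down by agent. The agent column and the three-sigma rule are illustrative assumptions, not the only reasonable choices.

```python
import pandas as pd

# Minimal sketch for item 15: column names are assumptions.
tickets = pd.read_csv("tickets.csv", parse_dates=["created", "resolved"])
tickets["mttr_hours"] = (tickets["resolved"] - tickets["created"]).dt.total_seconds() / 3600

# Daily MTTR with naive three-sigma control limits to surface outliers.
daily_mttr = tickets.groupby(tickets["resolved"].dt.date)["mttr_hours"].mean()
mean, std = daily_mttr.mean(), daily_mttr.std()
out_of_control = daily_mttr[(daily_mttr > mean + 3 * std) | (daily_mttr < mean - 3 * std)]

# Does MTTR vary between agents?
mttr_by_agent = (
    tickets.groupby("agent")["mttr_hours"]
    .agg(["count", "mean", "median"])
    .sort_values("mean", ascending=False)
)

print(out_of_control)
print(mttr_by_agent.head(10))
```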
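
Finally, the aging and repeat-ticket sketch referenced in items 13 and 14 measures how long open tickets have been waiting and looks for the same requestor raising the same category repeatedly. The status, requestor, and category columns are again assumptions about the export.

```python
import pandas as pd

# Minimal sketch for items 13-14: column names are assumptions.
tickets = pd.read_csv("tickets.csv", parse_dates=["created", "resolved"])

# 13. How long are the open tickets aging?
open_tickets = tickets[tickets["status"] != "Closed"].copy()
open_tickets["age_days"] = (pd.Timestamp.now() - open_tickets["created"]).dt.days
aging = open_tickets["age_days"].describe()

# 14. Repeat tickets: the same requestor raising the same category more than once.
repeats = (
    tickets.groupby(["requestor", "category"])
    .size()
    .rename("tickets")
    .reset_index()
    .query("tickets > 1")
    .sort_values("tickets", ascending=False)
)

print(aging)
print(repeats.head(10))
```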

Overall, data analytics enables contact centers to gain valuable insights into the customer journey, behavior, and preferences, and to use this information to deliver better customer experiences and meet customer demands. By leveraging the power of data analytics, contact centers can improve their operations, increase customer satisfaction, and drive business growth.

Subscribe To My Blog To Learn More: https://www.imadlodhi.com/subscribe