
Conversations and Dashboard | Monitoring ClaudIA

Conduct audits to optimize ClaudIA's content and monitor all your conversations and generated data
By Fabrício Rissetto and 4 others
8 articles

📊 How to Improve ClaudIA's Retention?

Explanation of each tab, chart, and table, along with important notes on how to use and interpret the data.

🧭 General Structure of the Retention Section

📌 Important:
- The start and end date filters apply only to the main metrics tab. Data is organized into fixed, complete weeks to keep analyses performant and standardized.
- All percentages shown in the tables are relative to the total tickets for that week. For example, if an escalation reason appears with 2%, it means that 2% of all tickets in that week had that cause.

🧩 Tab 1 – Retention Metrics – Overall

This is the managerial tab, used to monitor progress, goals, and offenders in a consolidated view.

🔶 Chart: “Total Tickets × % Retained per Week”
- Orange bars: total tickets received per week.
- Orange line: percentage of tickets retained by ClaudIA (without escalation).
- Helps you understand volume trends and AI effectiveness.

🔶 Side indicators:
- Retention goal (set in the current operation's target)
- Difference between last week and the goal
- Last week's retention
- Maximum retention in the last 8 weeks
- Average retention in the last 8 weeks

🔶 Table: “Percentage of Tickets by N2 Reason”
- Shows the main reasons for escalation to N2, week by week.
- Highlights the main offender of the past 4 weeks, with an average value.
- Helps identify persistent patterns or reasons, such as use of N2 content or transfer by Eddie.

🛠 Tactical / Operational Tabs

These tabs support fine-tuning, content review, and case analysis. The structure is similar across all of them.

📁 Tab: Used N2 Content

🔶 Table: “Retention Gain Potential by Content – Last 8 Weeks”
- Displays the N2 articles that recur most often in escalated tickets.
- Weekly columns show each content item's contribution as a percentage of the total tickets handled by the AI.
- Use it to prioritize adjustments and the migration of content to N1 or to interactive (Eddie).
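The weekly retention and reason percentages described above are simple ratios over each week's total tickets. A minimal sketch with illustrative data (the ticket records and field names such as `escalated` are assumptions, not the dashboard's actual schema):

```python
# Sketch: weekly retention % and escalation-reason shares, mirroring the
# "Total Tickets x % Retained per Week" chart and the N2 reason table.
# Data is illustrative; field names are assumptions.
from collections import Counter

tickets = [
    {"week": "2025-W30", "escalated": False, "reason": None},
    {"week": "2025-W30", "escalated": True,  "reason": "Used N2 content"},
    {"week": "2025-W30", "escalated": True,  "reason": "Transferred by Eddie"},
    {"week": "2025-W30", "escalated": False, "reason": None},
]

def weekly_retention(tickets, week):
    in_week = [t for t in tickets if t["week"] == week]
    total = len(in_week)
    retained = sum(1 for t in in_week if not t["escalated"])
    retention_pct = 100 * retained / total
    # Every percentage is relative to the week's TOTAL tickets,
    # not just the escalated ones.
    reason_pct = {
        reason: 100 * count / total
        for reason, count in Counter(
            t["reason"] for t in in_week if t["escalated"]
        ).items()
    }
    return retention_pct, reason_pct

pct, reasons = weekly_retention(tickets, "2025-W30")
print(pct)      # 50.0
print(reasons)  # each escalation reason as % of the week's total
```

With 2 of 4 tickets retained, the chart would plot 50% for that week, and each escalation reason would show as 25% of the week's tickets.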
🔶 Table: “Retention Gain Potential by Tags – Last 8 Weeks”
- Shows the tags that recur most often in escalated tickets.
- Weekly columns indicate each tag's contribution as a percentage of the total tickets handled by the AI.
- Use it to prioritize adjustments related to themes/tags.

🔶 Chart: “Top Contents and % Retention Potential – Last 14 Days”
- Shows the accumulated retention percentage that could be achieved by optimizing the Top N contents.
- Example: optimizing the top 10 contents could yield an approximate 7.7% gain in retention.
- Clearly demonstrates the impact of prioritizing adjustments to the right articles.

🔶 Table: “Section Selection Errors – Used vs. Correct”
- Shows when ClaudIA used incorrect content, based on support feedback.
- Displays which section was used and which one should have been used (N1 or INTERACTIVE).
- Helps identify content usage issues.

📁 Tab: Transferred by Eddie (by design)

🔶 Table: “Retention Potential (%) by Eddie Flow”
- Indicates which Eddie flows appear most frequently in escalations.
- Values per week, allowing trend analysis.
- Used to prioritize adjustments to content or flow structure.

🔶 Table: “Tickets Escalated by Flow”
- Lists real tickets, transfer reasons, and flow links.
- Serves for validation and context review.

📁 Tab: Customer Requested Human

🔶 Chart: “Agent Interactions Before Handover”
- Shows how many messages the customer exchanged with ClaudIA before requesting human support.
- If most cases fall in the 3-to-5-interaction range, it may indicate a lack of initial engagement or ineffective content.

🔶 Table: “Tickets by Interaction Range”
- Lists the tickets corresponding to the interaction range depicted in the chart.

🔶 Chart: “Top 5 Topics Customers Ask for Human Support”
- Shows the topics for which customers most often request human support.
- Column: absolute volume of tickets | Line: % of total requests.
- Helps identify topics with perceived low performance.
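The accumulated value in the “Top Contents and % Retention Potential” chart is the sum of each content's individual share of escalated tickets. A hypothetical sketch (the per-content percentages below are made up, not real dashboard data):

```python
# Sketch: cumulative retention potential of the Top N contents.
# Each value is a content's share of escalated tickets as a % of all
# tickets handled by the AI. Values are illustrative.
content_potential = {
    "Refund policy (N2)": 2.1,
    "Invoice reissue (N2)": 1.6,
    "Plan upgrade (N2)": 1.3,
    "Password reset (N2)": 0.9,
}

def top_n_potential(potential_by_content, n):
    # Optimizing the Top N contents could recover, at most,
    # the sum of their individual shares.
    top = sorted(potential_by_content.values(), reverse=True)[:n]
    return round(sum(top), 1)

print(top_n_potential(content_potential, 3))  # 5.0
```

In this toy example, fixing the top 3 contents would yield up to a 5.0% retention gain; the chart's "top 10 → 7.7%" example follows the same accumulation.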
🔶 Table: “Content Used Before Customer Requests Human”
- Indicates which contents ClaudIA showed before the customer requested support.
- Useful for understanding whether the content contributed to frustration or confusion.

📁 Tab: Transferred Due to Eddie Call Failure

🔶 Table: “Eddie Call Error % by Flow”
- Shows flows whose technical calls failed, leading to escalation.
- Used to identify infrastructure issues or bugs.

🔶 Table: “Tickets with Call Error by Flow”
- Lists the tickets and flows with errors.
- Ideal for validation with the technical team and targeted fixes.

📁 Tab: Transferred Due to Eddie Repetition

🔶 Table: “% of Eddie Flow Repetitions by Period”
- Indicates which flows reached the attempt limit without resolution.
- Points to problems with content effectiveness or the need to adjust the number of attempts.

🧠 Best Practices for Analysis
- Use the main tab to monitor overall progress, goals, and major offenders.
- Use the tactical tabs to explore specific causes and prioritize adjustments to content, flows, or the response model.
- Always consult the ticket links when validating against real examples.

If you have questions on how to interpret any data or want to suggest dashboard improvements, contact our team! We're here to help 🧡

Last updated on Aug 24, 2025

Sections Used by ClaudIA: What They Are, Why They Matter, and How to Audit

In this FAQ and video, we explain the logic behind ClaudIA's selection of knowledge-base sections and the tagging functionality (Tagger) that runs when a ticket is closed. We also explain how to audit this data directly in the Hub, so you can use ClaudIA's Dashboard to gain insights and improve tagging. https://www.loom.com/share/f6601cbe38b6400da3830e8fd0d52230

What Are “Used Sections”?

Every time ClaudIA responds to a customer, it consults one or more sections of its knowledge base (N1 or N2) to formulate the response. These are the used sections: the blocks of knowledge the AI drew on to compose the answer sent to the customer.

🎯 Why Is This Important?

This information is essential to:
1. Understand how ClaudIA is reasoning in each response.
2. Audit whether it is actually using the right content.
3. Train and improve the AI's behavior, identifying when sections are used incorrectly.
4. Define the conversation's tag: the section with the highest weight drives the automatic tagging algorithm.
5. Detect when there is a transfer to N2, and whether it was based on the correct content.

What Has Been Launched

Now you can:
- See directly which sections were used in each of ClaudIA's responses.
- Provide positive or negative feedback on the use of those sections.
- Audit the conversation's final tag based on the sections used.

How to View Used Sections

1. Access any ticket in the Hub.
2. Find ClaudIA's response.
3. Click the ℹ️ icon to open the response details.
4. You will see:
   - ClaudIA's reasoning (consulted term + intention).
   - The used sections that composed the response.
   - An icon identifying which sections were used (e.g., ✅ or a highlighted icon).
   - If there are N2 sections, they will also be highlighted.

Practical Example

Customer's question: “Do you integrate with Zendesk and Intercom?”

ClaudIA's response: “Yes!
We have native integration with Intercom. In the case of Zendesk…”

Used sections:
- N1 – Intercom Integrations
- N2 – Zendesk Integration via API

Outcome:
- ClaudIA used content from two sections.
- Since one of them was N2, the conversation was automatically escalated.
- The conversation tag was Interest in hiring, as that was the most relevant section.

How to Audit Used Sections

1. When reviewing a conversation, click the ℹ️ icon on any response.
2. See which sections were marked as used.
3. Assess whether they make sense given the response.
4. Click 👍 or 👎 to give feedback on the use of the section. Important: this only evaluates whether the marked section was used correctly, not whether the response itself was good or bad.
5. If the response is incorrect, proceed with the normal audit at the end of the ticket.

What Happens with This Feedback?

- It helps improve ClaudIA's behavior by adjusting how sections are used.
- It is used to calibrate the automatic tagging algorithm.
- It helps our team understand whether N2 sections are being used correctly, which affects both the AI's performance and the escalation criteria.

Quick Tips

- ⚠️ Sometimes ClaudIA consults a section to confirm something, even when the response contains no literal excerpt from that content. This still counts as valid use.
- If you do not recognize a section's content in the response, mark 👎 and briefly describe why.
- Section feedback does not replace the conversation audit; it complements it.

What Influences the Conversation Tag?

At the end of the ticket, ClaudIA assigns a tag based on the used section with the highest score (relevance). You can review this tag during the audit:

1. Go to the end of the conversation.
2. Check whether the assigned tag corresponds to the content and intention.
3. If you want to change it, select the correct tag and submit the audit.

Where Does All This Impact?
- Dashboards of addressed topics
- Retention and escalation indicators
- ClaudIA's performance by section or subject
- Quality of automatic improvement suggestions
- Revenue (in some contracts based on sessions)
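The tagging and escalation rules described above (highest-scoring section defines the tag; any N2 section triggers escalation) can be summarized in a small sketch. The section names reuse the practical example, but the `score` weights and helper functions are hypothetical; the real scoring is internal to ClaudIA:

```python
# Sketch of the tagging/escalation logic described in this article.
# Scores and function names are illustrative assumptions.
used_sections = [
    {"name": "N1 - Intercom Integrations", "tier": "N1", "score": 0.62},
    {"name": "N2 - Zendesk Integration via API", "tier": "N2", "score": 0.81},
]

def conversation_tag(sections, tag_by_section):
    # The used section with the highest relevance score defines the tag.
    best = max(sections, key=lambda s: s["score"])
    return tag_by_section.get(best["name"], "Uncategorized")

def should_escalate(sections):
    # Any N2 section in the response triggers escalation to a human.
    return any(s["tier"] == "N2" for s in sections)

tags = {"N2 - Zendesk Integration via API": "Interest in hiring"}
print(conversation_tag(used_sections, tags))  # Interest in hiring
print(should_escalate(used_sections))         # True
```

This mirrors the practical example: the N2 Zendesk section has the highest weight, so it both sets the tag ("Interest in hiring") and causes the automatic escalation.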

Last updated on Aug 12, 2025