
ClaudIA Features

Detailed overview of some of the main features and behaviors of ClaudIA
By Eduardo Fukumoto and 7 others • 14 articles

Giving Instructions for ClaudIA (Modularized Prompts)

Problem Breakdown
The previous base prompt model consisted of a single large block of text that varied significantly between projects. This made standardization, maintenance, and A/B tests focused on specific parts of the prompt difficult.

Objective (Problem Solution)
Migrate all projects to a modularized base prompt model, where each section of the instructions is split into modules with defined themes. This allows greater control, consistency across projects, and more flexibility to evolve the system.

Features
- The new model assembles the base prompt from a combination of modules in a standardized order.
- The structure defined for all prompts follows the order: Company Context → General Instructions → Content Rules → Tone of Voice → Retrieved Sections.
- The definitions of each module can be viewed directly in the Hub, on the project settings screen.
- Modularization allows:
  - Standardizing instructions across clients.
  - Running A/B tests on one or more specific modules.
  - Making bulk adjustments and keeping prompts updated in a centralized way.
- The modular structure was created based on an analysis of the existing base prompts.
- The content of the prompts was preserved as much as possible during the migration, avoiding significant text changes.

Configuration
- Modularization has been applied to all active clients (including those in A/B tests).
- The feature allows enabling or disabling the basePrompt by modules.
- Changes can be made through the Settings Menu → Base Prompt Modules, where each prompt module can be edited individually.

Instructions for the Operations Team
Until we decide whether to keep only the modular format or the single-block model:
⚠️ Whenever you edit a client's base prompt, remember to update both:
- basePrompt (single block)
- basePromptModules (modularized version)
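For reference, here is a minimal sketch of how a modular base prompt could be assembled in the standardized order described above. The module keys, the assemble_base_prompt function, and the example contents are illustrative assumptions, not ClaudIA's actual implementation.

```python
# Illustrative sketch only: shows the standardized module order described above.
# Module keys and assembly logic are assumptions, not ClaudIA's real code.

MODULE_ORDER = [
    "company_context",
    "general_instructions",
    "content_rules",
    "tone_of_voice",
    "retrieved_sections",
]

def assemble_base_prompt(modules: dict[str, str]) -> str:
    """Concatenate the enabled modules following the standardized order."""
    parts = [modules[key] for key in MODULE_ORDER if modules.get(key)]
    return "\n\n".join(parts)

example_modules = {
    "company_context": "You are a support agent for Acme Inc. ...",
    "general_instructions": "Answer only based on the retrieved sections. ...",
    "content_rules": "Never share internal links. ...",
    "tone_of_voice": "Friendly and concise.",
    "retrieved_sections": "{RETRIEVED_SECTIONS}",
}

print(assemble_base_prompt(example_modules))
```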

Last updated on Aug 12, 2025

File Interpretation by Claudia (PDF, DOCX, and XLSX)

For more details on how the image transcription and document functionalities work, you can access the image transcription article. Claudia is capable of understanding and transcribing files sent by clients, such as PDF, DOCX, XLSX, and other types of documents. This functionality is available for all projects and is automatically activated.

How it works
- ClaudIA automatically detects links to files sent in conversations.
- For documents, it uses AI models and OCR (Optical Character Recognition) to extract the text, supporting the project's language.
- The transcription of files is used in the vector search of content. Additionally, we clearly indicate the format and title of the file to help retrieve content on how to respond to specific types of files.
- The transcription of files is also used when generating responses, making customer service more efficient and contextualized.

Files appear in the Hub as attachments, accompanied by the textual transcription (as shown in the image below). When you click on the file, a modal opens showing the transcription obtained from the document:

Persistence and Expiration
At the moment the file is received, Claudia performs the transcription immediately and stores the extracted text permanently. The original file may expire or be removed by the helpdesk after some time, but the transcription remains available, ensuring that the history and context are not lost.

Configuration
The functionality to read PDF and DOC files is enabled by default in all projects. If you wish to disable it for any special reason, please contact Customer Service.

How to Test
You can test file uploads directly through our Playground screen, or you can run an end-to-end test by creating a ticket directly in your helpdesk. This way, you can verify how the file is transcribed, interpreted, and used by the AI in real customer service.

Important Notes
1. If you have added any instructions to your base prompt (very uncommon) or created any content in the IDS (more common) to guide Claudia's responses when files are received, it is recommended to undo these adjustments to avoid conflicts with the functionality.
2. Because OCR technology is used, the quality of the document or file is crucial to ensure a good transcription.
3. Future improvements mapped: sending the file directly to an LLM model was not adopted because, in tests, it did not reach the minimum speed expected for a quality interaction (<2 min). However, we will reassess this possibility in the future.
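To make the persistence behavior concrete, here is a minimal sketch of the transcribe-on-receipt flow described above, assuming a hypothetical extract_text_with_ocr helper and an in-memory store in place of the real OCR pipeline and database.

```python
# Sketch of the persistence flow described above (illustrative only).
# extract_text_with_ocr() and the storage dict stand in for real OCR and database calls.

transcription_store: dict[str, str] = {}  # file_id -> extracted text

def extract_text_with_ocr(file_bytes: bytes, language: str) -> str:
    # Placeholder for the real OCR / AI extraction step.
    return "…extracted text…"

def handle_incoming_file(file_id: str, file_bytes: bytes, project_language: str) -> str:
    """Transcribe immediately on receipt and store the text permanently,
    so the transcription remains available even after the original file
    expires or is removed by the helpdesk."""
    if file_id not in transcription_store:
        transcription_store[file_id] = extract_text_with_ocr(file_bytes, project_language)
    return transcription_store[file_id]
```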

Last updated on Aug 12, 2025

Message Transformation Using Conversation History

In this article, you will understand how the transformation of the customer's message using the conversation history works. This is a tool developed to improve the quality of the user's first interaction.

How Message Transformation Works
When the user's first message arrives, we consider the previous messages marked as history in the conversation. In the following example, the history consists of the following messages:

We take these messages, along with the user's most recent message, and modify the question sent to Claudia. This allows us to alter the message used to retrieve sections, making it more precise. If we analyze the sections used for the response generated in the previous example, we can see that the search term is different from the messages sent by the user:

The search term "how to access and take advantage of the tax slips mentioned in the notification" was generated by the conversation history query transformation process, ensuring a more accurate search. Without this functionality, the search term would simply be "how can I do this?", which could lead to less precise answers.

Activating the Functionality
Currently, this functionality cannot be activated through the Hub. When necessary, please contact the Cloud team for manual activation. Activation should be done as follows: within the project configuration, add the following JSON object at the root of the document:

{
  "conversationHistoryQueryTransformationSettings": {
    "enabled": true,
    "prompt": "Analyze the messages from the user and the agent below: \n\n {CONVERSATION_HISTORY} \n\n Summarize the USER's question considering the messages sent by the AGENT as context. Respond only with the summarized question, nothing more.",
    "_class": "com.cloudhumans.claudia.domain.entities.ConversationHistoryQueryTransformationSettings"
  }
}

JSON Parameters:
- enabled: indicates that the feature is activated.
- prompt: instruction used to transform the user's message.
- {CONVERSATION_HISTORY}: placeholder replaced by the conversation history.

Example of Placeholder Replacement
In the example above, the placeholder was replaced with the following history:

AGENT: Notification sent by Agilize: Hello, Rafael Viana! How are you? At the beginning of each month, we would like to remind you about the taxes to be paid. Here is the important information for this month: 🗓️ Release Dates: The slips will be available between the 1st and 15th of each month. 📍 Where to Access the Slips: You...
USER: I didn't know, how can I take advantage of this?
USER: How can I do this?

With this approach, the tool significantly improves the accuracy of the response, ensuring a more effective and relevant interaction for the user.
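As an illustration of the placeholder replacement described above, the sketch below renders the configured prompt with the conversation history and asks a model for the summarized question. The call_llm stub and the transform_query function are assumptions made for the example; only the prompt template comes from the configuration shown above.

```python
# Illustrative sketch of the {CONVERSATION_HISTORY} placeholder replacement.
# call_llm() is a stub; the real transformation runs inside ClaudIA.

PROMPT_TEMPLATE = (
    "Analyze the messages from the user and the agent below: \n\n "
    "{CONVERSATION_HISTORY} \n\n "
    "Summarize the USER's question considering the messages sent by the AGENT "
    "as context. Respond only with the summarized question, nothing more."
)

def call_llm(prompt: str) -> str:
    # Stub standing in for the actual model call.
    return "how to access and take advantage of the tax slips mentioned in the notification"

def transform_query(history: list[tuple[str, str]]) -> str:
    """Replace the placeholder with the history and ask for a summarized question."""
    rendered_history = "\n".join(f"{role}: {text}" for role, text in history)
    prompt = PROMPT_TEMPLATE.replace("{CONVERSATION_HISTORY}", rendered_history)
    return call_llm(prompt)

history = [
    ("AGENT", "Notification sent by Agilize: the slips will be available between the 1st and 15th…"),
    ("USER", "I didn't know, how can I take advantage of this?"),
    ("USER", "How can I do this?"),
]
print(transform_query(history))  # a more precise search term than "How can I do this?"
```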

Last updated on Aug 12, 2025

ClaudIA Validates Situations with Ambiguous or No Content (Classifier and Clarifier)

https://www.loom.com/share/ddeb6950025947eb83a042fce6c5f581

The classification and clarification feature works after the content has been segmented by topics, bringing the full context of the related content and including the entire conversation context:

![Image showing classified content]

What is it used for?
This functionality was developed to make customer service smarter and more efficient, preventing Claudia from choosing incorrect sections due to similarity and ambiguity between them. Examples:
- Platform payment vs. Payment module
- Multiple product types that are different but treated as the same type

Who is it recommended for?
In our tests (p.p. = percentage points, the absolute difference between percentages), we observed:
- About 50% of projects do better with the Clarifier: +3 to +7 p.p. retention, +3 to +8 p.p. CSAT, and +3 to +10 p.p. customer sentiment (evaluated by AI).
- About 40% of projects show little difference: -2 to +1 p.p. retention, -2 to +2 p.p. CSAT (stable), and customer sentiment between -1 and +7 p.p.
- About 10% of projects do worse: -1 to -5 p.p. retention, -3 to -7 p.p. CSAT, and -7 to +1 p.p. customer sentiment.

This happens because the Clarifier delivers the most value by "disambiguating" similar contents, especially when contents are more complex or more likely to be confused with each other. Therefore, the effectiveness of a good clarifier depends on how well structured the IDS (Intelligent Dialogue System) is.

How does it work?
After the content has been segmented by topics, the clarification step identifies one of three possible scenarios. It receives the content that would normally go directly to Claudia and "sorts" it, determining whether the content is sufficient and unique:
1. No valid content (the content is insufficient): the question is clear, but there is no content to formulate an answer. In this case, Claudia asks a more generic question, asking the customer to rephrase.
2. Base prompt (only one sufficient content, or contents that can be merged with high confidence): there is exactly one useful section to answer the customer's question. In this case, Claudia generates the response directly for the customer, as she usually does.
3. Clarification with content (more than one content is sufficient to respond, or the selected content indicates the need to clarify before answering): this is the preferred scenario when the model has doubts, and it is always activated when more than one possible content exists to generate the reply. In this case, Claudia generates a clarification question, but one that is more targeted to the customer's specific context.

How to set it up?
Currently, this functionality cannot be activated directly through the Hub. To enable it, contact us so that the configuration can be activated for you. When active, two fields can be configured:
1. Number of clarifications with content: defines how many times the classifier can ask for clarification even when a relevant section is identified.
2. Number of clarifications without content: defines how many times the classifier can try to get more information when there are no sections in the IDS that can answer the question.

![Image showing configuration settings]

This feature replaces the Enlightenment Question; that is, it cannot be activated at the same time as the Enlightenment Question.

Mapped behaviors:
- We do not recommend increasing the number of clarifications above 3, as these are messages with little customization, and this may give the customer the feeling of talking to a non-intelligent agent.
- If there is a section in the IDS indicating the need to clarify, the Clarifier will choose to execute it.
- Semantic search is not yet optimized for clarification situations; in some cases the content could be retrieved by looking at the message history, but fragmented customer messages may trigger clarifications in a way that is less than ideal for the customer experience.
- The content Clarifier does not share information along with the clarification; instead, it performs a contextual investigation with relevant questions based on the content.
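To make the three scenarios more concrete, here is a minimal sketch of the sorting step. The Section structure and the scenario labels are illustrative assumptions, not the real classifier.

```python
# Illustrative sketch of the three clarification scenarios described above.
# The Section type and the decision labels are assumptions for readability.

from dataclasses import dataclass

@dataclass
class Section:
    title: str
    sufficient: bool  # could this section alone answer the question?

def classify(sections: list[Section]) -> str:
    sufficient = [s for s in sections if s.sufficient]
    if not sufficient:
        # Scenario 1: no valid content -> ask the customer to rephrase (generic question).
        return "no_valid_content"
    if len(sufficient) == 1:
        # Scenario 2: exactly one useful section -> answer directly via the base prompt.
        return "base_prompt"
    # Scenario 3: more than one sufficient section -> ask a targeted clarification question.
    return "clarification_with_content"

print(classify([Section("Platform payment", True), Section("Payment module", True)]))
# -> "clarification_with_content"
```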

Last updated on Aug 21, 2025

Automatic Message Breaking: How It Works and How to Set It Up

What is this feature?
This feature allows ClaudIA and Eddie to automatically break long messages into smaller parts, creating a reading experience that is smoother, more humanized, and closer to a real messaging app, especially in fast messaging channels like WhatsApp, Chat, and Messenger. It also allows you to:
- Set the maximum number of characters per message
- Configure the time between messages

Why is this important?
Previously, ClaudIA could only respond with a single long text, which was difficult to read, tiring, and incompatible with the standard of apps like WhatsApp, Telegram, etc. Now, with this feature activated, the message resembles what a person would say. We are making ClaudIA more human.

What exactly is configurable?
In the Message Delivery menu in the Hub (under Settings), you can configure:
- Split large messages: activates automatic breaking. Default: activated.
- Max characters per message: sets the maximum number of characters per message. Default: 3000 characters.
- Time between messages: interval between messages (in seconds). Default: 1 second.

How does it work in practice?
➤ Breaking long text
When ClaudIA's response exceeds the defined number of characters (e.g., 800, 3000…), it is divided into multiple messages using the last full stop. The division respects word breaks and, whenever possible, breaks between paragraphs.
📌 Attention: if the content is a step-by-step instruction or a numbered list, the break is still made based only on the number of characters, which may cut in the middle of a step. This is a known limitation.
➤ Time between messages
You can configure the delay between sending each part of the message in the “Time between messages” field. It defines how long ClaudIA (or Eddie) waits before sending the next message of a split text. It is set in seconds.
➤ Breaking Eddie's messages by bubble
Previously, when there were several text bubbles in a row in Eddie, ClaudIA concatenated everything into a single message. Now, each bubble is sent separately, with an interval between them.

Where does this work?
- WhatsApp
- Playground (for testing)
- Webchat (if configured with delay)
❌ It does not work in email: by default, emails keep long messages in a single block; otherwise, multiple emails would be sent. Email already breaks text by paragraph, which is standard for this medium.

How to activate or adjust?
This feature is already enabled by default with:
- Max characters per message: 3000 characters
- Time between messages: 1 second
But you can adjust or disable it at any time: Hub → Settings → Message Delivery. To disable it completely, uncheck the Split Large Messages box.

Practical example
Let's assume the limit is set to 800 characters:
- If two consecutive paragraphs have 700 characters each, it will send the first paragraph and then the second.
- If one paragraph has 900 characters and the text up to the penultimate full stop has 600 characters, it will send that part and then the final part.
- If one paragraph has 900 characters, the text up to the penultimate full stop has 800 characters, and the text up to the antepenultimate full stop has 400, it will send the message up to the antepenultimate full stop.
- If one paragraph has 900 characters and the text has only one full stop, at the end, it will send the entire text, up to that last full stop, at once.

What is the only downside?
Since the separation is based on the number of characters, it does not understand the logic of step-by-step instructions. In other words, if you are explaining a process step by step, such as:
1. Access the menu
2. Click on “Users”
3. Select an item…
…the division may happen in the middle of a step, making the reading a bit less clear.

Best practices for break size
- If your main channel is WhatsApp, we recommend keeping the limit between 800 and 3000 characters.
- If your channel is chat, which is a faster and shorter channel, we recommend using shorter breaks (~500–1000 characters) to improve readability.
- Test in the Playground before activating changes in production.

Best practices for time between messages
🎯 General recommendation:
- WhatsApp / Instagram / Messenger: suggested interval of 0.8s to 1.5s. Standard human typing time; provides a natural rhythm without seeming robotic or rushed.
- Webchat: suggested interval of 0.5s to 1s. Users tend to be on desktop and expect a quick response.
- Environments with many Eddie bubbles: suggested interval of 0.5s, so it doesn't seem stuck or too slow.
- More formal or complex environments (e.g., healthcare, finance): suggested interval of 1.2s to 3s. Helps convey a sense of care and calm.
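The sketch below approximates the splitting rule from the practical example: break at the last full stop that still fits within the configured limit. It is a simplified illustration and deliberately ignores the paragraph-preference and word-boundary details of the real feature.

```python
# Simplified sketch of the character-limit split described above (illustrative only).
# It breaks at the last full stop that fits within the limit; the real feature also
# prefers paragraph breaks and respects word boundaries.

def split_message(text: str, max_chars: int = 3000) -> list[str]:
    parts = []
    remaining = text.strip()
    while len(remaining) > max_chars:
        window = remaining[:max_chars]
        cut = window.rfind(".")
        if cut == -1:          # no full stop within the window: fall back to a hard cut
            cut = max_chars - 1
        parts.append(remaining[:cut + 1].strip())
        remaining = remaining[cut + 1:].strip()
    if remaining:
        parts.append(remaining)
    return parts

# Example with an 800-character limit: two 700-character paragraphs end up as two
# separate messages, matching the first case in the practical example above.
```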

Last updated on Aug 12, 2025

Section Reuse Limit Rule

What is it and why does it matter?
During a conversation, ClaudIA may refer to the same response sections (such as instruction blocks, error messages, or automated explanations) multiple times, especially if the customer is confused or insists on the same point. The problem? If the AI repeats the same content too much, the conversation starts to feel stuck, tedious, or even frustrating for the customer.

Sometimes this repetition is actually useful and brings more quality and automation. Examples:
- The customer rephrased the question, and ClaudIA provides the answer in different ways to make it easier to understand.
- The content (IDS) has more information, ClaudIA gives the answer to the customer, and the customer asks a second question whose answer is in the same IDS section used earlier.

To avoid this “loop” effect, we created a new feature: limit how many times the same section can be used in a ticket.

What has been launched
A new setting defines the maximum number of times the same section can be used per ticket. By default, the rule is set to 4 repetitions. That is, if ClaudIA tries to use a section for the 5th time, the conversation is automatically escalated to N2 (human).

How to decide the best limit for your case?
To make it easier to define a safe limit, a new chart has been created in the Hub:
- Access: Hub → Dashboard → Retention Metrics
- Chart name: Distribution of Highest Frequency of Section Reuse per Conversation

How to interpret the impact chart by number of repetitions
This chart helps you decide the best limit for section repetitions for ClaudIA, balancing AI retention (N1) and customer satisfaction (CSAT). It shows the distribution of conversations by the highest number of repetitions of the same section and simulates what would happen if you limited the number of times the AI can use a section per ticket.
‼️ IMPORTANT: CSAT data will not appear for everyone, as we can only import CSAT from certain helpdesks (Zendesk, Intercom, Cloud Chat, and Hubspot) and only if you are capturing it following the standards we work with. For more information, visit this FAQ. In these cases, you will need to make the decision based solely on retention data.

How the chart is structured:
- Num Repetition: maximum number of times a section has been repeated in the conversation.
- % of total conversations: percentage of conversations that had this level of repetition, among conversations that had 1 or more repetitions (this column sums to 100%).
- % of current general N2: current N2 rate over the entire period (current value, for reference).
- % of new N2: simulation of the N2 percentage if repetitions were limited to this level.
- difference in p.p. N2: difference in percentage points between the current N2 and the simulated one (the higher, the worse for the N2 rate, meaning more conversations escalated to a human).
- num CSAT responses: number of satisfaction responses collected at this level.
- current general CSAT: current satisfaction rate over the entire period (current value, for reference).
- new CSAT: simulation of satisfaction if repetitions were limited to this level.
- difference in p.p. of CSAT: difference in percentage points (p.p.) between the current CSAT and the simulated one (the higher, the better).

Example of reading (based on the chart):
- Today, the general CSAT is 96.85% and the N2 is at 25.67%.
- If we limited the AI to repeating each section at most once per conversation:
  - The N2 would rise to 47.39% → an increase of 21.72 percentage points (p.p.).
  - The CSAT would rise to 97.91% → a gain of +1.06 p.p.
  This shows that limiting repetitions can improve satisfaction but worsen retention by the AI.
- Now, if we limited it to a maximum of 2 repetitions:
  - The N2 would rise to 30.86%
  - But the CSAT would drop to 96.59% → a negative impact.
  In this case, both retention and CSAT worsened.

Questions the chart helps answer
- To what extent is it worth limiting the AI's repetitions to improve the experience?
- Does the increase in the number of escalations to humans (N2) outweigh an improvement in CSAT?
- Or the opposite: is it worth sacrificing some CSAT in exchange for retaining more with the AI?
Practical tip: use this chart as support when defining the “Section Reuse Limit” parameter.

Where to configure
You can adjust the limit by accessing the screen below, following this path: Hub → Settings → N2 Handover → Section Repetition Limit. Just choose the maximum number of repetitions allowed before escalation. The number you set is the maximum number of allowed repetitions. E.g., if you set the limit to 3 and the same section would be used a 4th time, the ticket is automatically transferred to a human (N2).

Best practices
- Do not set the limit to 1 or 2 without looking at the chart, or you may escalate many conversations unnecessarily.
- If your AI deals with sensitive topics or impatient customers, it may be worth testing more conservative limits (e.g., 3).
- If the AI flows are longer or the subjects are complex, 4 or even 5 repetitions may make sense.

What is a “section” exactly?
Section = an entry from the IDS that our software has identified as the section used in the response. To learn how the used section is determined, see this FAQ. The limit prevents the same section from being reused excessively in the same ticket.
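As an illustration of the escalation rule, the sketch below counts how many times each section has been used in a ticket and escalates to N2 once the configured limit is exceeded. The counter and the escalate_to_n2 helper are hypothetical stand-ins, not ClaudIA's implementation.

```python
# Illustrative sketch of the section reuse limit (default: 4 uses per ticket).
# The counter and escalate_to_n2() are hypothetical stand-ins.

from collections import Counter

def escalate_to_n2(ticket_id: str) -> None:
    print(f"Ticket {ticket_id} escalated to a human agent (N2).")

section_usage: Counter[str] = Counter()  # in practice, tracked per ticket

def register_section_use(ticket_id: str, section_id: str, limit: int = 4) -> bool:
    """Return True if the section can still be used; escalate when the limit is exceeded."""
    section_usage[section_id] += 1
    if section_usage[section_id] > limit:
        escalate_to_n2(ticket_id)  # e.g. with limit=4, the 5th attempt escalates
        return False
    return True
```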

Last updated on Aug 12, 2025

Image Interpretation by Claudia

Claudia is now capable of interpreting images sent during the conversation! This functionality is available in the main helpdesks we integrate with, such as Cloud Chat, Zendesk, Hubspot, and Intercom.

How it works
When an image is sent to Claudia:
- It is automatically transcribed by an LLM agent, generating a semantic summary of the image.
- This summary includes extracted text and may also contain the context of the situation.
- The summary is used to improve the vector search for relevant content.
- The transcription occurs only once per image, at the moment it is received.
- When responding, Claudia also receives the original image, not just the transcribed text, ensuring a more complete analysis to better guide the service.

Supported image formats:
✅ PNG (.png)
✅ JPEG (.jpeg, .jpg)
✅ WEBP (.webp)
Supported file formats DOCX and PDF: [access the link]
Not supported:
🚫 Videos

Example of use
In the conversation below, the user sends a photo of a web page with a message indicating that they are not enabled to complete their certification via videoconference. Claudia then transcribes the image, analyzes the context, and decides to escalate to N2:
Looking more closely at Claudia's reasoning, we can see that the top relevant section is precisely an N2 section that addresses the situation of not being enabled for videoconference:
Thus, the image transcription not only improves the search for relevant sections but also allows Claudia to understand the visual context and conduct the service even more accurately.

Configuration
The functionality can be activated through the Hub interface, in the Attachments tab. Just check the checkbox below:
Advanced prompt configuration: if you wish to customize the prompt used to transcribe and interpret images for any specific reason, please contact Support.

How to test
You can test sending files directly through our Playground screen, or you can run an end-to-end test by creating a ticket directly in your helpdesk. This way, you can see how the image is transcribed, interpreted, and used by the AI in real service.

Important notes
1. If you have already added any instructions to your base prompt (very uncommon) or created any content in the IDS (more common) to guide Claudia's responses when images are received, it is recommended to undo those adjustments to avoid conflicts with the functionality.
2. If your helpdesk is not correctly sending images to Claudia in the supported formats (.jpg, .png, etc.), the image will not be interpreted. So far, all the channels and helpdesks we have tested send this data correctly.
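Here is a minimal sketch of the supported-format check and the transcribe-once-per-image behavior described above, with a placeholder transcribe_image function standing in for the LLM transcription agent.

```python
# Illustrative sketch: supported-format check and one-time transcription per image.
# transcribe_image() is a placeholder for the LLM agent that produces the semantic summary.

SUPPORTED_IMAGE_EXTENSIONS = {".png", ".jpeg", ".jpg", ".webp"}

image_summaries: dict[str, str] = {}  # image_id -> semantic summary

def transcribe_image(image_bytes: bytes) -> str:
    # Placeholder for the real LLM transcription step.
    return "Screenshot of a web page saying the user is not enabled for videoconference."

def handle_incoming_image(image_id: str, filename: str, image_bytes: bytes) -> str | None:
    """Transcribe supported images once, on receipt; unsupported formats are skipped."""
    if not any(filename.lower().endswith(ext) for ext in SUPPORTED_IMAGE_EXTENSIONS):
        return None  # e.g. videos are not supported
    if image_id not in image_summaries:
        image_summaries[image_id] = transcribe_image(image_bytes)
    return image_summaries[image_id]
```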

Last updated on Aug 12, 2025

Automatic Translation of ClaudIA Responses

What is this functionality?
This feature allows ClaudIA to automatically detect the user's language from the first messages of the conversation and start responding in that language, even if the project was originally set up with a different language. It also allows you to:
- Set the project's default language (used as a fallback)
- Enable or disable automatic language detection

Why is this important?
Previously, ClaudIA always responded in the project's default language, except in some cases where the end user directly requested a change or, occasionally, when the language model (LLM), being non-deterministic, identified and applied the change on its own. However, this behavior was not controlled or reliable. Now, with this functionality activated, the behavior is standardized, secure, and configurable, ensuring that ClaudIA's responses are consistently given in the most appropriate language for the user, whenever it can be identified with certainty.

What exactly is configurable?
In the General Settings menu in the Hub, you can configure:
- Automatic detection of the user's message language. Default: enabled (when enabled by the operations team).
- Project's default language: defines the language to be used when it is not possible to safely identify the user's language. Default: Brazilian Portuguese.

How does it work in practice?
➤ Automatic detection from the first messages
ClaudIA analyzes the first messages from the user to identify the language with the highest possible degree of certainty. If it cannot be determined with certainty right away, the system keeps evaluating the following messages until it can identify it. If one of the messages is sufficient, even a short one, and the model has high confidence in the detection, ClaudIA starts responding in that language.
➤ Fallback to the default language
Until ClaudIA has sufficient confidence in the detection, it continues responding in the project's default language. This language is configured directly in the Hub.
➤ Deterministic behavior
Once the language is confidently identified, it becomes the primary language of the conversation, ensuring consistency and avoiding unexpected changes throughout the service.
💡 It is worth reinforcing: this is the designed logic, that is, ClaudIA should continue responding in this language until the end of the conversation. However, since it is a non-deterministic language model, the AI can still hallucinate or change the language in specific situations, especially if the user writes something very clear requesting a change (e.g., "now respond in English"). These cases are rare but possible. Therefore, although the standard behavior is stable, there are no absolute guarantees of 100% predictability.

Which projects can enable it?
This functionality is activated manually by the operations team and is available only for projects configured as multilingual. By default, it comes with:
- Automatic language detection: enabled
- Default language: the same as the project's base language
If you wish to enable this functionality in your project, please contact the operations team.

Practical example
Suppose the project's default language is set to Portuguese:
- If the user sends "Hola, tengo una duda sobre mi plan" → ClaudIA detects Spanish with high confidence and responds in Spanish.
- If the user sends "Hello" → a short message, but if the model confidently detects it as English, ClaudIA responds in English.
- If the user sends "Oi" → not sufficient to determine the language → ClaudIA responds in Portuguese (the default language).
- If the user sends "Oi, gostaria de falar sobre meu plano" → ClaudIA detects Portuguese with confidence and continues in Portuguese.

What is the only downside?
Language detection depends on the first messages of the conversation. If they are vague, ambiguous, or inconclusive, the system may end up using the default language, which is not an error but may lead to a slightly less personalized experience.
📌 This is intentional, to avoid incorrect translations based on few signals. The priority is to ensure safe detection.

Best practices
🎯 General recommendation:
- Use this functionality only in multilingual projects.
- Configure the default language correctly to avoid inappropriate responses.
- Brief the support and curation teams on this behavior.
- Test in the Playground, simulating different initial messages.
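The sketch below illustrates the confidence-based fallback described in the examples above. The detect_language stub and the 0.9 confidence threshold are assumptions; the real detection model and threshold are internal to ClaudIA.

```python
# Illustrative sketch of confidence-based language detection with fallback.
# detect_language() and the 0.9 threshold are assumptions, not the real model or value.

def detect_language(message: str) -> tuple[str, float]:
    # Placeholder for the real detection model: returns (language code, confidence).
    return ("es", 0.97) if "Hola" in message else ("pt-BR", 0.40)

def choose_reply_language(messages: list[str], default_language: str = "pt-BR",
                          min_confidence: float = 0.9) -> str:
    """Keep the default language until one of the user's messages is detected
    with high confidence; then lock that language in for the conversation."""
    for message in messages:
        language, confidence = detect_language(message)
        if confidence >= min_confidence:
            return language
    return default_language

print(choose_reply_language(["Oi"]))                                  # -> "pt-BR" (fallback)
print(choose_reply_language(["Hola, tengo una duda sobre mi plan"]))  # -> "es"
```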

Last updated on Aug 12, 2025

How to configure Claudia's name rotation in support interactions

Name rotation allows Claudia to introduce herself with a different name each time, instead of always using "Claudia" or a fixed name. This helps make the support more natural, dynamic, and personalized in cases where we do not want it to appear robotic.

🎯 Benefits
- Avoids repeating the same name during frequent interactions.
- Allows the use of male, female, or mixed names.
- Makes the support more varied and personal.

⚙️ How to activate
1. Adjust the company and agent context module
Add the following to the context prompt:
You are a support agent for the company [insert company name]. To make the support more personal, you can introduce yourself using any of the following names — choose one per interaction, as appropriate: Lucas, Marina, Rafael, Ana, João, Beatriz, Gabriel, Camila, Pedro, Sofia, Tiago, Laura, Bruno, Fernanda, or Diego. In the same conversation, NEVER introduce yourself with two names or say you might have other names; stick with the initially chosen name throughout the conversation.
📌 Important: in a single conversation, the name must be kept until the end.

2. Configure the greeting message
In the initial greeting, set:
Start the response with the greeting in the format “Hi! How are you? I am [NAME], your support consultant! How can I help?” where [NAME] can be Lucas, Marina, Rafael, Ana, João, Beatriz, Gabriel, Camila, Pedro, Sofia, Tiago, Laura, Bruno, Fernanda, or Diego (choose one per interaction).
- Use the placeholder [NAME] to automatically pull a name from the list.
- Use the same list defined in the context.

3. If the project starts with an Eddie flow
If the greeting occurs within an Eddie flow, follow these steps:
1. Add a GPT card with the instruction below:
You are a support agent for the company [insert company name]. To make the support more personal, you can introduce yourself using any of the following names — choose one per interaction, as appropriate: Daniela, Marina, Leandro, Nicolas, Rodrigo, Beatriz, Rafaela, Camila, Fernanda, Sofia, Pablo, Danilo, Caio, Fernanda, or Diego. Return only the name and NOTHING ELSE.
2. Create a variable called name inside the card.
3. In the greeting message, use this variable: "Hi, how are you? I am {{name}}."

❗ Important
- Claudia does not change the photo in the helpdesk.
- When configuring name rotation, pay attention to the name that appears in the customer's contact channel: it is not changed by this setting, and Claudia does not update it automatically. Suggestion: leave a generic name like "Cloud Humans Support".
- Customizable list: you can set your preferred names.
- Consistency: keep the same list of names in the context and in the greeting.

Last updated on Sep 01, 2025