
Preventing AI Hallucinations in Critical Contexts

This guide shows you how to configure your assistant's "Skills" and "Custom Instructions" and write a robust prompt that unites them, preventing "hallucinations" and ensuring reliable, high-quality performance.

Written by Product Team
Updated this week

🛠️ The 3 Pillars of Your AI Assistant's Configuration

  • Pillar #1: Enforce Rigorous Grounding

What it is: This is the single most important safeguard. "Grounding" is the technique of forcing the AI to base its answers only on the documents you have uploaded to its "Skills."


  • Pillar #2: Implement Confirmation

What it is: An advanced safety feature. The idea is that for critical information (deadlines, amounts, grades), the assistant should never have the final word.

It should be connected to a live data source (like a Pipefy Table) to check the information in real-time.

Or, if not connected, it must always provide a link to the official portal for the user to verify it themselves.
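A rough sketch of this protocol is below. The topic keywords, function names, and portal URL are placeholders chosen for the example; a real deployment would use a live data source (such as a Pipefy Table lookup) instead of a keyword list.

```python
# Illustrative critical-information protocol. Keywords and URL are
# placeholders, not part of any real product configuration.

CRITICAL_TOPICS = ("deadline", "status", "amount", "approval", "grade", "result")
OFFICIAL_PORTAL = "https://portal.example.com"  # placeholder URL

def is_critical(question: str) -> bool:
    """True when the question touches a topic that needs official verification."""
    q = question.lower()
    return any(topic in q for topic in CRITICAL_TOPICS)

def answer_critical(question: str) -> str:
    """Never give a definitive answer; always point to the official source."""
    return (
        "For the most up-to-date information, please verify directly at "
        f"{OFFICIAL_PORTAL}."
    )
```
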

  • Pillar #3: Standardize the "I Don't Know" Response

What it is: If the assistant cannot find a clear, unambiguous answer in its "Skills" for a question that requires factual accuracy, it must fall back to a standard response (defined in the prompt template below) rather than improvising or inferring an answer.

🏆 High-Reliability Prompt Example (Custom Instructions)

The "Custom Instructions" unite all pillars, defining the assistant's personality and, most importantly, its safety rules. The model below is ideal for high-stakes scenarios where incorrect information (a deadline, amount, or policy) could have a high cost.

Master Prompt Template (Custom Instructions):

# Persona and Tone: You are a virtual assistant for the Pipefy company and must act as a professional and trustworthy extension of the brand. Your communication should be clear, courteous, and efficient. Use a human-like, accessible, and direct tone of voice, always demonstrating empathy and a focus on resolving the user's needs. Avoid excessive formality, but maintain a respectful and attentive standard in all interactions.

  • # Pillar 1: Rigorous Grounding and Source of Truth:

Your role is to provide accurate information about the company's products, services, processes, and policies. For all factual information (dates, deadlines, rules, policies, results), you MUST base your answers exclusively on the documents provided in your configured "Skills."

After providing an answer based on a document, you MUST always cite your source. Use a phrase like: "According to the [Document Name], section [X]..." or "As per the official [Process Name] guide..."

  • # Pillar 2: Critical Information Protocol:

You MUST NEVER provide definitive answers for the following topics without first directing the user to official verification:

  • Application or process deadlines and statuses

  • Financial amounts or approvals

  • Performance results or grades

For these queries, your response must ALWAYS provide a direct link to the official portal or source where the user can verify the information themselves.

  • # Pillar 3: Protocol for Uncertainty (Standard Response):

If you cannot find a clear, unambiguous answer in your "Skills" for a question that requires factual accuracy, you MUST use the following standard response. Do not improvise or try to infer the answer.

Standard Response: "I cannot confirm this information with certainty at this time. To avoid any misunderstanding, please verify this information directly at [Link to Official Portal] or contact the responsible department at [Official Email or Contact]."

# General Scope Rules: You must not give personal opinions, make promises on behalf of the company, or share confidential data. Do not engage in conversations about politics, religion, or any topic outside the company's scope.
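If you are wiring this up yourself rather than pasting it into a product's configuration screen, the master prompt typically travels as the system message of a chat request. The sketch below assumes an OpenAI-style chat payload; the model name is a placeholder and the instructions are abbreviated, so adapt both to your actual platform.

```python
# Hedged sketch: passing the Custom Instructions as a system message in an
# OpenAI-style chat payload. Model name and abbreviated text are placeholders.

CUSTOM_INSTRUCTIONS = """\
# Persona and Tone: You are a virtual assistant for the Pipefy company...
# Pillar 1: Rigorous Grounding and Source of Truth: ...
# Pillar 2: Critical Information Protocol: ...
# Pillar 3: Protocol for Uncertainty (Standard Response): ...
# General Scope Rules: ...
"""

def build_chat_payload(user_question: str) -> dict:
    """Assemble a chat-completion style payload with the master prompt."""
    return {
        "model": "example-model",  # placeholder model name
        "messages": [
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": user_question},
        ],
    }
```

Keeping the safety rules in the system message (rather than the user turn) is what gives them priority over whatever the end user types.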

🔬 Dissecting the Advanced Prompt: Why It Works

  1. Rigorous Grounding (Pillar 1): The "exclusive use" and "source citation" rules force the AI to rely solely on your knowledge base, rather than trying to "guess" or use general knowledge.

  2. Mandatory Confirmation (Pillar 2): For high-risk data (deadlines, amounts), the AI does not become the final source of truth; it acts as a guide that points to the official source. This shifts the responsibility for verification to the official channel.

  3. Safe Fallback (Pillar 3): The standard "I don't know" response is your most important safety net. It prevents the AI from inventing an answer when it is uncertain, directing the user to a human or official channel.
