🔐 Available for all plans except the Starter plan 👤 For super admins and company admins 🎯 Enables custom API key integration to enhance Pipefy AI capabilities
How to Use the Pipefy AI - LLM Provider Feature
The "Pipefy AI - Bring Your Own Keys" feature allows users to integrate their own API keys, enabling greater customization and enhanced AI Agent capabilities within Pipefy. This guide walks you through configuring and using the feature.
What is the "Bring Your Own Keys" Feature?
This feature provides the flexibility to use your own API keys with Pipefy AI. This allows you to integrate AI services of your choice, enhancing the customization and efficiency of workflows in Pipefy.
Note: This configuration applies exclusively to AI Agents, ensuring greater personalization and efficiency in Pipefy workflows.
Step-by-Step Guide to Configuring Your Own Keys
Step 1: Access Pipefy Settings
Log in to your Pipefy account.
In the upper-right corner, click your profile icon to access the Administration Panel.
Step 2: Select LLM Provider (AI)
Step 3: Add API Keys
📌 The process of adding a new provider may vary, as specific fields and requirements depend on the chosen provider.
📢 To integrate a language model (LLM) with Pipefy, you’ll need an API key provided by the LLM provider. Look for a section labeled Settings, Credentials, API Keys, or something similar within your chosen provider's platform.
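Before pasting a key into Pipefy, you can sanity-check it directly against your provider. The sketch below uses only Python's standard library and assumes an OpenAI-style API (the base URL and Bearer-token scheme follow OpenAI's public API; adapt both to your chosen provider):

```python
import urllib.request


def build_key_check_request(api_key: str,
                            base_url: str = "https://api.openai.com/v1") -> urllib.request.Request:
    """Build a request to the provider's model-listing endpoint.

    Listing models is a cheap way to confirm a key works before
    adding it to Pipefy. The default base_url assumes OpenAI;
    swap in your provider's endpoint if it differs.
    """
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )


req = build_key_check_request("sk-...your-key-here...")
print(req.full_url)  # https://api.openai.com/v1/models
# To actually send the request (a valid key returns HTTP 200, an
# invalid one typically returns 401):
# urllib.request.urlopen(req)
```

If the request fails with an authentication error here, it will also fail inside Pipefy, so this check can save a debugging round-trip.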
Once the LLM provider is added to Pipefy, it becomes the active model for AI Agents.
If needed, you can restore the default Pipefy model by clicking the reload icon within the active model card or by navigating to the Pipefy card and selecting Set as Provider.
List of supported LLM options:
OpenAI:
gpt-4o, gpt-4o-mini, and dated release versions (e.g., gpt-4o-2024-08-06, gpt-4o-2024-11-20)
gpt-4.1-2025-04-14
gpt-5 and dated release versions (e.g., gpt-5-2025-08-07)
Azure OpenAI: Same models as OpenAI
Google Vertex (API only — contact support):
gemini-1.5-pro and gemini-1.5-pro-002
gemini-2.0-flash and gemini-2.0-flash-lite
AWS Bedrock (check requirements below)
Custom Provider (check requirements below)
Limitations and Requirements for BYOLLM Usage
If you’re planning to use a custom LLM provider with Pipefy’s BYOLLM (Bring Your Own LLM) feature, please review the following technical requirements and current limitations:
Requirements for Custom URL Providers
When using the “Custom” option (i.e., providing your own inference endpoint), ensure that your LLM and setup meet the following criteria:
OpenAI-Compatible Response Format: Required so Pipefy AI Agents can correctly process the LLM’s output (e.g., choices, message, content).
Vision Support: Needed if you plan to use image and/or document input capabilities in your AI Agents.
Tool Calling Support: Ensures AI Agents can execute multi-step workflows and use agent skills effectively.
OAuth 2.0 Authentication: Not yet supported — this feature is currently under development.
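To make the first two format requirements concrete, the sketch below shows the OpenAI-style chat completion shape (choices → message → content, plus tool_calls for tool calling) that a custom endpoint would need to return. The field names follow OpenAI's public chat completions schema; the move_card tool and its arguments are hypothetical examples, not real Pipefy skills:

```python
import json

# A response body in the OpenAI-compatible shape a custom endpoint
# must return. "move_card" is a made-up tool used for illustration.
raw = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Card moved to the next phase.",
                "tool_calls": [
                    {
                        "id": "call_1",
                        "type": "function",
                        "function": {
                            "name": "move_card",
                            "arguments": "{\"phase\": \"Done\"}",
                        },
                    }
                ],
            },
            "finish_reason": "tool_calls",
        }
    ],
})

# This is roughly how a consumer reads the response: drill into
# choices -> message -> content, then iterate any tool calls.
response = json.loads(raw)
message = response["choices"][0]["message"]
print(message["content"])
for call in message.get("tool_calls", []):
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args)
```

If your endpoint returns a differently shaped payload (for example, a bare string or a provider-specific envelope), AI Agents will not be able to locate these fields.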
Behavior Across the Platform
When any BYOLLM provider is selected (e.g., OpenAI, Azure OpenAI, or Custom), all AI Agent runs will be executed using your configured LLM provider, except for one specific case:
Document Embedding Exception: For skills of the type “Access to Documents”, the step that converts document text into searchable vectors (“embedding”) will still use Pipefy’s internal LLM provider. This means the retrieval accuracy for these skills will remain consistent, even if you switch to a custom LLM for other steps.
We are actively working to support BYOLLM for this step as well in a future update.
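For context on what the embedding step does: each document is converted into a numeric vector once, and queries are then matched against those vectors by similarity. The toy sketch below illustrates the general idea only; it is not Pipefy's internal implementation, and the vectors and document names are made up:

```python
import math

# Toy illustration of embedding-based retrieval: documents become
# vectors, and a query vector is matched by cosine similarity.


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


doc_vectors = {
    "refund policy": [0.9, 0.1, 0.0],
    "onboarding guide": [0.1, 0.8, 0.3],
}
query_vector = [0.85, 0.15, 0.05]  # pretend embedding of "how do refunds work?"

best = max(doc_vectors, key=lambda name: cosine(doc_vectors[name], query_vector))
print(best)  # refund policy
```

Because this vectorization step still runs on Pipefy's internal model, switching providers does not change which documents your "Access to Documents" skills retrieve.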
📢 Tips for Using the Feature:
Security: Keep API keys secure and avoid sharing them publicly.
Limitations: Check the limitations and usage policies of the API you’re integrating with to avoid service interruptions.
This feature is a powerful way to customize your experience with Pipefy AI, allowing your operations to be more agile and tailored to your specific needs.
🛠 Troubleshooting Common Issues:
Authentication Error: Verify that the API key is correct and active.
Inactive Integration: Make sure the third-party service is operational.
If you have additional questions, contact Pipefy support or refer to the official documentation of the API you’re using.