
Add Qualifire to Your n8n Flows, Fast
Convert any LLM or agent step into a safe, auditable response path using Qualifire’s real-time evaluation, guardrails, and hallucination detection.
Why this matters
n8n workflows that call LLMs are exposed to AI risks: hallucinations, unexpected behavior, data leakage, and compliance or policy violations. This guide shows a minimal, production-ready pattern for n8n, a low-code workflow automation tool with a visual, node-based editor that connects APIs, databases, webhooks, and custom code. You can run n8n self-hosted or in the cloud, design branching flows, and automate complex logic without building a custom integration service. Follow the pattern below (an output guardrail, an optional input guardrail, observability tips, and troubleshooting) to keep agents reliable, reduce human handoffs, and capture audit data for compliance.
TL;DR
- Add one HTTP Request node after your agent to call Qualifire Evaluate.
- Gate the result with an IF node that treats unknowns as flagged.
- Optionally add the same pattern before the agent to block unsafe inputs and jailbreak attempts.
- Log evaluationResults for audits.
Prerequisites
- An n8n instance (self-hosted or Cloud).
- A Qualifire API key.
- Your LLM/agent step is already producing a response (e.g., OpenAI Chat node, custom function, etc.).
A full step-by-step guide can be found here.
Output guardrail, step-by-step
1. Add the HTTP Request node
Place an HTTP Request node immediately after your agent output node.
HTTP node settings
- Method: POST
- URL: https://proxy.qualifire.ai/api/evaluation/evaluate
- Send: JSON
- Headers:
- Content-Type: application/json
- X-Qualifire-API-Key: {{$env.QUALIFIRE_API_KEY}}
Alternatively, attach an HTTP Header Auth credential that sets the same header.
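For reference, here is a plain-JavaScript sketch of the request the HTTP Request node makes. The URL, header names, and body fields come from this guide; the helper function name and its usage are our own illustration:

```javascript
// Build the same Evaluate request the HTTP Request node sends.
function buildEvaluateRequest(apiKey, userText, assistantText) {
  return {
    url: 'https://proxy.qualifire.ai/api/evaluation/evaluate',
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Qualifire-API-Key': apiKey, // or an HTTP Header Auth credential in n8n
      },
      body: JSON.stringify({
        assertions: [],
        consistency_check: true,
        dangerous_content_check: true,
        hallucinations_check: true, // hallucination detection
        harassment_check: true,
        hate_speech_check: true,
        pii_check: true,
        prompt_injections: true,
        sexual_content_check: true,
        // Minimal chat history: user input plus the agent's answer.
        messages: [
          { content: userText, role: 'user' },
          { content: assistantText, role: 'assistant' },
        ],
      }),
    },
  };
}

// Usage (e.g. in a quick local test):
// const { url, options } = buildEvaluateRequest(process.env.QUALIFIRE_API_KEY, question, answer);
// const res = await fetch(url, options);
```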
2. Body, use an object expression
Click fx on the Body field and paste this object expression so n8n handles escaping and dynamic fields:
={{{
  assertions: [],
  consistency_check: true,
  dangerous_content_check: true,
  hallucinations_check: true, // enable hallucination detection
  harassment_check: true,
  hate_speech_check: true,
  pii_check: true,
  prompt_injections: true,
  sexual_content_check: true,
  // Pass minimal chat history (user + assistant). Adjust field names to match your flow.
  messages: [
    { content: $json.chatInput ?? $json.user ?? '', role: 'user' },
    { content: $json.output ?? $json.message ?? '', role: 'assistant' },
  ],
}}}
If you prefer a raw JSON template, wrap dynamic fields with {{ JSON.stringify(...) }} and do not add extra quotes around the expression.
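As a sketch, such a raw JSON template might look like the following (trimmed to two detector flags for brevity; in practice keep all the flags from the expression above):

```json
{
  "hallucinations_check": true,
  "pii_check": true,
  "messages": [
    { "content": {{ JSON.stringify($json.chatInput ?? '') }}, "role": "user" },
    { "content": {{ JSON.stringify($json.output ?? '') }}, "role": "assistant" }
  ]
}
```

JSON.stringify handles quoting and escaping of the dynamic values, so the template stays valid JSON even when the text contains quotes or newlines.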
3. Gate the result with an IF node
Add an IF node after the HTTP Request to block or allow the response. Use a conservative expression that treats unknown or error states as flagged.
Example expression for the IF node condition:
={{ ($json.status ?? '').toLowerCase() !== 'success' }}
This flags the response whenever the API status is anything other than success, including error and missing-status cases; you can extend it to also check detector labels in evaluationResults.
- True (flagged): route to fallback, human review, or a safer re-ask prompt.
- False (clean): continue to the normal response path.
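If the gate grows beyond a one-liner, the same logic can live in a Code node instead of an IF expression. This sketch implements the guide's conservative rule (unknowns and errors count as flagged); the function name and Code-node usage are our own convention:

```javascript
// Conservative gate: anything other than an explicit success counts as flagged.
function isFlagged(response) {
  // No response object at all (node error, empty body) -> flagged.
  if (!response || typeof response !== 'object') return true;
  // Missing or non-success status -> flagged.
  const status = String(response.status ?? '').toLowerCase();
  return status !== 'success';
}

// Usage in an n8n Code node:
// return [{ json: { flagged: isFlagged($json) } }];
```

Routing on a precomputed `flagged` boolean keeps the IF node itself trivial and makes the gating logic easy to unit test outside n8n.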
FAQ
Q: Can I use this with hosted n8n?
A: Yes. Store the API key in environment variables or an HTTP Header Auth credential, as your hosted plan allows.
Q: Does Qualifire return explainability info?
A: evaluationResults includes detector outputs and labels you can log and inspect.
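For audit logging, a Code node can flatten the response into one record before it reaches your log sink. Only `status` and `evaluationResults` come from this guide; the record layout, field names, and `workflowId` correlation id below are assumptions to adapt to your setup:

```javascript
// Build a compact audit record from an Evaluate response.
// The record shape and workflowId are illustrative conventions, not API fields.
function toAuditRecord(response, workflowId) {
  return {
    workflowId,                                              // hypothetical correlation id
    loggedAt: new Date().toISOString(),                      // audit timestamp
    status: response?.status ?? 'unknown',                   // API status, 'unknown' if absent
    evaluationResults: response?.evaluationResults ?? null,  // raw detector output for audits
  };
}

// Usage in an n8n Code node, before a database or logging node:
// return [{ json: toAuditRecord($json, $workflow.id) }];
```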
Q: Will the HTTP node add latency?
A: Qualifire’s SLM guardrails are built to minimize evaluation latency; it typically ranges from 20 ms to 300 ms. For non-blocking evals, you can also run the call on an async branch.