Silent workflow failures are automation killers. Imagine your critical payment processing workflow stopped working three days ago. Customers have been complaining. Orders are stuck. Nobody on your team noticed because there was no alert, no notification, nothing. The workflow just quietly failed, and kept failing.
This scenario happens more often than you might think. Workflows fail for countless reasons: API rate limits, expired credentials, network timeouts, malformed data, server outages. Without proper error handling, these failures accumulate silently until someone notices the downstream damage.
The Error Trigger node is n8n's solution to this problem. It catches workflow failures and lets you respond: send Slack alerts, trigger email notifications, log errors to a database, or even attempt automatic recovery. Think of it as your workflow's smoke detector, alerting you the moment something goes wrong.
The Silent Failure Problem
Every automation engineer learns this lesson eventually. A workflow that works perfectly during development fails silently in production. The reasons vary:
- API credentials expire after 30 days
- A third-party service changes their response format
- Rate limits trigger during high-volume periods
- Network connectivity drops briefly
- Input data contains unexpected values
Without proactive monitoring, you discover these failures reactively. A customer complains. A report shows missing data. A downstream system shows inconsistencies. By then, the damage is done.
The Error Trigger node flips this dynamic. Instead of discovering failures after the fact, you receive immediate notification when any workflow fails. You can respond within minutes instead of days.
What You'll Learn
- When to use the Error Trigger node versus Continue On Fail settings
- The two different error data structures and what each contains
- How to build your first error notification workflow step by step
- Connecting a single error workflow to monitor multiple production workflows
- Setting up alerts via Slack, email, Telegram, and other channels
- Smart error prioritization based on workflow importance
- Combining Error Trigger with the Stop and Error node
- Common mistakes that break error workflows and how to fix them
- Real-world error handling patterns used in production
When to Use the Error Trigger Node
Before configuring error handling, understand the different approaches n8n offers and when each is appropriate.
| Approach | What It Does | Best For |
|---|---|---|
| Error Trigger node | Catches failures and runs a separate error workflow | Production monitoring, team alerts, error logging |
| Continue On Fail | Lets the workflow continue despite node errors | Expected failures, optional operations, graceful degradation |
| IF node error checking | Checks for error conditions in data | Validating API responses, business rule enforcement |
| Try/catch in Code node | Programmatic error handling within code | Complex logic, custom error recovery |
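The last row of the table refers to handling errors directly inside a Code node. A minimal sketch in "Run Once for All Items" mode (the rawPayload field is hypothetical; adapt it to your data):
// Recover per item instead of failing the whole run
const results = [];
for (const item of $input.all()) {
  try {
    // rawPayload is a hypothetical field that may contain invalid JSON
    const parsed = JSON.parse(item.json.rawPayload);
    results.push({ json: { ...item.json, parsed } });
  } catch (error) {
    // Keep the item, but record why parsing failed
    results.push({ json: { ...item.json, parseError: error.message } });
  }
}
return results;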
Error Trigger is the right choice when:
- You need immediate notification when workflows fail
- Multiple team members should know about failures
- You want centralized error logging and analytics
- Failed workflows should trigger recovery actions
- Production reliability is critical
Continue On Fail is better when:
- Failures are expected and acceptable (optional enrichment)
- The workflow should proceed despite partial failures
- You handle errors inline with conditional logic
Rule of thumb: Use Error Trigger for "must know" failures that require human attention. Use Continue On Fail for "acceptable" failures that should not stop the workflow.
For more on conditional error checking, see our If node guide.
Understanding Error Data
When a workflow fails and triggers your error workflow, the Error Trigger node receives detailed information about the failure. Understanding this data structure is essential for building useful notifications.
Standard Execution Error Data
Most workflow failures produce this data structure:
{
"execution": {
"id": "231",
"url": "https://your-n8n.com/execution/231",
"retryOf": "34",
"error": {
"message": "Request failed with status code 429",
"stack": "Error: Request failed with status code 429\n at createError..."
},
"lastNodeExecuted": "HTTP Request",
"mode": "trigger"
},
"workflow": {
"id": "15",
"name": "Daily CRM Sync"
}
}
| Field | Description | Always Present? |
|---|---|---|
| execution.id | Unique ID of the failed execution | Only if saved to database |
| execution.url | Direct link to view the execution | Only if saved to database |
| execution.retryOf | ID of the original execution if this was a retry | Only on retries |
| execution.error.message | Human-readable error description | Yes |
| execution.error.stack | Technical stack trace for debugging | Yes |
| execution.lastNodeExecuted | Name of the node that failed | Yes |
| execution.mode | How the workflow was triggered (trigger, manual, webhook) | Yes |
| workflow.id | Unique ID of the failed workflow | Yes |
| workflow.name | Human-readable workflow name | Yes |
Trigger Node Error Data
When the error occurs in the workflow's trigger node itself (not a later node), you receive a different structure:
{
"trigger": {
"error": {
"context": {},
"name": "WorkflowActivationError",
"cause": {
"message": "Webhook registration failed",
"stack": "Error: Webhook registration failed..."
},
"timestamp": 1654609328787,
"message": "Workflow could not be activated",
"node": {
"name": "Webhook",
"type": "n8n-nodes-base.webhook"
}
},
"mode": "trigger"
},
"workflow": {
"id": "15",
"name": "Webhook Handler"
}
}
This structure appears when:
- A webhook fails to register
- A polling trigger cannot connect to its source
- Schedule expressions are invalid
- Credentials for the trigger node are invalid
Accessing Error Data in Expressions
Use these expressions in notification nodes to include error details:
// Workflow information
{{ $json.workflow.name }} // "Daily CRM Sync"
{{ $json.workflow.id }} // "15"
// Error details
{{ $json.execution.error.message }} // "Request failed with status code 429"
{{ $json.execution.lastNodeExecuted }} // "HTTP Request"
// Execution link (for quick access)
{{ $json.execution.url }} // Direct link to failed execution
// For trigger errors
{{ $json.trigger.error.message }} // "Workflow could not be activated"
{{ $json.trigger.error.node.name }} // "Webhook"
For complex expression patterns, test them with our expression validator tool.
Your First Error Workflow
Let's build a working error workflow from scratch. This workflow sends a Slack notification whenever any connected workflow fails.
Step 1: Create a New Workflow
- Open n8n and click New Workflow
- Name it clearly: [System] Error Handler or Error Notifications
- Save the workflow
Using a naming convention like [System] helps distinguish infrastructure workflows from business workflows.
Step 2: Add the Error Trigger Node
- Click + to add a node
- Search for "Error Trigger"
- Click to add it as your starting node
The Error Trigger has no configuration options. It simply starts when connected workflows fail.
Step 3: Add a Notification Node
Connect a Slack node (or your preferred notification channel):
- Click + after the Error Trigger
- Search for "Slack"
- Select Slack and choose Send a Message
- Configure the Slack credentials
- Select the channel for error notifications
Set the message text using expressions:
Workflow Failed: {{ $json.workflow.name }}
Error: {{ $json.execution.error.message }}
Node: {{ $json.execution.lastNodeExecuted }}
Mode: {{ $json.execution.mode }}
View execution: {{ $json.execution.url }}
Step 4: Save and Activate
- Click Save to save the workflow
- Toggle the workflow to Active (switch in top right)
Your error workflow must be active to receive error notifications.
Step 5: Connect to Monitored Workflows
Now connect this error workflow to the production workflows you want to monitor:
- Open a production workflow you want to monitor
- Click the Settings icon (gear) in the top right
- Find Error Workflow in the settings panel
- Select your new error workflow from the dropdown
- Save the workflow
Repeat this for each workflow you want to monitor.
Step 6: Understand Testing Limitations
Important: You cannot test error workflows by running them manually. The Error Trigger node only activates when an automated (triggered) workflow execution fails.
To test your error workflow:
- Create a simple test workflow with a Code node
- Add code that deliberately throws an error: throw new Error('Test error')
- Connect the test workflow to your error workflow in settings
- Activate the test workflow
- Trigger it (via webhook, schedule, or other trigger)
- Verify your Slack notification arrives
This limitation exists because error workflows are designed for production monitoring, not manual testing scenarios.
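If the test workflow is webhook-triggered, a slightly richer Code node body lets the same endpoint succeed or fail on demand (this assumes the Webhook node's standard output, which exposes query parameters under query):
// Fail only when the request asks for it, e.g. https://.../webhook/test?fail=true
const shouldFail = $input.first().json.query?.fail === 'true';
if (shouldFail) {
  throw new Error('Test error: verifying error workflow notifications');
}
return $input.all();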
Connecting Error Workflows to Production Workflows
A single error workflow can monitor multiple production workflows. This centralized approach simplifies maintenance and ensures consistent error handling.
Setting the Error Workflow
For each workflow you want to monitor:
- Open the workflow in the editor
- Click Settings (gear icon in top right)
- Scroll to Error Workflow
- Select your error handler from the dropdown
- Save the workflow
The dropdown shows all workflows containing an Error Trigger node.
One Error Workflow for Multiple Workflows
You do not need separate error handlers for each workflow. Configure 10, 50, or 100 workflows to use the same error workflow. The error data includes the workflow name, so your notifications identify which workflow failed.
This centralized approach offers advantages:
- Single point of maintenance for notification logic
- Consistent alert formatting across all workflows
- Easier to add new notification channels
- Simpler to update Slack channels or email recipients
Organizing Error Workflows
For larger organizations, consider multiple error workflows based on criticality:
| Error Workflow | Monitors | Notification |
|---|---|---|
| [System] Critical Errors | Payment, auth, core business | PagerDuty + Slack |
| [System] Standard Errors | Data sync, reports, integrations | Slack only |
| [System] Background Errors | Cleanup, maintenance, optional | Daily email digest |
This separation prevents alert fatigue while ensuring critical failures get immediate attention.
Notification Channels
The Error Trigger provides the error data. What you do with it depends on your notification needs.
Slack Notification
The most common pattern. Configure a Slack node with this message template:
:rotating_light: *Workflow Failed*
*Workflow:* {{ $json.workflow.name }}
*Error:* {{ $json.execution.error.message }}
*Failed Node:* {{ $json.execution.lastNodeExecuted }}
*Mode:* {{ $json.execution.mode }}
<{{ $json.execution.url }}|View Execution>
For rich formatting, use Slack's Block Kit in the node options.
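As a rough starting point, a Block Kit payload for this alert could look like the following (paste it into the Slack node's blocks input; the exact option name varies between node versions):
{
  "blocks": [
    { "type": "header", "text": { "type": "plain_text", "text": "Workflow Failed" } },
    {
      "type": "section",
      "fields": [
        { "type": "mrkdwn", "text": "*Workflow:*\n{{ $json.workflow.name }}" },
        { "type": "mrkdwn", "text": "*Failed Node:*\n{{ $json.execution.lastNodeExecuted }}" }
      ]
    },
    { "type": "section", "text": { "type": "mrkdwn", "text": "*Error:*\n{{ $json.execution.error.message }}" } }
  ]
}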
Email Notification
Use Gmail, SMTP, or any email node:
Subject:
[n8n Error] {{ $json.workflow.name }} failed
Body:
Workflow "{{ $json.workflow.name }}" failed at {{ $now }}.
Error Message:
{{ $json.execution.error.message }}
Failed Node: {{ $json.execution.lastNodeExecuted }}
Stack Trace:
{{ $json.execution.error.stack }}
View the execution here:
{{ $json.execution.url }}
For email troubleshooting, see our authentication errors guide.
Telegram Notification
Use the Telegram node for mobile-first alerts:
Workflow Failed
Workflow: {{ $json.workflow.name }}
Error: {{ $json.execution.error.message }}
Node: {{ $json.execution.lastNodeExecuted }}
{{ $json.execution.url }}
Telegram works well for urgent alerts that need immediate mobile visibility.
Discord Notification
For teams using Discord for operations:
**Workflow Failed**
**Workflow:** {{ $json.workflow.name }}
**Error:** {{ $json.execution.error.message }}
**Node:** {{ $json.execution.lastNodeExecuted }}
[View Execution]({{ $json.execution.url }})
Multi-Channel Approach
For critical workflows, send to multiple channels simultaneously:
- Error Trigger connects to multiple notification nodes in parallel
- Slack gets the rich formatted message
- Email goes to the on-call engineer
- PagerDuty creates an incident for after-hours failures
This redundancy ensures failures never go unnoticed.
Smart Error Prioritization
Not all workflow failures are equally urgent. A failure in your payment processing workflow demands immediate response. A failure in your daily newsletter generator can wait until morning.
Using the Switch Node for Routing
Add a Switch node after the Error Trigger to route errors by workflow name:
Switch Configuration:
| Rule | Condition | Output |
|---|---|---|
| Critical | Workflow name contains "Payment" OR "Auth" OR "Order" | Route to PagerDuty + Slack |
| Standard | Workflow name contains "Sync" OR "Report" | Route to Slack only |
| Low Priority | Default/Fallback | Route to email digest |
Code Node for Advanced Prioritization
For more complex routing logic, use a Code node:
// Guard against trigger-style errors, which have no execution object (see Mistake 4)
const workflowName = $json.workflow.name.toLowerCase();
const errorMessage = ($json.execution?.error?.message ?? $json.trigger?.error?.message ?? '').toLowerCase();
let priority = 'low';
let channels = ['email'];
// Critical workflows
const criticalWorkflows = ['payment', 'auth', 'checkout', 'subscription'];
if (criticalWorkflows.some(w => workflowName.includes(w))) {
priority = 'critical';
channels = ['pagerduty', 'slack', 'email'];
}
// High priority errors regardless of workflow
const criticalErrors = ['authentication failed', 'rate limit', 'timeout'];
if (criticalErrors.some(e => errorMessage.includes(e))) {
priority = 'high';
if (!channels.includes('slack')) channels.push('slack');
}
return [{
json: {
...$json,
priority,
channels,
requiresImmediate: priority === 'critical'
}
}];
Then use an If node to route based on the priority field.
Priority Response Matrix
| Priority | Response Time | Notification Channels | Example Workflows |
|---|---|---|---|
| Critical | Immediate | PagerDuty + Slack + Email | Payments, authentication, core API |
| High | Within 1 hour | Slack + Email | Customer-facing integrations, CRM sync |
| Medium | Same business day | Slack | Reports, analytics, data enrichment |
| Low | Next business day | Email digest | Cleanup jobs, maintenance, archival |
Combining with Stop and Error Node
The Stop and Error node lets you fail a workflow deliberately. A deliberate failure fires the connected error workflow just like a natural one. For a complete overview of n8n's error handling capabilities, see the official error handling documentation.
What Stop and Error Does
This node:
- Immediately stops workflow execution
- Generates an error with a message you specify
- Triggers the connected error workflow
- Logs the error in execution history
Use Cases for Deliberate Errors
Business Rule Violations:
If order total is negative → Stop and Error: "Invalid order total: negative values not allowed"
Validation Failures:
If required field is empty → Stop and Error: "Missing required field: customer_email"
Data Quality Issues:
If duplicate record detected → Stop and Error: "Duplicate order ID detected: {{ $json.orderId }}"
Circuit Breaker Pattern:
If API has failed 5 times in a row → Stop and Error: "Circuit breaker open: API unavailable"
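A sketch of how that failure count might be tracked inside the workflow, using a Code node and workflow static data (static data only persists in active, non-manual executions; the threshold of 5 and the node placement are assumptions):
// Place after an HTTP Request node that has Continue On Fail enabled
const staticData = $getWorkflowStaticData('global');
staticData.apiFailureCount = staticData.apiFailureCount ?? 0;

// With Continue On Fail, a failed request puts error details on the item
const failed = $input.first().json.error !== undefined;
staticData.apiFailureCount = failed ? staticData.apiFailureCount + 1 : 0;

if (staticData.apiFailureCount >= 5) {
  // Throwing fails this execution and fires the connected error workflow
  throw new Error('Circuit breaker open: API failed 5 consecutive executions');
}

return $input.all();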
Example: Validation with Error Trigger
Build a workflow that validates incoming data and triggers proper error handling:
Webhook → IF (email valid?)
  → True: Continue processing
  → False: Stop and Error ("Invalid email format")
When the Stop and Error node executes:
- The workflow stops immediately
- The execution is marked as failed
- Your error workflow receives the error data
- Your Slack notification includes the custom error message
This pattern lets you create meaningful error messages that help diagnose issues quickly.
Continue On Fail vs Error Trigger
These two features solve different problems. Understanding when to use each prevents both silent failures and unnecessary workflow interruptions.
Feature Comparison
| Aspect | Continue On Fail | Error Trigger |
|---|---|---|
| Scope | Single node | Entire workflow |
| Execution | Workflow continues | Separate workflow runs |
| Error data | Available in $json.error | Full execution context |
| Use case | Expected, recoverable failures | Unexpected failures needing attention |
| Configuration | Per-node setting | Workflow settings |
| Testing | Works in manual runs | Only works in automated runs |
When to Use Continue On Fail
Enable Continue On Fail on a node when:
- The operation is optional (enrichment, logging)
- Failure is expected sometimes (checking if record exists)
- You handle the error inline with conditional logic
- The workflow should complete despite partial failures
Example: An HTTP Request node fetches optional product images. If the image service is down, the workflow should continue with a placeholder image instead of failing entirely.
For HTTP Request error handling patterns, see our HTTP Request node guide.
When to Use Error Trigger
Use Error Trigger when:
- Failures are unexpected and need investigation
- Team notification is required
- Error logging and analytics matter
- Automated recovery might be needed
- Compliance requires error documentation
Example: A payment processing workflow fails. Someone needs to know immediately, investigate the cause, and potentially retry the transaction.
Combining Both Approaches
The most robust workflows use both:
- Continue On Fail on nodes with expected, recoverable failures
- Error Trigger to catch unexpected failures that slip through
Production Workflow

Webhook → HTTP Request (Continue On Fail: ON)
  ├─ Success: Process response
  └─ Failure: Use fallback data

→ Database Insert (Continue On Fail: OFF)
  └─ Failure: Triggers Error Workflow

Settings: Error Workflow = [System] Error Handler
The HTTP Request can fail gracefully. But if the database insert fails, that is a critical issue requiring notification.
Common Mistakes and How to Fix Them
These mistakes cause the most frustration when setting up error handling.
Mistake 1: Testing Manually
Symptom: You click "Test Workflow" on your error workflow. Nothing happens.
Why it fails: The Error Trigger node only activates when an automated workflow execution fails. Manual test runs do not trigger it.
Fix: Create a test workflow with a deliberate error, connect it to your error workflow in settings, activate it, and trigger it automatically (via webhook, schedule, etc.).
Mistake 2: Forgetting to Activate
Symptom: Production workflows fail but your error workflow never runs.
Why it fails: The error workflow must be active to receive triggers.
Fix: Toggle your error workflow to Active. The switch is in the top right of the editor.
Mistake 3: Not Connecting in Settings
Symptom: Error workflow is active but never triggers.
Why it fails: Each monitored workflow must explicitly specify the error workflow in its settings.
Fix: Open each production workflow, go to Settings, and select your error workflow in the "Error Workflow" dropdown.
Mistake 4: Ignoring Trigger Errors
Symptom: Notifications show "undefined" for error fields.
Why it fails: Trigger node errors have a different data structure than execution errors. Your expressions assume the standard structure.
Fix: Handle both structures:
// Check which structure we received
const errorMessage = $json.execution
? $json.execution.error.message
: $json.trigger.error.message;
const workflowName = $json.workflow.name;
Or use optional chaining:
{{ $json.execution?.error?.message ?? $json.trigger?.error?.message ?? "Unknown error" }}
Mistake 5: Creating Error Loops
Symptom: Your error workflow runs repeatedly, flooding Slack with notifications.
Why it happens: If your error workflow fails (e.g., Slack credentials expired), and it has itself set as its own error workflow, it triggers itself in a loop.
Fix:
- Never set an error workflow to use itself as its error handler
- Create a separate, minimal backup error handler for your primary error workflow
- Keep error workflows simple to minimize failure risk
Mistake 6: Not Saving Executions
Symptom: Error notifications lack execution URLs.
Why it happens: n8n only generates execution IDs and URLs when executions are saved to the database. If you disabled execution saving, these fields are empty.
Fix: Enable execution saving in n8n settings, or accept that these fields will be missing for lightweight deployments.
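For self-hosted instances, execution saving is typically controlled through environment variables (names as documented by n8n; verify them against your version):
# Keep failed (and optionally successful) executions so error data includes IDs and URLs
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=all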
For debugging complex issues, try our workflow debugger tool.
Real-World Examples
Example 1: Centralized Slack Alerting
Scenario: All production workflows notify a single Slack channel on failure.
Workflow:
Error Trigger → Slack (Post to #alerts channel)
Slack Message:
:warning: *Workflow Failed*
*Workflow:* {{ $json.workflow.name }}
*Error:* {{ $json.execution?.error?.message ?? $json.trigger?.error?.message }}
*Node:* {{ $json.execution?.lastNodeExecuted ?? $json.trigger?.error?.node?.name ?? "Trigger" }}
<{{ $json.execution?.url ?? "No URL available" }}|View Execution>
This handles both error structures gracefully.
Example 2: Email Digest of Daily Failures
Scenario: Non-critical workflows log errors to a database. A daily email summarizes failures.
Error Workflow:
Error Trigger → Airtable (Insert error record)
Separate Daily Workflow:
Schedule (daily 8am) → Airtable (Get today's errors) → Gmail (Send summary)
This reduces alert fatigue while ensuring visibility.
Example 3: Priority Routing with Escalation
Scenario: Critical failures go to PagerDuty immediately. Standard failures go to Slack.
Workflow:
Error Trigger → Switch (by workflow name)
  ├─ Contains "Payment" or "Auth" → PagerDuty
  └─ Default → Slack
For PagerDuty, use the HTTP Request node to call their Events API:
POST https://events.pagerduty.com/v2/enqueue
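A minimal request body for that call might look like this (the routing key placeholder is yours to supply; severity follows PagerDuty's levels):
{
  "routing_key": "YOUR_PAGERDUTY_ROUTING_KEY",
  "event_action": "trigger",
  "payload": {
    "summary": "{{ $json.workflow.name }}: {{ $json.execution.error.message }}",
    "source": "n8n",
    "severity": "critical",
    "custom_details": {
      "failed_node": "{{ $json.execution.lastNodeExecuted }}",
      "execution_url": "{{ $json.execution.url }}"
    }
  }
}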
Example 4: Error Logging to Google Sheets
Scenario: Track all errors in a spreadsheet for trend analysis.
Workflow:
Error Trigger → Google Sheets (Append row)
Columns:
- Timestamp
- Workflow Name
- Error Message
- Failed Node
- Execution URL
This creates a searchable error history for debugging recurring issues.
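One way to populate those columns is a Code node between the Error Trigger and the Google Sheets node. A sketch that also falls back gracefully when the failure came from a trigger node:
// Code node ('Run Once for All Items'): shape one row for the append node
const data = $input.first().json;
return [{
  json: {
    timestamp: new Date().toISOString(),
    workflowName: data.workflow.name,
    workflowId: data.workflow.id,
    errorMessage: data.execution?.error?.message ?? data.trigger?.error?.message ?? 'Unknown error',
    failedNode: data.execution?.lastNodeExecuted ?? data.trigger?.error?.node?.name ?? 'Trigger',
    executionUrl: data.execution?.url ?? '',
  },
}];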
Example 5: Automatic Retry Trigger
Scenario: Some failures are transient. Automatically retry the failed execution.
Workflow:
Error Trigger → IF (is transient error?)
  ├─ True: HTTP Request (call n8n API to retry)
  └─ False: Slack notification
Use the n8n API to retry executions:
POST /executions/{{ $json.execution.id }}/retry
Caution: Implement retry limits to prevent infinite loops. Track retry counts and give up after 3 attempts.
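The "is transient error?" check can live in a small Code node before the IF node. A sketch with an assumed list of transient patterns; tune it to the APIs you actually call:
// Flag errors that are usually worth an automatic retry
const data = $input.first().json;
const message = (data.execution?.error?.message ?? '').toLowerCase();
const transientPatterns = ['timeout', 'econnreset', 'rate limit', '429', '502', '503'];

return [{
  json: {
    ...data,
    isTransient: transientPatterns.some(p => message.includes(p)),
  },
}];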
Pro Tips and Best Practices
1. Name Error Workflows Clearly
Use a consistent naming convention:
- [System] Error Handler
- [System] Critical Error Alerts
- [Infrastructure] Error Logger
This makes them easy to identify in the workflow list and dropdown menus.
2. Always Include Execution URLs
The execution URL lets you jump directly to the failed execution for debugging. Always include it in notifications:
View execution: {{ $json.execution.url }}
3. Log Errors to a Database
Beyond notifications, store errors for analysis:
- Track error frequency by workflow
- Identify patterns in failure times
- Measure mean time to detection
- Report on workflow reliability
Google Sheets, Airtable, or a proper database all work. Choose based on your analysis needs.
4. Keep Error Workflows Simple
Complex error workflows can fail themselves. Minimize the risk:
- Use few nodes
- Avoid external API calls when possible (except for notifications)
- Test thoroughly before deployment
- Have a backup notification method
5. Create a Backup Error Handler
Your primary error workflow needs its own error handling:
Primary: Error Trigger → Slack + Database logging
Backup: Error Trigger → Simple email (minimal dependencies)
Set the primary's error workflow to the backup. This ensures you know if your alerting itself fails.
6. Test Error Scenarios During Development
Before deploying, verify error handling works:
- Create test workflows with deliberate errors
- Connect them to your error workflow
- Trigger the test workflows
- Verify notifications arrive correctly
- Check all error fields display properly
For workflow testing strategies, see our workflow best practices guide.
7. Monitor Error Workflow Health
Add monitoring for the error workflow itself:
- Track execution counts
- Alert if the error workflow has not run in X days (might indicate a problem)
- Periodically trigger test errors to verify the system works
For comprehensive monitoring in self-hosted environments, see our self-hosting guide.
When to Get Help
Error handling seems simple until edge cases appear. Some scenarios benefit from expert assistance:
- Complex retry logic with backoff and circuit breakers
- Multi-environment setups (dev/staging/prod with different alerting)
- Compliance requirements for error logging and audit trails
- High-volume workflows where error storms could overwhelm notifications
- Integration with enterprise tools like ServiceNow or Jira
Our workflow development services include production-ready error handling patterns. For strategic guidance on reliability engineering, explore our consulting services.
Frequently Asked Questions
Why can't I test my error workflow by clicking "Test Workflow"?
The Error Trigger node only activates when an automated workflow execution fails. Manual test runs do not trigger it. This is intentional: error workflows monitor real production failures, not simulated ones.
To test your error workflow:
- Create a test workflow with a Code node containing throw new Error('Test error')
- Connect this test workflow to your error workflow via Settings
- Activate and trigger the test workflow (via webhook, schedule, etc.)
- Verify your notification arrives
This tests the full production path rather than an artificial manual scenario.
How do I set up the same error workflow for multiple workflows?
For each workflow you want to monitor:
- Open the workflow
- Click Settings (gear icon, top right)
- Find the "Error Workflow" dropdown
- Select your error handler
- Save
You can connect dozens or hundreds of workflows to a single error workflow. The error data includes the failing workflow's name ($json.workflow.name) and ID, so notifications clearly identify which workflow failed.
This centralized approach simplifies maintenance and ensures consistent alert formatting.
What's the difference between Continue On Fail and the Error Trigger node?
Continue On Fail is a per-node setting. The workflow keeps running even if that node fails. Error info becomes available in $json.error for downstream handling. Use it for expected, recoverable failures like optional API calls.
Error Trigger catches workflow-level failures and runs a separate error workflow. Use it for unexpected failures needing human attention, logging, or notification.
Key difference: Continue On Fail keeps the workflow running. Error Trigger responds after a workflow has already failed.
Many production setups use both: Continue On Fail on nodes with acceptable failure modes, and Error Trigger to catch unexpected failures that slip through.
How do I access the error message and stack trace in my notifications?
For standard execution errors:
- Error message: {{ $json.execution.error.message }}
- Stack trace: {{ $json.execution.error.stack }}
For trigger node errors (webhook registration failures, etc.):
- Error message: {{ $json.trigger.error.message }}
- Stack trace: {{ $json.trigger.error.cause.stack }}
To handle both cases gracefully, use optional chaining:
{{ $json.execution?.error?.message ?? $json.trigger?.error?.message ?? "Unknown error" }}
Stack traces are verbose. Consider including them in emails but omitting from Slack, or truncating to the first few lines using a Code node.
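A sketch of such a Code node, keeping only the first five lines of whichever stack trace is present:
// Trim the stack trace before it reaches Slack
const data = $input.first().json;
const stack = data.execution?.error?.stack ?? data.trigger?.error?.cause?.stack ?? '';

return [{
  json: {
    ...data,
    shortStack: stack.split('\n').slice(0, 5).join('\n'),
  },
}];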
Can error workflows fail and trigger themselves in a loop?
Yes. This can flood your notification channels.
If your error workflow fails (Slack credentials expired, for example) and it uses itself as its own error handler, you get an infinite loop. Each failure triggers another execution, which fails, which triggers another.
To prevent this:
- Never set an error workflow to monitor itself
- Create a minimal backup error workflow with simple email notification
- Set your primary error workflow to use the backup as its error handler
- Keep error workflows simple to minimize failure risk