n8n Error Trigger Node

Master the n8n Error Trigger node for workflow error handling. Learn to build error workflows, send failure alerts to Slack/email, access error data, and implement smart error prioritization.

Silent workflow failures are automation killers. Your critical payment processing workflow stopped working three days ago. Customers have been complaining. Orders are stuck. Nobody on your team noticed because there was no alert, no notification, nothing. The workflow just quietly failed and kept failing.

This scenario happens more often than you might think. Workflows fail for countless reasons: API rate limits, expired credentials, network timeouts, malformed data, server outages. Without proper error handling, these failures accumulate silently until someone notices the downstream damage.

The Error Trigger node is n8n’s solution to this problem. It catches workflow failures and lets you respond: send Slack alerts, trigger email notifications, log errors to a database, or even attempt automatic recovery. Think of it as your workflow’s smoke detector, alerting you the moment something goes wrong.

The Silent Failure Problem

Every automation engineer learns this lesson eventually. A workflow that works perfectly during development fails silently in production. The reasons vary:

  • API credentials expire after 30 days
  • A third-party service changes its response format
  • Rate limits trigger during high-volume periods
  • Network connectivity drops briefly
  • Input data contains unexpected values

Without proactive monitoring, you discover these failures reactively. A customer complains. A report shows missing data. A downstream system shows inconsistencies. By then, the damage is done.

The Error Trigger node flips this dynamic. Instead of discovering failures after the fact, you receive immediate notification when any workflow fails. You can respond within minutes instead of days.

What You’ll Learn

  • When to use the Error Trigger node versus Continue On Fail settings
  • The two different error data structures and what each contains
  • How to build your first error notification workflow step by step
  • Connecting a single error workflow to monitor multiple production workflows
  • Setting up alerts via Slack, email, Telegram, and other channels
  • Smart error prioritization based on workflow importance
  • Combining Error Trigger with the Stop and Error node
  • Common mistakes that break error workflows and how to fix them
  • Real-world error handling patterns used in production

When to Use the Error Trigger Node

Before configuring error handling, understand the different approaches n8n offers and when each is appropriate.

| Approach | What It Does | Best For |
|---|---|---|
| Error Trigger node | Catches failures and runs a separate error workflow | Production monitoring, team alerts, error logging |
| Continue On Fail | Lets the workflow continue despite node errors | Expected failures, optional operations, graceful degradation |
| IF node error checking | Checks for error conditions in data | Validating API responses, business rule enforcement |
| Try/catch in Code node | Programmatic error handling within code | Complex logic, custom error recovery |

Error Trigger is the right choice when:

  • You need immediate notification when workflows fail
  • Multiple team members should know about failures
  • You want centralized error logging and analytics
  • Failed workflows should trigger recovery actions
  • Production reliability is critical

Continue On Fail is better when:

  • Failures are expected and acceptable (optional enrichment)
  • The workflow should proceed despite partial failures
  • You handle errors inline with conditional logic

Rule of thumb: Use Error Trigger for β€œmust know” failures that require human attention. Use Continue On Fail for β€œacceptable” failures that should not stop the workflow.

For more on conditional error checking, see our If node guide.

Understanding Error Data

When a workflow fails and triggers your error workflow, the Error Trigger node receives detailed information about the failure. Understanding this data structure is essential for building useful notifications.

Standard Execution Error Data

Most workflow failures produce this data structure:

{
  "execution": {
    "id": "231",
    "url": "https://your-n8n.com/execution/231",
    "retryOf": "34",
    "error": {
      "message": "Request failed with status code 429",
      "stack": "Error: Request failed with status code 429\n    at createError..."
    },
    "lastNodeExecuted": "HTTP Request",
    "mode": "trigger"
  },
  "workflow": {
    "id": "15",
    "name": "Daily CRM Sync"
  }
}

| Field | Description | Always Present? |
|---|---|---|
| execution.id | Unique ID of the failed execution | Only if saved to database |
| execution.url | Direct link to view the execution | Only if saved to database |
| execution.retryOf | ID of the original execution if this was a retry | Only on retries |
| execution.error.message | Human-readable error description | Yes |
| execution.error.stack | Technical stack trace for debugging | Yes |
| execution.lastNodeExecuted | Name of the node that failed | Yes |
| execution.mode | How the workflow was triggered (trigger, manual, webhook) | Yes |
| workflow.id | Unique ID of the failed workflow | Yes |
| workflow.name | Human-readable workflow name | Yes |

Trigger Node Error Data

When the error occurs in the workflow’s trigger node itself (not a later node), you receive a different structure:

{
  "trigger": {
    "error": {
      "context": {},
      "name": "WorkflowActivationError",
      "cause": {
        "message": "Webhook registration failed",
        "stack": "Error: Webhook registration failed..."
      },
      "timestamp": 1654609328787,
      "message": "Workflow could not be activated",
      "node": {
        "name": "Webhook",
        "type": "n8n-nodes-base.webhook"
      }
    },
    "mode": "trigger"
  },
  "workflow": {
    "id": "15",
    "name": "Webhook Handler"
  }
}

This structure appears when:

  • A webhook fails to register
  • A polling trigger cannot connect to its source
  • Schedule expressions are invalid
  • Credentials for the trigger node are invalid

Accessing Error Data in Expressions

Use these expressions in notification nodes to include error details:

// Workflow information
{{ $json.workflow.name }}            // "Daily CRM Sync"
{{ $json.workflow.id }}              // "15"

// Error details
{{ $json.execution.error.message }}  // "Request failed with status code 429"
{{ $json.execution.lastNodeExecuted }} // "HTTP Request"

// Execution link (for quick access)
{{ $json.execution.url }}            // Direct link to failed execution

// For trigger errors
{{ $json.trigger.error.message }}    // "Workflow could not be activated"
{{ $json.trigger.error.node.name }}  // "Webhook"

For complex expression patterns, test them with our expression validator tool.

Your First Error Workflow

Let’s build a working error workflow from scratch. This workflow sends a Slack notification whenever any connected workflow fails.

Step 1: Create a New Workflow

  1. Open n8n and click New Workflow
  2. Name it clearly: [System] Error Handler or Error Notifications
  3. Save the workflow

Using a naming convention like [System] helps distinguish infrastructure workflows from business workflows.

Step 2: Add the Error Trigger Node

  1. Click + to add a node
  2. Search for β€œError Trigger”
  3. Click to add it as your starting node

The Error Trigger has no configuration options. It simply starts when connected workflows fail.

Step 3: Add a Notification Node

Connect a Slack node (or your preferred notification channel):

  1. Click + after the Error Trigger
  2. Search for β€œSlack”
  3. Select Slack and choose Send a Message
  4. Configure the Slack credentials
  5. Select the channel for error notifications

Set the message text using expressions:

Workflow Failed: {{ $json.workflow.name }}

Error: {{ $json.execution.error.message }}
Node: {{ $json.execution.lastNodeExecuted }}
Mode: {{ $json.execution.mode }}

View execution: {{ $json.execution.url }}

Step 4: Save and Activate

  1. Click Save to save the workflow
  2. Toggle the workflow to Active (switch in top right)

Your error workflow must be active to receive error notifications.

Step 5: Connect to Monitored Workflows

Now connect this error workflow to the production workflows you want to monitor:

  1. Open a production workflow you want to monitor
  2. Click the Settings icon (gear) in the top right
  3. Find Error Workflow in the settings panel
  4. Select your new error workflow from the dropdown
  5. Save the workflow

Repeat this for each workflow you want to monitor.

Step 6: Understand Testing Limitations

Important: You cannot test error workflows by running them manually. The Error Trigger node only activates when an automated (triggered) workflow execution fails.

To test your error workflow:

  1. Create a simple test workflow with a Code node
  2. Add code that deliberately throws an error: throw new Error('Test error')
  3. Connect the test workflow to your error workflow in settings
  4. Activate the test workflow
  5. Trigger it (via webhook, schedule, or other trigger)
  6. Verify your Slack notification arrives

This limitation exists because error workflows are designed for production monitoring, not manual testing scenarios.
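The deliberate error in step 2 can be a single throw in a Code node. A minimal sketch (the message text is arbitrary; in the Code node itself you would throw at the top level rather than wrap it in a function):

```javascript
// A deliberately failing Code node body for a throwaway test workflow.
// Any uncaught throw marks the execution as failed, which in turn
// fires the Error Trigger in the connected error workflow.
function deliberateFailure() {
  throw new Error('Test error: verifying error notifications');
}
```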

Connecting Error Workflows to Production Workflows

A single error workflow can monitor multiple production workflows. This centralized approach simplifies maintenance and ensures consistent error handling.

Setting the Error Workflow

For each workflow you want to monitor:

  1. Open the workflow in the editor
  2. Click Settings (gear icon in top right)
  3. Scroll to Error Workflow
  4. Select your error handler from the dropdown
  5. Save the workflow

The dropdown shows all workflows containing an Error Trigger node.

One Error Workflow for Multiple Workflows

You do not need separate error handlers for each workflow. Configure 10, 50, or 100 workflows to use the same error workflow. The error data includes the workflow name, so your notifications identify which workflow failed.

This centralized approach offers advantages:

  • Single point of maintenance for notification logic
  • Consistent alert formatting across all workflows
  • Easier to add new notification channels
  • Simpler to update Slack channels or email recipients

Organizing Error Workflows

For larger organizations, consider multiple error workflows based on criticality:

| Error Workflow | Monitors | Notification |
|---|---|---|
| [System] Critical Errors | Payment, auth, core business | PagerDuty + Slack |
| [System] Standard Errors | Data sync, reports, integrations | Slack only |
| [System] Background Errors | Cleanup, maintenance, optional | Daily email digest |

This separation prevents alert fatigue while ensuring critical failures get immediate attention.

Notification Channels

The Error Trigger provides the error data. What you do with it depends on your notification needs.

Slack Notification

The most common pattern. Configure a Slack node with this message template:

:rotating_light: *Workflow Failed*

*Workflow:* {{ $json.workflow.name }}
*Error:* {{ $json.execution.error.message }}
*Failed Node:* {{ $json.execution.lastNodeExecuted }}
*Mode:* {{ $json.execution.mode }}

<{{ $json.execution.url }}|View Execution>

For rich formatting, use Slack’s Block Kit in the node options.

Email Notification

Use Gmail, SMTP, or any email node:

Subject:

[n8n Error] {{ $json.workflow.name }} failed

Body:

Workflow "{{ $json.workflow.name }}" failed at {{ $now }}.

Error Message:
{{ $json.execution.error.message }}

Failed Node: {{ $json.execution.lastNodeExecuted }}

Stack Trace:
{{ $json.execution.error.stack }}

View the execution here:
{{ $json.execution.url }}

For email troubleshooting, see our authentication errors guide.

Telegram Notification

Use the Telegram node for mobile-first alerts:

Workflow Failed

Workflow: {{ $json.workflow.name }}
Error: {{ $json.execution.error.message }}
Node: {{ $json.execution.lastNodeExecuted }}

{{ $json.execution.url }}

Telegram works well for urgent alerts that need immediate mobile visibility.

Discord Notification

For teams using Discord for operations:

**Workflow Failed**

**Workflow:** {{ $json.workflow.name }}
**Error:** {{ $json.execution.error.message }}
**Node:** {{ $json.execution.lastNodeExecuted }}

[View Execution]({{ $json.execution.url }})

Multi-Channel Approach

For critical workflows, send to multiple channels simultaneously:

  1. Error Trigger connects to multiple notification nodes in parallel
  2. Slack gets the rich formatted message
  3. Email goes to the on-call engineer
  4. PagerDuty creates an incident for after-hours failures

This redundancy ensures failures never go unnoticed.

Smart Error Prioritization

Not all workflow failures are equally urgent. A failure in your payment processing workflow demands immediate response. A failure in your daily newsletter generator can wait until morning.

Using the Switch Node for Routing

Add a Switch node after the Error Trigger to route errors by workflow name:

Switch Configuration:

| Rule | Condition | Output |
|---|---|---|
| Critical | Workflow name contains β€œPayment” OR β€œAuth” OR β€œOrder” | Route to PagerDuty + Slack |
| Standard | Workflow name contains β€œSync” OR β€œReport” | Route to Slack only |
| Low Priority | Default/Fallback | Route to email digest |

Code Node for Advanced Prioritization

For more complex routing logic, use a Code node:

const workflowName = $json.workflow.name.toLowerCase();
const errorMessage = $json.execution.error.message.toLowerCase();

let priority = 'low';
let channels = ['email'];

// Critical workflows
const criticalWorkflows = ['payment', 'auth', 'checkout', 'subscription'];
if (criticalWorkflows.some(w => workflowName.includes(w))) {
  priority = 'critical';
  channels = ['pagerduty', 'slack', 'email'];
}

// High priority errors regardless of workflow
const criticalErrors = ['authentication failed', 'rate limit', 'timeout'];
if (criticalErrors.some(e => errorMessage.includes(e))) {
  priority = 'high';
  if (!channels.includes('slack')) channels.push('slack');
}

return [{
  json: {
    ...$json,
    priority,
    channels,
    requiresImmediate: priority === 'critical'
  }
}];

Then use an If node to route based on the priority field.

Priority Response Matrix

| Priority | Response Time | Notification Channels | Example Workflows |
|---|---|---|---|
| Critical | Immediate | PagerDuty + Slack + Email | Payments, authentication, core API |
| High | Within 1 hour | Slack + Email | Customer-facing integrations, CRM sync |
| Medium | Same business day | Slack | Reports, analytics, data enrichment |
| Low | Next business day | Email digest | Cleanup jobs, maintenance, archival |

Combining with Stop and Error Node

The Stop and Error node lets you deliberately trigger errors in your workflows. This triggers the error workflow just like natural failures do. For a complete overview of n8n’s error handling capabilities, see the official error handling documentation.

What Stop and Error Does

This node:

  • Immediately stops workflow execution
  • Generates an error with a message you specify
  • Triggers the connected error workflow
  • Logs the error in execution history

Use Cases for Deliberate Errors

Business Rule Violations:

If order total is negative β†’ Stop and Error: "Invalid order total: negative values not allowed"

Validation Failures:

If required field is empty β†’ Stop and Error: "Missing required field: customer_email"

Data Quality Issues:

If duplicate record detected β†’ Stop and Error: "Duplicate order ID detected: {{ $json.orderId }}"

Circuit Breaker Pattern:

If API has failed 5 times in a row β†’ Stop and Error: "Circuit breaker open: API unavailable"
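The circuit-breaker check can be sketched in a Code node. In this sketch, `state` stands in for persisted workflow state (in n8n that would be workflow static data); the threshold of 5 and the error message are illustrative choices, not n8n defaults:

```javascript
// Record one API call result; after enough consecutive failures,
// throw so the workflow stops and the error workflow is triggered,
// just as the Stop and Error node would.
function recordApiResult(state, succeeded, threshold = 5) {
  state.failures = succeeded ? 0 : (state.failures ?? 0) + 1;
  if (state.failures >= threshold) {
    throw new Error('Circuit breaker open: API unavailable');
  }
  return state;
}
```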

Example: Validation with Error Trigger

Build a workflow that validates incoming data and triggers proper error handling:

Webhook β†’ IF (email valid?)
           β†’ True: Continue processing
           β†’ False: Stop and Error ("Invalid email format")

When the Stop and Error node executes:

  1. The workflow stops immediately
  2. The execution is marked as failed
  3. Your error workflow receives the error data
  4. Your Slack notification includes the custom error message

This pattern lets you create meaningful error messages that help diagnose issues quickly.
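The same validation can live in a Code node that raises descriptive errors directly. A minimal sketch, reusing the field names and messages from the examples above:

```javascript
// Validate incoming order data; an uncaught throw fails the
// execution and surfaces the message in the error workflow.
function validateOrder(order) {
  if (!order.customer_email) {
    throw new Error('Missing required field: customer_email');
  }
  if (order.total < 0) {
    throw new Error(`Invalid order total: negative values not allowed (${order.total})`);
  }
  return order;
}
```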

Continue On Fail vs Error Trigger

These two features solve different problems. Understanding when to use each prevents both silent failures and unnecessary workflow interruptions.

Feature Comparison

| Aspect | Continue On Fail | Error Trigger |
|---|---|---|
| Scope | Single node | Entire workflow |
| Execution | Workflow continues | Separate workflow runs |
| Error data | Available in $json.error | Full execution context |
| Use case | Expected, recoverable failures | Unexpected failures needing attention |
| Configuration | Per-node setting | Workflow settings |
| Testing | Works in manual runs | Only works in automated runs |

When to Use Continue On Fail

Enable Continue On Fail on a node when:

  • The operation is optional (enrichment, logging)
  • Failure is expected sometimes (checking if record exists)
  • You handle the error inline with conditional logic
  • The workflow should complete despite partial failures

Example: An HTTP Request node fetches optional product images. If the image service is down, the workflow should continue with a placeholder image instead of failing entirely.

For HTTP Request error handling patterns, see our HTTP Request node guide.

When to Use Error Trigger

Use Error Trigger when:

  • Failures are unexpected and need investigation
  • Team notification is required
  • Error logging and analytics matter
  • Automated recovery might be needed
  • Compliance requires error documentation

Example: A payment processing workflow fails. Someone needs to know immediately, investigate the cause, and potentially retry the transaction.

Combining Both Approaches

The most robust workflows use both:

  1. Continue On Fail on nodes with expected, recoverable failures
  2. Error Trigger to catch unexpected failures that slip through

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Production Workflow                                         β”‚
β”‚                                                             β”‚
β”‚ Webhook β†’ HTTP Request (Continue On Fail: ON)               β”‚
β”‚           β”œβ”€ Success: Process response                      β”‚
β”‚           └─ Failure: Use fallback data                     β”‚
β”‚                                                             β”‚
β”‚        β†’ Database Insert (Continue On Fail: OFF)            β”‚
β”‚           └─ Failure: Triggers Error Workflow               β”‚
β”‚                                                             β”‚
β”‚ Settings: Error Workflow = [System] Error Handler           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The HTTP Request can fail gracefully. But if the database insert fails, that is a critical issue requiring notification.

Common Mistakes and How to Fix Them

These mistakes cause the most frustration when setting up error handling.

Mistake 1: Testing Manually

Symptom: You click β€œTest Workflow” on your error workflow. Nothing happens.

Why it fails: The Error Trigger node only activates when an automated workflow execution fails. Manual test runs do not trigger it.

Fix: Create a test workflow with a deliberate error, connect it to your error workflow in settings, activate it, and trigger it automatically (via webhook, schedule, etc.).

Mistake 2: Forgetting to Activate

Symptom: Production workflows fail but your error workflow never runs.

Why it fails: The error workflow must be active to receive triggers.

Fix: Toggle your error workflow to Active. The switch is in the top right of the editor.

Mistake 3: Not Connecting in Settings

Symptom: Error workflow is active but never triggers.

Why it fails: Each monitored workflow must explicitly specify the error workflow in its settings.

Fix: Open each production workflow, go to Settings, and select your error workflow in the β€œError Workflow” dropdown.

Mistake 4: Ignoring Trigger Errors

Symptom: Notifications show β€œundefined” for error fields.

Why it fails: Trigger node errors have a different data structure than execution errors. Your expressions assume the standard structure.

Fix: Handle both structures:

// Check which structure we received
const errorMessage = $json.execution
  ? $json.execution.error.message
  : $json.trigger.error.message;

const workflowName = $json.workflow.name;

Or use optional chaining:

{{ $json.execution?.error?.message ?? $json.trigger?.error?.message ?? "Unknown error" }}
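In a Code node, the two structures can be collapsed once so every downstream node sees the same flat fields. A sketch (field names follow the structures documented above; the fallback strings are arbitrary choices):

```javascript
// Normalize either error shape into one flat object for notifications.
function normalizeErrorData(payload) {
  const exec = payload.execution;
  const trig = payload.trigger;
  return {
    workflowName: payload.workflow?.name ?? 'Unknown workflow',
    errorMessage: exec?.error?.message ?? trig?.error?.message ?? 'Unknown error',
    failedNode: exec?.lastNodeExecuted ?? trig?.error?.node?.name ?? 'Trigger',
    executionUrl: exec?.url ?? null, // only present if executions are saved
  };
}
```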

Mistake 5: Creating Error Loops

Symptom: Your error workflow runs repeatedly, flooding Slack with notifications.

Why it happens: If your error workflow fails (e.g., Slack credentials expired), and it has itself set as its own error workflow, it triggers itself in a loop.

Fix:

  • Never set an error workflow to use itself as its error handler
  • Create a separate, minimal backup error handler for your primary error workflow
  • Keep error workflows simple to minimize failure risk

Mistake 6: Not Saving Executions

Symptom: Error notifications lack execution URLs.

Why it happens: n8n only generates execution IDs and URLs when executions are saved to the database. If you disabled execution saving, these fields are empty.

Fix: Enable execution saving in n8n settings, or accept that these fields will be missing for lightweight deployments.

For debugging complex issues, try our workflow debugger tool.

Real-World Examples

Example 1: Centralized Slack Alerting

Scenario: All production workflows notify a single Slack channel on failure.

Workflow:

Error Trigger β†’ Slack (Post to #alerts channel)

Slack Message:

:warning: *Workflow Failed*

*Workflow:* {{ $json.workflow.name }}
*Error:* {{ $json.execution?.error?.message ?? $json.trigger?.error?.message }}
*Node:* {{ $json.execution?.lastNodeExecuted ?? $json.trigger?.error?.node?.name ?? "Trigger" }}

<{{ $json.execution?.url ?? "No URL available" }}|View Execution>

This handles both error structures gracefully.

Example 2: Email Digest of Daily Failures

Scenario: Non-critical workflows log errors to a database. A daily email summarizes failures.

Error Workflow:

Error Trigger β†’ Airtable (Insert error record)

Separate Daily Workflow:

Schedule (daily 8am) β†’ Airtable (Get today's errors) β†’ Gmail (Send summary)

This reduces alert fatigue while ensuring visibility.

Example 3: Priority Routing with Escalation

Scenario: Critical failures go to PagerDuty immediately. Standard failures go to Slack.

Workflow:

Error Trigger β†’ Switch (by workflow name)
                β”œβ”€ Contains "Payment" or "Auth" β†’ PagerDuty
                └─ Default β†’ Slack

For PagerDuty, use the HTTP Request node to call their Events API:

POST https://events.pagerduty.com/v2/enqueue

Example 4: Error Logging to Google Sheets

Scenario: Track all errors in a spreadsheet for trend analysis.

Workflow:

Error Trigger β†’ Google Sheets (Append row)

Columns:

  • Timestamp
  • Workflow Name
  • Error Message
  • Failed Node
  • Execution URL

This creates a searchable error history for debugging recurring issues.
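A Code node before the Sheets node can shape the payload into those columns. A sketch (column names match the list above; the empty-string fallbacks are a choice, not required by the Sheets node):

```javascript
// Map the error payload to spreadsheet columns for an "Append row" node.
function toSheetRow(payload) {
  return {
    'Timestamp': new Date().toISOString(),
    'Workflow Name': payload.workflow?.name ?? '',
    'Error Message': payload.execution?.error?.message ?? payload.trigger?.error?.message ?? '',
    'Failed Node': payload.execution?.lastNodeExecuted ?? '',
    'Execution URL': payload.execution?.url ?? '',
  };
}
```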

Example 5: Automatic Retry Trigger

Scenario: Some failures are transient. Automatically retry the failed execution.

Workflow:

Error Trigger β†’ IF (is transient error?)
                β”œβ”€ True: HTTP Request (call n8n API to retry)
                └─ False: Slack notification

Use the n8n API to retry executions:

POST /executions/{{ $json.execution.id }}/retry

Caution: Implement retry limits to prevent infinite loops. Track retry counts and give up after 3 attempts.
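The transient-error check plus retry cap can be sketched in a Code node. The pattern list and the limit of 3 are illustrative choices, not n8n defaults:

```javascript
// Decide whether a failed execution is worth retrying.
const MAX_RETRIES = 3;
const TRANSIENT_PATTERNS = [/rate limit/i, /timeout/i, /ECONNRESET/];

function shouldRetry(errorMessage, retryCount) {
  if (retryCount >= MAX_RETRIES) return false; // give up after the cap
  return TRANSIENT_PATTERNS.some((p) => p.test(errorMessage));
}
```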

Pro Tips and Best Practices

1. Name Error Workflows Clearly

Use a consistent naming convention:

  • [System] Error Handler
  • [System] Critical Error Alerts
  • [Infrastructure] Error Logger

This makes them easy to identify in the workflow list and dropdown menus.

2. Always Include Execution URLs

The execution URL lets you jump directly to the failed execution for debugging. Always include it in notifications:

View execution: {{ $json.execution.url }}

3. Log Errors to a Database

Beyond notifications, store errors for analysis:

  • Track error frequency by workflow
  • Identify patterns in failure times
  • Measure mean time to detection
  • Report on workflow reliability

Google Sheets, Airtable, or a proper database all work. Choose based on your analysis needs.

4. Keep Error Workflows Simple

Complex error workflows can fail themselves. Minimize the risk:

  • Use few nodes
  • Avoid external API calls when possible (except for notifications)
  • Test thoroughly before deployment
  • Have a backup notification method

5. Create a Backup Error Handler

Your primary error workflow needs its own error handling:

Primary: Error Trigger β†’ Slack + Database logging
Backup: Error Trigger β†’ Simple email (minimal dependencies)

Set the primary’s error workflow to the backup. This ensures you know if your alerting itself fails.

6. Test Error Scenarios During Development

Before deploying, verify error handling works:

  1. Create test workflows with deliberate errors
  2. Connect them to your error workflow
  3. Trigger the test workflows
  4. Verify notifications arrive correctly
  5. Check all error fields display properly

For workflow testing strategies, see our workflow best practices guide.

7. Monitor Error Workflow Health

Add monitoring for the error workflow itself:

  • Track execution counts
  • Alert if the error workflow has not run in X days (might indicate a problem)
  • Periodically trigger test errors to verify the system works
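The β€œhas not run in X days” check fits in a small scheduled watchdog workflow. A sketch, where `lastRunAt` would come from wherever you log error-workflow executions (a hypothetical field, not an n8n built-in):

```javascript
// True if the error workflow's last recorded run is older than maxAgeDays.
function isStale(lastRunAt, maxAgeDays = 7, now = Date.now()) {
  const ageMs = now - new Date(lastRunAt).getTime();
  return ageMs > maxAgeDays * 24 * 60 * 60 * 1000;
}
```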

For comprehensive monitoring in self-hosted environments, see our self-hosting guide.

When to Get Help

Error handling seems simple until edge cases appear. Some scenarios benefit from expert assistance:

  • Complex retry logic with backoff and circuit breakers
  • Multi-environment setups (dev/staging/prod with different alerting)
  • Compliance requirements for error logging and audit trails
  • High-volume workflows where error storms could overwhelm notifications
  • Integration with enterprise tools like ServiceNow or Jira

Our workflow development services include production-ready error handling patterns. For strategic guidance on reliability engineering, explore our consulting services.

Frequently Asked Questions

Why can’t I test my error workflow by clicking β€œTest Workflow”?

The Error Trigger node only activates when an automated workflow execution fails. Manual test runs do not trigger it. This is intentional: error workflows monitor real production failures, not simulated ones.

To test your error workflow:

  1. Create a test workflow with a Code node containing throw new Error('Test error')
  2. Connect this test workflow to your error workflow via Settings
  3. Activate and trigger the test workflow (via webhook, schedule, etc.)
  4. Verify your notification arrives

This tests the full production path rather than an artificial manual scenario.

How do I set up the same error workflow for multiple workflows?

For each workflow you want to monitor:

  1. Open the workflow
  2. Click Settings (gear icon, top right)
  3. Find β€œError Workflow” dropdown
  4. Select your error handler
  5. Save

You can connect dozens or hundreds of workflows to a single error workflow. The error data includes the failing workflow’s name ($json.workflow.name) and ID, so notifications clearly identify which workflow failed.

This centralized approach simplifies maintenance and ensures consistent alert formatting.

What’s the difference between Continue On Fail and the Error Trigger node?

Continue On Fail is a per-node setting. The workflow keeps running even if that node fails. Error info becomes available in $json.error for downstream handling. Use it for expected, recoverable failures like optional API calls.

Error Trigger catches workflow-level failures and runs a separate error workflow. Use it for unexpected failures needing human attention, logging, or notification.

Key difference: Continue On Fail keeps the workflow running. Error Trigger responds after a workflow has already failed.

Many production setups use both: Continue On Fail on nodes with acceptable failure modes, and Error Trigger to catch unexpected failures that slip through.

How do I access the error message and stack trace in my notifications?

For standard execution errors:

  • Error message: {{ $json.execution.error.message }}
  • Stack trace: {{ $json.execution.error.stack }}

For trigger node errors (webhook registration failures, etc.):

  • Error message: {{ $json.trigger.error.message }}
  • Stack trace: {{ $json.trigger.error.cause.stack }}

To handle both cases gracefully, use optional chaining:

{{ $json.execution?.error?.message ?? $json.trigger?.error?.message ?? "Unknown error" }}

Stack traces are verbose. Consider including them in emails but omitting from Slack, or truncating to the first few lines using a Code node.
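Truncation in a Code node can be as simple as splitting on newlines. A sketch (the line limit is an arbitrary choice):

```javascript
// Keep only the first few lines of a stack trace before sending to Slack.
function truncateStack(stack, maxLines = 5) {
  const lines = (stack ?? '').split('\n');
  if (lines.length <= maxLines) return lines.join('\n');
  return lines.slice(0, maxLines).join('\n') +
    `\n… (${lines.length - maxLines} more lines)`;
}
```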

Can error workflows fail and trigger themselves in a loop?

Yes. This can flood your notification channels.

If your error workflow fails (Slack credentials expired, for example) and it uses itself as its own error handler, you get an infinite loop. Each failure triggers another execution, which fails, which triggers another.

To prevent this:

  • Never set an error workflow to monitor itself
  • Create a minimal backup error workflow with simple email notification
  • Set your primary error workflow to use the backup as its error handler
  • Keep error workflows simple to minimize failure risk
