n8n Execution Data Node
Action Node

Master the n8n Execution Data node for tagging and filtering workflow executions. Learn to save custom metadata, track multi-tenant workflows, debug production issues, and search execution history effectively.

Finding a specific workflow execution among thousands is like searching for a needle in a haystack. Your payment processing workflow failed for customer ID 12345 last Tuesday. You need to find that exact execution, but the execution list only shows timestamps and generic success/failure status. You scroll through pages of executions, trying to remember the approximate time, hoping you can spot the right one.

This scenario plays out constantly in production n8n environments. Workflows run hundreds or thousands of times per day. When something goes wrong, or when you need to audit a specific transaction, the default execution metadata is not enough. You need context: customer IDs, order numbers, environment tags, error categories.

The Execution Data node solves this problem by letting you attach custom key-value metadata to workflow executions. Tag each execution with a customer ID, and suddenly you can filter the execution list to show only that customer’s runs. Tag with error types, and debugging becomes a search operation instead of a scrolling marathon.

The Execution Visibility Problem

Every n8n execution stores basic information automatically:

  • Execution status (success, failed, waiting)
  • Start time and duration
  • Input/output data for each node
  • The workflow that ran

But this default data answers “what happened” without answering “who” or “why.” Consider these real scenarios:

  • A multi-tenant SaaS runs the same workflow for different customers. Which execution belongs to which customer?
  • An order processing workflow fails. Which order failed?
  • A retry mechanism runs multiple times. Which execution was the first attempt, and which was a retry?
  • A workflow runs in both staging and production environments. How do you filter by environment?

Without custom metadata, you are stuck clicking into individual executions, examining their input data, and hoping you picked the right one.

What You’ll Learn

  • When to use the Execution Data node versus the Code node for custom data
  • The critical limitations that break workflows if ignored
  • How to tag executions for multi-tenant and debugging scenarios
  • Filtering execution history by custom metadata in the n8n UI
  • Combining with the Code node for read-write operations
  • Real-world patterns used in production workflows

When to Use the Execution Data Node

Before adding the node, understand when it is the right choice versus alternatives.

| Approach | What It Does | Best For |
| --- | --- | --- |
| Execution Data node | Saves string key-value pairs to execution metadata | Simple tagging via UI configuration |
| Code node with $execution.customData | Sets and retrieves custom data programmatically | Complex logic, dynamic values, reading data |
| Workflow variables ($vars) | Stores runtime data during execution | Data shared between nodes, not persisted |
| Static data | Persists data across executions | Configuration, counters, state between runs |

Use the Execution Data node when:

  • You need simple, static tagging (customer ID, environment, category)
  • Values come directly from input data with minimal transformation
  • You only need to write data, not read it back during execution
  • You prefer visual configuration over code

Use the Code node when:

  • You need to read custom data later in the workflow
  • Values require computation or conditional logic
  • You need to set more than a few key-value pairs dynamically
  • Complex string formatting or validation is required

Rule of thumb: Start with the Execution Data node for straightforward tagging. Switch to the Code node when you need to read data back or apply complex logic.

For programmatic data manipulation, see our Code node guide.

Understanding the Execution Data Node

The Execution Data node is a core n8n node that writes metadata to the current execution record. This metadata becomes searchable and filterable in the Executions list.

Adding the Node

  1. Click + to add a node
  2. Search for “Execution Data”
  3. Click to add it to your workflow
  4. Configure the key-value pairs

Operations

The node offers a single operation: saving data. You configure one or more key-value pairs, and the node writes them to the execution record.

Parameters

| Parameter | Description | Required |
| --- | --- | --- |
| Key | The identifier for this piece of data (e.g., "customerId", "orderNumber") | Yes |
| Value | The value to store; can use expressions | Yes |

You can add multiple key-value pairs to a single Execution Data node by clicking “Add Data.”

Critical Limitations

These constraints are not obvious from the UI, and ignoring them causes silent failures or workflow errors.

| Limitation | Constraint | What Happens If Exceeded |
| --- | --- | --- |
| Key length | Maximum 50 characters | Data may be truncated or rejected |
| Value length | Maximum 255 characters | Data may be truncated or rejected |
| Maximum items | 10 key-value pairs per execution | Oldest data removed to make room |
| Data type | Strings only | Numbers, booleans, objects fail silently |
| Retrieval | Write-only node | Cannot read data with this node |

The string-only limitation trips up most users. If you pass a number like {{ $json.orderId }} where orderId is 12345 (number type), you must convert it to a string first.

The retrieval limitation is equally important. The Execution Data node can only write. To read custom data during the same execution, you must use the Code node with $execution.customData.get().
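Because these limits fail silently, it can help to enforce them yourself before writing anything. The sketch below is a hypothetical helper for a Code node placed ahead of the Execution Data node (or ahead of a `$execution.customData.setAll()` call): it coerces every value to a string, truncates keys and values to the documented limits, and caps the set at 10 entries. The function name and truncation strategy are assumptions, not an n8n API.

```javascript
// Hypothetical helper: make a metadata object fit the Execution Data
// limits (string values, 50-char keys, 255-char values, max 10 entries).
function sanitizeExecutionData(data) {
  const entries = Object.entries(data)
    .slice(0, 10) // keep at most 10 key-value pairs
    .map(([key, value]) => [
      String(key).slice(0, 50),          // keys: max 50 characters
      String(value ?? "").slice(0, 255), // values: strings, max 255 characters
    ]);
  return Object.fromEntries(entries);
}

const tags = sanitizeExecutionData({
  customerId: 12345,      // number → "12345"
  notes: "x".repeat(300), // truncated to 255 characters
});
// tags.customerId === "12345"; tags.notes.length === 255
```

Running input through a guard like this turns the silent failure modes in the table above into predictable, documented truncation.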

Data Persistence

Custom execution data persists with the execution record. It follows the same retention rules as other execution data:

  • Stored in the n8n database alongside execution details
  • Subject to execution pruning settings (by age or count)
  • Available as long as the execution record exists
  • Not affected by workflow changes after execution

Practical Examples

These patterns address real problems from production n8n deployments.

Example 1: Multi-Tenant Tagging

Scenario: A SaaS platform runs the same workflow for multiple customers. Support needs to find all executions for a specific customer.

Configuration:

Key: customerId
Value: {{ $json.customerId }}

Key: customerName
Value: {{ $json.company.name }}

Why it works: Every execution gets tagged with the customer context. Support filters by customerId = "cust_12345" to see only that customer’s workflow runs.

Example 2: Order Processing Tracking

Scenario: An e-commerce workflow processes orders. When an order fails, the team needs to find that specific execution quickly.

Configuration:

Key: orderId
Value: {{ String($json.order.id) }}

Key: orderStatus
Value: {{ $json.order.status }}

Key: totalAmount
Value: {{ String($json.order.total) }}

Note the String() conversion: Order IDs and amounts are often numbers. Converting them to strings ensures reliable storage.

Example 3: Environment Tagging

Scenario: The same workflow runs in development, staging, and production. Developers need to filter executions by environment.

Configuration:

Key: environment
Value: {{ $vars.environment }}

Key: deploymentRegion
Value: {{ $vars.region }}

Why it works: Workflow variables distinguish environments. Filtering by environment = "production" shows only production runs.

Example 4: Error Category Tagging

Scenario: A workflow handles multiple types of errors differently. The team wants to analyze which error types occur most frequently.

Place the Execution Data node in your error handling branch:

Configuration:

Key: errorType
Value: payment_declined

Key: errorSource
Value: stripe_api

Key: retryable
Value: true

Why it works: Error executions get categorized. Filter by errorType = "payment_declined" to analyze payment failures specifically.

Example 5: Retry Tracking

Scenario: A workflow implements retry logic. You need to distinguish first attempts from retries and track retry counts.

Configuration (using Code node for complex logic):

// In a Code node before the Execution Data node
const retryCount = $json.retryCount || 0;
const isRetry = retryCount > 0;
const originalExecutionId = $json.originalExecutionId || $execution.id;

return [{
  json: {
    ...$json,
    retryCount: String(retryCount),
    isRetry: String(isRetry),
    originalExecutionId: originalExecutionId
  }
}];

Then in the Execution Data node:

Key: retryCount
Value: {{ $json.retryCount }}

Key: isRetry
Value: {{ $json.isRetry }}

Key: originalExecutionId
Value: {{ $json.originalExecutionId }}

Why it works: Each retry links back to the original execution, making it easy to trace the full retry chain.

Example 6: Audit Trail

Scenario: Compliance requires tracking which user initiated each workflow run and what action was performed.

Configuration:

Key: userId
Value: {{ $json.user.id }}

Key: userEmail
Value: {{ $json.user.email }}

Key: actionType
Value: {{ $json.action }}

Key: timestamp
Value: {{ $now.toISO() }}

Why it works: Every execution captures the user context. Audit queries filter by userId or actionType to generate compliance reports.

Using with the Code Node

The Execution Data node is write-only. For read-write operations, use the Code node with the $execution.customData object.

Setting Data in Code

// Set a single value
$execution.customData.set("customerId", "cust_12345");

// Set multiple values at once (replaces all existing data)
$execution.customData.setAll({
  "customerId": "cust_12345",
  "orderId": "ord_67890",
  "environment": "production"
});

Reading Data in Code

// Read a single value
const customerId = $execution.customData.get("customerId");

// Read all custom data as an object
const allData = $execution.customData.getAll();
console.log(allData.customerId, allData.orderId);

Full Example: Conditional Tagging

Scenario: Tag executions based on the outcome of earlier processing.

// Get input items
const items = $input.all();

// Determine execution category based on processing results
const successCount = items.filter(i => i.json.success).length;
const failCount = items.filter(i => !i.json.success).length;
const total = items.length;

// Calculate success rate
const successRate = total > 0 ? (successCount / total * 100).toFixed(1) : "0";

// Set execution metadata
$execution.customData.setAll({
  "totalProcessed": String(total),
  "successCount": String(successCount),
  "failCount": String(failCount),
  "successRate": successRate + "%",
  "batchId": $json.batchId || "unknown"
});

// Continue with the items
return items;

Result: Each execution shows processing statistics in the metadata, making it easy to find problematic batches by filtering on successRate or failCount.

Python Example

# Set a single value
_execution.customData.set("customerId", "cust_12345")

# Set multiple values
_execution.customData.setAll({
    "customerId": "cust_12345",
    "orderId": "ord_67890"
})

# Read a value
customer_id = _execution.customData.get("customerId")

# Read all data
all_data = _execution.customData.getAll().to_py()

Note: In Python, use .to_py() when reading all data to convert from JsProxy to a native Python dictionary.

When to Use Code vs Execution Data Node

| Scenario | Use Execution Data Node | Use Code Node |
| --- | --- | --- |
| Simple static key-value pairs | Yes | Optional |
| Values from expressions | Yes | Optional |
| Need to read data later in workflow | No | Yes |
| Conditional logic determines values | Maybe | Preferred |
| Complex string formatting | No | Yes |
| More than 10 key-value pairs | N/A (limit applies to both) | N/A |

Filtering Executions by Custom Data

After tagging executions with custom data, you can filter the Executions list to find specific runs.

Step-by-Step Instructions

  1. Navigate to your workflow in the n8n editor
  2. Click the Executions tab in the left sidebar
  3. Click the Filters button (funnel icon)
  4. Select “Saved custom data” from the filter dropdown
  5. Enter the key you want to filter by (e.g., “customerId”)
  6. Enter the value to match (e.g., “cust_12345”)
  7. Apply the filter to see matching executions

Filter Operators

The custom data filter supports exact string matching. The value you enter must match the stored value exactly, including case.

Stored: customerId = "CUST_12345"
Filter: customerId = "cust_12345" → No match (case mismatch)
Filter: customerId = "CUST_12345" → Match

Tip: Use consistent casing in your values (lowercase recommended) to simplify filtering.
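One way to apply that tip systematically is to normalize values in a Code node before they reach the Execution Data node, so exact-match filtering is never defeated by stray whitespace or casing. This is a minimal sketch; the function name is an assumption.

```javascript
// Normalize a tag value before saving, so filters match reliably.
function normalizeTag(value) {
  return String(value ?? "").trim().toLowerCase();
}

const tag = normalizeTag("  CUST_12345 ");
// tag === "cust_12345" — filter with the same lowercase value later
```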

Combining Filters

You can combine custom data filters with other execution filters:

  • Status (success, failed, waiting)
  • Date range
  • Mode (manual, trigger)

This combination helps narrow down to specific execution subsets.

Common Errors and Fixes

These issues appear frequently when working with execution data.

| Error/Issue | Cause | Fix |
| --- | --- | --- |
| Data not appearing in filters | Value is not a string | Wrap in String() or use template literals |
| Value truncated | Exceeds 255 characters | Truncate with {{ $json.longField.slice(0, 250) }} |
| Key not working | Exceeds 50 characters | Use shorter, abbreviated keys |
| "Cannot read properties" | Trying to read with the Execution Data node | Use the Code node with $execution.customData.get() |
| Older data missing | More than 10 items stored | Reduce to 10 or fewer key-value pairs |
| Type conversion issues | Passing objects or arrays | Use JSON.stringify() for complex data |

Handling Complex Data Types

When you need to store objects or arrays, stringify them:

// In a Code node before Execution Data node
const orderDetails = {
  items: $json.lineItems.length,
  shipping: $json.shippingMethod,
  discount: $json.discountCode
};

return [{
  json: {
    ...$json,
    orderDetailsString: JSON.stringify(orderDetails).slice(0, 255)
  }
}];

Then reference {{ $json.orderDetailsString }} in the Execution Data node.

Caution: Stringified objects may exceed 255 characters. Always truncate or select only essential fields.

For JSON formatting help, try our JSON fixer tool.

Real-World Architecture Patterns

These patterns come from production n8n deployments handling thousands of daily executions.

Pattern 1: Customer Support Lookup System

Problem: Support agents need to find all workflow executions for a specific customer when investigating issues.

Architecture:

Webhook (receives request)
    ↓
Code Node (extract customer context)
    ↓
Execution Data Node (tag with customer info)
    ↓
Main Processing Logic
    ↓
Response

Execution Data Configuration:

Key: customerId
Value: {{ $json.customer.id }}

Key: accountTier
Value: {{ $json.customer.plan }}

Key: supportTicket
Value: {{ $json.ticketId || "none" }}

Support workflow: When a customer reports an issue, the agent searches by customerId to see all recent executions. The accountTier tag helps prioritize investigation for premium customers. The supportTicket links executions to specific support cases.
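The "extract customer context" step in the diagram could look roughly like this. The payload field names (customer.id, customer.plan, ticketId) are assumptions matching the configuration above; a pure function like this keeps the fallback logic testable outside n8n.

```javascript
// Pull the fields the Execution Data node will reference, with string
// fallbacks so every tag is always present and filterable.
function extractCustomerContext(payload) {
  return {
    customerId: String(payload.customer?.id ?? "unknown"),
    accountTier: String(payload.customer?.plan ?? "free"),
    supportTicket: String(payload.ticketId ?? "none"),
  };
}

const ctx = extractCustomerContext({ customer: { id: "cust_42", plan: "pro" } });
// ctx = { customerId: "cust_42", accountTier: "pro", supportTicket: "none" }
```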

Pattern 2: A/B Testing and Feature Flags

Problem: You want to track which feature variant each execution used for later analysis.

Configuration:

Key: experimentId
Value: {{ $vars.ACTIVE_EXPERIMENT }}

Key: variant
Value: {{ $json.assignedVariant }}

Key: featureFlags
Value: {{ JSON.stringify($json.activeFlags).slice(0, 255) }}

Analysis approach: Export executions filtered by experimentId, then aggregate success rates by variant. This provides real production data for A/B test decisions without separate analytics infrastructure.

Pattern 3: Rate Limit and Quota Tracking

Problem: External API rate limits cause intermittent failures. You need to identify which executions hit limits.

Configuration:

Key: apiProvider
Value: stripe

Key: rateLimitRemaining
Value: {{ String($json.headers['x-ratelimit-remaining'] || 'unknown') }}

Key: quotaUsed
Value: {{ String($json.headers['x-quota-used'] || 'unknown') }}

Debugging approach: When rate limit errors spike, filter by apiProvider = "stripe" and examine rateLimitRemaining values. This reveals whether limits are hit at specific times or by specific customers.

For rate limiting strategies, see our API rate limits guide.

Pattern 4: Distributed Workflow Correlation

Problem: A parent workflow spawns child executions. You need to trace all related executions.

Parent workflow configuration:

// Generate a correlation ID
const correlationId = $execution.id + "-" + Date.now();

$execution.customData.set("correlationId", correlationId);
$execution.customData.set("workflowRole", "parent");
$execution.customData.set("childCount", String($json.itemsToProcess.length));

Child workflow configuration:

// Inherit the correlation ID from parent
$execution.customData.set("correlationId", $json.parentCorrelationId);
$execution.customData.set("workflowRole", "child");
$execution.customData.set("parentExecutionId", $json.parentExecutionId);
$execution.customData.set("itemIndex", String($json.itemIndex));

Trace approach: Search by correlationId to see the parent and all children. Filter by workflowRole = "child" and sort by itemIndex to reconstruct processing order.

Pattern 5: Data Pipeline Lineage

Problem: Data flows through multiple workflows. You need to trace data transformations from source to destination.

At each pipeline stage:

Key: pipelineId
Value: {{ $json.pipelineId || $execution.id }}

Key: pipelineStage
Value: one of ingestion, transformation, enrichment, or loading

Key: sourceSystem
Value: {{ $json.source }}

Key: recordCount
Value: {{ String($json.records.length) }}

Lineage approach: Filter by pipelineId to see all stages a dataset passed through. Verify recordCount at each stage to detect data loss.
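A Code node at each stage could build these tags with the same fallbacks the configuration above uses. This is a sketch under the assumption that each item carries pipelineId, source, and records fields from the previous stage; in a real Code node the execution ID would come from $execution.id.

```javascript
// Build the per-stage lineage tags, falling back to the execution ID
// when this is the first stage and no pipelineId exists yet.
function buildLineageTags(item, stage, executionId) {
  return {
    pipelineId: String(item.pipelineId ?? executionId),
    pipelineStage: stage, // "ingestion" | "transformation" | "enrichment" | "loading"
    sourceSystem: String(item.source ?? "unknown"),
    recordCount: String((item.records ?? []).length),
  };
}

const tags = buildLineageTags({ source: "crm", records: [1, 2, 3] }, "ingestion", "exec_1");
// tags.pipelineId falls back to "exec_1"; tags.recordCount === "3"
```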

Combining with Error Handling

Execution data becomes particularly valuable when combined with error handling patterns.

Tagging Failed Executions

Place an Execution Data node in your error handling branch:

Main workflow → (error) → Execution Data → Error Trigger

Configuration in error branch:

Key: failedAt
Value: {{ $now.toISO() }}

Key: failedNode
Value: {{ $execution.lastNodeExecuted }}

Key: errorMessage
Value: {{ $json.error.message.slice(0, 250) }}

Result: Failed executions are tagged with failure context, making it easy to analyze failure patterns.

For complete error handling strategies, see our Error Trigger node guide.

Debugging Transient Failures

When debugging intermittent issues, tag executions with diagnostic data:

Key: apiResponseTime
Value: {{ String($json.responseTimeMs) }}

Key: apiStatusCode
Value: {{ String($json.statusCode) }}

Key: serverRegion
Value: {{ $json.headers['x-server-region'] }}

Filter by apiResponseTime to find slow executions, or by apiStatusCode to find specific error types.

Error Pattern Analysis

Tag errors with categories to identify systemic issues:

// In a Code node in your error handling branch
const errorMessage = $json.error.message.toLowerCase();

let errorCategory = "unknown";
if (errorMessage.includes("timeout")) errorCategory = "timeout";
else if (errorMessage.includes("rate limit") || errorMessage.includes("429")) errorCategory = "rate_limit";
else if (errorMessage.includes("authentication") || errorMessage.includes("401")) errorCategory = "auth";
else if (errorMessage.includes("not found") || errorMessage.includes("404")) errorCategory = "not_found";
else if (errorMessage.includes("validation")) errorCategory = "validation";

$execution.customData.setAll({
  "errorCategory": errorCategory,
  "errorTimestamp": $now.toISO(),
  "recoverable": String(["timeout", "rate_limit"].includes(errorCategory))
});

return $input.all();

Analysis approach: Filter by errorCategory to see all rate limit errors. Check recoverable = "true" to find errors that might succeed on retry.

For timeout-related issues, see our timeout troubleshooting guide.

Pro Tips and Best Practices

1. Plan Your Schema

Before implementing execution data, define your tagging schema:

  • What data do you need to filter by?
  • What keys will you use (keep under 50 characters)?
  • What values are possible (keep under 255 characters)?
  • How will you ensure consistency across workflows?

Document your schema so team members use consistent key names.

2. Use Lowercase Keys

Consistent key naming simplifies filtering:

Good: customerId, orderId, environment
Avoid: CustomerId, CUSTOMER_ID, customer-id

Pick one convention and stick with it across all workflows.

3. Set Data Early

Place the Execution Data node near the beginning of your workflow, right after extracting the relevant identifiers. This ensures executions are tagged even if later nodes fail.

Webhook → Extract IDs → Execution Data → Rest of workflow

4. Track Retry Context

When implementing retries, pass context through the retry chain:

$execution.customData.setAll({
  "retryAttempt": String(($json.retryAttempt || 0) + 1),
  "maxRetries": "3",
  "originalTriggerTime": $json.originalTriggerTime || $now.toISO()
});

This makes it easy to identify which executions are retries and how many attempts occurred.

5. Combine with Workflow Variables

Use workflow variables for environment-specific values:

Key: environment
Value: {{ $vars.ENV_NAME }}

Key: region
Value: {{ $vars.DEPLOYMENT_REGION }}

This keeps environment configuration centralized.

6. Monitor Execution Data Usage

Periodically review which custom data fields you actually use for filtering. Remove unused fields to stay within the 10-item limit and keep your tagging schema clean.

For workflow organization strategies, see our workflow best practices guide.

When to Get Help

Execution data is straightforward for simple tagging, but some scenarios benefit from expert guidance:

  • Multi-tenant architectures with complex customer isolation requirements
  • Compliance-driven audit trails needing structured logging
  • High-volume workflows where execution data impacts performance
  • Custom analytics dashboards consuming execution metadata
  • Complex debugging scenarios requiring correlated execution analysis

Our workflow development services include production-ready execution tagging patterns. For strategic guidance on workflow architecture, explore our consulting services.

Frequently Asked Questions

Can I retrieve custom data with the Execution Data node?

No. The Execution Data node is write-only.

To read custom data during the same execution, use the Code node:

// Read a single value
const customerId = $execution.customData.get("customerId");

// Read all custom data
const allData = $execution.customData.getAll();

This design separates simple visual tagging (Execution Data node) from programmatic read-write operations (Code node).

How many key-value pairs can I store per execution?

Maximum 10 key-value pairs per execution.

If you exceed this limit, older data may be removed to make room.

Workarounds if you need more:

  • Combine related values into a single stringified object
  • Focus on the most critical identifiers for filtering
  • Use a reference ID that links to full data stored elsewhere
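The first workaround can be sketched as a small packing helper: combine related fields into one JSON string and, if it would exceed the 255-character value limit, keep only the fields that fit. The drop-longest-first heuristic here is an assumption, not an n8n feature.

```javascript
// Pack several related fields into a single JSON string that fits the
// 255-character value limit, dropping oversized fields if necessary.
function packSummary(fields, maxLen = 255) {
  const json = JSON.stringify(fields);
  if (json.length <= maxLen) return json;
  // Prefer short values: add fields smallest-first while they still fit.
  const entries = Object.entries(fields)
    .sort((a, b) => String(a[1]).length - String(b[1]).length);
  const kept = {};
  for (const [k, v] of entries) {
    const candidate = JSON.stringify({ ...kept, [k]: v });
    if (candidate.length <= maxLen) kept[k] = v;
  }
  return JSON.stringify(kept);
}

const packed = packSummary({ orderId: "ord_1", status: "paid" });
// well under 255 characters, so nothing is dropped
```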

Can I store objects or arrays in execution data?

No. Execution data accepts strings only.

Numbers, booleans, objects, and arrays must be converted to strings first.

For objects, use JSON.stringify():

// In a Code node
const summary = JSON.stringify($json.orderDetails).slice(0, 255);
$execution.customData.set("orderSummary", summary);

Remember the 255-character value limit. Consider storing only essential fields or a reference ID for complex data.

Does custom data persist after I modify the workflow?

Yes. Custom execution data is tied to the execution record, not the workflow definition.

Key points:

  • Data persists regardless of subsequent workflow changes
  • Follows the same retention rules as other execution data
  • Subject to your n8n instance’s execution pruning settings (by age or count)
  • Modifying or deleting the workflow does not affect historical execution data

How do I find executions with specific custom data in the UI?

Step-by-step:

  1. Go to your workflow in the n8n editor
  2. Click the Executions tab
  3. Click the Filters button (funnel icon)
  4. Select Saved custom data from the dropdown
  5. Enter the key name (e.g., “customerId”)
  6. Enter the exact value to match (e.g., “cust_12345”)

Important: The filter uses exact string matching, including case sensitivity.

For more details, see the official custom executions data documentation.
