The Execute Command node is the escape hatch when your automation needs to touch the operating system. You need to run a Python script that processes files. Or execute a database backup with mysqldump. Or call a custom binary that your workflow depends on. The visual nodes handle data transformations, but sometimes you need raw shell access to get the job done.
This node runs shell commands directly on the machine hosting n8n, giving you the same power you have in a terminal session. That power comes with responsibility: security implications, environment configuration, and understanding how your n8n deployment affects what commands are available.
The System Access Problem
n8n excels at connecting APIs, transforming data, and orchestrating workflows. But some tasks require direct system interaction that no API can provide.
Consider these scenarios:
- Processing files with command-line tools like ffmpeg, imagemagick, or pandoc
- Running custom Python or Bash scripts that contain proprietary logic
- Executing Git operations for version-controlled deployments
- Performing database maintenance with native CLI tools
- Calling compiled binaries that have no HTTP interface
The Code node handles JavaScript and Python transformations within n8n’s sandbox. The Execute Command node steps outside that sandbox entirely, running commands in the host system’s shell.
Critical Limitation: Not Available on n8n Cloud
The Execute Command node is only available for self-hosted n8n instances. n8n Cloud does not expose this node because it would allow users to run arbitrary commands on shared infrastructure.
If you use n8n Cloud and need shell access, you have two options: migrate to a self-hosted deployment or restructure your workflow to use HTTP-based alternatives. For example, wrap your Python script in a FastAPI endpoint and call it via the HTTP Request node.
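As a rough sketch of that wrapper pattern (assuming FastAPI and uvicorn are installed; the endpoint path and payload shape are hypothetical):
# wrapper.py - HTTP wrapper around your existing script logic
from fastapi import FastAPI

app = FastAPI()

@app.post("/run")
def run(payload: dict):
    # Replace this with a call into your existing processing code
    return {"status": "ok", "received_keys": list(payload.keys())}

# Start with: uvicorn wrapper:app --host 0.0.0.0 --port 8000
n8n Cloud can then call this endpoint with the HTTP Request node.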
What You’ll Learn
- When to use Execute Command versus Code node or SSH node
- How shell execution differs between Docker and native installations
- Configuring commands with workflow data using expressions
- Extending your Docker image with Python, cURL, and other tools
- Running Python and Bash scripts with proper data passing
- Security best practices for production deployments
- Troubleshooting common errors like “command not found”
- Real-world examples for Git, databases, and file processing
When to Use the Execute Command Node
Before reaching for Execute Command, confirm it is the right tool. Using shell commands adds complexity and security considerations that simpler nodes avoid.
| Scenario | Best Node | Why |
|---|---|---|
| Transform JSON data | Code node | Safer sandbox, no shell escape needed |
| Call an external API | HTTP Request node | Purpose-built for HTTP, handles auth |
| Run Python data transformation | Code node (Python mode) | Built-in Python support |
| Execute Python script with dependencies | Execute Command | Need pip packages not in n8n |
| Run shell script on n8n host | Execute Command | Direct shell access required |
| Run commands on remote server | SSH node | Built for remote execution |
| Git operations | Execute Command | Git CLI provides full control |
| Database backup with mysqldump | Execute Command | CLI tools not available via API |
| Call compiled binary | Execute Command | No alternative for native binaries |
Rule of thumb: Use Execute Command only when you need direct access to the operating system or CLI tools. If an API or built-in node can accomplish the task, prefer that approach for better security and portability.
Self-Hosted Requirement
Since Execute Command is unavailable on n8n Cloud, workflows using this node are not portable to cloud deployments. Design with this limitation in mind. If you anticipate moving to n8n Cloud later, consider building an API wrapper around your shell commands that the HTTP Request node can call.
Understanding How Execute Command Works
The Execute Command node spawns a child process in the system’s default shell and captures the output. Understanding this mechanism helps you troubleshoot issues and write effective commands.
Shell Environment
The node executes commands in the default shell of the host machine:
- Linux: usually /bin/sh or /bin/bash
- macOS: /bin/zsh (default since Catalina)
- Windows: cmd.exe
This means shell-specific features like bash arrays or zsh globbing may not work as expected if you assume a particular shell. For maximum compatibility, stick to POSIX-compliant syntax or explicitly invoke your target shell:
/bin/bash -c "your bash-specific command here"
Docker vs Native Installation
This distinction causes more confusion than any other aspect of Execute Command. Understanding it prevents hours of debugging.
Native installation (n8n installed directly on OS):
- Commands execute on the same system running n8n
- All installed packages and tools are available
- File paths reference the host filesystem
- Environment variables from the host are accessible
Docker installation:
- Commands execute inside the n8n container, not on the Docker host
- Only packages included in the n8n image are available
- The default n8n image is minimal (Alpine Linux) and lacks common tools
- File paths reference the container filesystem unless volumes are mounted
If you run n8n in Docker and your command fails with “command not found,” the tool simply is not installed in the container. You must either extend the Docker image or mount tools from the host.
The Execute Once Toggle
The Execute Command node has an “Execute Once” option that controls how it handles multiple input items.
| Setting | Behavior | Use When |
|---|---|---|
| Execute Once: On | Command runs once, regardless of input items | Running a single backup script |
| Execute Once: Off | Command runs once per input item | Processing each file from a list |
When “Execute Once” is off, the command executes for each item in the input. Use expressions to inject item-specific data into each command invocation.
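For example, with “Execute Once” off and a hypothetical fileName field on each incoming item, the command below runs once per file:
gzip "/data/{{ $json.fileName }}"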
Input and Output Data Flow
Input: The node receives items from the previous node. You can access this data in your command using expressions.
Output: The node returns an item containing:
- stdout: Standard output from the command
- stderr: Standard error output
- exitCode: The command’s exit code (0 typically means success)
Subsequent nodes can branch logic based on exitCode using the If node to handle success and failure differently.
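An illustrative output item (the field names match the list above; the values shown are examples):
{
  "exitCode": 0,
  "stdout": "backup complete\n",
  "stderr": ""
}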
Configuration Deep Dive
The Execute Command node has minimal configuration, but the details matter for reliable execution.
The Command Parameter
Enter your shell command in the Command field. This can be:
- A single command: ls -la /tmp
- A command with arguments: python3 /scripts/process.py --input data.json
- A pipeline: cat file.txt | grep "pattern" | wc -l
Running Multiple Commands
You have several options for running multiple commands in sequence.
Using && (run next only if previous succeeds):
cd /backup && mysqldump mydb > backup.sql && gzip backup.sql
If any command fails (non-zero exit code), subsequent commands do not run.
Using ; (run all regardless of exit codes):
echo "Starting"; might_fail; echo "Done"
All commands run even if might_fail returns an error.
Using separate lines:
cd /backup
mysqldump mydb > backup.sql
gzip backup.sql
Commands on separate lines execute sequentially like a script. This approach is cleaner for complex sequences.
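Note that separate lines keep executing after a failure, like ;. If you want the script style to stop at the first error the way && chaining does, set -e at the top achieves that in POSIX shells:
# Exit at the first non-zero exit code
set -e
cd /backup
mysqldump mydb > backup.sql
gzip backup.sql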
Accessing Workflow Data in Commands
To inject data from previous nodes into your command, use n8n expressions. The syntax depends on what data you need.
Access a specific field from the current item:
echo "Processing user: {{ $json.userName }}"
Access data from a specific node:
python3 process.py --id {{ $('HTTP Request').item.json.id }}
Pass entire JSON as an argument:
The tricky part is converting n8n data to a format shell commands can accept. For simple values, direct expressions work. For complex objects, you need JSON serialization:
echo '{{ JSON.stringify($json) }}' | python3 process_json.py
Escaping special characters:
Shell commands interpret certain characters specially. If your data contains quotes, spaces, or special characters, wrap expressions in quotes:
echo "Message: {{ $json.message.replace(/"/g, '\\"') }}"
For complex data passing, consider writing to a temporary file in a Code node, then reading it in Execute Command.
Working with Command Output
The node captures both stdout and stderr. Access them in subsequent nodes:
// In a Code node after Execute Command
const output = $json.stdout;
const errors = $json.stderr;
const success = $json.exitCode === 0;
Parsing structured output:
If your command outputs JSON, parse it in a subsequent Code node:
const result = JSON.parse($json.stdout);
return [{ json: result }];
Handling large output:
Commands producing large output may hit buffer limits. If you encounter “stdout maxBuffer length exceeded,” redirect output to a file instead:
long_running_command > /tmp/output.txt
Then read the file in a subsequent operation.
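A minimal sketch of that pattern, returning only a short summary to the workflow (long_running_command is a placeholder):
# Capture everything to a file; hand n8n just the last 20 lines
long_running_command > /tmp/output.txt 2>&1
tail -n 20 /tmp/output.txt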
Docker Environment Setup
Most n8n users run Docker deployments. The default n8n image is intentionally minimal, which means common tools are missing. This section shows how to extend the image.
Why Commands Fail in Docker
The official n8n Docker image uses Alpine Linux to minimize size. Alpine includes basic utilities but lacks:
- Python and pip
- cURL and wget
- Database clients (mysql, psql)
- Image processing tools (imagemagick, ffmpeg)
- Most programming language runtimes
When you run python3 myscript.py and get “command not found,” Python simply is not installed in the container.
Custom Dockerfile for Python
Create a Dockerfile that extends the n8n image with Python:
FROM docker.n8n.io/n8nio/n8n
# Switch to root to install packages
USER root
# Install Python 3 and pip
RUN apk add --no-cache python3 py3-pip
# Install Python packages (optional)
RUN pip3 install --no-cache-dir requests pandas numpy
# Switch back to node user for security
USER node
Build and use your custom image:
docker build -t n8n-python .
docker run -d --name n8n -p 5678:5678 n8n-python
Custom Dockerfile for cURL
The default image lacks cURL. Add it with:
FROM docker.n8n.io/n8nio/n8n
USER root
RUN apk add --no-cache curl
USER node
While cURL can be useful, consider whether the HTTP Request node can accomplish the same task more elegantly within n8n’s visual workflow.
Custom Dockerfile for Multiple Tools
Combine multiple packages in one Dockerfile:
FROM docker.n8n.io/n8nio/n8n
USER root
# System utilities
RUN apk add --no-cache \
    curl \
    wget \
    git \
    jq \
    openssh-client
# Python environment
RUN apk add --no-cache python3 py3-pip
RUN pip3 install --no-cache-dir requests
# Database clients
RUN apk add --no-cache \
    mysql-client \
    postgresql-client
USER node
Volume Mounting for Scripts
Instead of rebuilding images for every script change, mount a scripts directory:
# docker-compose.yml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    volumes:
      - ./scripts:/home/node/scripts:ro
Now scripts in your local ./scripts folder are accessible at /home/node/scripts inside the container. The :ro flag makes the mount read-only for security.
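Because the mount is read-only, you cannot chmod +x a script inside the container, so invoke mounted scripts through an interpreter. A typical Execute Command call against this mount (the script name is hypothetical):
/bin/sh /home/node/scripts/cleanup.sh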
For more Docker configuration details, see our n8n Docker setup guide.
Running Python Scripts
Python is one of the most common use cases for Execute Command. Here is how to run Python effectively.
Direct Python Commands
For simple one-liners:
python3 -c "print('Hello from Python')"
For inline scripts with multiple statements:
python3 -c "
import json
data = {'processed': True, 'count': 42}
print(json.dumps(data))
"
Calling External Script Files
If your Python script is mounted or installed in the container:
python3 /home/node/scripts/process_data.py
Pass arguments:
python3 /home/node/scripts/process.py --input "{{ $json.filePath }}" --output /tmp/result.json
Passing Data from n8n to Python
Method 1: Command-line arguments
python3 /scripts/process.py "{{ $json.userId }}" "{{ $json.action }}"
In your Python script:
import sys
user_id = sys.argv[1]
action = sys.argv[2]
Method 2: Environment variables
USER_ID="{{ $json.userId }}" python3 /scripts/process.py
In Python:
import os
user_id = os.environ.get('USER_ID')
Method 3: Piping JSON via stdin
echo '{{ JSON.stringify($json) }}' | python3 /scripts/process.py
In Python:
import sys
import json
data = json.load(sys.stdin)
This method handles complex nested data structures cleanly.
Capturing Python Output
Your Python script should print JSON to stdout for n8n to capture:
import json
result = {
    "status": "success",
    "processed_items": 42,
    "output_file": "/tmp/result.csv"
}
print(json.dumps(result))
In the next node, parse stdout:
// Code node after Execute Command
const pythonResult = JSON.parse($json.stdout);
return [{ json: pythonResult }];
Installing pip Packages at Runtime
For testing, you can install packages at runtime (not recommended for production):
pip3 install requests && python3 /scripts/api_call.py
For production, bake dependencies into your Docker image to avoid download delays and network failures.
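A sketch of that approach, assuming you maintain a requirements.txt next to the Dockerfile:
FROM docker.n8n.io/n8nio/n8n
USER root
RUN apk add --no-cache python3 py3-pip
# Pin dependencies at build time instead of installing at runtime
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt
USER node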
Running Bash and Shell Scripts
Bash scripts offer powerful automation capabilities directly in the Execute Command node.
Inline Bash Commands
Simple commands work directly:
ls -la /data | head -20
For bash-specific features, invoke bash explicitly:
/bin/bash -c 'for i in {1..5}; do echo "Item $i"; done'
Calling External Script Files
Store complex logic in script files:
/home/node/scripts/backup.sh
Pass arguments to scripts:
/home/node/scripts/process.sh "{{ $json.inputFile }}" "{{ $json.outputDir }}"
Using Environment Variables
Set environment variables inline:
DB_HOST="{{ $json.dbHost }}" DB_NAME="{{ $json.dbName }}" /scripts/backup.sh
Or use n8n’s environment variable system. Configure them in your n8n deployment and access via the environment variables configuration:
mysqldump -h $DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME > backup.sql
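In a Docker deployment, one way to supply those variables is through docker-compose (the values here are placeholders; prefer Docker secrets over plain environment entries for the password):
# docker-compose.yml excerpt
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - DB_HOST=db.internal
      - DB_USER=backup_user
      - DB_NAME=mydb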
Exit Codes for Conditional Logic
Shell commands return exit codes: 0 for success, non-zero for failure. Use these for workflow branching.
After Execute Command, add an If node:
Condition: {{ $json.exitCode }} equals 0
- True branch: Continue with success logic
- False branch: Handle error, send alert, retry
This pattern is essential for robust automation. Check exit codes rather than assuming commands succeed.
Security Considerations
The Execute Command node can introduce significant security vulnerabilities if misused. Take these precautions seriously.
Why n8n 2.0+ Disables This Node by Default
Starting with n8n version 2.0, the Execute Command node is disabled by default. This change reflects the security risk of arbitrary command execution, especially in multi-user environments.
To enable it, explicitly allow the node via environment variable:
# In your n8n configuration
NODES_INCLUDE=n8n-nodes-base.executeCommand
Or remove it from the exclude list if you have customized NODES_EXCLUDE.
Command Injection Risks
Never pass unsanitized user input directly into commands. Consider this dangerous pattern:
# DANGEROUS - user could inject malicious commands
rm -rf {{ $json.userProvidedPath }}
If userProvidedPath contains a value like /tmp/x; rm -rf /, the shell runs the injected rm -rf / and you have a catastrophic security breach.
Safer approach: Validate and sanitize inputs in a Code node before Execute Command:
const safePath = $json.path.replace(/[^a-zA-Z0-9_\-\/\.]/g, '');
if (!safePath.startsWith('/allowed/directory/')) {
  throw new Error('Invalid path');
}
return [{ json: { safePath } }];
Principle of Least Privilege
Run n8n with minimal permissions:
- Use a dedicated system user for n8n, not root
- Limit filesystem access to required directories
- Restrict network access where possible
- Avoid storing credentials in scripts; use n8n’s credential system
When to Use SSH Node Instead
If you need to run commands on a different server, use the SSH node rather than Execute Command with SSH inside it. The SSH node:
- Handles authentication securely via n8n credentials
- Provides better error handling
- Is designed for remote execution
Execute Command should only target the local n8n host.
For comprehensive security guidance, review our n8n self-hosting mistakes guide.
Common Errors and Fixes
These errors appear frequently in the n8n community forum. Here is how to solve each one.
Error: “Command not found”
Symptom: Command failed: /bin/sh: python3: not found
Cause: The command is not installed in the n8n environment, or not in the PATH.
Fixes:
- Check if the tool exists: Run which python3 or type python3 to verify installation.
- Use full path: Instead of python3, use /usr/bin/python3.
- For Docker: Build a custom image with the required tool (see the Docker section above).
- Verify PATH: Your shell’s PATH may not include the tool’s location. Check with echo $PATH.
Docker-specific debugging:
# Find your container ID
docker ps | grep n8n
# Execute a shell in the container
docker exec -it <container_id> /bin/sh
# Test if command exists
which python3
Error: “stdout maxBuffer length exceeded”
Symptom: Error: stdout maxBuffer length exceeded
Cause: The command produced more output than the buffer can hold (default is ~1MB).
Fixes:
- Reduce output: Add filters like | head -1000 or | tail -500.
- Write to file: Redirect output to a file instead of capturing it:
  large_output_command > /tmp/output.txt
  echo "Output written to /tmp/output.txt"
- Process in chunks: If you need all data, split the operation into smaller batches.
According to the Node.js child_process documentation, the maxBuffer option controls this limit.
Error: “Permission denied”
Symptom: /bin/sh: /scripts/run.sh: Permission denied
Cause: The script file lacks execute permissions, or the n8n user cannot access it.
Fixes:
- Add execute permission: chmod +x /scripts/run.sh
- Check file ownership: Ensure the n8n user (usually node in Docker) can read the file.
- Run via interpreter: /bin/bash /scripts/run.sh
Error: Expressions Not Substituting
Symptom: The command contains literal {{ $json.field }} instead of the value.
Cause: Expression syntax error or incorrect field reference.
Fixes:
- Verify the field exists: Check that $json.field actually contains data by logging it in a previous node.
- Check syntax: Ensure expressions use double curly braces {{ }}.
- Escape special characters: If data contains quotes or special shell characters, escape them properly.
Common Error Reference Table
| Error Message | Likely Cause | Fix |
|---|---|---|
| "command not found" | Tool not installed | Install package or use full path |
| "stdout maxBuffer exceeded" | Too much output | Redirect to file or limit output |
| "Permission denied" | File permissions | chmod +x or run via interpreter |
| "No such file or directory" | Wrong path | Verify path, check volume mounts |
| "Syntax error" | Shell syntax issue | Check quotes, escapes, line endings |
| Exit code 1 with no output | Command failed silently | Check stderr, add verbose flags |
For complex debugging scenarios, try our free workflow debugger tool.
Real-World Examples
These examples demonstrate practical Execute Command usage patterns.
Example 1: Git Operations
Scenario: Commit and push changes triggered by a workflow.
cd /repo && git add . && git commit -m "Automated update: {{ $now.toFormat('yyyy-MM-dd HH:mm') }}" && git push origin main
Notes:
- Configure Git credentials in the container or use SSH keys
- The repository must be mounted as a volume
- Consider using Git’s credential helper for HTTPS authentication
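A one-time identity setup you might run inside the container before the first automated commit (the values are placeholders):
git config --global user.email "automation@example.com"
git config --global user.name "n8n Automation"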
Example 2: Database Backup with mysqldump
Scenario: Create a timestamped database backup.
mysqldump -h {{ $json.dbHost }} -u {{ $json.dbUser }} -p{{ $json.dbPassword }} {{ $json.dbName }} > /backups/{{ $json.dbName }}_{{ $now.toFormat('yyyyMMdd_HHmmss') }}.sql
Security note: Avoid passing passwords via command line in production. Use a MySQL configuration file or environment variables instead:
mysqldump --defaults-file=/secure/mysql.cnf {{ $json.dbName }} > /backups/backup.sql
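A sketch of that option file (hypothetical values; restrict it with chmod 600 so only the n8n user can read it):
# /secure/mysql.cnf
[mysqldump]
host=db.internal
user=backup_user
password=REPLACE_ME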
Example 3: Python Script for Data Processing
Scenario: Process uploaded CSV files with a Python script.
python3 /scripts/process_csv.py --input "{{ $json.filePath }}" --output "/processed/{{ $json.fileName }}_processed.csv"
Python script (process_csv.py):
import argparse
import pandas as pd
import json
parser = argparse.ArgumentParser()
parser.add_argument('--input', required=True)
parser.add_argument('--output', required=True)
args = parser.parse_args()
# Process the CSV
df = pd.read_csv(args.input)
df['processed'] = True
df.to_csv(args.output, index=False)
# Output result for n8n
result = {
    "status": "success",
    "rows_processed": len(df),
    "output_file": args.output
}
print(json.dumps(result))
Example 4: File System Operations
Scenario: Create directory structure and move files.
mkdir -p /data/{{ $json.clientId }}/{{ $now.toFormat('yyyy/MM') }}
mv "{{ $json.tempFile }}" "/data/{{ $json.clientId }}/{{ $now.toFormat('yyyy/MM') }}/{{ $json.fileName }}"
echo "File moved successfully"
Notes:
- Use mkdir -p to create parent directories
- Quote paths that may contain spaces
- Echo a confirmation message for the workflow log
Example 5: Server Health Check Script
Scenario: Run a health check and return structured results.
/scripts/health_check.sh
health_check.sh:
#!/bin/bash
# Collect system metrics
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}')
MEMORY=$(free -m | awk 'NR==2{printf "%.1f", $3*100/$2}')
DISK=$(df -h / | awk 'NR==2{print $5}' | sed 's/%//')
# Check if services are running
NGINX_STATUS=$(systemctl is-active nginx 2>/dev/null || echo "not installed")
POSTGRES_STATUS=$(systemctl is-active postgresql 2>/dev/null || echo "not installed")
# Output as JSON
cat << EOF
{
  "cpu_percent": $CPU_USAGE,
  "memory_percent": $MEMORY,
  "disk_percent": $DISK,
  "services": {
    "nginx": "$NGINX_STATUS",
    "postgresql": "$POSTGRES_STATUS"
  },
  "checked_at": "$(date -Iseconds)"
}
EOF
Parse the JSON output in the next node to trigger alerts or log metrics.
Pro Tips and Best Practices
1. Always Check Exit Codes
Never assume commands succeed. Add an If node to check $json.exitCode:
// Condition for success
{{ $json.exitCode === 0 }}
Route failures to error handling: notifications, retries, or manual review queues.
2. Log Commands for Debugging
During development, echo your command before running it:
echo "Running: python3 /scripts/process.py --id {{ $json.id }}" && python3 /scripts/process.py --id {{ $json.id }}
This helps identify expression substitution issues.
3. Use Temporary Files for Complex Data
Passing complex JSON through command arguments is error-prone. Write to a temp file instead:
// In a Code node before Execute Command
const fs = require('fs');
const tempPath = `/tmp/input_${Date.now()}.json`;
fs.writeFileSync(tempPath, JSON.stringify($json));
return [{ json: { tempPath } }];
Then in Execute Command:
python3 /scripts/process.py --input {{ $json.tempPath }}
4. Set Timeouts for Long Commands
Long-running commands can hang workflows. In your n8n settings, configure appropriate timeouts. For very long operations, consider running them asynchronously and checking status via polling.
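At the command level, the timeout utility (provided by GNU coreutils and by BusyBox on Alpine) offers a complementary guard; the 300-second limit and script path below are arbitrary examples:
# Kill the job and return a non-zero exit code after 300 seconds
timeout 300 /scripts/long_job.sh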
5. Document Your Commands
Use n8n’s sticky notes feature to document what each Execute Command node does, especially for complex scripts. Future maintainers (including yourself) will appreciate the context.
6. Test Commands Manually First
Before adding a command to your workflow, test it directly in the n8n container or host:
docker exec -it n8n /bin/sh
# Then run your command manually
This isolates whether issues are with the command itself or n8n’s execution of it.
For workflow architecture guidance, see our n8n workflow best practices guide. For complex automation projects requiring shell integration, our workflow development services can help design robust solutions.
Frequently Asked Questions
Can I run Docker commands from n8n Execute Command?
Yes, but with significant caveats. If n8n runs inside Docker, you cannot directly access the Docker socket by default. To run Docker commands from within the n8n container, you must mount the Docker socket as a volume: -v /var/run/docker.sock:/var/run/docker.sock. This grants the container full Docker access, which is a major security risk. The container could potentially control the host system. Only do this in trusted environments, never in multi-tenant setups. A safer alternative is calling a Docker management API or running Docker commands via SSH to a separate host.
How do I pass n8n data to my Python script?
You have three main options. First, command-line arguments work for simple values: python3 script.py "{{ $json.value }}". Access them in Python via sys.argv. Second, pipe JSON through stdin: echo '{{ JSON.stringify($json) }}' | python3 script.py. Read in Python with json.load(sys.stdin). Third, use environment variables: MY_VAR="{{ $json.value }}" python3 script.py. Access with os.environ.get('MY_VAR'). For complex nested data, the stdin approach is most reliable because it avoids shell escaping issues.
Why is Execute Command not available on n8n Cloud?
n8n Cloud runs on shared infrastructure where multiple customers’ workflows execute. Allowing arbitrary shell commands would let any user potentially access other users’ data, consume excessive resources, or compromise the platform. The security implications make this node incompatible with multi-tenant cloud hosting. If you need shell access, you must use a self-hosted n8n instance where you control the environment. Alternatively, wrap your shell operations in an API that your cloud n8n can call via HTTP Request.
How do I handle commands with very large output?
The default stdout buffer is approximately 1MB. Commands exceeding this throw “maxBuffer length exceeded” errors. The solution is to avoid capturing large output directly. Redirect to a file: big_command > /tmp/output.txt. Then either read the file in a subsequent operation or return just a success indicator. If you need the data in n8n, process it in chunks or filter it before capture: big_command | head -1000 or big_command | grep "relevant". For truly large data processing, consider having your command write results to a database or object storage that n8n can query separately.
Can I run commands on a remote server instead of the n8n host?
Yes, use the SSH node for remote command execution. It handles SSH authentication through n8n’s credential system and provides proper error handling. While you could technically run ssh user@host "command" in Execute Command, this approach requires managing SSH keys manually, lacks n8n’s credential encryption, and makes debugging harder. The SSH node is purpose-built for this use case. For complex remote automation, consider tools like Ansible triggered by n8n, or deploying n8n agents on remote systems.