n8n workflows run, complete, and forget everything. Every execution starts fresh. That customer ID you processed yesterday? Gone. The timestamp of your last API sync? Vanished. The configuration your AI agent needs to remember? Lost to the void.
This stateless design keeps workflows simple, but it creates a real problem: how do you build automation that remembers?
Before Data Tables, the answer meant duct-taping external services to your workflows. Google Sheets for simple storage. Postgres or Supabase for anything serious. Each option added complexity, credentials, rate limits, and external dependencies that could break at 2 AM.
The Storage Problem in Workflow Automation
Consider a common scenario: you process customer orders from a webhook. Without persistent storage, every workflow run processes every order it receives, even if you already handled it an hour ago. Duplicate emails get sent. APIs get called twice. Customers get annoyed.
The traditional workaround was connecting to Google Sheets or a database. But that meant managing OAuth credentials, handling API rate limits, and accepting 1-2 second delays for every read and write operation.
Data Tables Change Everything
Starting with n8n version 1.113, Data Tables provide native persistent storage directly within n8n. No external databases. No API credentials. No rate limits. Just fast, reliable storage that lives inside your n8n instance.
Think of Data Tables as a simple database built into n8n. You create tables with columns, insert rows, query data, and update records. All operations happen locally, completing in milliseconds instead of seconds.
What You’ll Learn
- When to use Data Tables versus Google Sheets, Postgres, or external databases
- How to create and design tables for common automation patterns
- All seven operations: Insert, Update, Delete, Upsert, Get Many, Get One, and Optimize Bulk
- Real-world patterns for duplicate prevention, AI memory, and configuration storage
- Troubleshooting common issues like visibility problems and data persistence
- Best practices for production-ready Data Table workflows
When to Use the Data Table Node
Data Tables excel at specific use cases but aren’t a universal database replacement. This decision matrix helps you choose the right storage for your workflow.
| Scenario | Best Choice | Why |
|---|---|---|
| Track which records you’ve already processed | Data Table | Fast lookups, no external dependencies |
| Store AI agent conversation memory | Data Table | Quick reads/writes for context retrieval |
| Save workflow configuration and prompts | Data Table | Accessible across all workflows |
| Team needs to manually edit data | Google Sheets | Better collaboration interface |
| Complex queries with joins and relations | Postgres/Supabase | Full SQL capabilities |
| Data needs to survive n8n instance replacement | External database | Data Tables live inside n8n |
| Store files or large binary data | External storage (S3, Drive) | Data Tables have size limits |
| Need version history or audit trails | External database | More robust logging options |
Rule of thumb: Use Data Tables when you need fast, simple storage that workflows control entirely. Use external databases when humans need to interact with the data or when you need advanced database features.
Data Tables vs Variables
n8n also offers workflow variables, which might seem similar. Here’s the difference:
| Aspect | Data Tables | Variables |
|---|---|---|
| Structure | Rows and columns (tabular) | Single key-value pairs |
| Capacity | Multiple tables, thousands of rows | Limited number of variables |
| Query capability | Filter, sort, search | Direct key lookup only |
| Best for | Tracking records, storing datasets | Configuration values, API keys |
Choose Data Tables when you need to store multiple records or query data. Choose Variables for simple configuration values that rarely change.
Understanding Data Tables
Before using the Data Table node in workflows, you need to understand how Data Tables work within n8n’s architecture.
How Data Tables Store Information
Data Tables use SQLite as their underlying storage engine. This means your data lives in a file on the n8n server, providing fast local access without network latency. Each table you create becomes a SQLite table with typed columns.
Storage location varies by deployment:
- n8n Cloud: Managed automatically with backups
- Self-hosted (Docker): Inside the n8n data volume
- Self-hosted (npm): In the n8n user data directory
Version Requirements
Data Tables require n8n version 1.113 or later. The feature entered beta in December 2024 and is available on all plans.
For self-hosted installations, you must explicitly enable the feature by setting an environment variable:
N8N_ENABLED_MODULES=data-table
Add this to your Docker Compose file, environment configuration, or startup command. Without it, the Data section won’t appear in the n8n interface.
Storage Limits
By default, a single n8n instance limits total Data Table storage to 50MB. This covers all tables combined, not per table.
For self-hosted deployments, you can increase this limit:
N8N_DATA_TABLES_MAX_SIZE_BYTES=104857600 # 100MB
Practical capacity at 50MB:
- Approximately 200,000-500,000 simple records (depending on field sizes)
- Enough for most tracking and configuration use cases
- Not suitable for storing large documents or binary data
If you’re approaching storage limits, consider archiving old data to an external database or using the Compare Datasets node to identify records safe to delete.
Creating Tables in n8n
Tables are created through the n8n interface, not within workflows:
- Open your n8n dashboard
- Click the Data tab in the left sidebar
- Click New Table
- Name your table descriptively (e.g., processed_orders, ai_conversation_memory)
- Define columns with appropriate data types
Available column types:
| Type | Use For | Example |
|---|---|---|
| Text | Strings, IDs, JSON | Customer names, order IDs |
| Number | Integers, decimals | Quantities, prices, scores |
| Date | Timestamps, dates | Created dates, last updated |
| Boolean | True/false flags | Is processed, is active |
Design tip: Always include a unique identifier column (like record_id or external_id) that matches your source data. This enables upsert operations and prevents duplicates.
Your First Data Table Workflow
Let’s build a practical example: tracking which orders your workflow has already processed to prevent duplicates.
Step 1: Create the Table
- Go to Data in n8n’s sidebar
- Click New Table
- Name it processed_orders
- Add columns:
  - order_id (Text) - The unique order identifier
  - processed_at (Date) - When you processed it
  - status (Text) - Processing result
Step 2: Add the Data Table Node
- In your workflow, click + to add a node
- Search for “Data Table”
- Add it to your canvas
- Connect it after your order processing logic
Step 3: Configure the Insert Operation
For recording processed orders:
- Resource: Row
- Operation: Insert
- Data Table: Select processed_orders from the dropdown
- Columns to Send: Define mappings
{
"order_id": "{{ $json.orderId }}",
"processed_at": "{{ $now.toISO() }}",
"status": "completed"
}
Step 4: Check Before Processing
Add another Data Table node before your processing logic to check if an order was already handled:
- Operation: Get Many
- Data Table: processed_orders
- Filters: order_id equals {{ $json.orderId }}
- Return All: Off (we only need to know if it exists)
Connect this to an If node that checks if any results were returned. If yes, skip processing. If no, proceed with the order.
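If you prefer handling the check in a Code node instead of an If node, here is a minimal sketch. It assumes the Get Many lookup is named "Check Processed" and the trigger is named "Webhook" (hypothetical names), and that "Always Output Data" is enabled on the lookup so this node still runs when no rows match:
// Code node sketch: flag the incoming order as a duplicate when the
// "Check Processed" lookup (hypothetical node name) returned any rows.
const matches = $('Check Processed').all();
const order = $('Webhook').first().json;

return [{
  json: {
    ...order,
    alreadyProcessed: matches.length > 0,
  },
}];
A downstream If node can then branch on {{ $json.alreadyProcessed }}.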
Result: Your workflow now has persistent memory. Run it multiple times with the same order, and it only processes once.
Data Table Operations Deep Dive
The Data Table node provides seven operations for managing your data. Understanding each operation’s behavior helps you build reliable workflows.
Insert: Adding New Rows
Insert creates new rows in your table. Use it when you’re certain the record doesn’t already exist.
Parameters:
| Parameter | Description |
|---|---|
| Data Table | Select the target table |
| Columns to Send | Choose which fields to include |
| Mapping | Define values for each column |
Example: Log workflow execution
{
"workflow_name": "{{ $workflow.name }}",
"execution_id": "{{ $execution.id }}",
"started_at": "{{ $now.toISO() }}",
"trigger_data": "{{ JSON.stringify($json) }}"
}
Important: Insert fails if you try to add a row with a primary key that already exists. For update-or-create logic, use Upsert instead.
Update: Modifying Existing Rows
Update changes specific rows that match your filter criteria.
Parameters:
| Parameter | Description |
|---|---|
| Data Table | Select the target table |
| Filter | Conditions to match rows for updating |
| Columns to Update | Fields and new values |
Example: Mark order as shipped
Filter: order_id equals {{ $json.orderId }}
Update values:
{
"status": "shipped",
"shipped_at": "{{ $now.toISO() }}"
}
Behavior notes:
- Updates all rows matching the filter
- Returns the number of rows affected
- Does nothing if no rows match (doesn’t error)
Delete: Removing Rows
Delete removes rows matching your filter criteria. Use with caution in production workflows.
Example: Clean up old processed records
Filter: processed_at is before {{ $now.minus({ days: 30 }).toISO() }}
This removes entries older than 30 days, keeping your table lean.
Safety tip: Always test delete operations with Get Many first to verify your filter matches the expected rows.
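One way to do that dry run: point a Get Many node at the same filter, then count the matches in a Code node before you swap in Delete. A minimal sketch:
// Code node sketch: count the rows a cleanup filter would delete.
// Place after a Get Many node that uses the same date filter as the planned Delete.
const candidates = $input.all();
console.log(`Cleanup would remove ${candidates.length} rows`);

return [{ json: { rows_to_delete: candidates.length } }];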
Upsert: The Idempotent Operation
Upsert is the most important operation for production workflows. It updates a row if it exists or inserts it if it doesn’t, based on a matching column.
Why upsert matters: Workflows can fail and retry. Webhooks can fire multiple times. Without upsert, retries create duplicate records. With upsert, the same data always produces the same result.
Parameters:
| Parameter | Description |
|---|---|
| Match Column | The column used to find existing rows |
| Columns to Send | Fields to insert or update |
Example: Sync customer data
Match Column: customer_id
{
"customer_id": "{{ $json.id }}",
"email": "{{ $json.email }}",
"name": "{{ $json.name }}",
"last_synced": "{{ $now.toISO() }}"
}
Behavior:
- Looks for a row where customer_id matches the input value
- If found: updates that row with all provided values
- If not found: inserts a new row with all provided values
This makes your sync workflow safe to run repeatedly without creating duplicates.
Get Many: Querying Your Data
Get Many retrieves rows from your table with optional filtering, sorting, and pagination.
Parameters:
| Parameter | Description |
|---|---|
| Return All | Retrieve all matching rows (be careful with large tables) |
| Limit | Maximum rows to return |
| Filters | Conditions rows must match |
| Sort | Order results by column |
Example: Get recent unprocessed orders
Filters:
- status equals "pending"
- created_at is after {{ $now.minus({ hours: 24 }).toISO() }}
Sort:
- created_at, descending
Limit: 100
Output: Each matching row becomes a separate item in n8n, allowing you to process them with subsequent nodes.
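Because every row is its own item, a downstream Code node can aggregate them. A small sketch that summarizes the pending orders from the example above (field names are the example's; adjust to your table):
// Code node sketch: summarize the rows returned by Get Many.
// $input.all() yields one entry per returned row.
const orders = $input.all().map(item => item.json);

return [{
  json: {
    pending_count: orders.length,
    // Results are sorted by created_at descending, so the last item is the oldest match.
    oldest_created_at: orders.length > 0 ? orders[orders.length - 1].created_at : null,
  },
}];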
Get One: Single Row Retrieval
Get One returns exactly one row matching your criteria. Useful when you expect a unique result.
Parameters:
| Parameter | Description |
|---|---|
| Filter | Conditions to match the row |
Example: Fetch configuration by key
Filter: config_key equals api_rate_limit
Behavior:
- Returns the first matching row
- If no rows match, returns empty (check for this in your workflow)
- If multiple rows match, only returns the first one
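Because an empty result is easy to miss, a small Code node after Get One can supply a safe default. A sketch based on the configuration example above (it assumes "Always Output Data" is enabled on the Get One node so this step still runs when nothing matches; the fallback value is hypothetical):
// Code node sketch: fall back to a default when Get One found no row.
// Assumes the config example above (config_key / config_value columns).
const row = $input.first()?.json ?? {};

const rateLimit = row.config_value !== undefined
  ? Number(row.config_value) // row found: use the stored value
  : 60;                      // no row found: hypothetical default

return [{ json: { api_rate_limit: rateLimit } }];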
Optimize Bulk: High-Performance Inserts
When inserting many rows at once, Optimize Bulk improves performance by batching database operations.
When to use:
- Importing data from APIs with hundreds of records
- Initial data seeding
- Bulk sync operations
Trade-off: When enabled, the node doesn’t return the inserted data. Use this when you don’t need to reference the inserted rows immediately.
Configuration:
- Enable Optimize Bulk in node options
- Provide an array of records to insert
- The node inserts them efficiently in batches
For workflows needing inserted data confirmation, disable this option and accept slightly slower performance.
Real-World Examples
These patterns solve common automation challenges using Data Tables.
Example 1: Prevent Duplicate Processing
Problem: Your webhook receives the same order multiple times due to retries or system glitches. Without deduplication, you send multiple confirmation emails.
Solution: Check Data Tables before processing, mark as processed after.
Workflow structure:
[Webhook] → [Data Table: Get One] → [If: Exists?]
↓ No
[Process Order]
↓
[Send Email]
↓
[Data Table: Insert]
Data Table: Get One configuration:
- Table: processed_webhooks
- Filter: webhook_id equals {{ $json.id }}
If node configuration:
- Condition: {{ $json.id }} exists
If the webhook ID exists in your table, the workflow stops. Otherwise, it processes the order and records the webhook ID.
Example 2: AI Agent Memory Store
Problem: Your AI agent needs context from previous conversations, but each workflow execution starts fresh.
Solution: Store conversation summaries in a Data Table, retrieve relevant context before each AI call.
Table structure: ai_memory
| Column | Type | Purpose |
|---|---|---|
| user_id | Text | Identify the user |
| context_key | Text | Topic or conversation ID |
| memory_content | Text | Summarized context |
| updated_at | Date | When last updated |
Workflow pattern:
[Trigger] → [Data Table: Get Many] → [AI Agent with context] → [Data Table: Upsert]
Get Many filters:
- user_id equals the incoming user ID
- updated_at is after 7 days ago
Pass retrieved memories to your AI agent as system context. After the conversation, upsert a summary back to the table.
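A minimal Code node sketch for the "pass retrieved memories as context" step, using the ai_memory columns above:
// Code node sketch: collapse the retrieved ai_memory rows into a single
// context string that can be injected into the AI Agent's system message.
const memories = $input.all().map(item => item.json);

const context = memories
  .map(m => `- [${m.context_key}] ${m.memory_content}`)
  .join('\n');

return [{
  json: {
    system_context: context || 'No prior context for this user.',
  },
}];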
For more on AI workflow patterns, see our AI Agent node guide.
Example 3: Simple CRM in n8n
Problem: You need basic contact tracking without setting up a full CRM system.
Solution: Build a lightweight CRM using Data Tables with workflows for adding, updating, and querying contacts.
Table structure: contacts
| Column | Type | Purpose |
|---|---|---|
| contact_id | Text | Unique identifier |
| email | Text | Primary email |
| name | Text | Full name |
| company | Text | Organization |
| status | Text | Lead, customer, churned |
| last_contact | Date | Most recent interaction |
| notes | Text | Free-form notes |
Create workflows for:
- Add Contact - Form or webhook triggers insert
- Update Contact - Upsert based on email or contact_id
- Find Contact - Get Many with email/name search
- Daily Digest - Get Many for contacts not contacted in 30 days
This setup handles basic CRM needs. For advanced requirements, consider our workflow development services.
Example 4: Execution Status Dashboard
Problem: You need visibility into which workflows ran, when they ran, and whether they succeeded.
Solution: Log execution metadata to a Data Table, build a simple dashboard.
Table structure: execution_log
| Column | Type | Purpose |
|---|---|---|
| execution_id | Text | n8n execution ID |
| workflow_name | Text | Which workflow ran |
| started_at | Date | Execution start time |
| status | Text | Running, completed, failed |
| duration_ms | Number | How long it took |
| error_message | Text | Error details if failed |
Workflow pattern:
Add Data Table nodes at the start and end of critical workflows:
Start node:
{
"execution_id": "{{ $execution.id }}",
"workflow_name": "{{ $workflow.name }}",
"started_at": "{{ $now.toISO() }}",
"status": "running"
}
End node (success path):
Update where execution_id matches:
{
"status": "completed",
"duration_ms": "{{ $now.toMillis() - $('Start Log').item.json.started_at }}"
}
Error Trigger workflow:
Use the Error Trigger node to catch failures and update the log:
{
"status": "failed",
"error_message": "{{ $json.error.message }}"
}
Example 5: Configuration Management
Problem: Multiple workflows share configuration values (API endpoints, rate limits, feature flags). Changing values means editing multiple workflows.
Solution: Store configuration in a Data Table, read values at workflow start.
Table structure: config
| Column | Type | Purpose |
|---|---|---|
| config_key | Text | Unique setting name |
| config_value | Text | The value (JSON for complex data) |
| description | Text | What this setting does |
| updated_at | Date | Last modification |
Example entries:
| config_key | config_value |
|---|---|
| api_base_url | https://api.example.com/v2 |
| rate_limit_per_minute | 100 |
| feature_flags | {"new_ui": true, "beta_features": false} |
Workflow pattern:
At the start of workflows needing configuration:
- Data Table: Get Many - Filter by needed config keys
- Code node - Transform into usable object
- Reference config values throughout workflow
// In Code node after Get Many
const config = {};
for (const item of $input.all()) {
config[item.json.config_key] = item.json.config_value;
}
return [{ json: { config } }];
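Downstream nodes can then read typed values off that object. A hedged usage sketch, mirroring the example entries above (feature_flags is parsed from its JSON text):
// Later in the workflow: pull typed values out of the config object built above.
const { config } = $json;

const rateLimit = Number(config.rate_limit_per_minute);
const flags = JSON.parse(config.feature_flags || '{}'); // stored as JSON text

return [{
  json: {
    rateLimit,
    newUiEnabled: flags.new_ui === true,
  },
}];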
Now changing a configuration value updates all workflows using it.
Data Tables vs Alternatives
Choosing the right storage solution depends on your specific requirements. This comparison helps you make informed decisions.
Performance Comparison
| Operation | Data Tables | Google Sheets | Postgres |
|---|---|---|---|
| Single row insert | ~8ms | ~1,000ms | ~50ms |
| Query 100 rows | ~15ms | ~2,000ms | ~30ms |
| Update single row | ~10ms | ~1,200ms | ~40ms |
| Rate limits | None | 100 req/100 sec | Connection-based |
Data Tables provide the fastest operations because they’re local. Google Sheets has API overhead and rate limits. External databases have network latency but scale better.
Feature Comparison
| Feature | Data Tables | Google Sheets | Postgres/Supabase |
|---|---|---|---|
| Setup complexity | None | OAuth credentials | Connection string + schema |
| Query language | Simple filters | Limited | Full SQL |
| Joins/relations | No | No | Yes |
| Full-text search | No | Limited | Yes (with extensions) |
| Collaboration | n8n users only | Anyone with access | Depends on setup |
| Data portability | Export required | Easy export | Standard backup tools |
| Storage limit | 50MB default | 10M cells | Plan-dependent |
| Backup/recovery | Self-managed | Google’s backups | Depends on hosting |
When to Choose Each
Choose Data Tables when:
- Speed matters more than query complexity
- Data stays within n8n (no external access needed)
- You want zero external dependencies
- Storage needs are modest (under 50MB)
- Building workflow-internal state tracking
Choose Google Sheets when:
- Non-technical users need to view or edit data
- Team collaboration on data is required
- You need an instant, familiar interface
- Data visualization through Google’s tools helps
Choose Postgres/Supabase when:
- You need complex queries with joins
- Data exceeds 50MB significantly
- Multiple applications access the same data
- You require full-text search or geo queries
- Data must survive n8n instance replacement
- Compliance requires specific database features
For guidance on complex database integrations, our consulting services can help architect the right solution.
Troubleshooting Common Issues
These issues appear frequently in the n8n community and GitHub issues. Solutions are based on real user reports.
Data Tables Not Visible in n8n
Symptoms:
- No “Data” tab in the sidebar
- Data Table node shows empty dropdown
- “User is missing a scope required to perform this action” error
Causes and solutions:
- Self-hosted: feature not enabled. Add the environment variable N8N_ENABLED_MODULES=data-table and restart n8n after adding it.
- Version too old. Data Tables require n8n 1.113 or later. Check your version in Settings and upgrade if needed.
- Permission issues. On n8n Cloud or Enterprise, verify your user role has Data Table permissions. Contact your workspace admin if needed.
Data Loss After Container Restart
Symptoms:
- Tables exist but data is gone after Docker restart
- Workflow changes also lost
- Transactions not completing properly
Cause: Docker volume not properly mounted or using ephemeral storage.
Solution: Ensure your n8n data directory is mounted to a persistent volume:
# docker-compose.yml
volumes:
- ./n8n_data:/home/node/.n8n
The /home/node/.n8n directory contains both workflow definitions and Data Table storage. Without a volume mount, everything resets when the container restarts.
For self-hosting best practices, see our n8n self-hosting guide.
Tables Not Showing in Node Dropdown
Symptoms:
- Created a table in the Data tab
- Data Table node shows empty or missing tables
- “Create new table” link is broken
Cause: Project ID mismatch in self-hosted installations.
Solution:
- Verify you’re in the correct n8n project/workspace
- Create tables while in the same project as your workflow
- If using the “Create new table” link from the node, ensure the URL is correctly formed
- Try creating the table directly from the Data tab instead
“No Row Found” Errors
Symptoms:
- Get One operation returns nothing when row exists
- Filters don’t match expected rows
- Upsert creates duplicates instead of updating
Causes and solutions:
- Type mismatch
  Number columns require numbers, not strings. If your source data is "123" (string) but the column is Number type, the match fails.
  Fix: Use expressions to convert types:
  {{ parseInt($json.id) }}  // Convert string to number
  {{ String($json.id) }}  // Convert number to string
- Case sensitivity
  Text matching is case-sensitive. “Active” doesn’t match “active”.
  Fix: Normalize case in your workflow or when storing data.
- Whitespace issues
  Invisible spaces or newlines can prevent matches.
  Fix: Trim values before storing and matching:
  {{ $json.email.trim() }}
Team Members Can’t See Tables
Symptoms:
- Admin creates table, other team members don’t see it
- Table visible in workflow but not in Data tab
- Different users see different tables
Cause: Table permissions and workspace scope.
Solution:
- On n8n Cloud: Tables are scoped to workspaces. Ensure team members are in the same workspace.
- For Enterprise: Check role-based access controls for Data Table permissions.
- Tables created in personal projects may not be visible to team workspaces.
Use our workflow debugger tool to trace data flow issues and verify what data your nodes actually receive.
Pro Tips and Best Practices
These practices come from production experience and help you avoid common pitfalls.
1. Always Use Upsert for Syncing
When syncing data from external sources, always use Upsert instead of Insert. This makes your workflow idempotent. Run it once or run it ten times, the result is the same.
Bad: [API Call] → [Insert] → Hope it doesn't duplicate
Good: [API Call] → [Upsert on unique_id] → Safe to retry
2. Add Timestamps to Every Table
Include created_at and updated_at columns in every table. These enable:
- Filtering for recent changes
- Identifying stale data for cleanup
- Debugging timing issues
- Building incremental sync patterns
{
"created_at": "{{ $now.toISO() }}",
"updated_at": "{{ $now.toISO() }}"
}
3. Design for Query Patterns
Think about how you’ll query data, not just how you’ll store it. If you frequently filter by status and date, ensure those are separate columns, not buried in a JSON blob.
Less queryable:
{
"data": "{\"status\": \"active\", \"updated\": \"2024-01-15\"}"
}
More queryable:
{
"status": "active",
"updated_at": "2024-01-15",
"additional_data": "{...}"
}
4. Implement Cleanup Routines
Data Tables have storage limits. Schedule cleanup workflows that:
- Delete processed records older than X days
- Archive important data to external storage before deletion
- Monitor table sizes and alert when approaching limits
Use the Filter node combined with Delete to clean up efficiently.
5. Back Up Critical Data
Data Tables live inside n8n. If you lose your n8n instance, you lose the data. For critical information:
- Periodically export to Google Sheets or S3
- Use the Aggregate node to batch records for export
- Schedule daily/weekly backup workflows
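As an alternative to the Aggregate node, a Code node can bundle everything a Get Many returns into one export payload for the destination node of your choice (S3, Drive, Sheets). A sketch:
// Code node sketch: bundle all Data Table rows into a single backup payload
// that a later S3 / Google Drive / Sheets node can write out.
const rows = $input.all().map(item => item.json);

return [{
  json: {
    exported_at: new Date().toISOString(),
    row_count: rows.length,
    rows,
  },
}];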
6. Use Expressions for Dynamic Table Selection
When building reusable workflows, you can select tables dynamically:
{{ $json.environment === "production" ? "prod_customers" : "staging_customers" }}
This enables environment-specific logic without duplicating workflows.
7. Monitor Storage Usage
Watch your Data Table storage, especially in production:
- Check total size periodically in the Data tab
- Set up alerts when approaching limits
- Plan capacity before you need it
For complex monitoring needs, review our workflow best practices guide.
8. Test with Production-Like Data
Data Tables behave differently with 10 rows versus 10,000 rows. Test your queries and operations with realistic data volumes before deploying to production.
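One way to get that volume is to seed a throwaway table from a Code node feeding a bulk Insert (pair it with Optimize Bulk). A sketch with hypothetical field names:
// Code node sketch: emit synthetic rows to load-test a Data Table.
// Field names (order_id, status, processed_at) are hypothetical; match your schema.
const rows = [];

for (let i = 0; i < 5000; i++) {
  rows.push({
    json: {
      order_id: `test-${i}`,
      status: i % 10 === 0 ? 'failed' : 'completed',
      processed_at: new Date(Date.now() - i * 60000).toISOString(),
    },
  });
}

return rows;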
Use our JSON fixer tool to validate data structures before inserting.
Frequently Asked Questions
How do I enable Data Tables on self-hosted n8n?
Set the environment variable N8N_ENABLED_MODULES=data-table and restart your n8n instance. This is required for all self-hosted installations running version 1.113 or later.
Where to add the variable:
- Docker Compose: In your environment section
- Systemd service: In your service file or environment config
- Direct npm: Export before starting n8n
After restarting, the “Data” tab appears in the sidebar and the Data Table node shows available tables.
Important: If you’re running an older version, upgrade first. Data Tables aren’t available before 1.113. For Docker deployments, also ensure your data volume is properly mounted to persist tables across container restarts.
What’s the difference between Data Tables and Variables in n8n?
Data Tables store structured, tabular data with rows and columns. Variables store simple key-value pairs.
Use Data Tables when you need to:
- Track multiple records (processed orders, contacts, conversation history)
- Query and filter data
- Perform operations like Insert, Update, Delete, Get Many
Use Variables when you need to:
- Store configuration values (API endpoints, feature flags)
- Keep credentials or settings that rarely change
- Access single values without querying
Quick decision: Ask yourself, “Do I need to store multiple similar items, or just one value?” Multiple items means Data Tables. Single values mean Variables.
Can I use Data Tables with AI agents and LLM workflows?
Yes. Data Tables integrate naturally with AI workflows.
Common patterns:
- Storing conversation history for multi-turn context
- Saving user preferences for personalization
- Tracking which documents an agent has processed
- Caching expensive AI responses for reuse
Typical workflow structure:
- Before an AI Agent node, use Get Many to retrieve relevant context
- Pass that context to your AI agent
- After the agent responds, use Upsert to save important information
The speed advantage (milliseconds vs seconds for external APIs) is especially valuable for AI workflows where you’re already dealing with LLM latency.
Note: Data Tables store text, so serialize complex objects to JSON when saving and parse them when retrieving.
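A short sketch of the parse-on-read side (saving is just JSON.stringify(...) in the mapping), with a guard for malformed rows:
// Code node sketch: parse memory_content that was stored as JSON text.
return $input.all().map(item => {
  let memory = {};
  try {
    memory = JSON.parse(item.json.memory_content ?? '{}');
  } catch (err) {
    // Malformed JSON in the table: keep the empty object instead of failing the run.
  }
  return { json: { ...item.json, memory } };
});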
What happens if I exceed the 50MB storage limit?
Insert operations will fail with an error indicating the storage quota is exceeded. Existing data remains accessible, but you can’t add more until you free up space.
To resolve:
- Review tables in the Data tab to identify large or unnecessary data
- Delete old records using the Delete operation with date-based filters
- For self-hosted: increase the limit with N8N_DATA_TABLES_MAX_SIZE_BYTES
To prevent hitting limits:
- Implement regular cleanup workflows
- Archive important old data to external storage
- Monitor usage proactively
The 50MB limit is generous for tracking and configuration use cases but can fill up if you’re storing large text fields or logging every execution detail.
How do I migrate data from Google Sheets to Data Tables?
Create a workflow that reads from Google Sheets and writes to Data Tables.
Step-by-step:
- Create your Data Table with matching columns in the n8n Data tab
- Add a Google Sheets node (Get Many) to retrieve all rows
- Optionally add an Edit Fields node to transform column names
- Add a Data Table node (Insert or Upsert) to write each row
For large sheets:
- Use the Split in Batches node to process in chunks
- Enable Optimize Bulk for faster inserts
- Use Upsert if you want the migration to be repeatable (matching on a unique ID column)
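If the sheet's column headers don't match your table's column names, the transform step can be a Code node instead of Edit Fields. A sketch with hypothetical headers:
// Code node sketch: rename Google Sheets columns to match the Data Table schema.
// The sheet headers ("Order ID", "Status", "Processed At") are hypothetical.
return $input.all().map(item => ({
  json: {
    order_id: String(item.json['Order ID'] ?? '').trim(),
    status: String(item.json['Status'] ?? '').toLowerCase(),
    processed_at: item.json['Processed At'] || null,
  },
}));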
After migration, update your workflows to use Data Tables instead of Google Sheets. Consider keeping a backup workflow that periodically exports critical Data Table content back to Sheets for visibility or disaster recovery.