n8n API Pagination: Fetch All Your Data Without Missing a Single Record
Your workflow fetched 100 contacts. The API has 10,000. You built an automation that syncs customer data every hour. It runs without errors. Reports look good. Then someone checks the source system and discovers you've been missing 99% of your records.
This happens more than you'd think. APIs don't dump their entire database in a single response. They paginate. They return a subset of records and provide a way to request the next chunk. If your workflow doesn't follow that pagination trail, you're working with incomplete data.
The Hidden Data Problem
Most API integrations start simple. You make an HTTP request, get some JSON back, and move on. The problem emerges when your data grows. That CRM with 50 contacts becomes 5,000. The product catalog expands from 200 items to 20,000. The API keeps returning the same first 100 records, and your workflow keeps processing them like nothing's wrong.
Pagination isn't optional. It's table stakes for production-ready automations. Without proper pagination handling, you're building workflows that silently fail the moment your data exceeds the API's default page size.
What You'll Learn
- How the three main pagination types work (offset, cursor, page number) and how to identify which one an API uses
- Using n8n's built-in pagination feature in the HTTP Request node to automatically fetch all pages
- Configuring pagination for different API patterns with real expressions you can copy
- Building manual pagination loops for complex scenarios
- Combining pagination with rate limiting to avoid 429 errors
- Error handling strategies when pagination fails mid-stream
- Performance optimization for workflows that fetch thousands of records
How API Pagination Works
Before configuring n8n, you need to understand what pagination actually does and how to identify which type an API uses.
Why APIs Paginate
Returning all records at once creates problems. A request for 100,000 customer records would consume massive server memory, take minutes to serialize to JSON, and overwhelm network bandwidth. The receiving client might crash trying to parse that much data.
Pagination solves this by breaking large datasets into manageable chunks. The client requests one page at a time, processes it, then requests the next. Everyone stays happy.
The Three Pagination Types
| Type | How It Works | Example Request | Best For |
|---|---|---|---|
| Offset/Limit | Skip N records, return M | ?offset=100&limit=50 | Small to medium datasets |
| Cursor | Opaque token points to position | ?cursor=eyJpZCI6MTAwfQ | Large, frequently changing data |
| Page Number | Traditional page numbering | ?page=3&per_page=50 | User-facing APIs, admin interfaces |
Offset/Limit Pagination
The oldest approach. You specify how many records to skip (offset) and how many to return (limit).
Page 1: /api/users?offset=0&limit=100 → Records 1-100
Page 2: /api/users?offset=100&limit=100 → Records 101-200
Page 3: /api/users?offset=200&limit=100 → Records 201-300
Pros: Simple to understand and implement. You can jump to any page directly.
Cons: Performance degrades on large datasets. Inserting or deleting records between requests causes items to shift, potentially duplicating or skipping records.
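To make this concrete, here is a minimal offset/limit loop in plain Node.js 18+ (outside n8n). The endpoint and the results field are illustrative, not from any specific API:
// Minimal offset/limit loop (hypothetical endpoint and response shape)
async function fetchAllUsers(baseUrl) {
  const limit = 100;
  let offset = 0;
  const all = [];
  while (true) {
    const res = await fetch(`${baseUrl}/api/users?offset=${offset}&limit=${limit}`);
    const page = await res.json();
    all.push(...page.results);
    // A short page means we have reached the end
    if (page.results.length < limit) break;
    offset += limit;
  }
  return all;
}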
Cursor-Based Pagination
Modern APIs prefer cursors. Instead of calculating offsets, the API returns an opaque token representing your position in the dataset. You pass this token to get the next page.
{
"data": [...],
"paging": {
"next": {
"after": "MTIzNDU2Nzg5MA=="
}
}
}
The cursor encodes information about where you are in the dataset. It might be a base64-encoded ID, a timestamp, or a combination of sort fields.
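Treat real cursors as opaque and pass them back unchanged. Purely as an illustration, the sample token above happens to be a base64-encoded ID:
// Illustration only - never parse production cursors
Buffer.from('MTIzNDU2Nzg5MA==', 'base64').toString(); // "1234567890"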
Pros: Consistent performance regardless of dataset size. No duplicate or skipped records when data changes. Slack's engineering team reports cursor pagination provides 17x performance improvement over offset-based approaches.
Cons: You can't jump to arbitrary pages. Navigation is strictly forward (and sometimes backward).
Page Number Pagination
The simplest conceptually. You request page 1, page 2, page 3, and so on.
Page 1: /api/products?page=1&per_page=50
Page 2: /api/products?page=2&per_page=50
Page 3: /api/products?page=3&per_page=50
Pros: Intuitive. Works well for admin interfaces where users want to jump to specific pages.
Cons: Same performance and consistency issues as offset pagination. The database still has to count through all preceding records to find page 47.
Identifying Pagination Type from API Docs
Before building your workflow, check the API documentation. Look for:
| Indicator | Pagination Type |
|---|---|
| Parameters named offset, skip, start | Offset/Limit |
| Parameters named cursor, after, next_token | Cursor |
| Parameters named page, page_number | Page Number |
| Response includes next_cursor or paging.next.after | Cursor |
| Response includes total_pages or page_count | Page Number |
| Response includes total_count or total | Could be any type |
Common APIs and Their Pagination Types
| API | Pagination Type | Key Parameter |
|---|---|---|
| HubSpot | Cursor | after in paging object |
| Stripe | Cursor | starting_after |
| Shopify | Cursor (Link header) | page_info |
| GitHub | Page Number (Link header) | page |
| Airtable | Offset | offset |
| Notion | Cursor | start_cursor |
| Salesforce | Cursor | nextRecordsUrl |
| Mailchimp | Offset | offset, count |
n8n's Built-in Pagination
The HTTP Request node has native pagination support. When configured correctly, it automatically fetches all pages and returns the combined results. No loops required.
Enabling Pagination in HTTP Request Node
- Add an HTTP Request node to your workflow
- Configure the basic request (URL, method, authentication)
- Scroll down and expand Options
- Click Add Option and select Pagination
- Configure the pagination settings for your API type
The node will now execute multiple requests internally, following the pagination trail until complete.
Pagination Mode: Response Contains Next URL
Use this when the API returns the complete URL for the next page somewhere in the response body.
Example API response:
{
"results": [...],
"next": "https://api.example.com/contacts?cursor=abc123"
}
n8n configuration:
Pagination Mode: Response Contains Next URL
Next URL: {{ $response.body.next }}
The expression {{ $response.body.next }} extracts the next page URL from the response. n8n follows this URL for each subsequent request until the value is empty or null.
Complete When:
Some APIs always return a next field, even on the last page (set to null). Configure when pagination should stop:
Complete When: {{ !$response.body.next }}
This stops pagination when the next field is falsy (null, undefined, empty string).
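For example, a last-page response from such an API might look like this (illustrative):
{
  "results": [...],
  "next": null
}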
Pagination Mode: Update a Parameter
Use this when you need to modify a query parameter, body parameter, or header for each subsequent request.
Query Parameter Updates
For offset pagination where you increment the offset:
Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: offset
Value: {{ $pageCount * 100 }}
The $pageCount variable starts at 0 and increments with each request. For an API expecting pages starting at 1:
Name: page
Value: {{ $pageCount + 1 }}
Body Parameter Updates
Some APIs accept pagination parameters in the request body:
Type: Body
Name: cursor
Value: {{ $response.body.next_cursor }}
The $response and $pageCount Variables
Inside pagination expressions, you have access to special variables:
| Variable | Description |
|---|---|
$response | The full response from the previous request |
$response.body | The parsed response body (JSON) |
$response.headers | Response headers object |
$response.statusCode | HTTP status code |
$pageCount | Number of pages fetched so far (starts at 0) |
Common expressions:
// Get cursor from nested object
{{ $response.body.paging.cursors.after }}
// Access array length to check for empty page
{{ $response.body.results.length > 0 }}
// Parse cursor from Link header (advanced)
{{ $response.headers.link }}
// Check if more pages exist
{{ $response.body.has_more === true }}
Setting Maximum Pages
Always set a maximum to prevent infinite loops from misconfigured pagination:
Max Pages: 100
This acts as a safety valve. If your API unexpectedly returns the same cursor repeatedly or pagination logic has a bug, the workflow stops after 100 requests instead of running forever.
Choosing the right limit:
- For known dataset sizes, calculate: max_pages = expected_records / page_size + buffer
- For unknown sizes, start with 100-500 and adjust based on observation
- Monitor execution time and adjust if workflows take too long
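For example, if you expect around 12,000 records at 100 per page, you need 120 pages; setting Max Pages to 150 leaves headroom for growth without risking a runaway loop.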
Pagination Patterns by API Type
Different APIs require different configurations. Here are production-ready patterns for common scenarios.
Pattern 1: Cursor-Based Pagination
Scenario: HubSpot Contacts API returns contacts with cursor-based pagination.
API Response:
{
"results": [
{ "id": "1", "email": "[email protected]" },
{ "id": "2", "email": "[email protected]" }
],
"paging": {
"next": {
"after": "MTIzNDU="
}
}
}
HTTP Request Configuration:
Method: GET
URL: https://api.hubapi.com/crm/v3/objects/contacts
Authentication: Header Auth (Bearer token)
Options > Pagination:
Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: after
Value: {{ $response.body.paging?.next?.after }}
Complete When: {{ !$response.body.paging?.next?.after }}
Max Pages: 200
The optional chaining (?.) prevents errors when the paging object is missing on the last page.
Pattern 2: Offset + Limit Pagination
Scenario: Airtable API using offset tokens.
API Response:
{
"records": [...],
"offset": "itrXXX"
}
HTTP Request Configuration:
Method: GET
URL: https://api.airtable.com/v0/BASE_ID/TABLE_NAME
Authentication: Header Auth (Bearer token)
Options > Pagination:
Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: offset
Value: {{ $response.body.offset }}
Complete When: {{ !$response.body.offset }}
Max Pages: 100
Note: Airtable calls it "offset" but it's actually a cursor token, not a numeric offset.
For true numeric offset pagination (like Mailchimp):
Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: offset
Value: {{ $pageCount * 100 }}
Complete When: {{ $response.body.members.length === 0 }}
Max Pages: 50
Pattern 3: Page Number Pagination
Scenario: API uses traditional page numbers.
HTTP Request Configuration:
Method: GET
URL: https://api.example.com/products
Query Parameters:
per_page: 100
Options > Pagination:
Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: page
Value: {{ $pageCount + 1 }}
Complete When: {{ $response.body.products.length === 0 }}
Max Pages: 100
If the API returns total page count:
Complete When: {{ $pageCount >= $response.body.total_pages }}
Pattern 4: Link Header Pagination
Scenario: GitHub and Shopify use RFC 5988 Link headers for pagination.
Response Header:
Link: <https://api.github.com/repos/owner/repo/issues?page=2>; rel="next",
<https://api.github.com/repos/owner/repo/issues?page=10>; rel="last"
HTTP Request Configuration:
Options > Pagination:
Pagination Mode: Response Contains Next URL
Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}
This regex extracts the URL marked with rel="next" from the Link header.
Shopify's variant works the same way; the next URL simply carries a page_info parameter instead of a page number:
Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}
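If the inline regex gets hard to maintain, you can parse the Link header in a Code node instead. A minimal sketch, assuming "Full Response" is enabled so headers arrive on the item:
// Parse an RFC 5988 Link header into rel/URL pairs
const linkHeader = $input.first().json.headers?.link || '';
const links = {};
for (const part of linkHeader.split(',')) {
  const match = part.match(/<([^>]+)>;\s*rel="([^"]+)"/);
  if (match) links[match[2]] = match[1];
}
return [{ json: { nextUrl: links.next || null, lastUrl: links.last || null } }];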
Common APIs Quick Reference
| API | Pagination Mode | Parameter | Complete When |
|---|---|---|---|
| HubSpot | Update Parameter | after = $response.body.paging?.next?.after | !$response.body.paging?.next?.after |
| Stripe | Update Parameter | starting_after = $response.body.data.at(-1)?.id | !$response.body.has_more |
| Notion | Update Parameter | start_cursor = $response.body.next_cursor | !$response.body.has_more |
| Airtable | Update Parameter | offset = $response.body.offset | !$response.body.offset |
| GitHub | Next URL | Link header regex | No next link |
| Shopify | Next URL | Link header regex | No next link |
Manual Pagination with Loops
Sometimes n8n's built-in pagination isn't enough. You need manual control when:
- Processing each page before fetching the next
- Implementing custom rate limiting between pages
- Handling complex completion conditions
- Aggregating data in specific ways
- Recovering from partial failures
Building a Pagination Loop
The basic structure uses these nodes:
Manual Trigger → Set Initial State → HTTP Request → IF (hasMore?) → (Yes) Loop Back
                                                         ↓
                                                  (No) → Continue workflow
Step 1: Initialize State
Use a Set node to establish starting values:
Fields to Set:
- currentPage: 1
- allResults: []
- hasMore: true
Step 2: Make the Request
HTTP Request node fetches one page:
URL: https://api.example.com/items?page={{ $json.currentPage }}
Step 3: Check for More Pages
IF node evaluates whether to continue:
Condition: {{ $json.hasMore }} equals true
Step 4: Update State
Merge node combines new results with accumulated results. Set node updates the page counter:
currentPage: {{ $json.currentPage + 1 }}
allResults: {{ [...$json.allResults, ...$json.newResults] }}
hasMore: {{ $json.response.has_more }}
Step 5: Loop Back
Connect the Set node output back to the HTTP Request node, creating the loop.
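If you'd rather keep the whole loop in one place, recent n8n versions expose this.helpers.httpRequest inside the Code node. A minimal sketch of the same loop against a hypothetical page-number API (URL and field names are illustrative, and the helper's availability depends on your n8n version):
// Entire pagination loop in a single Code node (hypothetical API)
const allResults = [];
let page = 1;
let hasMore = true;

while (hasMore && page <= 100) { // hard cap as a safety valve
  const body = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.example.com/items?page=${page}&per_page=100`,
    json: true,
  });
  const items = body.items || [];
  allResults.push(...items);
  // A short page means we have reached the last page
  hasMore = items.length === 100;
  page += 1;
}

return allResults.map(item => ({ json: item }));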
When to Use Manual vs Built-in
| Scenario | Use Built-in | Use Manual |
|---|---|---|
| Simple data fetching | Yes | No |
| Need all results combined | Yes | No |
| Process each page immediately | No | Yes |
| Custom delays between pages | No | Yes |
| Complex completion logic | No | Yes |
| Need to track progress | No | Yes |
| Partial failure recovery | No | Yes |
Aggregating Results Across Pages
For manual loops, use the Aggregate node to combine results from all loop iterations:
- Place Aggregate node after your processing logic
- Configure to aggregate all items
- Connect to the "done" output of your loop
Alternatively, use the Merge node with "Combine" mode to merge arrays progressively.
Combining Pagination with Rate Limits
The compound problem: You need all 10,000 records, but the API only allows 10 requests per second. Pagination without rate limiting triggers 429 errors. Rate limiting without pagination misses data.
Adding Delays Between Pages
Using Wait Node:
Add a Wait node after your HTTP Request (in manual pagination loops):
Resume: After Time Interval
Amount: 500
Unit: Milliseconds
This adds a half-second pause between each page request.
Using Built-in Pagination with Intervals:
The HTTP Request node's pagination doesn't have a built-in delay option. For rate-limited APIs with built-in pagination, you may need to:
- Use manual pagination loops with Wait nodes
- Set a smaller Max Pages value and use scheduled triggers
- Request larger page sizes to reduce total requests
Respecting Retry-After Headers
When you hit a rate limit, many APIs return a Retry-After header indicating when to retry. Handle this in manual loops:
// In a Code node after the HTTP Request (with "Full Response" enabled,
// headers are available on the item's json)
const retryAfter = $input.first().json.headers?.['retry-after'];
if (retryAfter) {
  // Wait the specified number of seconds
  return [{ json: { waitSeconds: parseInt(retryAfter, 10) } }];
}
return [{ json: { waitSeconds: 0 } }];
Then use a Wait node configured with an expression:
Amount: {{ $json.waitSeconds }}
Unit: Seconds
For comprehensive rate limit handling, see our rate limiting guide.
Memory Considerations
Fetching 100 pages of 1,000 records each means holding 100,000 items in memory. This can crash n8n.
Strategies:
- Process as you go: In manual loops, process each page immediately instead of accumulating all results
- Use sub-workflows: Break processing into chunks using sub-workflows
- Increase memory: For self-hosted n8n, increase Node.js heap size with NODE_OPTIONS=--max-old-space-size=4096
- Stream to storage: Write each page to a database or file instead of holding it in memory
For large dataset handling, see our batch processing guide.
Error Handling During Pagination
Pagination failures are particularly frustrating. You successfully fetch 47 pages, then page 48 fails. Starting over means refetching those 47 pages.
Retry On Fail for Pagination
Enable retries in the HTTP Request node settings:
Settings:
Retry On Fail: true
Max Tries: 3
Wait Between Tries: 1000ms
This handles transient network errors and temporary API issues without failing the entire pagination.
Handling Expired Cursors
Cursors can expire. Some APIs invalidate cursors after a certain time period. If you're paginating slowly (due to rate limits or processing time), you might get an error on page 50 because the cursor has expired.
Prevention strategies:
- Paginate as quickly as rate limits allow
- Use larger page sizes to reduce total requests
- Store progress checkpoints (discussed below)
Recovery:
If a cursor expires, you typically need to start over. There's no way to resume from an expired cursor since the API has discarded that position.
Saving Progress with Workflow Static Data
For long pagination operations, save progress so you can resume after failures:
// In a Code node within your pagination loop
const staticData = $getWorkflowStaticData('global');
// Save last successful cursor
staticData.lastCursor = $json.nextCursor;
staticData.processedCount = (staticData.processedCount || 0) + $json.results.length;
return $input.all();
On workflow start, check for saved state:
const staticData = $getWorkflowStaticData('global');
const startCursor = staticData.lastCursor || null;
return [{ cursor: startCursor }];
After successful completion, clear the saved state:
const staticData = $getWorkflowStaticData('global');
delete staticData.lastCursor;
delete staticData.processedCount;
Error Isolation
In manual pagination loops, configure individual nodes with "Continue On Fail" to prevent one bad page from stopping everything:
- Select the HTTP Request node
- Open Settings
- Enable "Continue On Fail"
Then add error handling logic to log failed pages and continue:
// In a Code node after the HTTP Request
if ($json.error) {
  // Log the failed page
  console.log(`Failed to fetch page: ${$json.error}`);
  // Return an empty result set for this page and keep going
  return [{ json: { results: [], error: true } }];
}
return $input.all();
Performance Optimization
When fetching thousands of records, small inefficiencies compound into major problems.
Choosing the Right Page Size
| Page Size | Pros | Cons |
|---|---|---|
| Small (10-50) | Lower memory per request, easier debugging | More requests, higher rate limit risk |
| Medium (100-500) | Balanced performance | Standard choice for most APIs |
| Large (1000+) | Fewer requests, faster completion | High memory usage, longer individual requests |
General guidance:
- Use the maximum page size the API allows
- Reduce page size if you hit memory issues
- Consider your rate limits: fewer large requests may be better than many small ones
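For example, fetching 10,000 records at 250 per page takes 40 requests; at 50 per page it takes 200, five times the requests and five times the exposure to rate limits.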
Processing Pages in Parallel
When it's safe:
- Each pageâs processing is independent
- No rate limits (or you can handle them)
- Order doesn't matter
Implementation:
Instead of processing sequentially, split pages across parallel branches:
Split In Batches (page 1-25) → Process
Split In Batches (page 26-50) → Process
...
Warning: Parallel pagination of the same API endpoint is risky. You might trigger rate limits or get inconsistent data. Only parallelize the processing of already-fetched pages.
Memory Management
For very large datasets:
- Don't accumulate: Process each page immediately and discard
- Use binary data mode: For files, stream directly instead of loading into memory
- Chunk your workflow: Use sub-workflows that execute in separate contexts
- Increase limits: Self-hosted users can increase Node.js heap: NODE_OPTIONS=--max-old-space-size=8192
For data transformation techniques that minimize memory usage, see our data transformation guide.
Real-World Examples
Example 1: Sync All Shopify Products
Goal: Fetch all products from Shopify for inventory sync.
Challenge: Shopify uses cursor pagination via Link headers.
Method: GET
URL: https://{{ $credentials.shopDomain }}.myshopify.com/admin/api/2024-01/products.json
Authentication: Header Auth (X-Shopify-Access-Token)
Query Parameters:
limit: 250
Options > Pagination:
Pagination Mode: Response Contains Next URL
Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}
Max Pages: 100
Example 2: Export All Airtable Records
Goal: Backup an entire Airtable base.
Method: GET
URL: https://api.airtable.com/v0/{{ $json.baseId }}/{{ $json.tableName }}
Authentication: Header Auth (Bearer token)
Query Parameters:
pageSize: 100
Options > Pagination:
Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: offset
Value: {{ $response.body.offset }}
Complete When: {{ !$response.body.offset }}
Max Pages: 500
Example 3: GitHub Repository Stars
Goal: Get all users who starred a popular repository.
Method: GET
URL: https://api.github.com/repos/{{ $json.owner }}/{{ $json.repo }}/stargazers
Authentication: Header Auth (Bearer token)
Query Parameters:
per_page: 100
Options > Pagination:
Pagination Mode: Response Contains Next URL
Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}
Max Pages: 100
Example 4: Complete Importable Workflow
Here's a complete workflow you can import directly into n8n. It demonstrates cursor-based pagination with HubSpot contacts, including error handling and result aggregation.
To import: Copy the JSON below, open n8n, press Ctrl+V (or Cmd+V on Mac) to paste and import.
{
"name": "HubSpot Contacts - Full Pagination Example",
"nodes": [
{
"parameters": {},
"id": "trigger-1",
"name": "Manual Trigger",
"type": "n8n-nodes-base.manualTrigger",
"typeVersion": 1,
"position": [250, 300]
},
{
"parameters": {
"method": "GET",
"url": "https://api.hubapi.com/crm/v3/objects/contacts",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth",
"sendQuery": true,
"queryParameters": {
"parameters": [
{
"name": "limit",
"value": "100"
}
]
},
"options": {
"pagination": {
"paginationMode": "updateAParameterInEachRequest",
"parameters": {
"parameters": [
{
"type": "query",
"name": "after",
"value": "={{ $response.body.paging?.next?.after }}"
}
]
},
"paginationCompleteWhen": "receiveSpecificStatusCodes",
"statusCodesWhenComplete": "",
"completeExpression": "={{ !$response.body.paging?.next?.after }}",
"maxRequests": 100
}
}
},
"id": "http-1",
"name": "Get All Contacts",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [450, 300],
"credentials": {
"httpHeaderAuth": {
"id": "YOUR_CREDENTIAL_ID",
"name": "HubSpot API Key"
}
}
},
{
"parameters": {
"aggregate": "aggregateAllItemData",
"destinationFieldName": "allContacts",
"include": "specifiedFields",
"fieldsToInclude": {
"fields": [
{ "fieldName": "results" }
]
}
},
"id": "aggregate-1",
"name": "Combine All Pages",
"type": "n8n-nodes-base.aggregate",
"typeVersion": 1,
"position": [650, 300]
},
{
"parameters": {
"jsCode": "// Flatten results from all pages into single array\nconst allPages = $input.first().json.allContacts;\nconst contacts = allPages.flatMap(page => page.results || []);\n\nreturn [{\n json: {\n totalContacts: contacts.length,\n contacts: contacts\n }\n}];"
},
"id": "code-1",
"name": "Flatten Results",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [850, 300]
}
],
"connections": {
"Manual Trigger": {
"main": [[{ "node": "Get All Contacts", "type": "main", "index": 0 }]]
},
"Get All Contacts": {
"main": [[{ "node": "Combine All Pages", "type": "main", "index": 0 }]]
},
"Combine All Pages": {
"main": [[{ "node": "Flatten Results", "type": "main", "index": 0 }]]
}
},
"settings": { "executionOrder": "v1" }
}
After importing:
- Update the credential reference to your HubSpot API key
- Test with a small limit value first (e.g., 10)
- Check the "Flatten Results" output to verify all contacts are combined
Incremental Sync: Fetch Only New Records
Fetching all 10,000 records every hour is wasteful. Most production workflows need incremental sync: fetch only records created or modified since the last run.
Why Incremental Sync Matters
| Approach | API Calls (10k records, hourly) | Data Transferred | Rate Limit Risk |
|---|---|---|---|
| Full sync every run | 100 calls/hour | 10k records/hour | High |
| Incremental sync | 1-5 calls/hour | 10-50 records/hour | Low |
Beyond efficiency, incremental sync:
- Reduces API costs (many APIs charge per request)
- Minimizes rate limit issues
- Processes faster with less memory
- Enables near-real-time sync with frequent runs
The Incremental Sync Pattern
The pattern requires:
- Store the last sync timestamp after each successful run
- Use that timestamp to filter the next request
- Handle the first run when no previous timestamp exists
Implementation with workflowStaticData
Step 1: Get Last Sync Time (Code Node)
// Get stored timestamp or default to 24 hours ago for first run
const staticData = $getWorkflowStaticData('global');
const lastSync = staticData.lastSyncTime || new Date(Date.now() - 86400000).toISOString();
return [{
json: {
lastSyncTime: lastSync,
isFirstRun: !staticData.lastSyncTime
}
}];
Step 2: Configure HTTP Request with Date Filter
Most APIs support filtering by date. Common parameter names:
| API | Parameter | Format |
|---|---|---|
| HubSpot | filterGroups with lastmodifieddate | Unix timestamp (ms) |
| Salesforce | LastModifiedDate in SOQL | ISO 8601 |
| Stripe | created[gte] | Unix timestamp |
| Airtable | filterByFormula | ISO 8601 |
| Notion | filter.timestamp | ISO 8601 |
Example: HubSpot with date filter
URL: https://api.hubapi.com/crm/v3/objects/contacts/search
Method: POST
Body (JSON):
{
"filterGroups": [{
"filters": [{
"propertyName": "lastmodifieddate",
"operator": "GTE",
"value": "{{ new Date($json.lastSyncTime).getTime() }}"
}]
}],
"limit": 100
}
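Because this is a POST endpoint, pagination moves into the request body: the response still carries paging.next.after, and you send it back as a body field. A sketch using the body-parameter mode shown earlier:
Options > Pagination:
Pagination Mode: Update a Parameter in Each Request
Type: Body
Name: after
Value: {{ $response.body.paging?.next?.after }}
Complete When: {{ !$response.body.paging?.next?.after }}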
Step 3: Save New Sync Time (Code Node at End)
// Only save if we successfully processed records
const staticData = $getWorkflowStaticData('global');
// Save current time as the new sync point
staticData.lastSyncTime = new Date().toISOString();
// Optionally track stats
staticData.lastRunRecordCount = $input.all().length;
staticData.lastRunTime = new Date().toISOString();
return $input.all();
Handling Edge Cases
First Run Detection:
const staticData = $getWorkflowStaticData('global');
if (!staticData.lastSyncTime) {
// First run - fetch last 7 days or use a reasonable default
return [{
json: {
filterDate: new Date(Date.now() - 7 * 86400000).toISOString(),
isFirstRun: true
}
}];
}
return [{
json: {
filterDate: staticData.lastSyncTime,
isFirstRun: false
}
}];
Overlapping Records:
When syncing by modified_date, records modified exactly at your sync timestamp might be missed or duplicated. Add a small buffer:
// Subtract 1 minute from last sync to ensure overlap
const staticData = $getWorkflowStaticData('global');
const bufferMs = 60000;
const filterDate = new Date(new Date(staticData.lastSyncTime).getTime() - bufferMs);
Then use the Remove Duplicates node to handle any duplicates from the overlap.
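If you'd rather dedupe in code, a minimal Code node sketch keyed on a record id (the id field name is illustrative):
// Keep only the first occurrence of each record id
const seen = new Set();
const unique = [];
for (const item of $input.all()) {
  const key = item.json.id;
  if (!seen.has(key)) {
    seen.add(key);
    unique.push(item);
  }
}
return unique;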
Failed Runs:
Don't update the sync timestamp if the workflow fails:
// In your final success node
const staticData = $getWorkflowStaticData('global');
// Only update on success
if ($input.all().length > 0 || $json.successfullyProcessed) {
staticData.lastSyncTime = new Date().toISOString();
}
Complete Incremental Sync Workflow Structure
Schedule Trigger (every 15 min)
  ↓
Get Last Sync Time (Code)
  ↓
HTTP Request (with date filter + pagination)
  ↓
Process Records
  ↓
Save New Sync Time (Code)
  ↓
[Optional] Send Summary Notification
This pattern works with any paginated API that supports date filtering.
Troubleshooting Common Pagination Errors
When pagination fails, error messages can be cryptic. Here's how to diagnose and fix the most common issues.
Error: "Cannot read property 'after' of undefined"
Cause: Your expression tries to access a nested property that doesnât exist on some responses.
Example problematic expression:
{{ $response.body.paging.next.after }}
Solution: Use optional chaining:
{{ $response.body.paging?.next?.after }}
This returns undefined instead of throwing an error when paging or next is missing.
Error: "Pagination stopped unexpectedly"
Symptoms: Workflow completes without errors but only fetched a few pages.
Diagnosis checklist:
- Check Max Pages setting - Is it set too low?
- Test Complete When expression - Add a Set node to output the expression value: {{ !$response.body.paging?.next?.after }}. If this returns true on page 2 when you expect 10 pages, your expression is wrong.
- Inspect actual response - Enable "Full Response" and check what the API actually returns on the "last" page.
Common fix: The API might use a different field name than documented. Check for variations like hasMore, has_more, moreResults, nextPageToken.
Error: "Request failed with status code 400"
Cause: The pagination parameter value is malformed or the API rejects it.
Common causes:
- Cursor encoding issues - Some cursors contain special characters that need URL encoding
- Wrong parameter type - Sending string when API expects integer
- Expired cursor - Cursor tokens often expire after minutes/hours
Solution for encoding:
{{ encodeURIComponent($response.body.nextCursor) }}
Error: "Request failed with status code 429"
Cause: Rate limit exceeded during pagination.
Solutions:
- Add delays - Use manual pagination with Wait nodes
- Reduce page size - Fewer items per page means more requests, but spread over time
- Implement exponential backoff - See our rate limiting guide
Error: "JavaScript heap out of memory"
Cause: Accumulated paginated data exceeds available memory.
Solutions:
- Process pages immediately - Don't accumulate all results
- Increase Node.js memory - NODE_OPTIONS=--max-old-space-size=4096
- Use sub-workflows - Each sub-workflow gets fresh memory
- Stream to database - Write each page to storage instead of holding in memory
Error: "Execution timeout"
Cause: Pagination takes longer than the configured timeout.
Solutions:
- Increase timeout - In workflow settings, increase execution timeout
- Use scheduled chunks - Instead of fetching all at once, fetch portions on a schedule
- Check for infinite loops - Verify your Complete When expression actually triggers
Debugging Checklist
When pagination isn't working, run through this checklist:
| Check | How to Verify |
|---|---|
| API returns pagination data | Make single request, inspect response for cursor/next fields |
| Expression extracts value correctly | Add Set node with {{ $response.body.your.path }} |
| Complete When triggers correctly | Test expression returns false on page 1, true on last page |
| Max Pages allows enough requests | Calculate: total_records / page_size = pages needed |
| No rate limiting | Check for 429 errors in execution log |
| Memory sufficient | Monitor n8n process memory during execution |
Getting Expert Help
Pagination logic can get complex, especially when combined with rate limits, error handling, and large dataset requirements. If you're building mission-critical data pipelines:
- Our workflow development services can build production-ready pagination solutions
- Consulting packages help optimize existing workflows
- Use our workflow debugger tool to identify issues in your pagination logic
Frequently Asked Questions
Why does my pagination stop before fetching all records?
The most common causes:
- Wrong "Complete When" expression: Your condition evaluates to true before all pages are fetched. Test your expression against actual API responses.
- Max Pages too low: You hit the configured maximum before exhausting the data. Increase the Max Pages setting.
- Empty page triggered completion: Some APIs return empty arrays before the true end. Check for has_more fields instead of relying on empty results alone.
- Cursor format mismatch: The cursor value isn't being extracted correctly. Enable "Full Response" temporarily to inspect the actual response structure.
- Authentication expiring: Long pagination operations might outlive your access token. Implement token refresh logic.
How do I handle APIs that return a total count vs just hasMore?
For APIs returning total_count with a fixed page size (100 in this example):
Complete When: {{ ($pageCount + 1) * 100 >= $response.body.total_count }}
This calculates whether you've fetched all records based on the total count.
For APIs returning total_pages:
Complete When: {{ $pageCount + 1 >= $response.body.total_pages }}
Remember that $pageCount starts at 0, so the first page is page 0 internally.
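For example, with total_pages of 3, the expression becomes true once $pageCount reaches 2 (the third page), since 2 + 1 >= 3.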
Can I paginate through multiple APIs in the same workflow?
Yes. Use separate HTTP Request nodes for each API, each with its own pagination configuration. The nodes execute independently.
For sequential multi-API pagination (API B depends on API A results):
HTTP Request (API A with pagination) → Process → HTTP Request (API B with pagination)
For parallel multi-API pagination:
Start → Split → HTTP Request (API A) → Merge
           └──→ HTTP Request (API B) ──┘
What's the maximum number of pages n8n can handle?
There's no hard limit, but practical constraints apply:
- Memory: Each page consumes RAM. At some point, you'll exhaust available memory.
- Execution time: Very long operations risk timeouts, especially with webhook triggers.
- Rate limits: More pages mean more requests and higher rate limit risk.
For extremely large datasets (millions of records), consider:
- Processing in scheduled chunks rather than all at once
- Using incremental sync with date filters instead of full fetches
- Streaming directly to a database instead of holding in memory
How do I debug pagination that isn't working correctly?
- Test without pagination first: Make a single request and inspect the response structure. Verify your cursor/offset field exists where you expect it.
- Enable Full Response: In HTTP Request options, enable "Full Response" to see headers and status codes alongside the body.
- Check expression output: Use a Set node after HTTP Request with expressions like {{ $response.body.paging }} to see exactly what values your pagination expressions receive.
- Add logging: In manual loops, add Set nodes that capture the current page number, cursor value, and result count for debugging.
- Start with Max Pages: 2: Limit to two pages initially. If it works, the logic is correct and you can increase the limit.
- Compare with API docs: Some APIs change their pagination format between versions. Verify you're using the correct field names for your API version.
For persistent debugging challenges, our workflow debugger tool helps identify issues across your entire workflow execution.