n8n API Pagination: Fetch All Your Data Without Missing a Single Record

• Logic Workflow Team

#n8n #pagination #API #HTTPRequest #automation #tutorial

Your workflow fetched 100 contacts. The API has 10,000. You built an automation that syncs customer data every hour. It runs without errors. Reports look good. Then someone checks the source system and discovers you’ve been missing 99% of your records.

This happens more than you’d think. APIs don’t dump their entire database in a single response. They paginate. They return a subset of records and provide a way to request the next chunk. If your workflow doesn’t follow that pagination trail, you’re working with incomplete data.

The Hidden Data Problem

Most API integrations start simple. You make an HTTP request, get some JSON back, and move on. The problem emerges when your data grows. That CRM with 50 contacts becomes 5,000. The product catalog expands from 200 items to 20,000. The API keeps returning the same first 100 records, and your workflow keeps processing them like nothing’s wrong.

Pagination isn’t optional. It’s table stakes for production-ready automations. Without proper pagination handling, you’re building workflows that silently fail the moment your data exceeds the API’s default page size.

What You’ll Learn

  • How the three main pagination types work (offset, cursor, page number) and how to identify which one an API uses
  • Using n8n’s built-in pagination feature in the HTTP Request node to automatically fetch all pages
  • Configuring pagination for different API patterns with real expressions you can copy
  • Building manual pagination loops for complex scenarios
  • Combining pagination with rate limiting to avoid 429 errors
  • Error handling strategies when pagination fails mid-stream
  • Performance optimization for workflows that fetch thousands of records

How API Pagination Works

Before configuring n8n, you need to understand what pagination actually does and how to identify which type an API uses.

Why APIs Paginate

Returning all records at once creates problems. A request for 100,000 customer records would consume massive server memory, take minutes to serialize to JSON, and overwhelm network bandwidth. The receiving client might crash trying to parse that much data.

Pagination solves this by breaking large datasets into manageable chunks. The client requests one page at a time, processes it, then requests the next. Everyone stays happy.

The Three Pagination Types

| Type | How It Works | Example Request | Best For |
| --- | --- | --- | --- |
| Offset/Limit | Skip N records, return M | ?offset=100&limit=50 | Small to medium datasets |
| Cursor | Opaque token points to position | ?cursor=eyJpZCI6MTAwfQ | Large, frequently changing data |
| Page Number | Traditional page numbering | ?page=3&per_page=50 | User-facing APIs, admin interfaces |

Offset/Limit Pagination

The oldest approach. You specify how many records to skip (offset) and how many to return (limit).

Page 1: /api/users?offset=0&limit=100   → Records 1-100
Page 2: /api/users?offset=100&limit=100 → Records 101-200
Page 3: /api/users?offset=200&limit=100 → Records 201-300

Pros: Simple to understand and implement. You can jump to any page directly.

Cons: Performance degrades on large datasets. Inserting or deleting records between requests causes items to shift, potentially duplicating or skipping records.
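
To see the mechanics outside n8n, here's a minimal sketch of the offset loop in JavaScript. The endpoint and results field are hypothetical, and it assumes an async context with global fetch (Node 18+):

// Fetch every page with offset/limit (hypothetical endpoint and field names)
const limit = 100;
let offset = 0;
const all = [];

while (true) {
  const res = await fetch(`https://api.example.com/users?offset=${offset}&limit=${limit}`);
  const body = await res.json();
  all.push(...body.results);
  if (body.results.length < limit) break; // a short page means we've reached the end
  offset += limit;
}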

Cursor-Based Pagination

Modern APIs prefer cursors. Instead of calculating offsets, the API returns an opaque token representing your position in the dataset. You pass this token to get the next page.

{
  "data": [...],
  "paging": {
    "next": {
      "after": "MTIzNDU2Nzg5MA=="
    }
  }
}

The cursor encodes information about where you are in the dataset. It might be a base64-encoded ID, a timestamp, or a combination of sort fields.
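
Because the token is often just base64, decoding it can demystify what the API tracks. Do this only out of curiosity; always treat cursors as opaque and never construct them yourself. A quick Node.js sketch using the sample cursor above:

// Peek inside an opaque cursor (for understanding only)
const cursor = 'MTIzNDU2Nzg5MA==';
console.log(Buffer.from(cursor, 'base64').toString('utf8')); // "1234567890"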

Pros: Consistent performance regardless of dataset size. No duplicate or skipped records when data changes. Slack’s engineering team reports a 17x performance improvement from switching to cursor pagination over their offset-based approach.

Cons: You can’t jump to arbitrary pages. Navigation is strictly forward (and sometimes backward).
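
The loop shape differs from the offset version only in what carries over between requests: the cursor from each response feeds the next request. A minimal sketch against a hypothetical endpoint, again assuming an async context with global fetch:

// Fetch every page by following cursors (hypothetical endpoint and response shape)
let after;
const all = [];

do {
  const url = new URL('https://api.example.com/contacts');
  if (after) url.searchParams.set('after', after);
  const body = await (await fetch(url)).json();
  all.push(...body.data);
  after = body.paging?.next?.after; // undefined on the last page ends the loop
} while (after);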

Page Number Pagination

The simplest conceptually. You request page 1, page 2, page 3, and so on.

Page 1: /api/products?page=1&per_page=50
Page 2: /api/products?page=2&per_page=50
Page 3: /api/products?page=3&per_page=50

Pros: Intuitive. Works well for admin interfaces where users want to jump to specific pages.

Cons: Same performance and consistency issues as offset pagination. The database still has to count through all preceding records to find page 47.

Identifying Pagination Type from API Docs

Before building your workflow, check the API documentation. Look for:

| Indicator | Pagination Type |
| --- | --- |
| Parameters named offset, skip, start | Offset/Limit |
| Parameters named cursor, after, next_token | Cursor |
| Parameters named page, page_number | Page Number |
| Response includes next_cursor or paging.next.after | Cursor |
| Response includes total_pages or page_count | Page Number |
| Response includes total_count or total | Could be any type |

Common APIs and Their Pagination Types

| API | Pagination Type | Key Parameter |
| --- | --- | --- |
| HubSpot | Cursor | after in paging object |
| Stripe | Cursor | starting_after |
| Shopify | Cursor (Link header) | page_info |
| GitHub | Page Number (Link header) | page |
| Airtable | Offset | offset |
| Notion | Cursor | start_cursor |
| Salesforce | Cursor | nextRecordsUrl |
| Mailchimp | Offset | offset, count |

n8n’s Built-in Pagination

The HTTP Request node has native pagination support. When configured correctly, it automatically fetches all pages and returns the combined results. No loops required.

Enabling Pagination in HTTP Request Node

  1. Add an HTTP Request node to your workflow
  2. Configure the basic request (URL, method, authentication)
  3. Scroll down and expand Options
  4. Click Add Option and select Pagination
  5. Configure the pagination settings for your API type

The node will now execute multiple requests internally, following the pagination trail until complete.

Pagination Mode: Response Contains Next URL

Use this when the API returns the complete URL for the next page somewhere in the response body.

Example API response:

{
  "results": [...],
  "next": "https://api.example.com/contacts?cursor=abc123"
}

n8n configuration:

Pagination Mode: Response Contains Next URL
Next URL: {{ $response.body.next }}

The expression {{ $response.body.next }} extracts the next page URL from the response. n8n follows this URL for each subsequent request until the value is empty or null.

Complete When:

Some APIs always return a next field, even on the last page (set to null). Configure when pagination should stop:

Complete When: {{ !$response.body.next }}

This stops pagination when the next field is falsy (null, undefined, empty string).

Pagination Mode: Update a Parameter

Use this when you need to modify a query parameter, body parameter, or header for each subsequent request.

Query Parameter Updates

For offset pagination where you increment the offset:

Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: offset
Value: {{ $pageCount * 100 }}

The $pageCount variable starts at 0 and increments with each request. For an API expecting pages starting at 1:

Name: page
Value: {{ $pageCount + 1 }}

Body Parameter Updates

Some APIs accept pagination parameters in the request body:

Type: Body
Name: cursor
Value: {{ $response.body.next_cursor }}

The $response and $pageCount Variables

Inside pagination expressions, you have access to special variables:

| Variable | Description |
| --- | --- |
| $response | The full response from the previous request |
| $response.body | The parsed response body (JSON) |
| $response.headers | Response headers object |
| $response.statusCode | HTTP status code |
| $pageCount | Number of pages fetched so far (starts at 0) |

Common expressions:

// Get cursor from nested object
{{ $response.body.paging.cursors.after }}

// Access array length to check for empty page
{{ $response.body.results.length > 0 }}

// Parse cursor from Link header (advanced)
{{ $response.headers.link }}

// Check if more pages exist
{{ $response.body.has_more === true }}

Setting Maximum Pages

Always set a maximum to prevent infinite loops from misconfigured pagination:

Max Pages: 100

This acts as a safety valve. If your API unexpectedly returns the same cursor repeatedly or pagination logic has a bug, the workflow stops after 100 requests instead of running forever.

Choosing the right limit:

  • For known dataset sizes, calculate: max_pages = expected_records / page_size + buffer (e.g., 12,000 records at 100 per page needs 120 pages; set 130 for headroom)
  • For unknown sizes, start with 100-500 and adjust based on observation
  • Monitor execution time and adjust if workflows take too long

Pagination Patterns by API Type

Different APIs require different configurations. Here are production-ready patterns for common scenarios.

Pattern 1: Cursor-Based Pagination

Scenario: HubSpot Contacts API returns contacts with cursor-based pagination.

API Response:

{
  "results": [
    { "id": "1", "email": "[email protected]" },
    { "id": "2", "email": "[email protected]" }
  ],
  "paging": {
    "next": {
      "after": "MTIzNDU="
    }
  }
}

HTTP Request Configuration:

Method: GET
URL: https://api.hubapi.com/crm/v3/objects/contacts
Authentication: Header Auth (Bearer token)

Options > Pagination:
  Pagination Mode: Update a Parameter in Each Request
  Type: Query
  Name: after
  Value: {{ $response.body.paging?.next?.after }}
  Complete When: {{ !$response.body.paging?.next?.after }}
  Max Pages: 200

The optional chaining (?.) prevents errors when the paging object is missing on the last page.

Pattern 2: Offset + Limit Pagination

Scenario: Airtable API using offset tokens.

API Response:

{
  "records": [...],
  "offset": "itrXXX"
}

HTTP Request Configuration:

Method: GET
URL: https://api.airtable.com/v0/BASE_ID/TABLE_NAME
Authentication: Header Auth (Bearer token)

Options > Pagination:
  Pagination Mode: Update a Parameter in Each Request
  Type: Query
  Name: offset
  Value: {{ $response.body.offset }}
  Complete When: {{ !$response.body.offset }}
  Max Pages: 100

Note: Airtable calls it “offset” but it’s actually a cursor token, not a numeric offset.

For true numeric offset pagination (like Mailchimp):

Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: offset
Value: {{ $pageCount * 100 }}
Complete When: {{ $response.body.members.length === 0 }}
Max Pages: 50

Pattern 3: Page Number Pagination

Scenario: API uses traditional page numbers.

HTTP Request Configuration:

Method: GET
URL: https://api.example.com/products

Query Parameters:
  per_page: 100

Options > Pagination:
  Pagination Mode: Update a Parameter in Each Request
  Type: Query
  Name: page
  Value: {{ $pageCount + 1 }}
  Complete When: {{ $response.body.products.length === 0 }}
  Max Pages: 100

If the API returns total page count:

Complete When: {{ $pageCount >= $response.body.total_pages }}

Pattern 4: Link Header Pagination

Scenario: GitHub and Shopify use RFC 5988 Link headers for pagination.

Response Header:

Link: <https://api.github.com/repos/owner/repo/issues?page=2>; rel="next",
      <https://api.github.com/repos/owner/repo/issues?page=10>; rel="last"

HTTP Request Configuration:

Options > Pagination:
  Pagination Mode: Response Contains Next URL
  Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}

This regex extracts the URL marked with rel="next" from the Link header.

For Shopify’s variant (using page_info):

Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}
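
To sanity-check that regex outside n8n, here's the same extraction as a standalone function; the header string is the GitHub example from above:

// Extract the rel="next" URL from an RFC 5988 Link header
function nextLink(linkHeader) {
  const match = linkHeader?.match(/<([^>]+)>;\s*rel="next"/);
  return match ? match[1] : null;
}

const header = '<https://api.github.com/repos/owner/repo/issues?page=2>; rel="next", ' +
  '<https://api.github.com/repos/owner/repo/issues?page=10>; rel="last"';
console.log(nextLink(header)); // https://api.github.com/repos/owner/repo/issues?page=2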

Common APIs Quick Reference

| API | Pagination Mode | Parameter | Complete When |
| --- | --- | --- | --- |
| HubSpot | Update Parameter | after = $response.body.paging?.next?.after | !$response.body.paging?.next?.after |
| Stripe | Update Parameter | starting_after = $response.body.data.at(-1)?.id | !$response.body.has_more |
| Notion | Update Parameter | start_cursor = $response.body.next_cursor | !$response.body.has_more |
| Airtable | Update Parameter | offset = $response.body.offset | !$response.body.offset |
| GitHub | Next URL | Link header regex | No next link |
| Shopify | Next URL | Link header regex | No next link |

Manual Pagination with Loops

Sometimes n8n’s built-in pagination isn’t enough. You need manual control when:

  • Processing each page before fetching the next
  • Implementing custom rate limiting between pages
  • Handling complex completion conditions
  • Aggregating data in specific ways
  • Recovering from partial failures

Building a Pagination Loop

The basic structure uses these nodes:

Manual Trigger → Set Initial State → HTTP Request → IF (hasMore?) → Loop Back
                                            ↓
                                     (No) → Continue workflow

Step 1: Initialize State

Use a Set node to establish starting values:

Fields to Set:
  - currentPage: 1
  - allResults: []
  - hasMore: true

Step 2: Make the Request

HTTP Request node fetches one page:

URL: https://api.example.com/items?page={{ $json.currentPage }}

Step 3: Check for More Pages

IF node evaluates whether to continue:

Condition: {{ $json.hasMore }} equals true

Step 4: Update State

Merge node combines new results with accumulated results. Set node updates the page counter:

currentPage: {{ $json.currentPage + 1 }}
allResults: {{ [...$json.allResults, ...$json.newResults] }}
hasMore: {{ $json.response.has_more }}

Step 5: Loop Back

Connect the Set node output back to the HTTP Request node, creating the loop.
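
If the node graph feels unwieldy, the whole loop can also live in a single Code node. This sketch assumes your n8n version exposes this.helpers.httpRequest inside the Code node (check your version's documentation) and uses a hypothetical page-number API:

// Entire pagination loop in one Code node (hypothetical API)
const allResults = [];
let page = 1;
let hasMore = true;

while (hasMore && page <= 100) { // hard cap as a safety valve
  const body = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.example.com/items?page=${page}&per_page=100`,
    json: true,
  });
  allResults.push(...body.items);
  hasMore = body.items.length === 100; // a short page signals the last page
  page += 1;
}

return allResults.map(item => ({ json: item }));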

When to Use Manual vs Built-in

| Scenario | Use Built-in | Use Manual |
| --- | --- | --- |
| Simple data fetching | Yes | No |
| Need all results combined | Yes | No |
| Process each page immediately | No | Yes |
| Custom delays between pages | No | Yes |
| Complex completion logic | No | Yes |
| Need to track progress | No | Yes |
| Partial failure recovery | No | Yes |

Aggregating Results Across Pages

For manual loops, use the Aggregate node to combine results from all loop iterations:

  1. Place Aggregate node after your processing logic
  2. Configure to aggregate all items
  3. Connect to the “done” output of your loop

Alternatively, use the Merge node with “Combine” mode to merge arrays progressively.

Combining Pagination with Rate Limits

The compound problem: You need all 10,000 records, but the API only allows 10 requests per second. Pagination without rate limiting triggers 429 errors. Rate limiting without pagination misses data.

Adding Delays Between Pages

Using Wait Node:

Add a Wait node after your HTTP Request (in manual pagination loops):

Resume: After Time Interval
Amount: 500
Unit: Milliseconds

This adds a half-second pause between each page request.

Using Built-in Pagination with Intervals:

Recent n8n versions include an Interval Between Requests (ms) option in the pagination settings, which is the simplest way to pace built-in pagination; check whether your version has it. If it doesn’t, you may need to:

  1. Use manual pagination loops with Wait nodes
  2. Set a smaller Max Pages value and use scheduled triggers
  3. Request larger page sizes to reduce total requests

Respecting Retry-After Headers

When you hit a rate limit, many APIs return a Retry-After header indicating when to retry. Handle this in manual loops:

// In a Code node after the HTTP Request
// (enable "Full Response" on the HTTP Request node so headers appear in the item data)
const headers = $input.first().json.headers || {};
const retryAfter = headers['retry-after'];
if (retryAfter) {
  // Wait the specified number of seconds before the next request
  return [{ json: { waitSeconds: parseInt(retryAfter, 10) } }];
}
return [{ json: { waitSeconds: 0 } }];

Then use a Wait node configured with an expression:

Amount: {{ $json.waitSeconds }}
Unit: Seconds
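
If the API sends no Retry-After at all, a common fallback is exponential backoff: double the wait after each consecutive 429. A sketch of that calculation for the same Code-node-plus-Wait-node setup (the attempt field is an illustrative name you'd carry through the loop):

// Exponential backoff fallback when Retry-After is absent (illustrative field names)
const attempt = $input.first().json.attempt || 0;
const waitMs = Math.min(1000 * 2 ** attempt, 60000); // 1s, 2s, 4s... capped at 60s
return [{ json: { waitSeconds: waitMs / 1000, attempt: attempt + 1 } }];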

For comprehensive rate limit handling, see our rate limiting guide.

Memory Considerations

Fetching 100 pages of 1,000 records each means holding 100,000 items in memory. This can crash n8n.

Strategies:

  1. Process as you go: In manual loops, process each page immediately instead of accumulating all results (see the sketch after this list)
  2. Use sub-workflows: Break processing into chunks using sub-workflows
  3. Increase memory: For self-hosted n8n, increase Node.js heap size with NODE_OPTIONS=--max-old-space-size=4096
  4. Stream to storage: Write each page to a database or file instead of holding in memory
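
Here's what strategy 1 can look like in a Code node placed right after the HTTP Request inside a manual loop; the field names are illustrative:

// "Process as you go": keep only the fields you need from each page
const page = $input.first().json;
const slim = (page.results || []).map(r => ({ id: r.id, email: r.email }));

// Return lightweight items so the full page payload can be garbage-collected
return slim.map(item => ({ json: item }));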

For large dataset handling, see our batch processing guide.

Error Handling During Pagination

Pagination failures are particularly frustrating. You successfully fetch 47 pages, then page 48 fails. Starting over means refetching those 47 pages.

Retry On Fail for Pagination

Enable retries in the HTTP Request node settings:

Settings:
  Retry On Fail: true
  Max Tries: 3
  Wait Between Tries: 1000ms

This handles transient network errors and temporary API issues without failing the entire pagination.

Handling Expired Cursors

Cursors can expire: many APIs invalidate them after a set time window. If you’re paginating slowly (due to rate limits or processing time), page 50 might fail because its cursor has already expired.

Prevention strategies:

  1. Paginate as quickly as rate limits allow
  2. Use larger page sizes to reduce total requests
  3. Store progress checkpoints (discussed below)

Recovery:

If a cursor expires, you typically need to start over. There’s no way to resume from an expired cursor since the API has discarded that position.

Saving Progress with Workflow Static Data

For long pagination operations, save progress so you can resume after failures:

// In a Code node within your pagination loop
const staticData = $getWorkflowStaticData('global');

// Save last successful cursor
staticData.lastCursor = $json.nextCursor;
staticData.processedCount = (staticData.processedCount || 0) + $json.results.length;

return $input.all();

On workflow start, check for saved state:

const staticData = $getWorkflowStaticData('global');
const startCursor = staticData.lastCursor || null;

return [{ json: { cursor: startCursor } }];

After successful completion, clear the saved state:

const staticData = $getWorkflowStaticData('global');
delete staticData.lastCursor;
delete staticData.processedCount;

return $input.all();

Error Isolation

In manual pagination loops, configure individual nodes with “Continue On Fail” to prevent one bad page from stopping everything:

  1. Select the HTTP Request node
  2. Open Settings
  3. Enable “Continue On Fail”

Then add error handling logic to log failed pages and continue:

// In a Code node after the HTTP Request
if ($json.error) {
  // Log the failed page and keep going
  console.log(`Failed to fetch page: ${$json.error}`);
  // Return an empty result set for this page so the loop continues
  return [{ json: { results: [], error: true } }];
}
return $input.all();

Performance Optimization

When fetching thousands of records, small inefficiencies compound into major problems.

Choosing the Right Page Size

| Page Size | Pros | Cons |
| --- | --- | --- |
| Small (10-50) | Lower memory per request, easier debugging | More requests, higher rate limit risk |
| Medium (100-500) | Balanced performance; the standard choice for most APIs | — |
| Large (1000+) | Fewer requests, faster completion | High memory usage, longer individual requests |

General guidance:

  • Use the maximum page size the API allows
  • Reduce page size if you hit memory issues
  • Consider your rate limits: fewer large requests may be better than many small ones

Processing Pages in Parallel

When it’s safe:

  • Each page’s processing is independent
  • No rate limits (or you can handle them)
  • Order doesn’t matter

Implementation:

Instead of processing sequentially, split pages across parallel branches:

Split In Batches (page 1-25) → Process
Split In Batches (page 26-50) → Process
...

Warning: Parallel pagination of the same API endpoint is risky. You might trigger rate limits or get inconsistent data. Only parallelize the processing of already-fetched pages.

Memory Management

For very large datasets:

  1. Don’t accumulate: Process each page immediately and discard
  2. Use binary data mode: For files, stream directly instead of loading into memory
  3. Chunk your workflow: Use sub-workflows that execute in separate contexts
  4. Increase limits: Self-hosted users can increase Node.js heap: NODE_OPTIONS=--max-old-space-size=8192

For data transformation techniques that minimize memory usage, see our data transformation guide.

Real-World Examples

Example 1: Sync All Shopify Products

Goal: Fetch all products from Shopify for inventory sync.

Challenge: Shopify uses cursor pagination via Link headers.

Method: GET
URL: https://{{ $json.shopDomain }}.myshopify.com/admin/api/2024-01/products.json
Authentication: Header Auth (X-Shopify-Access-Token)

Query Parameters:
  limit: 250

Options > Pagination:
  Pagination Mode: Response Contains Next URL
  Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}
  Max Pages: 100

Example 2: Export All Airtable Records

Goal: Backup an entire Airtable base.

Method: GET
URL: https://api.airtable.com/v0/{{ $json.baseId }}/{{ $json.tableName }}
Authentication: Header Auth (Bearer token)

Query Parameters:
  pageSize: 100

Options > Pagination:
  Pagination Mode: Update a Parameter in Each Request
  Type: Query
  Name: offset
  Value: {{ $response.body.offset }}
  Complete When: {{ !$response.body.offset }}
  Max Pages: 500

Example 3: GitHub Repository Stars

Goal: Get all users who starred a popular repository.

Method: GET
URL: https://api.github.com/repos/{{ $json.owner }}/{{ $json.repo }}/stargazers
Authentication: Header Auth (Bearer token)

Query Parameters:
  per_page: 100

Options > Pagination:
  Pagination Mode: Response Contains Next URL
  Next URL: {{ $response.headers.link?.match(/<([^>]+)>;\s*rel="next"/)?.[1] }}
  Max Pages: 100

Example 4: Complete Importable Workflow

Here’s a complete workflow you can import directly into n8n. It demonstrates cursor-based pagination with HubSpot contacts, including error handling and result aggregation.

To import: Copy the JSON below, open n8n, press Ctrl+V (or Cmd+V on Mac) to paste and import.

{
  "name": "HubSpot Contacts - Full Pagination Example",
  "nodes": [
    {
      "parameters": {},
      "id": "trigger-1",
      "name": "Manual Trigger",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [250, 300]
    },
    {
      "parameters": {
        "method": "GET",
        "url": "https://api.hubapi.com/crm/v3/objects/contacts",
        "authentication": "genericCredentialType",
        "genericAuthType": "httpHeaderAuth",
        "sendQuery": true,
        "queryParameters": {
          "parameters": [
            {
              "name": "limit",
              "value": "100"
            }
          ]
        },
        "options": {
          "pagination": {
            "paginationMode": "updateAParameterInEachRequest",
            "parameters": {
              "parameters": [
                {
                  "type": "query",
                  "name": "after",
                  "value": "={{ $response.body.paging?.next?.after }}"
                }
              ]
            },
            "paginationCompleteWhen": "receiveSpecificStatusCodes",
            "statusCodesWhenComplete": "",
            "completeExpression": "={{ !$response.body.paging?.next?.after }}",
            "maxRequests": 100
          }
        }
      },
      "id": "http-1",
      "name": "Get All Contacts",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [450, 300],
      "credentials": {
        "httpHeaderAuth": {
          "id": "YOUR_CREDENTIAL_ID",
          "name": "HubSpot API Key"
        }
      }
    },
    {
      "parameters": {
        "aggregate": "aggregateAllItemData",
        "destinationFieldName": "allContacts",
        "include": "specifiedFields",
        "fieldsToInclude": {
          "fields": [
            { "fieldName": "results" }
          ]
        }
      },
      "id": "aggregate-1",
      "name": "Combine All Pages",
      "type": "n8n-nodes-base.aggregate",
      "typeVersion": 1,
      "position": [650, 300]
    },
    {
      "parameters": {
        "jsCode": "// Flatten results from all pages into single array\nconst allPages = $input.first().json.allContacts;\nconst contacts = allPages.flatMap(page => page.results || []);\n\nreturn [{\n  json: {\n    totalContacts: contacts.length,\n    contacts: contacts\n  }\n}];"
      },
      "id": "code-1",
      "name": "Flatten Results",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [850, 300]
    }
  ],
  "connections": {
    "Manual Trigger": {
      "main": [[{ "node": "Get All Contacts", "type": "main", "index": 0 }]]
    },
    "Get All Contacts": {
      "main": [[{ "node": "Combine All Pages", "type": "main", "index": 0 }]]
    },
    "Combine All Pages": {
      "main": [[{ "node": "Flatten Results", "type": "main", "index": 0 }]]
    }
  },
  "settings": { "executionOrder": "v1" }
}

After importing:

  1. Update the credential reference to your HubSpot API key
  2. Test with a small limit value first (e.g., 10)
  3. Check the “Flatten Results” output to verify all contacts are combined

Incremental Sync: Fetch Only New Records

Fetching all 10,000 records every hour is wasteful. Most production workflows need incremental sync: fetch only records created or modified since the last run.

Why Incremental Sync Matters

| Approach | API Calls (10k records, hourly) | Data Transferred | Rate Limit Risk |
| --- | --- | --- | --- |
| Full sync every run | 100 calls/hour | 10k records/hour | High |
| Incremental sync | 1-5 calls/hour | 10-50 records/hour | Low |

Beyond efficiency, incremental sync:

  • Reduces API costs (many APIs charge per request)
  • Minimizes rate limit issues
  • Processes faster with less memory
  • Enables near-real-time sync with frequent runs

The Incremental Sync Pattern

The pattern requires:

  1. Store the last sync timestamp after each successful run
  2. Use that timestamp to filter the next request
  3. Handle the first run when no previous timestamp exists

Implementation with workflowStaticData

Step 1: Get Last Sync Time (Code Node)

// Get stored timestamp or default to 24 hours ago for first run
const staticData = $getWorkflowStaticData('global');
const lastSync = staticData.lastSyncTime || new Date(Date.now() - 86400000).toISOString();

return [{
  json: {
    lastSyncTime: lastSync,
    isFirstRun: !staticData.lastSyncTime
  }
}];

Step 2: Configure HTTP Request with Date Filter

Most APIs support filtering by date. Common parameter names:

| API | Parameter | Format |
| --- | --- | --- |
| HubSpot | filterGroups with lastmodifieddate | Unix timestamp (ms) |
| Salesforce | LastModifiedDate in SOQL | ISO 8601 |
| Stripe | created[gte] | Unix timestamp |
| Airtable | filterByFormula | ISO 8601 |
| Notion | filter.timestamp | ISO 8601 |

Example: HubSpot with date filter

URL: https://api.hubapi.com/crm/v3/objects/contacts/search
Method: POST
Body (JSON):
{
  "filterGroups": [{
    "filters": [{
      "propertyName": "lastmodifieddate",
      "operator": "GTE",
      "value": "{{ new Date($json.lastSyncTime).getTime() }}"
    }]
  }],
  "limit": 100
}

Step 3: Save New Sync Time (Code Node at End)

// Only save if we successfully processed records
const staticData = $getWorkflowStaticData('global');

// Save current time as the new sync point
staticData.lastSyncTime = new Date().toISOString();

// Optionally track stats
staticData.lastRunRecordCount = $input.all().length;
staticData.lastRunTime = new Date().toISOString();

return $input.all();

Handling Edge Cases

First Run Detection:

const staticData = $getWorkflowStaticData('global');

if (!staticData.lastSyncTime) {
  // First run - fetch last 7 days or use a reasonable default
  return [{
    json: {
      filterDate: new Date(Date.now() - 7 * 86400000).toISOString(),
      isFirstRun: true
    }
  }];
}

return [{
  json: {
    filterDate: staticData.lastSyncTime,
    isFirstRun: false
  }
}];

Overlapping Records:

When syncing by modified_date, records modified exactly at your sync timestamp might be missed or duplicated. Add a small buffer:

// Subtract 1 minute from last sync to ensure overlap
const bufferMs = 60000;
const filterDate = new Date(new Date(staticData.lastSyncTime).getTime() - bufferMs);

Then use the Remove Duplicates node to handle any duplicates from the overlap.
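
If you'd rather deduplicate in code, here is a Code node sketch that drops repeats by record id (the id field name is illustrative):

// Drop duplicate records by id after an overlapping incremental fetch
const seen = new Set();
return $input.all().filter(item => {
  if (seen.has(item.json.id)) return false;
  seen.add(item.json.id);
  return true;
});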

Failed Runs:

Don’t update the sync timestamp if the workflow fails:

// In your final success node
const staticData = $getWorkflowStaticData('global');

// Only update on success
if ($input.all().length > 0 || $json.successfullyProcessed) {
  staticData.lastSyncTime = new Date().toISOString();
}

return $input.all();

Complete Incremental Sync Workflow Structure

Schedule Trigger (every 15 min)
    ↓
Get Last Sync Time (Code)
    ↓
HTTP Request (with date filter + pagination)
    ↓
Process Records
    ↓
Save New Sync Time (Code)
    ↓
[Optional] Send Summary Notification

This pattern works with any paginated API that supports date filtering.

Troubleshooting Common Pagination Errors

When pagination fails, error messages can be cryptic. Here’s how to diagnose and fix the most common issues.

Error: “Cannot read property ‘after’ of undefined”

Cause: Your expression tries to access a nested property that doesn’t exist on some responses.

Example problematic expression:

{{ $response.body.paging.next.after }}

Solution: Use optional chaining:

{{ $response.body.paging?.next?.after }}

This returns undefined instead of throwing an error when paging or next is missing.

Error: “Pagination stopped unexpectedly”

Symptoms: Workflow completes without errors but only fetched a few pages.

Diagnosis checklist:

  1. Check Max Pages setting - Is it set too low?

  2. Test Complete When expression - Recreate it in a Set node, swapping $response.body for $json (downstream nodes receive the response body as $json):

    {{ !$json.paging?.next?.after }}

    If this returns true on page 2 when you expect 10 pages, your expression is wrong.

  3. Inspect actual response - Enable “Full Response” and check what the API actually returns on the “last” page.

Common fix: The API might use a different field name than documented. Check for variations like hasMore, has_more, moreResults, nextPageToken.
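
One defensive option is to check several spellings at once. Assuming your n8n version supports the nullish coalescing operator (??) in expressions, as it does the optional chaining used throughout this guide:

Complete When: {{ !($response.body.has_more ?? $response.body.hasMore ?? $response.body.moreResults ?? false) }}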

Error: “Request failed with status code 400”

Cause: The pagination parameter value is malformed or the API rejects it.

Common causes:

  1. Cursor encoding issues - Some cursors contain special characters that need URL encoding
  2. Wrong parameter type - Sending string when API expects integer
  3. Expired cursor - Cursor tokens often expire after minutes/hours

Solution for encoding:

{{ encodeURIComponent($response.body.nextCursor) }}

Error: “Request failed with status code 429”

Cause: Rate limit exceeded during pagination.

Solutions:

  1. Add delays - Use manual pagination with Wait nodes
  2. Reduce page size - Fewer items per page means more requests, but spread over time
  3. Implement exponential backoff - See our rate limiting guide

Error: “JavaScript heap out of memory”

Cause: Accumulated paginated data exceeds available memory.

Solutions:

  1. Process pages immediately - Don’t accumulate all results
  2. Increase Node.js memory - NODE_OPTIONS=--max-old-space-size=4096
  3. Use sub-workflows - Each sub-workflow gets fresh memory
  4. Stream to database - Write each page to storage instead of holding in memory

Error: “Execution timeout”

Cause: Pagination takes longer than the configured timeout.

Solutions:

  1. Increase timeout - In workflow settings, increase execution timeout
  2. Use scheduled chunks - Instead of fetching all at once, fetch portions on a schedule
  3. Check for infinite loops - Verify your Complete When expression actually triggers

Debugging Checklist

When pagination isn’t working, run through this checklist:

| Check | How to Verify |
| --- | --- |
| API returns pagination data | Make a single request and inspect the response for cursor/next fields |
| Expression extracts value correctly | Add a Set node with {{ $json.your.path }} (downstream nodes receive the response body as $json) |
| Complete When triggers correctly | Test that the expression returns false on page 1 and true on the last page |
| Max Pages allows enough requests | Calculate: total_records / page_size = pages needed |
| No rate limiting | Check for 429 errors in the execution log |
| Memory sufficient | Monitor n8n process memory during execution |

Getting Expert Help

Pagination logic can get complex, especially when combined with rate limits, error handling, and large dataset requirements. If you’re building mission-critical data pipelines, it can be worth bringing in expert help rather than debugging alone.

Frequently Asked Questions

Why does my pagination stop before fetching all records?

The most common causes:

  1. Wrong “Complete When” expression: Your condition evaluates to true before all pages are fetched. Test your expression against actual API responses.

  2. Max Pages too low: You hit the configured maximum before exhausting the data. Increase the Max Pages setting.

  3. Empty page triggered completion: Some APIs return empty arrays before the true end. Check for has_more fields instead of just empty results.

  4. Cursor format mismatch: The cursor value isn’t being extracted correctly. Enable “Full Response” temporarily to inspect the actual response structure.

  5. Authentication expiring: Long pagination operations might outlive your access token. Implement token refresh logic.

How do I handle APIs that return a total count vs just hasMore?

For APIs returning total_count and per_page:

Complete When: {{ ($pageCount + 1) * 100 >= $response.body.total_count }}

This calculates whether you’ve fetched all records based on the total count (adjust the 100 to your page size). For example, with a total_count of 1,234 and pages of 100, the condition becomes true once the thirteenth page has been fetched.

For APIs returning total_pages:

Complete When: {{ $pageCount + 1 >= $response.body.total_pages }}

Remember that $pageCount starts at 0, so the first page is page 0 internally.

Can I paginate through multiple APIs in the same workflow?

Yes. Use separate HTTP Request nodes for each API, each with its own pagination configuration. The nodes execute independently.

For sequential multi-API pagination (API B depends on API A results):

HTTP Request (API A with pagination) → Process → HTTP Request (API B with pagination)

For parallel multi-API pagination:

Start → Split → HTTP Request (API A) → Merge
            └─→ HTTP Request (API B) ───┘

What’s the maximum number of pages n8n can handle?

There’s no hard limit, but practical constraints apply:

  • Memory: Each page consumes RAM. At some point, you’ll exhaust available memory.
  • Execution time: Very long operations risk timeouts, especially with webhook triggers.
  • Rate limits: More pages mean more requests and higher rate limit risk.

For extremely large datasets (millions of records), consider:

  • Processing in scheduled chunks rather than all at once
  • Using incremental sync with date filters instead of full fetches
  • Streaming directly to a database instead of holding in memory

How do I debug pagination that isn’t working correctly?

  1. Test without pagination first: Make a single request and inspect the response structure. Verify your cursor/offset field exists where you expect it.

  2. Enable Full Response: In HTTP Request options, enable “Full Response” to see headers and status codes alongside the body.

  3. Check expression output: Add a Set node after the HTTP Request with an expression like {{ $json.paging }} to see exactly what values your pagination expressions will receive (downstream nodes get the response body as $json).

  4. Add logging: In manual loops, add Set nodes that capture the current page number, cursor value, and result count for debugging.

  5. Start with Max Pages: 2: Limit to two pages initially. If it works, the logic is correct and you can increase the limit.

  6. Compare with API docs: Some APIs change their pagination format between versions. Verify you’re using the correct field names for your API version.

For persistent debugging challenges, our workflow debugger tool helps identify issues across your entire workflow execution.
