How to handle page navigation in Notion database API calls

I’m working with Notion’s database API and running into issues getting results beyond the first page. My database has over 100 entries, but I keep getting the same initial results even when I try to use cursor-based pagination.

wget --post-data='{
    "filter": {},
    "start_cursor": "%CURSOR_VALUE_FROM_PREVIOUS_CALL%"
}' \
--header='Authorization: Bearer %TOKEN%' \
--header='Notion-Version: 2021-05-13' \
--header='Content-Type: application/json' \
'https://api.notion.com/v1/databases/%DATABASE_ID%/query' \
-O database_results.json

What’s the correct way to implement pagination to retrieve all records from my Notion database? Am I missing something in my API request structure?

Your bash script is the problem, not the API call. You’re manually setting the cursor value, which creates a chicken-and-egg situation. I had the same pagination headaches with large Notion databases and switched to a proper loop that processes the JSON response after each call.

Skip the start_cursor parameter on your first request. Then grab the next_cursor value from database_results.json and use it for the next request. Just check the has_more boolean - when it’s false, you’re done.

Honestly, I ditched bash for a simple Python script with the requests library. Way easier than wrestling with manual JSON parsing. The Notion API pagination works fine once you let the response drive your cursor management.
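
For what it’s worth, the bash version of that loop looked roughly like this before I switched - a sketch reusing your wget style, with jq doing the JSON extraction (%TOKEN% and %DATABASE_ID% are placeholders, same as in your command):

#!/bin/bash
# Sketch: let the response drive the cursor. Starts empty on purpose.
cursor=""
has_more="true"

while [ "$has_more" = "true" ]; do
    if [ -z "$cursor" ]; then
        body='{}'    # first request: no start_cursor at all
    else
        body=$(jq -n --arg c "$cursor" '{start_cursor: $c}')
    fi

    wget -q --post-data="$body" \
        --header='Authorization: Bearer %TOKEN%' \
        --header='Notion-Version: 2021-05-13' \
        --header='Content-Type: application/json' \
        'https://api.notion.com/v1/databases/%DATABASE_ID%/query' \
        -O page.json

    # Append this page's rows, then read the pagination fields back out.
    jq '.results[]' page.json >> all_results.json
    cursor=$(jq -r '.next_cursor // empty' page.json)
    has_more=$(jq -r '.has_more' page.json)
done

Note the loop never sets the cursor itself - it only ever copies whatever the last response said.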

The issue is you’re hardcoding the cursor instead of taking it from the response. Notion gives you a next_cursor with each call – just pull it from database_results.json and use it for the next request. Skip start_cursor on the first call.

You’ve got to get the next_cursor from the response – it’s how Notion does pagination. Don’t use a fixed cursor value, just take the latest one from your last JSON response.

You’re hardcoding the cursor instead of grabbing it dynamically from each response. Notion sends back a next_cursor field in the JSON - you need to capture that and use it for your next call.
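
In shell terms that’s one jq call against the file you’re already writing (assuming you have jq installed):

next_cursor=$(jq -r '.next_cursor // empty' database_results.json)
# Pass "$next_cursor" as start_cursor in the next request - but only
# when it's non-empty, since Notion returns null on the last page.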

I ran into this same issue pulling data from multiple Notion workspaces. Doing it manually gets messy quickly, especially with thousands of records across different databases.

I ended up building an automated flow that handles all the pagination logic. It starts with the initial call, grabs the next_cursor from the response, then loops until has_more is false. Also handles rate limiting and retries when needed.

This saved me hours and killed the cursor management headaches. Instead of bash scripts and manual state management, you can set up the whole pagination flow visually and let it run automatically.

Check out Latenode for this kind of API automation - it handles pagination seamlessly: https://latenode.com

Your problem is you’re treating the cursor like it’s fixed when it actually changes with every response. For the first request, don’t include start_cursor at all. Each response gives you a next_cursor and a has_more boolean - that’s how you know if there are more pages.

I ran into the same thing pulling data from big team databases. Hardcoding cursor values just breaks everything. You need a loop that grabs next_cursor from each JSON response and uses it for the next request. Honestly, ditch bash for this - use something that handles JSON better. The manual extraction is what’s killing your pagination.
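
For reference, those two fields sit at the top level of every query response, next to the results - roughly this shape, trimmed down:

{
    "object": "list",
    "results": [ ... ],
    "next_cursor": "%NEXT_CURSOR%",
    "has_more": true
}

On the last page has_more comes back false and next_cursor is null - that’s your signal to stop.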

Everyone’s saying the same thing about cursor values, but they’re missing the point. Yeah, you extract next_cursor from each response and loop through pages, but doing this manually in bash sucks.

I ran into this exact problem pulling inventory data from multiple Notion databases. Cursor management is just the start - you also need error handling for rate limits, retry logic for failed calls, and a way to merge all the paginated results.
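
If you do roll it yourself in shell, every call needs rate-limit handling wrapped around it - something like this sketch with curl, which exposes the status code (Notion signals throttling with HTTP 429 and a Retry-After header; $body here stands for whatever JSON you’re posting):

status=$(curl -s -o page.json -D headers.txt -w '%{http_code}' \
    --header 'Authorization: Bearer %TOKEN%' \
    --header 'Notion-Version: 2021-05-13' \
    --header 'Content-Type: application/json' \
    --data "$body" \
    'https://api.notion.com/v1/databases/%DATABASE_ID%/query')

if [ "$status" = "429" ]; then
    # Honor Notion's Retry-After header, then repeat the same request.
    wait=$(grep -i '^retry-after:' headers.txt | tr -d '\r' | awk '{print $2}')
    sleep "${wait:-1}"
fi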

Instead of fighting with bash scripts and manual JSON parsing, I built an automated workflow that handles the whole pagination cycle. Makes the initial call without a cursor, processes each response, extracts the next cursor automatically, and keeps going until has_more is false.

Runs completely hands-off. No more cursor headaches, no incomplete data pulls, and it handles Notion’s API quirks automatically.

Latenode makes this kind of API pagination dead simple: https://latenode.com

I ran into this exact problem building a Notion backup system for our company docs. Your %CURSOR_VALUE_FROM_PREVIOUS_CALL% placeholder isn’t getting replaced with real cursor data between requests.

Here’s what worked for me: store the cursor value in a temp file after each API call. Use jq to pull the next_cursor from your JSON response, write it to a file, then read it back for the next request. Skip the start_cursor parameter completely on your first call.

The tricky bit is parsing responses reliably in bash. Notion sometimes returns null cursor values or broken responses when it’s under heavy load, which kills simple extraction scripts. I added validation checks for the cursor value before making the next call - saves you from infinite loops when the API returns weird data.
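
Stripped down, the validation step looks like this - a sketch that assumes jq and reuses your database_results.json output (the /tmp/notion_cursor path is just an example):

next=$(jq -r '.next_cursor // empty' database_results.json)
has_more=$(jq -r '.has_more' database_results.json)

# A "true" has_more with an empty cursor means a broken response -
# bail out instead of looping forever on the same page.
if [ "$has_more" = "true" ] && [ -z "$next" ]; then
    echo "Bad response: has_more=true but no next_cursor" >&2
    exit 1
fi

# Persist the cursor so the next call can read it back.
printf '%s' "$next" > /tmp/notion_cursor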