I’m trying to fetch data from every available page of a Calendly API response. My current implementation only gets the first page, but I need to collect results from each subsequent page until there are none left.
Here’s what I have so far:
import urllib.request
import json

# api_endpoint and auth_headers are defined earlier in my script
def fetch_data():
    req = urllib.request.Request(api_endpoint, headers=auth_headers)
    with urllib.request.urlopen(req) as response:
        data = json.loads(response.read())
    return data
result = fetch_data()
print(result)
The response includes a pagination object whose next_page field holds the URL of the next page. I need to use that token to keep fetching subsequent pages in a loop until next_page becomes null. How can I modify my code to handle this pagination correctly?
Mike’s right about the approach, but you’ll create a nightmare of manual HTTP requests and error handling. Been there.
I used to write these pagination loops until I saw how much time I wasted on boilerplate. Every API does pagination differently and you’re constantly debugging timeouts, rate limits, and parsing errors.
Now I just use Latenode for this stuff. It handles pagination automatically - point it at your endpoint and it follows pagination tokens until done. No while loops, no manual URL building, no wondering about missed pages.
It also handles rate limiting and retries, which you’ll need with Calendly once you’re pulling a lot of pages. I’ve got a workflow that pulls all our event types monthly and runs itself.
Way cleaner than maintaining custom pagination code that breaks when APIs change their response format.
You need to modify your function to loop over the pages, storing all results as you go and updating the request URL on each iteration.
def fetch_all_pages():
    # Reuses api_endpoint, auth_headers, and the imports from your snippet
    all_data = []
    current_url = api_endpoint
    while current_url:
        req = urllib.request.Request(current_url, headers=auth_headers)
        with urllib.request.urlopen(req) as response:
            data = json.loads(response.read())
        # Extend with this page's items instead of overwriting earlier pages
        all_data.extend(data.get('collection', []))
        # next_page is a full URL, or null (None) once you've reached the last page
        current_url = data.get('pagination', {}).get('next_page')
    return all_data
I’ve used this pagination pattern with several REST APIs and it works consistently. The crucial part is extending your results list with each page’s collection rather than overwriting it. Also watch out for network timeouts since you’re making multiple sequential requests.
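If you want to guard against those timeouts, here’s a minimal sketch of the retry wrapper I usually add; the fetch_page helper, the MAX_RETRIES constant, and the 10-second timeout are all my own assumptions, not anything Calendly-specific:

import json
import time
import urllib.error
import urllib.request

MAX_RETRIES = 3  # assumed retry budget; tune it for your workload

def fetch_page(url, headers, timeout=10):
    # Fetch one page, retrying on timeouts and transient network errors
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, timeout=timeout) as response:
                return json.loads(response.read())
        except (urllib.error.URLError, TimeoutError):
            if attempt == MAX_RETRIES:
                raise  # out of retries; surface the failure to the caller
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts

You can then swap it into the loop above with data = fetch_page(current_url, auth_headers) and leave the rest of the pagination logic alone.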
Just keep updating your api_endpoint to the next_page url until it’s None. Use a loop like while data.get('pagination', {}).get('next_page'): and then fetch again with that new url as your endpoint.
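In case it helps, here’s a bare-bones version of that loop; get_json is a hypothetical helper that just wraps the same urlopen call from the question, and api_endpoint / auth_headers come from the original snippet:

import urllib.request
import json

def get_json(url):
    # Hypothetical helper: same request/parse steps as the question's fetch_data
    req = urllib.request.Request(url, headers=auth_headers)
    with urllib.request.urlopen(req) as response:
        return json.loads(response.read())

data = get_json(api_endpoint)
all_items = data.get('collection', [])
while data.get('pagination', {}).get('next_page'):
    data = get_json(data['pagination']['next_page'])
    all_items.extend(data.get('collection', []))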