How to fetch YouTube comments for multiple video IDs using YouTube Data API v3 through RapidAPI

I’m trying to build a dataset of YouTube comments using the YouTube Data API v3 through RapidAPI. My code currently retrieves comments from a single video without issues, but I need to gather comments from many videos, not just one.

import requests

api_endpoint = "https://youtube-v31.p.rapidapi.com/commentThreads"

# Request the first page (up to 50) of comment threads for one video
params = {
    "maxResults": "50",
    "videoId": "single_video_id_here",
    "part": "snippet"
}

request_headers = {
    'x-rapidapi-key': "your_api_key_here",
    'x-rapidapi-host': "youtube-v31.p.rapidapi.com"
}

api_response = requests.get(api_endpoint, headers=request_headers, params=params)
data = api_response.json()
print(data)

The challenge is that I have a collection of video IDs, and changing the videoId parameter manually for each request isn’t practical. Is there a way to submit an array of video IDs to the API, or do I have to make a separate request for each video? What would be the most effective way to handle this without exceeding rate limits?

Been collecting YouTube data for research purposes and ran into this same limitation. The API simply doesn’t allow batch video ID requests for comment endpoints. What I ended up doing was writing a simple loop with requests.Session(), which helps with connection pooling and reduces per-request overhead slightly. One thing the other responses didn’t mention is pagination: since you’re pulling from multiple videos, you’ll want to follow nextPageToken for each video if you need all of its comments, not just the first 50. I also recommend storing the video IDs that fail (private videos, comments disabled, etc.) in a separate list so you can review what didn’t work. Finally, monitor your quota usage closely in the Google Cloud Console, because comment requests consume more quota units than basic video info calls.
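Roughly, my collection loop looks like the sketch below. Function names like `fetch_all_comments` and `build_params` are just illustrative, and the key/host values are placeholders you’d swap for your own:

```python
import requests

API_URL = "https://youtube-v31.p.rapidapi.com/commentThreads"
HEADERS = {
    "x-rapidapi-key": "your_api_key_here",  # placeholder
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com",
}

def build_params(video_id, page_token=None):
    """Request parameters for one page of a video's comment threads."""
    params = {"part": "snippet", "videoId": video_id, "maxResults": "50"}
    if page_token:
        params["pageToken"] = page_token
    return params

def fetch_all_comments(session, video_id):
    """Follow nextPageToken until this video's comment threads run out."""
    items, token = [], None
    while True:
        resp = session.get(API_URL, headers=HEADERS,
                           params=build_params(video_id, token))
        resp.raise_for_status()
        data = resp.json()
        items.extend(data.get("items", []))
        token = data.get("nextPageToken")
        if not token:
            return items

def collect(video_ids):
    """One pooled Session for all videos; keep failed IDs for review."""
    results, failed = {}, []
    with requests.Session() as session:
        for vid in video_ids:
            try:
                results[vid] = fetch_all_comments(session, vid)
            except requests.RequestException:
                failed.append(vid)  # private, comments disabled, quota, ...
    return results, failed
```

The Session is the main win here: it reuses the underlying TCP connection across all the per-video calls instead of opening a new one each time.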

I’ve dealt with this exact scenario when building my own comment analysis tool. The YouTube Data API v3 doesn’t support multiple video IDs in a single request, so you’ll need to iterate through your video ID list. What worked well for me was implementing a queue system with proper error handling. I wrapped each API call in a try-except block to catch quota exceeded errors and temporary failures. For rate limiting, I found that a 0.5 second delay between requests worked reliably without being too slow. Also consider implementing exponential backoff for failed requests - sometimes videos have comments disabled or the API returns temporary errors. Store your results incrementally so you don’t lose progress if something goes wrong halfway through your dataset collection.
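Here’s a minimal sketch of the retry-with-backoff and incremental-storage parts. The helper names (`backoff_delay`, `fetch_with_retry`, `collect_incrementally`) and the JSONL output file are my own choices, not anything the API prescribes:

```python
import json
import time
import requests

API_URL = "https://youtube-v31.p.rapidapi.com/commentThreads"
HEADERS = {
    "x-rapidapi-key": "your_api_key_here",  # placeholder
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com",
}

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(base * (2 ** attempt), cap)

def fetch_with_retry(video_id, max_attempts=4):
    """Fetch one video's first comment page, retrying transient failures."""
    params = {"part": "snippet", "videoId": video_id, "maxResults": "50"}
    for attempt in range(max_attempts):
        try:
            resp = requests.get(API_URL, headers=HEADERS,
                                params=params, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt))

def collect_incrementally(video_ids, out_path="comments.jsonl"):
    """Append one JSON line per video so a crash doesn't lose progress."""
    with open(out_path, "a", encoding="utf-8") as out:
        for vid in video_ids:
            try:
                data = fetch_with_retry(vid)
            except requests.RequestException:
                continue  # comments disabled, quota exceeded, etc.
            record = {"videoId": vid, "items": data.get("items", [])}
            out.write(json.dumps(record) + "\n")
            time.sleep(0.5)  # pacing between requests
```

Appending a line per video (rather than writing one big file at the end) is what saves you when the run dies at video 800 of 1,000.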

yeah, sadly the youtube api only allows one video id per request. you’re gonna have to make separate calls for each id. maybe use a delay like time.sleep(1) to space them out and consider batching to keep the rate limits in check.
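something like this, if it helps — `chunk` and `fetch_batches` are just names I made up, and you’d pass your own headers dict with your key:

```python
import time
import requests

API_URL = "https://youtube-v31.p.rapidapi.com/commentThreads"

def chunk(ids, size):
    """Split the ID list into batches so you can pause between groups."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def fetch_batches(video_ids, headers, batch_size=10, pause=5.0):
    """One call per ID, with delays inside and between batches."""
    out = {}
    for batch in chunk(video_ids, batch_size):
        for vid in batch:
            resp = requests.get(API_URL, headers=headers,
                                params={"part": "snippet", "videoId": vid,
                                        "maxResults": "50"})
            out[vid] = resp.json()
            time.sleep(1)  # space out individual calls
        time.sleep(pause)  # longer pause between batches
    return out
```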