How to fetch YouTube comments for multiple video IDs using YouTube Data API v3 through RapidAPI

I’m building a system to gather YouTube comments for research purposes. Right now my code works fine for a single video, but I need to process hundreds of video IDs automatically.

import requests

video_list = ['video1', 'video2', 'video3']  # my list of video IDs
api_endpoint = "https://youtube-v31.p.rapidapi.com/commentThreads"

# Same headers for every request, so build them once outside the loop
request_headers = {
    'x-rapidapi-key': "your-api-key-here",
    'x-rapidapi-host': "youtube-v31.p.rapidapi.com"
}

for video_id in video_list:
    params = {
        "part": "snippet",
        "videoId": video_id,  # the endpoint only takes one ID here
        "maxResults": "50"
    }

    result = requests.get(api_endpoint, headers=request_headers, params=params)
    comment_data = result.json()

    print(f"Comments for {video_id}: {len(comment_data.get('items', []))}")

The issue is that the API seems to only accept one video ID at a time in the videoId parameter. Is there a way to pass multiple video IDs in a single request, or do I have to loop through each one individually? I want to make this more efficient and avoid hitting rate limits.

Been there with comment scraping projects - there’s no way around the one-video-per-request limit. I switched to asyncio with aiohttp instead of regular requests and it’s way faster for bulk processing. Just make sure you add proper error handling since some videos have comments disabled or get deleted.
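
Here’s a rough sketch of that approach, reusing the endpoint and headers from your code - the semaphore cap of 5 concurrent requests and the placeholder API key are illustrative values, so tune them to your plan’s limits:

import asyncio
import aiohttp

API_ENDPOINT = "https://youtube-v31.p.rapidapi.com/commentThreads"
HEADERS = {
    "x-rapidapi-key": "your-api-key-here",  # placeholder - use your own key
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com"
}

async def fetch_comments(session, semaphore, video_id):
    params = {"part": "snippet", "videoId": video_id, "maxResults": "50"}
    async with semaphore:  # cap concurrent requests so you don't hammer the API
        try:
            async with session.get(API_ENDPOINT, headers=HEADERS, params=params) as resp:
                resp.raise_for_status()
                data = await resp.json()
                return video_id, data.get("items", [])
        except aiohttp.ClientError as exc:
            # comments disabled, deleted video, network hiccup - log and move on
            print(f"{video_id}: request failed ({exc})")
            return video_id, []

async def main(video_ids):
    semaphore = asyncio.Semaphore(5)  # illustrative concurrency cap
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_comments(session, semaphore, vid) for vid in video_ids]
        return await asyncio.gather(*tasks)

results = asyncio.run(main(["video1", "video2", "video3"]))
for video_id, items in results:
    print(f"Comments for {video_id}: {len(items)}")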

Unfortunately, the YouTube Data API v3 doesn’t support batch requests for comments - you can’t grab multiple video IDs in one call. I’ve hit this same wall when scraping comment data for sentiment analysis projects. The videoId parameter only takes one ID at a time, so your loop approach is actually correct. You can optimize it though - add sleep intervals between requests and catch exceptions for videos with disabled comments or private settings. Also throw in exponential backoff when you hit quota limits. The key is respecting API limits rather than trying to work around the single-video restriction, since that’s just how the endpoint works.
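
To make that concrete, here’s a minimal sketch of the throttle-plus-backoff pattern against the same endpoint - the retry count, delays, and status-code handling are assumptions to adapt, not documented RapidAPI behavior:

import time
import requests

API_ENDPOINT = "https://youtube-v31.p.rapidapi.com/commentThreads"
HEADERS = {
    "x-rapidapi-key": "your-api-key-here",
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com"
}

def get_comments(video_id, max_retries=5):
    params = {"part": "snippet", "videoId": video_id, "maxResults": "50"}
    delay = 1  # starting backoff in seconds, doubled on every rate-limit hit
    for _ in range(max_retries):
        response = requests.get(API_ENDPOINT, headers=HEADERS, params=params)
        if response.status_code == 429:  # rate limited - back off exponentially
            time.sleep(delay)
            delay *= 2
            continue
        if response.status_code == 403:  # often comments disabled or a private video
            print(f"{video_id}: skipped (HTTP 403)")
            return []
        response.raise_for_status()
        return response.json().get("items", [])
    print(f"{video_id}: gave up after {max_retries} rate-limited attempts")
    return []

for vid in ["video1", "video2", "video3"]:
    print(f"{vid}: {len(get_comments(vid))} comment threads")
    time.sleep(0.5)  # fixed pause between videos to stay under the limits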

You’re doing it right - YouTube’s API won’t let you grab comments for multiple videos in one request. I’ve handled similar bulk jobs and throttling requests made the biggest difference. Add 100-200ms delays between calls and build in retry logic for failures. What really saved me was processing smaller batches and saving results as I went - if something breaks, you won’t lose everything. Try threading with rate limiting to speed things up without hitting quota walls. The official API gives you 10k quota units daily by default, and each commentThreads request costs 1 unit, so do the math for your dataset size.
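
Something like this ThreadPoolExecutor sketch covers that setup - the lock-based throttle, the worker count, and the comments.jsonl output file are illustrative choices, not anything the API requires:

import json
import threading
import time
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

API_ENDPOINT = "https://youtube-v31.p.rapidapi.com/commentThreads"
HEADERS = {
    "x-rapidapi-key": "your-api-key-here",
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com"
}

throttle = threading.Lock()  # shared across workers to space out request starts

def fetch_one(video_id):
    params = {"part": "snippet", "videoId": video_id, "maxResults": "50"}
    with throttle:
        time.sleep(0.15)  # ~150ms between request starts, across all threads
    response = requests.get(API_ENDPOINT, headers=HEADERS, params=params)
    response.raise_for_status()
    return video_id, response.json().get("items", [])

video_ids = ["video1", "video2", "video3"]  # swap in your full list

# Append each result as a JSON line the moment it arrives, so a crash partway
# through a big run doesn't lose everything already fetched.
with ThreadPoolExecutor(max_workers=4) as pool, open("comments.jsonl", "a") as out:
    futures = {pool.submit(fetch_one, vid): vid for vid in video_ids}
    for future in as_completed(futures):
        vid = futures[future]
        try:
            video_id, items = future.result()
            out.write(json.dumps({"videoId": video_id, "items": items}) + "\n")
            out.flush()
        except requests.RequestException as exc:
            print(f"{vid}: failed ({exc})")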