I’m building a comment scraper for YouTube videos and need help with batch processing. Right now my code only handles one video at a time but I have hundreds of video IDs to process.
import requests

def fetch_comments_batch(video_list):
    # One page of comment threads per video ID, via the RapidAPI mirror
    api_endpoint = "https://youtube-v31.p.rapidapi.com/commentThreads"
    request_headers = {
        'x-rapidapi-key': "your-api-key-here",
        'x-rapidapi-host': "youtube-v31.p.rapidapi.com"
    }
    for vid in video_list:
        params = {
            "maxResults": "100",
            "videoId": vid,
            "part": "snippet"
        }
        result = requests.get(api_endpoint, headers=request_headers, params=params)
        print(result.json())

video_ids = ["video1", "video2", "video3"]
fetch_comments_batch(video_ids)
The loop above works, but it feels slow and fragile: each video is fetched one at a time, and one bad response kills the whole run. I want to pass an array of video IDs and automatically collect comments from all of them in one go. Is there a way to modify the API call to accept multiple video IDs at once, or do I need to loop through them individually? What's the most efficient approach for handling large datasets of video IDs?
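For context, here's the direction I've been experimenting with: since the commentThreads endpoint seems to take only a single videoId per request, I loop over the IDs but run the requests concurrently with a thread pool. This is just a sketch; fetch_comments_concurrently, the worker count, and the injectable fetch parameter are my own names for illustration, not anything from the API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

API_ENDPOINT = "https://youtube-v31.p.rapidapi.com/commentThreads"
HEADERS = {
    "x-rapidapi-key": "your-api-key-here",
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com",
}

def fetch_comments(video_id):
    """Fetch one page of comment threads for a single video ID."""
    import requests  # imported lazily so the pool helper below is testable offline
    response = requests.get(API_ENDPOINT, headers=HEADERS, params={
        "maxResults": "100",
        "videoId": video_id,
        "part": "snippet",
    })
    response.raise_for_status()  # surface HTTP errors instead of parsing bad JSON
    return response.json()

def fetch_comments_concurrently(video_ids, fetch=fetch_comments, max_workers=5):
    """Map each video ID to its comment payload, fetching in parallel threads."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch, vid): vid for vid in video_ids}
        for future in as_completed(futures):
            vid = futures[future]
            try:
                results[vid] = future.result()
            except Exception as exc:  # one failed video shouldn't abort the batch
                results[vid] = exc
    return results
```

With hundreds of IDs, a small pool like this keeps throughput up without hammering the rate limit, and failures land in the result dict instead of crashing the run. Is this the right direction, or is there a true multi-ID parameter I'm missing?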