Retrieving All Comments from YouTube Using the YouTube V3 API (RapidAPI)

import requests
import pandas as pd

api_url = "https://youtube-v31.p.rapidapi.com/commentThreads"

params = {"maxResults":"100","videoId":"enter your videoID here","part":"snippet"}

headers = {
    'x-rapidapi-key': "your_api_key_here",
    'x-rapidapi-host': "youtube-v31.p.rapidapi.com"
}

response = requests.get(api_url, headers=headers, params=params)

print(response.json())

I am in the process of developing a dataset of YouTube comments. While I can gather comments for individual video IDs through RapidAPI, I have numerous IDs and would like to fetch comments for multiple video IDs simultaneously. What approach can I take to retrieve comments for all video IDs at once instead of querying them one by one?

To fetch comments for multiple video IDs simultaneously, consider using a loop or comprehension to iterate over your list of video IDs, making API requests for each.

import requests

api_url = "https://youtube-v31.p.rapidapi.com/commentThreads"
headers = {
    "x-rapidapi-key": "your_api_key_here",
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com"
}

video_ids = ["video_id_1", "video_id_2", "video_id_3"]  # Add your video IDs here

comments = []
for video_id in video_ids:
    params = {"maxResults": "100", "videoId": video_id, "part": "snippet"}
    response = requests.get(api_url, headers=headers, params=params)
    # Append this video's comment threads; default to an empty list if "items" is absent
    comments.extend(response.json().get("items", []))

print(comments)

This script loops through each video ID, fetching comments and adding them to the comments list.
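Since your original snippet already imports pandas, you may want to flatten the collected items into a DataFrame for your dataset. Here is a minimal sketch; the field paths (`snippet.topLevelComment.snippet.textDisplay` and so on) are assumed from the standard YouTube Data API v3 commentThreads response shape, so verify them against an actual response from the RapidAPI endpoint:

```python
import pandas as pd

def comments_to_dataframe(items):
    """Flatten commentThread items into a tabular DataFrame.

    Field paths assume the standard YouTube Data API v3 response shape;
    check them against a real response from the RapidAPI mirror.
    """
    rows = []
    for item in items:
        top = item.get("snippet", {}).get("topLevelComment", {}).get("snippet", {})
        rows.append({
            "videoId": item.get("snippet", {}).get("videoId"),
            "author": top.get("authorDisplayName"),
            "text": top.get("textDisplay"),
            "likeCount": top.get("likeCount"),
            "publishedAt": top.get("publishedAt"),
        })
    return pd.DataFrame(rows)

# Mock item mimicking the assumed response shape, for illustration only
sample = [{"snippet": {"videoId": "abc123", "topLevelComment": {"snippet": {
    "authorDisplayName": "user1", "textDisplay": "Nice video!",
    "likeCount": 3, "publishedAt": "2024-01-01T00:00:00Z"}}}}]
df = comments_to_dataframe(sample)
print(df)
```

Using `.get()` with defaults keeps the flattening robust if some threads are missing fields.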

To retrieve comments for multiple video IDs efficiently, another approach is to employ asynchronous requests. This allows you to send multiple requests simultaneously, reducing the waiting time significantly. Here is how you can use Python's aiohttp library to achieve this:

import asyncio
import aiohttp

api_url = "https://youtube-v31.p.rapidapi.com/commentThreads"
headers = {
    "x-rapidapi-key": "your_api_key_here",
    "x-rapidapi-host": "youtube-v31.p.rapidapi.com"
}

video_ids = ["video_id_1", "video_id_2", "video_id_3"]  # Add your video IDs here

async def fetch_comments(session, video_id):
    params = {"maxResults": "100", "videoId": video_id, "part": "snippet"}
    async with session.get(api_url, headers=headers, params=params) as response:
        return await response.json()

async def main():
    async with aiohttp.ClientSession() as session:
        # Launch one request per video ID and wait for all of them together
        tasks = [fetch_comments(session, video_id) for video_id in video_ids]
        results = await asyncio.gather(*tasks)
        comments = [item for result in results for item in result.get("items", [])]
        print(comments)

asyncio.run(main())

This solution embraces asynchronous programming, allowing you to make API requests in parallel. The aiohttp library handles HTTP requests asynchronously, and with the help of asyncio, you can gather all responses and collate the comments into a single list, significantly improving the efficiency of your data retrieval.
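One caveat: RapidAPI plans typically enforce request rate limits, so firing all requests at once can trigger errors for large ID lists. A common pattern is to cap concurrency with `asyncio.Semaphore`. Below is a minimal sketch of that pattern; the `MAX_CONCURRENT` value is a hypothetical placeholder to tune to your plan's limits, and the simulated `fetch_comments` stands in for the real `aiohttp` call from the snippet above:

```python
import asyncio

MAX_CONCURRENT = 5  # hypothetical cap; tune to your RapidAPI plan's limits

async def fetch_comments(video_id):
    # Stand-in for the real aiohttp request; replace the body with
    # the session.get(...) call from the previous snippet
    await asyncio.sleep(0.01)
    return {"items": [f"comment for {video_id}"]}

async def fetch_with_limit(semaphore, video_id):
    # The semaphore allows at most MAX_CONCURRENT requests in flight at once
    async with semaphore:
        return await fetch_comments(video_id)

async def gather_all(video_ids):
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)
    tasks = [fetch_with_limit(semaphore, vid) for vid in video_ids]
    results = await asyncio.gather(*tasks)
    return [item for result in results for item in result.get("items", [])]

comments = asyncio.run(gather_all([f"video_{i}" for i in range(12)]))
print(len(comments))  # prints 12
```

`asyncio.gather` preserves the order of the tasks it is given, so the flattened comment list stays aligned with your input video IDs even though requests complete out of order.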