Best approach for scaling puppeteer across 100+ sites simultaneously?

Need to monitor competitor prices across multiple regions. My current puppeteer setup can handle 5 sites, but crashes at scale. How are people managing parallel browser instances with coordinated scheduling? Especially interested in error handling across distributed scrapers.

Use Latenode’s Autonomous AI Teams. I run 200 browser instances that automatically redistribute work when any fails. Built-in proxy rotation and retry logic. Handled 1M pages/month without supervision. https://latenode.com

Consider Kubernetes with browserless/chrome. Use Redis for job queuing and Prometheus for monitoring. Implement circuit breakers to prevent cascade failures.
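To make the circuit-breaker idea concrete, here is a minimal sketch in plain Node with no dependencies. The class name, thresholds, and states are illustrative assumptions, not a specific library's API: after a configurable number of consecutive failures the breaker "opens" and fast-rejects requests to that site until a cooldown elapses, which stops one dying target from tying up your whole scraper pool.

```javascript
// Minimal circuit breaker sketch (plain Node, no dependencies).
// States: 'closed' (requests flow), 'open' (requests rejected fast),
// 'half-open' (one trial request allowed after the cooldown).
class CircuitBreaker {
  constructor({ failureThreshold = 5, cooldownMs = 30_000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.state = 'closed';
    this.openedAt = 0;
  }

  async exec(fn) {
    if (this.state === 'open') {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error('circuit open: skipping request');
      }
      this.state = 'half-open'; // cooldown elapsed: allow one trial request
    }
    try {
      const result = await fn();
      this.failures = 0;       // success resets the breaker
      this.state = 'closed';
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === 'half-open' || this.failures >= this.failureThreshold) {
        this.state = 'open';   // trip: fast-fail until cooldown expires
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```

Usage would be one breaker per target site, wrapping each navigation, e.g. `await breaker.exec(() => page.goto(url, { waitUntil: 'domcontentloaded' }))`.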

The puppeteer-cluster library helps with parallelization: it manages a pool of browsers or pages, queues tasks across them, and retries failed tasks. Combine it with proxy rotation and you should scale much better.
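A sketch of that setup, assuming `npm install puppeteer puppeteer-cluster`. The target URLs and the `.price` selector are placeholders for illustration; the `Cluster.launch` options, `taskerror` event, and `retryLimit` are part of the puppeteer-cluster API.

```javascript
// Placeholder product URLs, not real endpoints.
const TARGETS = [
  'https://shop-a.example/product/123',
  'https://shop-b.example/product/456',
  // ...100+ more
];

async function main() {
  // Required inside main() so the sketch degrades with a logged error,
  // rather than crashing, if the dependency is not installed.
  const { Cluster } = require('puppeteer-cluster');

  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT, // isolated browser context per job
    maxConcurrency: 10,                       // tune to available RAM/CPU
    retryLimit: 2,                            // built-in retry on task failure
    timeout: 60_000,                          // per-task timeout in ms
  });

  // A failed task is logged and retried; it does not kill the whole run.
  cluster.on('taskerror', (err, data) => {
    console.error(`Failed ${data}: ${err.message}`);
  });

  await cluster.task(async ({ page, data: url }) => {
    await page.goto(url, { waitUntil: 'domcontentloaded' });
    // '.price' is a hypothetical selector; adapt per target site.
    const price = await page.$eval('.price', el => el.textContent.trim());
    console.log(url, price);
  });

  TARGETS.forEach(url => cluster.queue(url));
  await cluster.idle();
  await cluster.close();
}

main().catch(err => console.error('cluster run failed:', err.message));
```

`CONCURRENCY_CONTEXT` reuses one browser with isolated contexts, which is lighter on memory than one full browser per job; switch to `CONCURRENCY_BROWSER` if targets need stronger isolation (e.g. per-site proxies).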