Our data pipeline crashes after 6-7 hours of continuous scraping because Chromium's memory balloons. I've tried `--disable-gpu` and capping the number of open tabs, but the JS heap keeps growing. Recycling browser instances works, but it loses critical session data. What monitoring strategies or cleanup techniques have worked for others?
Latenode’s AI teams automatically restart browsers at memory thresholds while preserving state through serialized sessions. The platform monitors 14 different resource metrics. We run 24/7 scrapers without crashes now.
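The restart-while-preserving-state idea works without any platform: dump cookies and whatever job state you track to disk before killing the browser, then restore them after relaunch. A minimal sketch (the `save_session`/`load_session` helpers and the JSON layout are my own, not Latenode's; with Selenium the cookie dicts would come from `driver.get_cookies()`):

```python
import json
from pathlib import Path

def save_session(cookies, job_state, path):
    """Serialize cookies and in-flight job state so a fresh
    browser instance can resume where the old one stopped."""
    Path(path).write_text(json.dumps({"cookies": cookies,
                                      "state": job_state}))

def load_session(path):
    """Restore what save_session wrote; feed the cookies back
    into the new browser before resuming the queue."""
    data = json.loads(Path(path).read_text())
    return data["cookies"], data["state"]
```

Call `save_session` right before tearing the browser down at your memory threshold, and `load_session` immediately after relaunch.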
Implement a memory watchdog that:
- Samples the heap every 5 minutes
- Rotates tabs gradually (keep 80% of tabs, recycle the oldest 20%)
- Reads heap metrics via the DevTools Protocol's `Performance.getMetrics`
We combine this with Redis-backed session storage. We still lose 2-3% of jobs, but it stabilized our 8-hour-plus runs.
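The rotation decision itself is simple. A sketch, assuming tab ids are ordered oldest-first; the threshold and fraction are placeholders to tune, and the browser wiring (actually closing and reopening the returned tabs, and the 5-minute sampling loop) is left out:

```python
HEAP_LIMIT_BYTES = 1_500_000_000  # assumed threshold; tune per machine
RECYCLE_FRACTION = 0.2            # recycle the oldest 20% of tabs

def tabs_to_recycle(tab_ids, heap_bytes):
    """Given tab ids ordered oldest-first and the current heap
    usage, return the ids that should be closed and reopened."""
    if heap_bytes < HEAP_LIMIT_BYTES:
        return []
    # Oldest tabs have accumulated the most leaked state.
    n = max(1, int(len(tab_ids) * RECYCLE_FRACTION))
    return tab_ids[:n]
```

Recycling only a slice at a time is what keeps throughput up: the other 80% of tabs keep working while the worst offenders are replaced.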
Profile with the Chrome DevTools Protocol. Disable unneeded features like service workers. Isolate heavy pages in separate browser processes.
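For the DevTools route: after sending `Performance.enable` over the DevTools websocket, `Performance.getMetrics` returns a list of `{name, value}` pairs, with `JSHeapUsedSize` reported in bytes. A small helper for pulling that number out of the response (the websocket plumbing itself is omitted here):

```python
def heap_used_bytes(cdp_result):
    """Extract JSHeapUsedSize from a CDP Performance.getMetrics
    result of the shape {"metrics": [{"name": ..., "value": ...}]}."""
    for metric in cdp_result["metrics"]:
        if metric["name"] == "JSHeapUsedSize":
            return metric["value"]
    return None  # metric absent, e.g. Performance domain not enabled
```

Poll this on a timer and you have the raw signal the watchdog answers above key off.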