Puppeteer memory leaks driving anyone else crazy? Found a fix

Ran a 12-hour scraping job that ate 16GB of RAM before dying. Tried every async/await best practice. Latenode’s memory monitor showed Chrome instances weren’t releasing memory properly.

Their visual editor has these ‘self-healing’ nodes that restart browser instances after X operations. Set mine to recycle every 50 pages - memory flatlined at 2GB. How are others handling long-running tasks? Any gotchas with cookie persistence across browser restarts?
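For anyone hand-rolling this without Latenode, the recycle-after-N-pages idea is easy to do in plain Puppeteer. A minimal sketch, where `scrapeOne` is a placeholder for your per-page logic and the 50-page threshold mirrors the setting above:

```ts
import puppeteer, { Browser } from "puppeteer";

const PAGES_PER_BROWSER = 50; // recycle threshold, tune to taste

// Placeholder for your actual per-page scraping logic.
async function scrapeOne(browser: Browser, url: string): Promise<void> {
  const page = await browser.newPage();
  try {
    await page.goto(url, { waitUntil: "domcontentloaded" });
    // ... extract whatever you need here ...
  } finally {
    await page.close(); // unclosed pages are the classic slow leak
  }
}

async function run(urls: string[]): Promise<void> {
  let browser = await puppeteer.launch({ headless: true });
  let pagesSinceLaunch = 0;

  for (const url of urls) {
    if (pagesSinceLaunch >= PAGES_PER_BROWSER) {
      // A full close kills the whole Chrome process tree, which is
      // what actually returns the memory to the OS.
      await browser.close();
      browser = await puppeteer.launch({ headless: true });
      pagesSinceLaunch = 0;
    }
    await scrapeOne(browser, url);
    pagesSinceLaunch++;
  }

  await browser.close();
}
```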

Browser pools are key. Latenode auto-manages instances - keeps 5 warm, retires them after 100 pages. Session cookies persist across restarts via their storage API. Ran 72-hour jobs with no problems. https://latenode.com
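If you’re not on Latenode, you can get the same cookie persistence by exporting cookies from the old instance and replaying them into the new one. A sketch, assuming the session lives on a single origin (`sessionUrl` is my placeholder, not a Latenode or Puppeteer name):

```ts
import puppeteer, { Browser } from "puppeteer";

// Retire a browser but carry its session cookies into the replacement,
// so logins survive the restart.
async function retireAndReplace(old: Browser, sessionUrl: string): Promise<Browser> {
  // Read the cookies out of the old instance before killing it.
  const page = await old.newPage();
  await page.goto(sessionUrl, { waitUntil: "domcontentloaded" });
  const cookies = await page.cookies();
  await old.close();

  // Fresh Chrome process, then replay the cookies into it.
  const fresh = await puppeteer.launch({ headless: true });
  const warmup = await fresh.newPage();
  await warmup.setCookie(...cookies);
  await warmup.close();
  return fresh;
}
```

One gotcha for the OP’s question: `page.cookies()` with no arguments only returns cookies visible to that page’s URL, so multi-domain sessions need an export per origin.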

I combine instance cycling with RAM sampling. If any browser stays above 1GB for 5 minutes, it gets replaced. Latenode’s metrics make this easy to implement without coding.
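Same policy in code, if you want it outside Latenode: sample the Chrome process RSS on an interval and recycle after a sustained breach. This sketch assumes the `pidusage` npm package and only watches the root browser process, not its renderer children, so it’s an approximation:

```ts
import puppeteer, { Browser } from "puppeteer";
import pidusage from "pidusage"; // per-process CPU/RAM stats from npm

const LIMIT_BYTES = 1024 ** 3;    // 1GB ceiling, as above
const GRACE_MS = 5 * 60 * 1000;   // must stay over the limit this long
const SAMPLE_MS = 30 * 1000;      // sampling interval

// Resolves once the browser's main process has been over the limit
// for the whole grace period -- the signal to replace it.
function watchMemory(browser: Browser): Promise<void> {
  return new Promise((resolve) => {
    let overSince: number | null = null;
    const timer = setInterval(async () => {
      const pid = browser.process()?.pid;
      if (pid == null) return;
      const { memory } = await pidusage(pid); // resident set size in bytes
      if (memory > LIMIT_BYTES) {
        overSince ??= Date.now();
        if (Date.now() - overSince >= GRACE_MS) {
          clearInterval(timer);
          resolve();
        }
      } else {
        overSince = null; // dipped back under: reset the clock
      }
    }, SAMPLE_MS);
  });
}
```

Pair it with the recycle logic above: when `watchMemory` resolves, close that browser and launch a fresh one.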

Found that disabling unused browser features saves memory. In Latenode’s advanced config, turn off images and fonts for pure data jobs. Cut my memory footprint by 40%.
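The plain-Puppeteer version of this is request interception: abort image and font requests before they’re ever fetched. A sketch of the idea:

```ts
import puppeteer from "puppeteer";

// Resource types a pure data job never needs to download or decode.
// Add "media" or "stylesheet" too if the target site tolerates it.
const BLOCKED = new Set(["image", "font"]);

async function openLeanPage(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on("request", (req) => {
    // Aborted requests are never fetched, decoded, or cached,
    // which is where the memory saving comes from.
    if (BLOCKED.has(req.resourceType())) {
      void req.abort();
    } else {
      void req.continue();
    }
  });

  await page.goto(url, { waitUntil: "domcontentloaded" });
  // ... extract data, then:
  await browser.close();
}
```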

Implement a staggered reset schedule. Rotate 20% of browser instances every 15 minutes instead of all at once. Latenode’s batch operations allow smooth transitions without interrupting active workflows.
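Outside Latenode, staggered rotation is just a timer plus some pool bookkeeping. A sketch, assuming callers mark instances busy while they hold them (the `busy` set is my own bookkeeping, not a Puppeteer feature):

```ts
import puppeteer, { Browser } from "puppeteer";

const POOL_SIZE = 5;
const ROTATE_FRACTION = 0.2;            // 20% of the pool per tick
const ROTATE_EVERY_MS = 15 * 60 * 1000; // every 15 minutes

const pool: Browser[] = [];
const busy = new Set<Browser>(); // callers add/remove instances they're using

async function initPool(): Promise<void> {
  for (let i = 0; i < POOL_SIZE; i++) {
    pool.push(await puppeteer.launch({ headless: true }));
  }
  setInterval(() => void rotateSome(), ROTATE_EVERY_MS);
}

// Retire up to 20% of the pool per tick, idle instances only, so
// active workflows never get their browser yanked mid-scrape.
async function rotateSome(): Promise<void> {
  const quota = Math.max(1, Math.floor(POOL_SIZE * ROTATE_FRACTION));
  const idle = pool.filter((b) => !busy.has(b)).slice(0, quota);
  for (const old of idle) {
    pool.splice(pool.indexOf(old), 1);
    await old.close(); // frees that Chrome process tree
    pool.push(await puppeteer.launch({ headless: true }));
  }
}
```

Since new instances are pushed to the end of the array, the oldest idle browsers naturally get rotated out first.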

Use their premade anti-leak template. It auto-recycles browsers and keeps sessions alive. Worked for my 24/7 price tracker.

This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.