Chromium browser launch fails in Node.js Puppeteer after the SSH session disconnects

I’m encountering a strange problem with my Node.js application that uses Puppeteer to generate PDF documents from web pages. It operates perfectly while I’m connected via SSH, but once the SSH session is closed, Puppeteer starts giving me errors after successfully creating a few PDFs.

Here’s the error that appears:

Error: Failed to launch the browser process!

cmd_run.go:1285: WARNING: cannot start document portal: dial unix /run/user/1000/bus: connect: no such file or directory

/system.slice/pm2-ubuntu.service is not a snap cgroup

Below is my Puppeteer configuration:

const chromeInstance = await puppeteer.launch({
    headless: "new", 
    userDataDir: "./cache/" + sessionId, 
    executablePath: process.env.CHROME_PATH,
    args: [
        '--disable-features=IsolateOrigins,AudioServiceOutOfProcess', // merged: Chrome only honors the last --disable-features flag it receives
        '--disable-site-isolation-trials',
        '--autoplay-policy=user-gesture-required',
        '--disable-background-networking',
        '--disable-background-timer-throttling',
        '--disable-backgrounding-occluded-windows',
        '--disable-breakpad',
        '--disable-client-side-phishing-detection',
        '--disable-component-update',
        '--disable-default-apps',
        '--disable-dev-shm-usage',
        '--disable-domain-reliability',
        '--disable-extensions',
        '--disable-hang-monitor',
        '--disable-ipc-flooding-protection',
        '--disable-notifications',
        '--disable-offer-store-unmasked-wallet-cards',
        '--disable-popup-blocking',
        '--disable-print-preview',
        '--disable-prompt-on-repost',
        '--disable-renderer-backgrounding',
        '--disable-setuid-sandbox',
        '--disable-speech-api',
        '--disable-software-rasterizer',
        '--disable-sync',
        '--hide-scrollbars',
        '--ignore-gpu-blacklist',
        '--metrics-recording-only',
        '--mute-audio',
        '--no-default-browser-check',
        '--no-first-run',
        '--no-pings',
        '--no-sandbox',
        '--no-zygote',
        '--password-store=basic',
        '--use-gl=swiftshader',
        '--use-mock-keychain'
    ]
});

Has anyone experienced this issue with SSH sessions before? I’d appreciate any suggestions on how to resolve it.

I’ve encountered a similar issue when using Puppeteer on Ubuntu servers. The problem usually arises because Chrome tries to access display resources that are no longer available once your SSH session is closed. To mitigate this, adding --disable-gpu and --virtual-time-budget=5000 to your launch arguments can help. Including --disable-web-security and --single-process has also been reported to improve stability after SSH disconnection. Another workaround is setting the DISPLAY environment variable to :99 before launching Puppeteer, which oddly resolves some of the issues. You might also consider using systemd instead of PM2 to manage the process, since it tends to handle session isolation more effectively.
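For illustration, here is a rough sketch of those suggestions applied to the launch call from the question. The extra flags are this answer's suggestions rather than a guaranteed fix, and --disable-web-security weakens page isolation, so weigh that trade-off:

process.env.DISPLAY = ":99"; // the workaround mentioned above; omit if it introduces new errors

const chromeInstance = await puppeteer.launch({
    headless: "new",
    userDataDir: "./cache/" + sessionId,
    executablePath: process.env.CHROME_PATH,
    args: [
        // ... your existing arguments ...
        '--disable-gpu',              // skip GPU initialization in headless environments
        '--virtual-time-budget=5000', // suggested above; a virtual-time budget for headless rendering
        '--disable-web-security',     // reported to help, but reduces isolation
        '--single-process'            // reported to improve stability after disconnects
    ]
});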

use nohup to run your app instead of just closing SSH - ran into this exact problem on Debian. the process has to detach from the terminal properly. also double-check that PM2’s actually running in daemon mode. sometimes it still inherits session stuff even when it looks detached.
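For example, a minimal way to detach the process from the terminal, assuming your entry point is app.js (adjust the path and log file to your setup):

nohup node app.js > app.log 2>&1 &

With PM2, pm2 start app.js already daemonizes the process, but it is worth checking with pm2 status that the app keeps running after you log out.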

The Problem:

You’re experiencing errors with your Node.js application using Puppeteer to generate PDFs after your SSH session closes. The error message indicates a problem connecting to a user-specific D-Bus session, which is unavailable once the SSH session terminates. This suggests Puppeteer is trying to access resources tied to your user session, resources that no longer exist after SSH disconnection.

Understanding the “Why” (The Root Cause):

Puppeteer, when launching Chrome, often inherits environment variables and attempts to interact with system services associated with your active user session. When your SSH session closes, these resources become inaccessible, leading to the observed errors. This isn’t necessarily a bug in your code, but a consequence of how Puppeteer interacts with the underlying operating system and Chrome’s session management. The solution involves decoupling Puppeteer from your active SSH session.

Step-by-Step Guide:

  1. Migrate PDF Generation to a Dedicated System Process: The most robust solution is to run your Puppeteer script as a system-level service instead of directly within your SSH session. This ensures it is independent of your user session and keeps running after you disconnect. On Linux this typically means systemd or a similar service manager (see the unit file sketch after this list). This prevents the issue entirely, because the Puppeteer process is never tied to a short-lived SSH session.

  2. (Alternative, Less Robust) Environment Variable Manipulation: If migrating to a system service isn’t immediately feasible, you can try clearing the specific environment variables that Chrome may rely on for user session integration. Before calling puppeteer.launch(), add the following lines to your Node.js code:

process.env.XDG_RUNTIME_DIR = "";          // clear the per-user runtime directory so Chrome stops looking under /run/user/<uid>
process.env.DBUS_SESSION_BUS_ADDRESS = ""; // keep Chrome from dialing the (now missing) session D-Bus
process.env.DISPLAY = ":0";                // may need adjustment for your system; consider omitting if it causes additional errors
  3. (Alternative, Less Robust) Add Chrome Arguments: Add these arguments to your puppeteer.launch() configuration. These arguments force Chrome to ignore certain user session-related features.
args: [
    // ... your existing arguments ...
    '--disable-extensions-http-throttling',
    '--disable-background-timer-throttling', // already in your config; listed here for completeness
    '--dbus-stub',      // especially relevant for the D-Bus related errors
    '--no-sandbox',     // already in your config; disables Chrome's sandbox, so weigh the security risk
    '--disable-gpu'     // worth adding for headless environments
],
  4. (Alternative, Less Robust) Verify PM2 Configuration: If you’re using PM2, make sure it’s running as a system service, not a user service. The command pm2 startup followed by pm2 save will attempt to configure PM2 to persist across reboots. Ensure PM2 is correctly registered as a system daemon; if it is still inheriting your user session, the previous steps might not fully solve the problem.
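As a rough illustration of step 1, a minimal systemd unit might look like the sketch below. The service name, user, paths, and CHROME_PATH value are placeholders for this example; adjust them to your project before using it.

[Unit]
Description=Puppeteer PDF generation service
After=network.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/pdf-service
Environment=CHROME_PATH=/usr/bin/chromium-browser
ExecStart=/usr/bin/node index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target

Saved as /etc/systemd/system/pdf-service.service (a hypothetical name), it would be enabled with sudo systemctl enable --now pdf-service, after which the Node process no longer depends on any login session.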

Common Pitfalls & What to Check Next:

  • Systemd Configuration (If Applicable): If using systemd, ensure your service file correctly specifies the user and working directory. It should run under a user that has the necessary permissions to access the required resources (including the directory where you store the cache folders). Incorrect configurations could still lead to unexpected behavior.
  • Permissions: Verify that your Node.js process and the user running the Puppeteer instance have the necessary read/write permissions for the userDataDir specified in your Puppeteer configuration (see the sketch after this list).
  • Resource Limits: If your server has resource limits, it’s possible that the process is being killed before your PDF generation is complete. Review and adjust any relevant ulimit settings.
  • Alternative Solutions: If these solutions do not resolve the issue, consider employing an alternative approach, like using a dedicated automation platform, as mentioned in another answer.
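To go with the Permissions point above, here is a small sketch that confirms the userDataDir is readable and writable before launching. The directory path mirrors the question’s config, and sessionId is assumed to come from your own code:

const fs = require('fs');
const path = require('path');

// Same per-session directory the launch call uses
const userDataDir = path.join('./cache', sessionId);

// Create it if missing, then verify the current user can read and write it
fs.mkdirSync(userDataDir, { recursive: true });
fs.accessSync(userDataDir, fs.constants.R_OK | fs.constants.W_OK);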

Still running into issues? Share your (sanitized) config files, the exact command you ran, and any other relevant details. The community is here to help!

This Chrome session issue is a nightmare that won’t die no matter what flags you use. I wasted months messing with environment variables and Chrome arguments for the same problems.

Here’s the thing - you’re jamming a desktop browser into a headless server setup. Chrome updates constantly and breaks session handling every time.

I ditched the Chrome headaches and moved everything to a dedicated platform. Now my app just sends requests to an automation workflow that cranks out PDFs without drama.

The workflow runs completely separate from SSH sessions. Chrome gets its own isolated environment built for headless work. No more random failures when connections drop.

Plus I can queue multiple PDF jobs that run in parallel without eating each other’s memory. Way better than babysitting Puppeteer instances.

Latenode handles all the browser management pain: https://latenode.com

Chrome’s trying to access user session stuff that disappears when SSH cuts out. Had this exact issue with Puppeteer on EC2 through PM2. Here’s what actually fixed it for me: add --disable-extensions-http-throttling and --disable-background-timer-throttling to your Chrome args, then set these env vars before starting Node: export DISPLAY=:0, export XDG_RUNTIME_DIR="", and export DBUS_SESSION_BUS_ADDRESS="". The big one: configure PM2 as a system service, not a user service. Run pm2 startup and pm2 save to make your processes persist at the system level. This removes the session dependency that breaks Chrome when SSH drops. That document portal warning goes away once XDG_RUNTIME_DIR is unset; it forces Chrome to skip the session integration entirely.
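For reference, one way to attach those environment variables to the PM2-managed process is an ecosystem file, so the values travel with the app instead of depending on your shell. A minimal sketch, with the app name and script path as placeholders:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'pdf-service',   // placeholder name
      script: './index.js',  // placeholder entry point
      env: {
        DISPLAY: ':0',                 // as suggested above; may need adjustment
        XDG_RUNTIME_DIR: '',           // stop Chrome from looking under /run/user/<uid>
        DBUS_SESSION_BUS_ADDRESS: ''   // stop Chrome from dialing the session D-Bus
      }
    }
  ]
};

Started with pm2 start ecosystem.config.js and combined with pm2 startup and pm2 save as described above, the process no longer inherits these values from your login session.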

Looks like a dbus session issue. That /run/user/1000/bus error happens when SSH disconnects. Add --dbus-stub to your Chrome args and set DBUS_SESSION_BUS_ADDRESS="" before launching Puppeteer. Fixed it for me on CentOS servers.

This happens because Chrome tries to connect to desktop services that aren’t available after your SSH session ends. I’ve seen the exact same thing when running Puppeteer in Docker containers spawned during SSH sessions. Chrome inherits session-specific environment variables that point to user session resources that no longer exist. Here’s what fixed it for me: unset these environment variables before launching the browser. Set process.env.XDG_SESSION_TYPE = '' and process.env.XDG_SESSION_CLASS = '' in your Node app before calling puppeteer.launch(). Also add --disable-features=VizDisplayCompositor to your Chrome arguments. That document portal warning means Chrome’s trying to talk to desktop integration services. The cleanest fix is running your app through a proper init system instead of within an SSH session. Try launching your Node process through a systemd service file instead of manually via SSH if you can.
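A compact sketch of what this answer describes, applied just before the launch call (whether clearing these particular variables helps will depend on your environment):

// Clear the session-specific variables mentioned above before Chrome starts
process.env.XDG_SESSION_TYPE = '';
process.env.XDG_SESSION_CLASS = '';

const browser = await puppeteer.launch({
    headless: "new",
    executablePath: process.env.CHROME_PATH,
    args: [
        // ... your existing arguments ...
        // note: if you already pass --disable-features, merge the values into a single flag
        '--disable-features=VizDisplayCompositor'
    ]
});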

This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.