Render.com Puppeteer Setup Issue: Chrome executable not found error

I’m having trouble getting Puppeteer to work on my Render.com deployment. Every time I try to run my Node.js app, I get this error message:

Browser executable not found at path: /opt/render/.cache/puppeteer/chrome
ChromeLauncher failed to start browser instance

My setup:

  • Node.js app deployed on Render.com
  • Using Puppeteer for web scraping tasks
  • Trying to visit basic sites like example.com and extract page titles
  • Tested with Puppeteer versions 21.0.0 and 19.7.5

Sample code that fails:

const puppeteer = require('puppeteer');

async function runWebScraper() {
  try {
    const browser = await puppeteer.launch({
      headless: true,
      args: ['--no-sandbox', '--disable-setuid-sandbox']
    });
    
    const newPage = await browser.newPage();
    await newPage.goto('https://example.com');
    
    const pageTitle = await newPage.title();
    console.log('Website title:', pageTitle);
    
    await browser.close();
    console.log('Success!');
  } catch (err) {
    console.error('Puppeteer failed:', err);
  }
}

runWebScraper();

What I already tried:

  • Added postinstall script to download Chrome binary
  • Set CHROME_BIN environment variable
  • Removed custom executablePath settings
  • Tried different Puppeteer versions

My package.json:

{
  "name": "web-scraper-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js",
    "postinstall": "npx puppeteer install"
  },
  "dependencies": {
    "express": "^4.18.2",
    "puppeteer": "^21.0.0"
  }
}

The Chrome browser seems to not install properly during the build process on Render. Has anyone successfully deployed Puppeteer on this platform? What configuration worked for you?

Hit this same problem on a recent migration. Chrome downloads fine during build but isn’t accessible when the container runs - it’s a Render filesystem permissions thing. Switched to @sparticuz/chromium with puppeteer-core and that fixed it. You get a statically linked Chromium binary that actually survives Render’s deployment. Just swap out your puppeteer dependency for those two packages, then update your launch config to use the bundled executable path. Way better than relying on system Chrome or build-time downloads. Everything bundles during npm install and stays accessible at runtime. Beats fighting Render’s build quirks.
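
For reference, a minimal sketch of that swap (assuming the current @sparticuz/chromium API - check the package README for your installed version):

// npm install puppeteer-core @sparticuz/chromium  (and drop the puppeteer dependency)
const puppeteer = require('puppeteer-core');
const chromium = require('@sparticuz/chromium');

async function launchBrowser() {
  // chromium.executablePath() resolves to the statically linked binary
  // shipped inside node_modules, so nothing depends on Render's cache dir.
  return puppeteer.launch({
    args: chromium.args,
    executablePath: await chromium.executablePath(),
    headless: chromium.headless,
  });
}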

had this exact problem last week. render’s buildpacks don’t play nice with Puppeteer’s Chrome download sometimes. try switching to puppeteer-extra instead - it handled the Chrome binary differently in my setup and worked for my deployment when regular puppeteer kept failing.
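
For anyone trying this, the swap is just the require line, since puppeteer-extra is a drop-in wrapper around the regular puppeteer API (a sketch - note it still pulls in puppeteer under the hood, so the Chrome download itself still has to succeed):

// npm install puppeteer-extra puppeteer
const puppeteer = require('puppeteer-extra');

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
  // ...same scraping logic as in the question...
  await browser.close();
})();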

Render needs Chrome dependencies explicitly installed through their build system. Hit this same issue two months ago - the postinstall script fails silently during deployment. Don’t rely on puppeteer install. Instead, modify your build command in the Render dashboard to manually install Chrome first. Go to service settings and set the build command to:

apt-get update && apt-get install -y wget gnupg && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' > /etc/apt/sources.list.d/google.list && apt-get update && apt-get install -y google-chrome-stable && npm install

Then update your launch options to use executablePath: '/usr/bin/google-chrome-stable'. This ensures Chrome installs before Puppeteer looks for it.

Been dealing with the same deployment nightmares for years. Chrome executable path issues are just one part of Puppeteer’s hosting platform mess.

You’re fighting the platform instead of working with it. Buildpack solutions work but break whenever the hosting environment changes.

I moved my scraping to Latenode workflows. Send your scraping needs via API and it handles browser management automatically. No more Chrome binary headaches, build timeouts, or memory issues.

For scraping example.com, you’d create a workflow that takes the URL and returns the page title. Your Render app just makes an HTTP request instead of running browsers locally.
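
From the Render side that would be something like the following (a hypothetical sketch - the endpoint URL and response shape depend entirely on the workflow you build; assumes Node 18+ for built-in fetch):

// Hypothetical endpoint - replace with your actual Latenode webhook URL.
(async () => {
  const res = await fetch('https://webhook.latenode.com/your-workflow-id', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: 'https://example.com' }),
  });
  const { title } = await res.json();
  console.log('Website title:', title);
})();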

Much cleaner setup. Your main app stays light while scraping runs on proper infrastructure.

Check it out: https://latenode.com

The Problem:

You’re encountering errors deploying your Puppeteer application on Render.com because the Chrome browser isn’t correctly installed or accessible during the build and runtime phases. The error message “Browser executable not found at path: /opt/render/.cache/puppeteer/chrome” indicates that Puppeteer can’t locate the Chrome binary needed to run.

Understanding the “Why” (The Root Cause):

Render’s build process and filesystem have limitations that can interfere with Puppeteer’s default behavior of downloading and managing the Chrome binary. The postinstall script in your package.json might fail silently during the Render build, and even if it succeeds, the resulting Chrome installation might not be properly accessible to your running application. Therefore, relying on Puppeteer’s automatic Chrome download is unreliable in this environment.
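
One quick sanity check (puppeteer.executablePath() is part of Puppeteer’s public API) is to print where Puppeteer expects the binary and compare it against what the build actually produced:

const puppeteer = require('puppeteer');

// Prints the path Puppeteer will try to launch, e.g.
// /opt/render/.cache/puppeteer/chrome/<platform>/chrome
console.log(puppeteer.executablePath());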

Step-by-Step Guide:

  1. Deploy with Docker: This is the most reliable solution to overcome Render’s build system limitations and ensure consistent access to the Chrome binary. This approach gives you full control over the environment.

    • Create a Dockerfile: In your project’s root directory, add a Dockerfile with the following content:
    FROM node:18-alpine
    
    # Install Chromium via apk, Alpine's package manager. Puppeteer's own
    # downloaded Chrome is built against glibc and won't run on Alpine's
    # musl libc, so the system Chromium has to be used instead.
    RUN apk add --no-cache chromium nss freetype harfbuzz ca-certificates ttf-freefont
    
    # Skip Puppeteer's Chrome download and point it at the system Chromium
    ENV PUPPETEER_SKIP_DOWNLOAD=true \
        PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
    
    WORKDIR /app
    
    COPY package*.json ./
    
    RUN npm install
    
    COPY . .
    
    CMD ["npm", "start"]
    
    • Build and Push the Docker Image: Build your Docker image with docker build -t your-image-name . (replace your-image-name with a suitable name), then push it to a registry such as Docker Hub or Google Container Registry with docker push your-registry/your-image-name.

    • Configure Render: In your Render dashboard, create a new web service and select “Docker” as the deployment method. Provide the details for your Docker image. Ensure your Render environment has sufficient resources to run the browser.
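
    • (Optional) Define the service in render.yaml: If you manage the service as a Render blueprint instead of through the dashboard, a minimal sketch could look like this (field names assume Render’s current blueprint spec - double-check against their docs):
    # render.yaml (hypothetical service name)
    services:
      - type: web
        name: web-scraper-app
        runtime: docker
        dockerfilePath: ./Dockerfile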

  2. (Alternative - Less Robust) Manual Chrome Installation: If you don’t want to use Docker, you can try installing Chrome manually during the Render build process. This is less reliable because it depends on Render’s system image and can break easily.

    • Modify your Render build command: Go to your Render service settings and add a custom build command to install Chrome before running npm install:
    apt-get update && apt-get install -y wget gnupg && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' > /etc/apt/sources.list.d/google.list && apt-get update && apt-get install -y google-chrome-stable && npm install
    
    • Update Puppeteer launch options: In your Node.js code, update your puppeteer.launch call to use the manually installed Chrome executable:
    const puppeteer = require('puppeteer');
    
    async function runWebScraper() {
      try {
        const browser = await puppeteer.launch({
          headless: true,
          executablePath: '/usr/bin/google-chrome-stable', // Specify the path
          args: ['--no-sandbox', '--disable-setuid-sandbox']
        });
        // ... rest of your code ...
      } catch (err) {
        console.error('Puppeteer failed:', err);
      }
    }
    
    runWebScraper();
    

Common Pitfalls & What to Check Next:

  • Dockerfile Issues: If using Docker, check your Dockerfile carefully for any errors in the Chrome installation steps. The version of Chrome you are installing might need adjustment based on your target architecture.
  • Render Resource Limits: Ensure that your Render instance has sufficient memory and CPU allocated to handle Chrome and your application. Insufficient resources can lead to crashes or slow performance.
  • Permissions: Ensure your application has the necessary permissions to access the Chrome executable, even in Docker. Check both file permissions and container privileges.
  • Build Logs: Always carefully review Render’s build logs to identify any errors or warnings during the build process. This is crucial for debugging Docker and manual installation approaches.

Still running into issues? Share your (sanitized) config files, the exact command you ran, and any other relevant details. The community is here to help!

This happens because Render’s build environment has filesystem limitations. I hit the same issue deploying my scraper six months back. Here’s what fixed it - add a Chrome buildpack and tweak your launch config. Throw this in your render.yaml:

buildCommand: apt-get update && apt-get install -y chromium-browser && npm install

Then update your puppeteer options:

const browser = await puppeteer.launch({
  headless: 'new',
  executablePath: '/usr/bin/chromium-browser',
  args: ['--no-sandbox', '--disable-setuid-sandbox', '--disable-dev-shm-usage']
});

The trick is pointing to the system Chrome instead of Puppeteer’s bundled version. Been running this setup for months without any crashes.

same thing happened to me last month. render’s chrome setup gets wonky sometimes. switch to puppeteer-core with a separately bundled chromium (like the @sparticuz/chromium package mentioned above) instead of regular puppeteer - fixed it for me after hours of frustration. also check your build logs to make sure chrome actually finishes downloading during deployment.

I’ve debugged this Chrome binary mess on Render way too many times. The platform just wasn’t built for browser automation.

Sure, you can make buildpacks and Docker configs work, but you’ll waste more time fixing deployments than actually building stuff. Every Render update breaks something.

I moved my scraping to Latenode after hitting the same walls. Instead of fighting Chrome installations, I built workflows that handle browsers externally. My Render app stays simple and just hits the Latenode API.

For scraping example.com, create a workflow that takes URLs and spits back page data. Your Node app makes a basic HTTP request instead of babysitting browser processes. No more executable path errors or memory crashes.

Much cleaner setup. Your main app handles business logic while scraping runs on infrastructure that’s actually designed for it.
