Running Selenium in headless mode without installing a browser in a Linux terminal environment

I’m working on a Python web scraping project that needs to run in a terminal-only environment. I’m using Ubuntu 22.04 through WSL and trying to scrape data from online marketplaces with Selenium in headless mode.

The issue I’m facing is confusing. According to what I read in the Selenium documentation, you only need the browser driver and don’t actually need the full browser installed on your system. However, when I try to run my script on Ubuntu, I get an error message saying Chrome isn’t installed.

This seems to contradict what the documentation suggests. Is there a way to make Selenium work in headless mode without having to install the actual browser? I really want to keep my server environment clean and only install what’s absolutely necessary for the scraping to work.

Yeah, I got confused by this too at first. WebDriver still needs the browser binary even in headless mode - it just runs without the GUI. When people say ‘no browser needed,’ they mean you don’t need a desktop environment or display server running, not that you can skip the browser itself. On Ubuntu, I’d use the chromium-browser package since it’s lighter than full Chrome and works great in headless mode. One heads-up for your setup: on Ubuntu 22.04 the chromium-browser package is a transitional wrapper that installs the snap build, which can be flaky under WSL unless systemd is enabled.

You’re hitting this because WebDriver still needs the actual browser, even in headless mode. The docs are confusing - they mean you don’t need a display or GUI, not that you can skip the browser entirely. I’ve hit the same problem deploying scrapers to cloud servers.

Just install a minimal browser package. On Ubuntu, go with google-chrome-stable and launch it with the --headless and --no-sandbox flags, or try chromium-browser since it has fewer dependency headaches on servers. Running the browser inside a Docker container (for example the selenium/standalone-chrome image) also works well if you want to keep the host system clean. Either way, you need the browser engine - it’s what handles JavaScript execution, DOM parsing, and all the web tech that makes modern sites actually work.
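To make that concrete, here’s a minimal sketch of locating whichever Chrome/Chromium binary is installed and pointing Selenium at it in headless mode. The candidate binary names and the find_browser/make_headless_driver helpers are my own illustration, and it assumes the selenium package (4.x) is installed via pip:

```python
import shutil

# Binary names vary by distro and package: plain Chromium vs. Google Chrome builds.
CANDIDATES = ["chromium-browser", "chromium", "google-chrome", "google-chrome-stable"]

def find_browser(candidates=CANDIDATES):
    """Return the full path of the first browser binary found on PATH, or None."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None

def make_headless_driver():
    """Build a headless Chrome/Chromium driver, failing loudly if no browser exists."""
    from selenium import webdriver  # requires `pip install selenium`

    binary = find_browser()
    if binary is None:
        raise RuntimeError("No Chrome/Chromium binary on PATH - headless mode still needs one")

    options = webdriver.ChromeOptions()
    options.binary_location = binary
    options.add_argument("--headless=new")  # plain "--headless" on older Chrome versions
    options.add_argument("--no-sandbox")    # commonly needed on servers and in containers
    return webdriver.Chrome(options=options)
```

Note that with Selenium 4.6+ the matching chromedriver is downloaded automatically by Selenium Manager, so the browser binary plus the selenium package is all you have to install yourself.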

Selenium + browsers on servers = nightmare. Been through dependency hell way too many times trying to keep environments clean.

But here’s what I learned - stop fighting Chrome/Chromium installs and their messy dependencies. Just move the browser stuff off your server entirely.

I switched most of my scraping to automation platforms that handle all the browser headaches. Your scraping logic runs on demand or scheduled, but you don’t deal with headless configs, driver versions, or any of that garbage on your Ubuntu box.

Replaced dozens of these setups already. Less maintenance, more reliable, and keeps your environment clean like you want.

For web scraping workflows without local browser management, Latenode works well: https://latenode.com

Indeed, the documentation can be misleading regarding the necessity of having a browser installed. Selenium’s WebDriver functions as an interface that requires a browser engine to operate; without the browser executable present, you will encounter exactly the error you describe. In a minimal environment like your Ubuntu setup, it is advisable to install Chromium instead of Chrome, as it is lighter and pulls in fewer dependencies. You can install it with ‘sudo apt install chromium-browser’. While headless mode eliminates the graphical interface, the browser engine is still required for JavaScript execution and DOM rendering.
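As a quick sanity check after installing, you can confirm from Python that a browser binary is actually visible on PATH before handing it to Selenium. This is just an illustrative helper (the browser_version name is my own, and the default binary name is an assumption that varies by distro):

```python
import shutil
import subprocess

def browser_version(name="chromium-browser"):
    """Return the browser's version string, or None if the binary isn't on PATH."""
    path = shutil.which(name)
    if path is None:
        return None
    # e.g. chromium-browser --version prints something like "Chromium 120.0.6099.129"
    result = subprocess.run([path, "--version"], capture_output=True, text=True)
    return result.stdout.strip()
```

If this returns None, Selenium will fail with the same ‘Chrome isn’t installed’ error regardless of any headless flags you pass.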