Selenium headless mode results in access denial while standard mode works correctly

I’m encountering issues when trying to scrape a website using Selenium in headless mode. When I run Chrome visibly, everything functions perfectly, and I can successfully retrieve the page content. However, once I switch to headless mode, I get an “Access Denied” error from the site.

Working code (visible mode):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait

driver_options = Options()
driver = webdriver.Chrome(options=driver_options)
driver.maximize_window()
wait_handler = WebDriverWait(driver, 30)
target_url = "https://example-finance-site.com/stock-data"
driver.get(target_url)
wait_handler.until(lambda d: d.execute_script('return document.readyState') == "complete")
print(driver.page_source)

Failing code (headless mode):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait

driver_options = Options()
# The three arguments below are the only difference from the working script
driver_options.add_argument('--headless')
driver_options.add_argument('--no-sandbox')
driver_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=driver_options)
driver.maximize_window()
wait_handler = WebDriverWait(driver, 30)
target_url = "https://example-finance-site.com/stock-data"
driver.get(target_url)
wait_handler.until(lambda d: d.execute_script('return document.readyState') == "complete")
print(driver.page_source)

In headless mode, I receive a page saying access is denied instead of the actual webpage content. What could be causing this difference in functionality between the two modes?

Yeah, this is a common headache with Selenium. The site is probably detecting headless mode through window properties. Try adding the --disable-web-security and --disable-features=VizDisplayCompositor flags too. Also, some sites check for navigator.webdriver, so you might want to disable that as well.
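
Here's a minimal sketch of that suggestion (untested; whether these flags actually get past the block depends on what the site checks):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

driver_options = Options()
driver_options.add_argument('--headless')
driver_options.add_argument('--disable-web-security')
driver_options.add_argument('--disable-features=VizDisplayCompositor')
driver = webdriver.Chrome(options=driver_options)
# Hide navigator.webdriver on every new document, not just the current one
driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
    'source': "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"})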

This happens because many websites implement bot detection mechanisms that specifically target headless browsers. The site you’re trying to access likely uses JavaScript-based detection that checks for various browser properties that differ between headless and regular Chrome instances.
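
You can confirm which properties differ by printing them in each mode (a quick diagnostic sketch; exact values vary by Chrome version):

# Run this once with and once without --headless and compare the output
print(driver.execute_script('return navigator.webdriver'))
print(driver.execute_script('return navigator.userAgent'))  # old headless reports "HeadlessChrome"
print(driver.execute_script('return navigator.plugins.length'))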

I’ve faced similar issues and found success by adding user agent spoofing and window size configuration to make the headless browser appear more like a regular one. Try adding these arguments to your headless configuration:

driver_options.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36')
driver_options.add_argument('--window-size=1920,1080')
driver_options.add_argument('--disable-blink-features=AutomationControlled')
# execute_script only patches the current document and the override is lost on
# navigation, so register it to run on every new page instead (after creating
# the driver, before get()):
driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
    'source': "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"})

Finance sites are particularly aggressive with their anti-bot measures, so you might need to experiment with different combinations of these flags. Sometimes adding random delays between requests also helps avoid detection patterns.
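
For the delays, even a tiny helper like this is enough (polite_get is just an illustrative name, not a Selenium API; tune the bounds to the site):

import random
import time

def polite_get(driver, url, min_wait=2.0, max_wait=6.0):
    # Sleep a random interval before each request so the cadence isn't fixed
    time.sleep(random.uniform(min_wait, max_wait))
    driver.get(url)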

I ran into this exact problem last month when scraping financial data. The issue stems from how headless Chrome reports certain browser properties that anti-bot systems can easily detect. What worked for me was setting the viewport and disabling automation indicators before navigating to the page. Add these lines after creating your driver but before the get() call:

driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
    'source': 'delete window.cdc_adoQpoasnfa76pfcZLmcfl_Array; '
              'delete window.cdc_adoQpoasnfa76pfcZLmcfl_Promise; '
              'delete window.cdc_adoQpoasnfa76pfcZLmcfl_Symbol;'})
driver.set_window_size(1366, 768)

Also try adding --disable-extensions and --disable-plugins-discovery to your options. Some sites check for the automation artifacts that ChromeDriver injects into the page. The CDP command removes detection variables that many financial sites specifically look for.
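
Putting the pieces from this thread together, a complete headless setup might look like this (an untested sketch: the user-agent string and window size are arbitrary examples, and newer Chrome versions also accept --headless=new, which behaves much more like regular Chrome):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait

driver_options = Options()
driver_options.add_argument('--headless')
driver_options.add_argument('--no-sandbox')
driver_options.add_argument('--disable-dev-shm-usage')
driver_options.add_argument('--window-size=1920,1080')
driver_options.add_argument('--disable-blink-features=AutomationControlled')
driver_options.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                            'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36')
driver = webdriver.Chrome(options=driver_options)

# Patch every new document before any site script runs
driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
    'source': "Object.defineProperty(navigator, 'webdriver', {get: () => undefined});"
              "delete window.cdc_adoQpoasnfa76pfcZLmcfl_Array;"
              "delete window.cdc_adoQpoasnfa76pfcZLmcfl_Promise;"
              "delete window.cdc_adoQpoasnfa76pfcZLmcfl_Symbol;"})

wait_handler = WebDriverWait(driver, 30)
driver.get("https://example-finance-site.com/stock-data")
wait_handler.until(lambda d: d.execute_script('return document.readyState') == "complete")
print(driver.page_source)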