Why does page.evaluate not find elements that work in browser console?

I’m learning how to use Puppeteer and ran into a confusing issue. I can run a selector in Chrome’s developer console and it works perfectly, but when I use the same selector inside page.evaluate(), it comes back empty — querySelector returns null, so reading .textContent throws an error.

Here’s what works in the browser console:

document.querySelector('.video-stats .view-counter span.count-text').textContent

But this Puppeteer code fails to find the same element:

const puppeteer = require('puppeteer')

const scrapeData = async () => {
  const browserInstance = await puppeteer.launch()
  const newPage = await browserInstance.newPage()

  await newPage.goto('https://example-video-site.com/watch?v=abc123')

  // fixed two-second delay before querying the DOM
  await newPage.waitFor(2000)

  const output = await newPage.evaluate(() => {
    const viewCount = document.querySelector('.video-stats .view-counter span.count-text').textContent
    return { viewCount }
  })

  await browserInstance.close()
  return output
}

scrapeData().then(data => {
  console.log(data)
})

I eventually found a workaround, but I really want to understand why the evaluate function behaves differently from the browser console. What am I missing here?

Dynamic content loading could be interfering with your selector. Many video sites use client-side rendering, where elements get populated after the initial DOM loads. Your evaluate function runs in the browser context, but it may execute before the JavaScript that populates those specific elements has finished running. I encountered similar problems when scraping social media sites: the DOM structure exists, but the actual content gets injected later via AJAX calls or React components.

Try adding await newPage.waitForFunction(() => document.querySelector('.video-stats .view-counter span.count-text')?.textContent) before your evaluate call. This waits until the element actually contains text content, rather than just existing in the DOM.

Also worth checking whether the site uses shadow DOM or iframes for that particular section, which would require different selectors than what works in the console.
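A minimal sketch of how that check could slot into the script from your question (same placeholder URL and selector; I'm passing the selector in as an argument so it isn't repeated):

const puppeteer = require('puppeteer')

const scrapeData = async () => {
  const browserInstance = await puppeteer.launch()
  const newPage = await browserInstance.newPage()
  await newPage.goto('https://example-video-site.com/watch?v=abc123')

  const selector = '.video-stats .view-counter span.count-text'

  // Poll until the element exists AND has text content, instead of sleeping.
  // Arguments after the options object are forwarded to the page function.
  await newPage.waitForFunction(
    sel => document.querySelector(sel)?.textContent,
    {},
    selector
  )

  const output = await newPage.evaluate(
    sel => ({ viewCount: document.querySelector(sel).textContent }),
    selector
  )

  await browserInstance.close()
  return output
}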

Timing issue, most likely. Even though you have waitFor(2000), that doesn't guarantee the element has actually loaded. Try using waitForSelector() instead of the fixed delay, something like await newPage.waitForSelector('.video-stats .view-counter span.count-text') before your evaluate call. By the time you test in the browser console, everything has already loaded, but Puppeteer might query the DOM too early.
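Rough sketch of what I mean (same placeholder URL as your question). waitForSelector also returns an ElementHandle, so you can read the text straight off it:

const puppeteer = require('puppeteer')

const scrapeData = async () => {
  const browserInstance = await puppeteer.launch()
  const newPage = await browserInstance.newPage()
  await newPage.goto('https://example-video-site.com/watch?v=abc123')

  // Waits until the element is in the DOM (default timeout is 30s),
  // then hands back a handle to it.
  const handle = await newPage.waitForSelector('.video-stats .view-counter span.count-text')
  const viewCount = await handle.evaluate(el => el.textContent)

  await browserInstance.close()
  return { viewCount }
}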

The issue stems from context differences between manual browsing and automated scraping. When you’re manually testing in the browser console, the page has fully rendered and any dynamic content loaded through JavaScript has finished executing. With Puppeteer, your script might be executing before all the dynamic elements are properly rendered to the DOM.

Another possibility is that the website detects automated browsing and serves different content, or loads elements differently, when it identifies Puppeteer. Some sites implement bot detection that can alter the page structure. You might want to try setting a realistic user agent string with await newPage.setUserAgent() to make your requests appear more like a regular browser.

Also consider that some elements might be loaded conditionally, based on user interactions or viewport conditions that aren’t replicated in your automated script.
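If you want to test the bot-detection theory, here is a quick sketch. The user agent string below is just an example of a desktop Chrome UA, not something Puppeteer provides; swap in a current one:

const puppeteer = require('puppeteer')

const scrapeData = async () => {
  const browserInstance = await puppeteer.launch()
  const newPage = await browserInstance.newPage()

  // Example desktop Chrome user agent; replace with a current string.
  await newPage.setUserAgent(
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
  )

  // Puppeteer defaults to an 800x600 viewport; widen it in case the
  // element only renders at desktop sizes.
  await newPage.setViewport({ width: 1366, height: 768 })

  await newPage.goto('https://example-video-site.com/watch?v=abc123')
  // ...then the same waitForSelector/evaluate logic suggested above
}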