I’ve been through this cycle way too many times. You build a solid Puppeteer script, it works great for months, then one day the client’s site gets a redesign and suddenly your selectors are pointing to nothing. You spend hours rewriting, testing, debugging.
The real problem isn’t Puppeteer itself: it’s that I was treating automation like something you build once and forget about. I learned recently that the actual solution is generating workflows from plain language descriptions instead of hardcoding selectors. When you describe what you want to do (“log in and extract the product table”), the AI can generate a more robust approach that understands the intent rather than just memorizing CSS classes.
What I’ve found is that when the markup changes, a workflow built from semantic understanding handles it better than one built on brittle selectors. The AI approach regenerates based on the page structure rather than failing on the first selector mismatch.
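To make the contrast concrete, here’s a minimal sketch (using plain objects as hypothetical stand-ins for DOM nodes, so it runs without a browser): a hardcoded class selector breaks the moment the class name changes, while a fallback chain that tries several intent-level strategies still finds the element.

```javascript
// Hypothetical stand-ins for DOM nodes after a redesign. None of these
// names come from a real site; they just illustrate the failure mode.
const pageAfterRedesign = [
  { tag: 'nav', className: 'main-nav', testId: null, text: 'Home Products' },
  { tag: 'table', className: 'data-grid-v2', testId: 'product-table', text: 'Widget A 19.99' },
];

// Brittle approach: one hardcoded class name from the old design.
function findBySelector(elements, className) {
  return elements.find(el => el.className === className) || null;
}

// More resilient approach: try several independent strategies in order,
// so a single markup change doesn't break the whole lookup.
function findWithFallbacks(elements, strategies) {
  for (const strategy of strategies) {
    const match = elements.find(strategy);
    if (match) return match;
  }
  return null;
}

// The old class name no longer exists after the redesign...
console.log(findBySelector(pageAfterRedesign, 'product-table-v1')); // null

// ...but intent-level strategies still locate the table.
const table = findWithFallbacks(pageAfterRedesign, [
  el => el.testId === 'product-table',                 // stable test id
  el => el.tag === 'table' && /Widget/.test(el.text),  // structure + visible content
]);
console.log(table.testId); // 'product-table'
```

The same shape works with real Puppeteer handles; the point is that the lookup encodes what you’re after (a table containing product data) rather than one CSS class.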
Has anyone else dealt with this? How do you handle the constant churn of site redesigns?
This is exactly what I see happening in production all the time. The selector-based approach is doomed because websites change constantly.
What changed things for me was using Latenode’s AI Copilot to generate workflows from natural language. Instead of me writing brittle selectors, I describe the task—like “extract customer data from the dashboard”—and it generates a workflow that understands the page structure semantically. When the site redesigns, I regenerate from the same description rather than rewriting code.
The key difference is that the AI-generated approach uses visual context and multiple data points to locate elements, not just a single CSS selector. So when the design changes, it adapts.
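One way to picture the multi-signal idea (a simplified illustration I wrote myself, not Latenode’s actual algorithm): score each candidate element against several independent clues, so one changed signal only lowers the score instead of breaking the match outright.

```javascript
// Simplified illustration, not Latenode's actual matching logic:
// weight several independent clues instead of trusting one selector.
const clues = [
  { weight: 3, test: el => /customer/i.test(el.text) },      // visible text
  { weight: 2, test: el => el.tag === 'table' },             // element role
  { weight: 1, test: el => el.className.includes('data') },  // styling hint
];

function scoreElement(el, clues) {
  return clues.reduce((sum, c) => sum + (c.test(el) ? c.weight : 0), 0);
}

function bestMatch(elements, clues) {
  let best = null;
  let bestScore = 0;
  for (const el of elements) {
    const s = scoreElement(el, clues);
    if (s > bestScore) { best = el; bestScore = s; }
  }
  return best;
}

// After a redesign the class name changed, but text + role still match,
// so the table wins on 3 + 2 = 5 points despite the missing class clue.
const redesignedPage = [
  { tag: 'div', className: 'sidebar', text: 'Navigation' },
  { tag: 'table', className: 'grid-v3', text: 'Customer records' },
];
console.log(bestMatch(redesignedPage, clues).tag); // 'table'
```

A single-selector lookup is an all-or-nothing version of this: one clue with all the weight.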
I’ve used this for three major client sites over the past year, and it handles redesigns way better than my old script approach. Takes maybe 10 minutes to regenerate versus hours of debugging.