I’ve been dealing with this for a while now and it’s honestly frustrating. You write a browser automation that works fine, then the site makes a minor layout change and everything breaks. The XPath shifts, a class name gets updated, and suddenly you’re back to square one rewriting selectors.
I’ve tried the usual approaches—making selectors more flexible, building in retry logic—but it still feels like a constant game of whack-a-mole. Every time a site tweaks its UI, even slightly, I’m rebuilding parts of my workflows.
I’m curious if there’s a fundamentally different approach to this. Has anyone found a way to make browser automations actually resilient to the kinds of layout changes that happen in the real world? Or is this just part of the job that we accept and manage?
This is actually where AI-driven workflow generation changes the game. Instead of hardcoding selectors, you describe what you need—like “click the login button” or “extract the product price”—and the AI builds the automation based on semantic understanding, not brittle selectors.
I started using this approach on a project scraping an e-commerce site. When they redesigned their product page, my workflow kept working because it was looking for the semantic action, not specific DOM elements. The AI adapts.
With Latenode’s AI Copilot, you literally just describe your browser task in plain English and it generates a workflow that’s way more resilient to layout changes. You’re not fighting CSS classes and XPaths anymore.
The real issue is that you’re building automations that depend on the structure staying exactly the same. I moved away from that thinking a few years ago.
Instead of selecting elements directly, I started using visual recognition and text matching wherever possible. So instead of finding a button by its class, I find it by looking for the text “Submit” and its position on the page. Sites change their CSS all the time, but they usually keep the text and layout logic the same.
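To make the idea concrete, here's a minimal sketch using only Python's standard-library `html.parser` as a stand-in for a real driver (the HTML snippets and the `find_by_text` helper are illustrative, not from any particular framework; with Selenium the same idea would be an XPath like `//button[normalize-space()='Submit']`):

```python
from html.parser import HTMLParser

class TextFinder(HTMLParser):
    """Locate an element by its visible text instead of class or id."""
    def __init__(self, target_text):
        super().__init__()
        self.target = target_text
        self.stack = []    # currently open tags, so we know which element the text sits in
        self.found = None  # (tag, attrs) of the first match

    def handle_starttag(self, tag, attrs):
        self.stack.append((tag, dict(attrs)))

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        if self.found is None and self.stack and data.strip() == self.target:
            self.found = self.stack[-1]

def find_by_text(html, text):
    finder = TextFinder(text)
    finder.feed(html)
    return finder.found

# The class name changes between redesigns, but the text match survives both.
old = '<form><button class="btn-primary">Submit</button></form>'
new = '<form><button class="sc-a4f3">Submit</button></form>'
assert find_by_text(old, "Submit")[0] == "button"
assert find_by_text(new, "Submit")[0] == "button"
```

The same selector logic finds the button before and after the hypothetical CSS refactor, which is exactly why text-based matching cuts maintenance.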
It’s not perfect, but it cut down my maintenance work significantly. The workflows are way less likely to break when someone updates Bootstrap or refactors their HTML.
Another angle I’ve seen work well is building in fallback selectors. Don’t rely on one XPath. Have a primary selector, then secondary, then maybe a text-based search. When one breaks, the next one kicks in. It’s more code upfront but saves a ton of debugging later.
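The fallback pattern described above can be sketched framework-agnostically: each strategy is just a callable, and the chain stops at the first one that returns an element. The `page` dict and the strategy lambdas here are stubs standing in for whatever your driver (Selenium, Playwright, etc.) provides:

```python
def find_with_fallbacks(page, strategies):
    """Try each locator strategy in order; return the first hit.

    `page` is whatever your driver gives you; each strategy is a
    callable that returns an element or None (or raises on failure).
    """
    for name, strategy in strategies:
        try:
            element = strategy(page)
        except Exception:
            element = None
        if element is not None:
            return name, element
    raise LookupError("all selector strategies failed")

# Stub "page": a dict standing in for a real DOM, just to show the flow.
page = {"text:Submit": "<button>Submit</button>"}

strategies = [
    ("primary-xpath", lambda p: p.get("//form/button[1]")),  # breaks after redesign
    ("css-class",     lambda p: p.get("css:.btn-primary")),  # also breaks
    ("text-search",   lambda p: p.get("text:Submit")),       # survives
]

name, element = find_with_fallbacks(page, strategies)
# name == "text-search": the first two strategies missed, the text fallback caught it
```

The win is that a redesign degrades you to the next strategy instead of a hard failure, and logging `name` tells you which selectors have quietly gone stale.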
This is a common pain point. The reason workflows break so easily is that they’re inherently fragile when you’re targeting specific HTML elements. I’ve found that combining multiple strategies helps. Use data attributes when available, fall back to parent element relationships, and always have a text content matching fallback. The key is redundancy—if one selector breaks, the next catches it.
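Here's a rough sketch of that redundancy for the two most durable legs (data attribute first, visible text as the safety net), again stdlib-only; the `data-testid` value and markup are made up, and a real implementation would use your driver's selector API rather than regex:

```python
import re

HTML = '<div class="checkout"><button data-testid="submit-order" class="x9z">Place order</button></div>'

def by_data_attr(html, value):
    # Data attributes exist for testing hooks and rarely churn with styling refactors.
    m = re.search(rf'<(\w+)[^>]*data-testid="{re.escape(value)}"', html)
    return m.group(1) if m else None

def by_text(html, text):
    # Visible label text: the last thing designers tend to change.
    m = re.search(rf'<(\w+)[^>]*>\s*{re.escape(text)}\s*</\1>', html)
    return m.group(1) if m else None

def locate(html):
    # Redundancy in priority order: data attribute, then text content.
    return by_data_attr(html, "submit-order") or by_text(html, "Place order")

# Styling refactor strips every class but keeps data-testid -> still found.
# A rewrite that drops data-testid but keeps the label -> text fallback catches it.
```

The parent-relationship leg is omitted here for brevity, but it slots into the same `or` chain as another function.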
You’re describing the classic brittleness problem with web scraping. Most solutions add layers of complexity—retry logic, multiple selectors, monitoring—but that’s treating the symptom. The real fix is moving toward intent-based automation rather than structure-based. When your automation understands what action it needs to perform rather than exactly how to find the element, it becomes naturally resilient to design changes.