How do you actually keep AI-generated browser automation from breaking when a website redesigns its entire DOM?

I’ve been working with browser automation for a while now, and one thing that constantly frustrates me is how fragile scripts become after a site redesign. I had a workflow that extracted data from a product page and was working perfectly; then the client redesigned their site and the entire thing broke.

I started looking into how I could make workflows more resilient to these kinds of changes. I came across the idea of using AI to generate the automation based on plain language descriptions rather than rigid selectors and hardcoded logic. The theory is that if you describe what you’re trying to do (“extract the product price from whatever element contains it”) instead of coding it to a specific class name, the AI could adapt when the structure changes.
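To make the fragility concrete, here is a minimal sketch of the kind of extraction I mean. The page snippet and class name are hypothetical, but the failure mode is real: the logic is pinned to one class from the old design, so a rename silently kills it.

```python
import re

# Hypothetical product-page snippet; the extraction is pinned to one class name.
html = '<span class="price-tag">$19.99</span>'

# Brittle: matches only the exact class the old design used.
match = re.search(r'class="price-tag"[^>]*>([^<]+)<', html)
price = match.group(1) if match else None
print(price)  # $19.99 -- until a redesign renames the class
```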

Has anyone dealt with this? Are there ways to make browser automation workflows actually adapt when a site redesigns, or are we always going to be rewriting these things?

This is exactly where Latenode’s AI Copilot changes the game. Instead of writing brittle selectors, you describe what you need to extract in plain English. The Copilot generates a workflow that’s more resilient because it understands intent, not just DOM structure.

I had the same problem years ago. Now I describe the task (“get the current price from the product page”) and the AI builds logic that uses multiple strategies to find it, not just one selector. When a site redesigns, you often just need to run it again and let the AI adapt.
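The "multiple strategies" approach boils down to an ordered fallback chain: try the most precise heuristic first, then progressively looser ones. Here is a rough stdlib-only sketch of that idea; the function names and the sample markup are my own invention, not anything the platform generates.

```python
import re

def by_class(html):
    # Strategy 1: any class containing "price" (survives partial renames).
    m = re.search(r'class="[^"]*price[^"]*"[^>]*>\s*([^<]+?)\s*<', html)
    return m.group(1) if m else None

def by_itemprop(html):
    # Strategy 2: schema.org microdata often survives visual redesigns.
    m = re.search(r'itemprop="price"[^>]*content="([^"]+)"', html)
    return m.group(1) if m else None

def by_currency_pattern(html):
    # Strategy 3 (last resort): any currency-looking token in the text.
    m = re.search(r'[$€£]\s?\d+(?:\.\d{2})?', html)
    return m.group(0) if m else None

def extract_price(html):
    for strategy in (by_class, by_itemprop, by_currency_pattern):
        value = strategy(html)
        if value:
            return value
    return None

# Old design: the class-based strategy succeeds.
old = '<span class="product-price">$19.99</span>'
# Redesigned page: class renamed, but microdata is still there.
new = '<meta itemprop="price" content="19.99"><b class="amt">$19.99</b>'

print(extract_price(old))  # $19.99
print(extract_price(new))  # 19.99
```

The point isn’t these particular regexes; it’s that intent ("get the price") maps to several independent locators, so one redesign rarely takes out all of them at once.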

You can also layer in error handling and fallback logic that the platform helps you build visually. No need to rewrite everything from scratch.
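Those visual error-handling and fallback blocks reduce to a familiar pattern in code: retry transient failures a few times, then take a fallback path instead of crashing the whole run. A hedged sketch, where `flaky_fetch` stands in for any hypothetical workflow step:

```python
import time

def with_retry_and_fallback(step, fallback, attempts=3, delay=0.1):
    # Run `step` up to `attempts` times; on total failure, hand the last
    # error to `fallback` instead of letting the workflow die.
    last_error = None
    for _ in range(attempts):
        try:
            return step()
        except Exception as exc:  # in practice, catch specific errors
            last_error = exc
            time.sleep(delay)
    return fallback(last_error)

calls = {"n": 0}
def flaky_fetch():
    # Simulates a step that fails twice (e.g. slow page), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("page still loading")
    return "$19.99"

print(with_retry_and_fallback(flaky_fetch, lambda err: None))  # $19.99
```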

Check it out at https://latenode.com

This topic was automatically closed 6 hours after the last reply. New replies are no longer allowed.