I’ve been dealing with this problem for months now. We set up a workflow to scrape product data from a vendor site, and it worked fine for a few weeks. Then they updated their UI, and the entire thing broke. We had to manually rewrite selectors and adjust the flow.
I’ve heard that plain language workflow generation is supposed to handle this better because it adapts to UI changes, but I’m skeptical. Has anyone actually used this approach where you describe what you want the automation to do in plain English, and then it survives a site redesign without constant maintenance?
The way I understand it, if you describe something like “extract the product name from the top card,” the system might be smart enough to find it even if the HTML structure changes slightly. But does that actually work in practice, or do you still end up troubleshooting when things break?
What’s your experience with keeping automations stable when sites update?
This is exactly what AI Copilot Workflow Generation handles really well. Instead of writing brittle selectors, you describe what you want in plain language: “click the login button” or “extract product prices.” The AI generates the workflow, and here’s the key part: it understands intent, not just DOM structure.
When a site updates, you don’t rewrite selectors; you update the plain language description if needed, and the workflow adapts. We’ve seen teams reduce maintenance time by 60% with this approach because they’re not constantly patching broken selectors.
The real benefit is that you’re working with meaning, not fragile element paths. If a button moves or gets renamed, the AI can still find it because it’s looking for the function, not the exact HTML.
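To make the "function, not exact HTML" idea concrete, here's a minimal sketch using only the Python standard library. The names and markup are hypothetical; the point is that one description ("log in") resolves against two very different page structures because matching happens on visible text, not on the element path:

```python
from html.parser import HTMLParser

class ButtonFinder(HTMLParser):
    """Collects clickable elements along with their visible text."""
    def __init__(self):
        super().__init__()
        self._stack = []     # currently-open clickable tags
        self.buttons = []    # finished elements: {"tag", "attrs", "text"}

    def handle_starttag(self, tag, attrs):
        if tag in ("button", "a"):
            self._stack.append({"tag": tag, "attrs": dict(attrs), "text": ""})

    def handle_data(self, data):
        if self._stack:
            self._stack[-1]["text"] += data

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1]["tag"] == tag:
            self.buttons.append(self._stack.pop())

def find_by_intent(html, description):
    """Return the first clickable element whose visible text matches the intent."""
    finder = ButtonFinder()
    finder.feed(html)
    wanted = description.lower()
    for el in finder.buttons:
        if wanted in el["text"].lower():
            return el
    return None

# Original markup: a <button> nested deep inside a header nav.
v1 = '<div id="hdr"><nav><button class="btn-3">Log in</button></nav></div>'
# After the redesign: an <a> styled as a button, new classes, new position.
v2 = '<footer><a class="cta primary" href="/auth">Log in</a></footer>'

assert find_by_intent(v1, "log in")["tag"] == "button"
assert find_by_intent(v2, "log in")["tag"] == "a"
```

A selector like div#hdr nav button.btn-3 dies with the redesign; the intent lookup survives because both versions still say "Log in" to the user.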
I ran into the same wall last year. The problem with traditional selectors is they’re too specific. When I switched to describing automations in terms of what they do rather than how to find elements, things got a lot more stable.
What helped most was using a headless browser that can take screenshots and actually see the page the way a user would. That gives the automation more context: instead of looking for div.product-card > span.price, it understands “find the price on any product card” visually.
It’s not perfect, but it’s way more resilient than maintaining a massive list of XPath selectors. The maintenance overhead dropped significantly once I stopped overthinking the HTML structure.
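The post above describes visual matching; as a simplified textual analogue (my sketch, not anything from a specific tool), you can get much of the same resilience by asking "what looks like a price, anywhere on the card?" instead of hard-coding div.product-card > span.price:

```python
import re
from html.parser import HTMLParser

# Anything resembling a US-dollar amount, e.g. "$19.99" or "$1,299".
PRICE_RE = re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?")

class PriceScanner(HTMLParser):
    """Collects every text node; we deliberately ignore where the text lives."""
    def __init__(self):
        super().__init__()
        self.texts = []

    def handle_data(self, data):
        if data.strip():
            self.texts.append(data.strip())

def find_price(html):
    """Return the first thing that looks like a price, wherever it appears."""
    scanner = PriceScanner()
    scanner.feed(html)
    for text in scanner.texts:
        match = PRICE_RE.search(text)
        if match:
            return match.group()
    return None

# Old layout: price sits in span.price inside div.product-card.
old = '<div class="product-card"><span class="price">$19.99</span></div>'
# New layout: card renamed, price moved into a marketing sentence.
new = '<section class="tile"><p>Now only $19.99!</p></section>'

assert find_price(old) == "$19.99"
assert find_price(new) == "$19.99"
```

The pattern-based approach keeps working through the renames and restructuring that would break the path-based selector outright.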
I think the issue is that most automation tools are too rigid. They’re basically recording a sequence of clicks and waits, so any change breaks everything. What I found more effective is building workflows that understand the actual goal, not just the mechanics.
When you frame it around intent—like “fill out the form fields in order” rather than “click element 42, type into element 43”—you get something that can handle minor layout shifts. The automation still knows what it’s trying to accomplish, so it can adjust when the page structure changes slightly.
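One way to sketch that intent framing, under the assumption that the form uses standard label-for associations (the field names and ids here are made up): address each field by its label, so reordering or renumbering the inputs changes nothing.

```python
from html.parser import HTMLParser

class FormMap(HTMLParser):
    """Pairs <label for="..."> text with input ids, so fields are addressed
    by what they mean to a user, not by their position in the DOM."""
    def __init__(self):
        super().__init__()
        self._label_for = None
        self.labels = {}   # label text (lowercased) -> input id
        self.inputs = {}   # input id -> input name

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label":
            self._label_for = attrs.get("for")
        elif tag == "input" and "id" in attrs:
            self.inputs[attrs["id"]] = attrs.get("name", attrs["id"])

    def handle_data(self, data):
        if self._label_for and data.strip():
            self.labels[data.strip().lower()] = self._label_for

    def handle_endtag(self, tag):
        if tag == "label":
            self._label_for = None

def plan_fill(html, values):
    """Map human field names ('Email', 'Password') to concrete input names."""
    form = FormMap()
    form.feed(html)
    plan = {}
    for label, value in values.items():
        input_id = form.labels.get(label.lower())
        if input_id in form.inputs:
            plan[form.inputs[input_id]] = value
    return plan

# In the redesign the ids and field order changed; the labels did not.
page = '''<form>
  <label for="f2">Password</label><input id="f2" name="pwd" type="password">
  <label for="f1">Email</label><input id="f1" name="user_email">
</form>'''

plan = plan_fill(page, {"Email": "a@b.com", "Password": "hunter2"})
assert plan == {"user_email": "a@b.com", "pwd": "hunter2"}
```

"Click element 42, type into element 43" would have filled the password into the email box after this reorder; the label-driven plan can't make that mistake.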
UI volatility is a known challenge in browser automation. The real solution is moving away from positional or structural selectors and toward semantic selection. When you describe what an element does functionally, the underlying engine can find it through multiple signals: text content, ARIA labels, visual positioning, and context within the page hierarchy.
This approach, sometimes called intent-based automation, naturally builds in resilience to layout changes. It’s particularly effective when combined with AI that can cross-reference multiple detection methods simultaneously. Your automation becomes less of a brittle script and more of a flexible workflow that understands purpose.
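A toy sketch of that cross-referencing, with entirely illustrative weights and a simplified element representation: each signal (visible text, ARIA label, role) contributes to a score, and the engine picks the candidate the signals agree on most.

```python
def score_candidate(element, intent):
    """Combine several weak signals into one confidence score.
    `element` is a dict of extracted attributes; weights are illustrative."""
    intent = intent.lower()
    score = 0.0
    if intent in element.get("text", "").lower():
        score += 0.5   # visible text is the strongest signal
    if intent in element.get("aria_label", "").lower():
        score += 0.3   # accessibility labels tend to survive restyling
    if element.get("role") in ("button", "link"):
        score += 0.2   # the right kind of element for a click
    return score

def pick(elements, intent):
    """Select the candidate the combined signals rate highest."""
    return max(elements, key=lambda el: score_candidate(el, intent))

candidates = [
    {"id": "a", "text": "Home", "role": "link"},
    {"id": "b", "text": "", "aria_label": "Submit order", "role": "button"},
    {"id": "c", "text": "Submit order", "role": "button"},
]

best = pick(candidates, "submit order")
assert best["id"] == "c"   # text match plus role beats aria-label alone
```

No single signal is trusted on its own, which is what makes the selection degrade gracefully: if a redesign strips the visible text, the ARIA label and role still carry enough weight to find the element.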