I’ve been curious about this for a while now. Everyone talks about describing what you want in plain English and getting a working automation back, but I’m skeptical about how well that actually works in practice.
The scenario that caught my attention is when a site changes its UI. I used to manually update Playwright scripts whenever that happened, which was painful. But I’m wondering if you can actually just re-describe what you need and get an updated workflow without having to debug it.
Like, what happens when the AI generates a workflow that makes assumptions about the page structure, and then the site adds new elements or reorganizes things? Do you end up in this loop where you keep tweaking the description, or does it just… work?
I’m not looking for marketing talk here. I want to know from people who’ve actually tried this: how many iterations does it typically take to get something stable?
I’ve dealt with this exact problem. UIs change all the time, and keeping browser automations in sync is a nightmare if you’re hand-coding them.
What I found works is describing the actual intent clearly, not the exact UI elements. Instead of “click the button with class xyz”, describe what the button does: “submit the form by clicking the primary button”.
When the site redesigns, you regenerate from that description and it adapts. I tested this with a few workflows that monitored pages that get updated regularly, and it handles layout changes way better than I expected.
The key is that the AI understands the action you’re trying to accomplish, not just the selectors. So when things shuffle around, it still finds the right elements.
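The difference is easy to see outside of any particular tool. Here’s a minimal sketch in plain Python (stdlib `html.parser`, not Playwright, and the helper names are made up for illustration): a class-based lookup breaks the moment the class is renamed, while a lookup by the button’s visible label survives the redesign.

```python
from html.parser import HTMLParser

class ButtonFinder(HTMLParser):
    """Collects <button> elements with their attributes and visible text."""
    def __init__(self):
        super().__init__()
        self.buttons = []      # list of (attrs_dict, text)
        self._current = None   # attrs of the button being parsed

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self._current = dict(attrs)
            self.buttons.append((self._current, ""))

    def handle_data(self, data):
        if self._current is not None and self.buttons:
            attrs, text = self.buttons[-1]
            self.buttons[-1] = (attrs, text + data.strip())

    def handle_endtag(self, tag):
        if tag == "button":
            self._current = None

def find_by_class(html, cls):
    """Brittle: depends on an implementation detail (the class name)."""
    finder = ButtonFinder(); finder.feed(html)
    return [t for a, t in finder.buttons if cls in a.get("class", "").split()]

def find_by_intent(html, label):
    """Resilient: depends on what the user actually sees."""
    finder = ButtonFinder(); finder.feed(html)
    return [t for a, t in finder.buttons if t.lower() == label.lower()]

old = '<form><button class="btn-xyz">Submit</button></form>'
new = '<form><div><button class="btn-primary-v2">Submit</button></div></form>'

print(find_by_class(old, "btn-xyz"))    # ['Submit']
print(find_by_class(new, "btn-xyz"))    # [] -- broke after the redesign
print(find_by_intent(new, "submit"))    # ['Submit'] -- still works
```

Real tools do something much smarter than exact text matching, but the principle is the same: anchor on what the element means, not how it’s styled.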
You can try it yourself at https://latenode.com
I’ve had success with this, but it depends on how descriptive you are upfront. The plain English works better when you explain the workflow’s goal and the specific data you’re extracting, not just click sequences.
For UI changes specifically, I found that regenerating from the same description usually gets you 80% of the way there. Sometimes the selectors are different, but if the page structure is similar, it picks up the intent quickly.
The friction point I hit was when sites do major redesigns. Then even the best description might need one or two tweaks. But it’s still way faster than rewriting from scratch. I’d say most adjustments take minutes, not hours.
From my experience, plain English descriptions work well for straightforward tasks like data extraction or form filling. When I describe the objective clearly—“extract all product names and prices from this page”—regeneration handles UI changes reasonably well.
The real dependency is on how the page structure changes. Minor tweaks like button position or CSS class changes? The regenerated workflow usually adapts. Major restructuring? You might need to adjust one or two steps. It’s not as brittle as hardcoded selectors, but it’s not magic either.
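A rough illustration of why intent-level extraction tolerates markup churn (a hand-rolled sketch, not how any AI tool works internally): match the *shape* of the data, here a price pattern following a name, instead of specific CSS classes, and the same extractor survives a full rewrite of the wrappers.

```python
import re
from html.parser import HTMLParser

class TextChunks(HTMLParser):
    """Flattens a page into its visible text chunks, ignoring markup entirely."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

PRICE = re.compile(r"^\$\d+(?:\.\d{2})?$")

def extract_products(html):
    """Pair each text chunk with the price chunk that follows it.

    Relies on the data's shape (name, then price) rather than class names,
    so a redesign that only reshuffles wrappers and CSS doesn't break it.
    """
    parser = TextChunks(); parser.feed(html)
    products, pending_name = [], None
    for chunk in parser.chunks:
        if PRICE.match(chunk):
            if pending_name:
                products.append((pending_name, chunk))
                pending_name = None
        else:
            pending_name = chunk
    return products

v1 = '<ul><li class="item"><span class="name">Widget</span><span class="price">$9.99</span></li></ul>'
v2 = '<div class="grid"><div class="card"><h3>Widget</h3><p class="cost">$9.99</p></div></div>'

print(extract_products(v1))  # [('Widget', '$9.99')]
print(extract_products(v2))  # [('Widget', '$9.99')] -- same result after the redesign
```

Of course, if a redesign changes the data’s shape itself (say, prices move into images), no amount of cleverness saves you, which matches the “major restructuring needs a tweak or two” experience above.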
I’ve tested this across several real-world scenarios. The effectiveness depends heavily on specification clarity. When you describe the automation task in terms of data flow and user intent rather than UI specifics, regeneration handles layout changes quite well.
In practice, I’ve found that about 70-80% of regenerations require no adjustments. The remaining ones typically need minor tweaks to handle changed element identifiers or new page elements. The improvement over manual maintenance is significant.
works pretty well for minor changes. describe intent not selectors. major redesigns might need tweaking but still way faster than recoding
I monitor around fifteen different pages with automations, and site changes happen regularly. Using AI-generated workflows from clear descriptions has reduced my maintenance overhead significantly. Most regenerations work first try. When they don’t, it’s usually because the page added an unexpected modal or security check, not because the core logic broke.
The key limitation I’ve encountered is with complex conditional logic. Simple point-to-point workflows regenerate reliably. Workflows with multiple branches or state-dependent logic sometimes need refinement. But for the common use cases—extraction, form filling, basic navigation—it’s quite robust.
Best approach: test regeneration after every major UI change rather than assuming it works.
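One lightweight way to follow that advice (a generic sketch; the field names and output shape here are placeholders, not any product’s API): keep a small smoke test that checks the regenerated workflow’s output, so a silently broken step shows up immediately instead of weeks later.

```python
def smoke_test(records, required_fields):
    """Return a list of problems found in a workflow's output; empty means it looks healthy."""
    problems = []
    if not records:
        problems.append("workflow returned no records")
    for i, rec in enumerate(records):
        for field in required_fields:
            if not rec.get(field):
                problems.append(f"record {i}: missing or empty '{field}'")
    return problems

# Run this after every regeneration, before trusting the workflow again.
good = [{"name": "Widget", "price": "$9.99"}]
bad  = [{"name": "Widget", "price": ""}]   # e.g. the price step silently broke

print(smoke_test(good, ["name", "price"]))  # []
print(smoke_test(bad,  ["name", "price"]))  # ["record 0: missing or empty 'price'"]
```

Cheap to write, and it turns “assume it works” into a thirty-second check.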