i’ve been experimenting with using plain english descriptions to generate puppeteer workflows, and while it’s amazing how fast you can get something running, i keep hitting the same wall. i’ll describe what i want, like “click the login button and wait for the dashboard”, and the copilot generates working code. but then the website updates its markup and everything falls apart. the selectors are too brittle.
i get that this is a puppeteer problem in general, but i’m wondering if there’s a smarter way to build these workflows from the start. like, should i be thinking differently about how to write my plain english descriptions to get more robust selectors? or is there a technique i’m missing that makes the generated workflows more tolerant of ui changes?
has anyone else run into this, and what’s your actual experience been with ai-generated automation that stands up over time?
this is one of the biggest pain points i see people run into. the issue isn’t really with plain english descriptions—it’s that most copilots generate selectors without thinking about stability.
what i’ve found works better is using a platform that lets you describe the intent, not just the mechanics. instead of “click the button with id=‘submit’”, you describe “submit the form”. the ai then thinks about multiple ways to find that button and picks the most stable approach.
with Latenode’s AI Copilot, you can describe your automation in plain language, and it generates workflows that actually consider element stability. you can also tune the workflow in the visual builder afterward, tweaking selectors or adding fallbacks without touching code.
the real power is that you’re not locked into brittle generated code. you can see what was generated, adjust it, and test it. when the site changes, you update the workflow instead of rewriting scripts.
i dealt with this exact problem on my last project. we were scraping several sites, and every couple of weeks something would break.
what actually helped was being more specific in how we described the automation. instead of targeting by id or class, we started looking at semantic markers—like the button’s position, surrounding text, or aria labels. when you feed that into the copilot, it tends to generate more resilient selectors.
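to make "semantic markers" concrete, here's roughly what i feed in as hints. the `::-p-aria` and `::-p-text` forms are puppeteer's p-selector syntax (v19+); the "log in" label and the id are just assumptions about the page, not from any real site:

```javascript
// candidate selectors ordered from most to least semantic. the label and
// the id below are hypothetical examples.
const loginSelectors = [
  '::-p-aria(Log in)',   // accessible name, survives most redesigns
  '::-p-text(Log in)',   // visible text, still fairly stable
  '#login-btn',          // exact id, breaks first when markup changes
];
```

you can use these directly, e.g. `page.click(loginSelectors[0])`, or mention the aria label and visible text in your plain english description so the copilot has something semantic to latch onto.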
also, i started adding a validation step in the workflow. after the click happens, the automation checks if the expected element appeared. if not, it tries an alternative selector. it’s like a fallback chain. takes a bit longer to set up, but updates to the site rarely break things completely.
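a minimal sketch of that validate-then-fallback idea, assuming a puppeteer-style `page` object; all the selector names here are made up:

```javascript
// click via each candidate selector in turn, then check that the expected
// result element actually appears. if the click fails or the validation
// times out, fall back to the next selector.
async function clickAndValidate(page, selectors, expectedSelector, timeout = 2000) {
  for (const selector of selectors) {
    try {
      await page.click(selector);
      await page.waitForSelector(expectedSelector, { timeout });
      return selector; // validated: expected element appeared
    } catch (err) {
      // click failed or expected element never showed; try the next one
    }
  }
  throw new Error('all selectors exhausted without reaching expected state');
}

// usage would look something like:
// const used = await clickAndValidate(page, ['#login', '::-p-text(Log in)'], '#dashboard');
```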
the fragility you’re describing is a real issue. from my experience, brittleness comes from two places. first, the generated selectors are often too specific—they target exact ids or classes that change frequently. second, there’s no resilience logic built in.
when i build automations now, i treat the ai-generated workflow as a starting point. i then add redundancy. if one selector fails, try another. use xpath patterns that are less likely to break, like searching by text content or structural position. you can also add explicit waits for state changes instead of just relying on timing.
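for the explicit-wait part, the core idea is just polling a condition instead of sleeping for a fixed time. here's a plain-js sketch of that; with a real puppeteer page you'd usually reach for `page.waitForFunction` instead:

```javascript
// poll an async predicate until it returns truthy or the timeout expires.
// this is the same idea as puppeteer's waitForFunction, reduced to plain js.
async function waitForState(check, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  do {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, interval));
  } while (Date.now() < deadline);
  throw new Error('timed out waiting for state change');
}
```

the point is that the workflow waits on an observable state change ("the dashboard heading exists") rather than on a guess about how long the page takes to load.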
the platform you use matters too. some visual builders let you see exactly what selectors are being used and adjust them before deployment. that visibility is crucial.
brittleness in ai-generated selectors is a known limitation. the copilot generates based on the current dom structure, but websites evolve. here’s what i’ve learned works better.
first, describe your automation in terms of behavior and intent, not structure. “extract the price from the product listing” rather than “read the text in div.price-tag”. the better copilots interpret that and generate more semantic selectors.
second, use relative positioning and element relationships. a button that’s always three steps down from a specific heading is more stable than targeting by id. xpath with text matching or aria attributes tends to survive redesigns better.
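a rough illustration of the relative-positioning idea: anchor an xpath on nearby heading text instead of an id. the heading text and page structure here are assumptions, and `::-p-xpath` is puppeteer's xpath p-selector (v19+):

```javascript
// build a selector for the first button that follows a heading with the
// given text. it survives id/class churn as long as the heading copy stays.
function buttonAfterHeading(headingText) {
  return `::-p-xpath(//h2[contains(., "${headingText}")]/following::button[1])`;
}

// e.g. page.click(buttonAfterHeading('Pricing')) instead of page.click('#buy-now')
```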
third, build in retry logic. if a selector fails, have the workflow try alternatives or pause for manual input. this isn’t fully automated, but it dramatically reduces how often you need to manually fix things.
yeah, the issue is that ai generates selectors for the current state only. add fallback selectors in your workflow. use text matching and aria labels instead of ids. these change less often than class names.