I’ve been managing Playwright automations for a few clients, and honestly, the biggest headache isn’t writing the scripts—it’s maintaining them when the UI changes. Every couple of months, some designer decides to shuffle the DOM around, and suddenly half my selectors are dead.
I’ve tried the usual stuff: using data attributes, waiting for elements, building in some retry logic. But it’s still like playing whack-a-mole. The moment I fix one selector, another breaks. It eats up so much time that the actual time savings from automation basically disappear.
I’m wondering if there’s a smarter way to handle this. Like, should I be building selectors differently from the start? Or is there a tool or approach that makes these updates less painful? I know some platforms have AI-driven solutions that can supposedly regenerate workflows when content changes, but I’m not sure how reliable that actually is in practice.
How are you all dealing with this? Are you just accepting it as part of maintenance, or have you found something that actually works?
This is exactly where AI-driven automation makes a real difference. Instead of manually hunting through selector changes every time, you can let an AI copilot regenerate your workflow from a plain English description of what the automation needs to do.
With Latenode, you describe the task once—like “log in and extract user data from the dashboard”—and the AI generates the Playwright workflow. When the site redesigns, you update your description if needed, and the copilot rebuilds the workflow with fresh selectors. The key is that it’s learning from the actual page structure each time, not relying on brittle hardcoded selectors.
You also get access to 400+ AI models, so you can use different ones to validate and improve your selector strategy. Some models are better at understanding dynamic content and building resilient wait strategies.
I’ve had the same issue. What helped me was treating selectors as temporary by design rather than permanent infrastructure. I started using XPath expressions that target text content or ARIA labels instead of class names, because designers rarely change those. Also built in intelligent waits that check for actual page readiness rather than just waiting for an element.
But honestly, the real breakthrough was accepting that some automation work needs to be regenerated after major redesigns. If you’re spending more time maintaining selectors than you save from automation, it might be worth looking at tools that can rebuild workflows automatically based on what the page looks like now, not what it looked like six months ago.
The selector brittleness problem stems from relying on implementation details that designers can change on a whim. I’ve found that using semantic selectors—targeting elements by their role or visible text—creates more resilient automations. You’re basically describing what you see rather than how the HTML is structured. For dynamic content specifically, combining visual targeting with strategic waits for actual page state changes beats polling for elements that might not exist yet. The maintenance burden doesn’t go away, but it shifts from reactive firefighting to proactive strategy adjustments when designs do change.
Selector stability depends heavily on your initial approach. Using data attributes set by developers for testing purposes is one layer, but that assumes developers maintain those. The more robust approach is building automations that understand intent rather than location. If your automation says “click the button that says submit” instead of “click element with class xyz-123”, you’re already ahead when the class name changes. Consider implementing a lightweight abstraction layer that maps user-facing element descriptions to current selectors, updating that mapping as designs change rather than rewriting entire scripts.