I’ve been running the same Playwright test suite for about 8 months now, and honestly it’s become a nightmare to maintain. Every time the client updates their UI, which happens every 2-3 weeks, half my selectors break. I’m spending more time fixing tests than actually writing new ones.
The selectors I’m using are pretty standard stuff: class names, IDs, sometimes aria labels. But on this particular site, they change the structure constantly. Dynamic content loads everywhere, buttons get reordered, and sometimes the same element has different selectors depending on the page state.
I’ve tried being more specific, less specific, using XPath… nothing really sticks. And the frustrating part is I can see the element fine when I inspect it manually, but when the test runs, it’s like the selector universe shifts.
Has anyone actually found a reliable way to solve this that doesn’t involve rewriting selectors constantly? Are there tools or approaches that can generate and validate selectors more intelligently, so they don’t just break the moment something changes?
This is exactly the kind of problem that shouldn’t require you to babysit selectors constantly. When you’ve got access to 400+ AI models, you can actually leverage that to generate and validate selectors intelligently instead of hardcoding them.
What I’ve found works is having an AI model analyze the page structure dynamically, understand what the element is supposed to do in context, and then generate multiple selector strategies, not just one. So if the class names change, the model already has XPath fallbacks, or aria-label matching, or even visual position matching.
The real power comes when you orchestrate this: have one AI model identify what you’re trying to select (the intent), another validate that the selector works across different page states, and a third handle the dynamic content loading issues. It sounds complex, but it actually reduces your maintenance burden massively.
Latenode lets you do exactly this through the visual builder without writing selector logic from scratch every time. You describe what you need to interact with, and the AI Copilot generates a workflow that’s resilient to these kinds of UI changes.
I dealt with this for way too long before realizing the real issue wasn’t my selectors—it was that I was treating them as static when the UI clearly wasn’t.
What helped me was shifting to a more adaptive approach. Instead of relying on a single selector, I started building fallback chains. So the test tries one selector, and if that fails, it tries another, then another. It sounds tedious, but you can automate selector generation for these fallbacks.
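The fallback chain itself is just a small resolver. Here’s a minimal sketch of the idea, not Playwright API: `LocatorLike` and `firstMatching` are hypothetical names, and each candidate is a thunk returning anything with a `count()` method, the way a real `() => page.getByTestId('login')` would.

```typescript
// Hypothetical fallback-chain resolver. Each candidate is a thunk that
// builds a locator-like object; the first one matching at least one
// element wins, otherwise we fail loudly.
type LocatorLike = { count(): Promise<number> };

async function firstMatching(
  candidates: Array<() => LocatorLike>,
): Promise<LocatorLike> {
  for (const make of candidates) {
    const locator = make();
    if ((await locator.count()) > 0) {
      return locator;
    }
  }
  throw new Error(`none of the ${candidates.length} selector strategies matched`);
}
```

With real Playwright the candidates would be thunks like `() => page.getByTestId('login')` or `() => page.locator('#login')`; recent Playwright versions also ship `locator.or()`, which covers the simple two-strategy case without any custom code.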
Also, I started using more semantic selectors—things tied to the actual purpose of the element (like data-testid or role attributes) rather than class names or structure. When you work with the developers and ask them to add these, it solves like 80% of the instability.
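Under the hood, a test-id lookup boils down to a plain attribute selector, so a team can standardize on one tiny helper. A sketch, with helper names that are mine, not Playwright’s:

```typescript
// Hypothetical helpers that turn stable hooks into CSS selectors.
// Playwright's page.getByTestId() does essentially this against the
// configured test-id attribute (data-testid by default).
function byTestId(id: string): string {
  return `[data-testid="${id}"]`;
}

function byRole(role: string): string {
  // Matches elements with an explicit ARIA role attribute. Real
  // role-based engines also infer implicit roles (button, link, ...).
  return `[role="${role}"]`;
}
```

So `page.locator(byTestId('login-submit'))` survives a redesign as long as the devs keep the attribute. When you’re on a recent Playwright, prefer the built-in `page.getByTestId()` and `page.getByRole()`, which also handle implicit roles and accessible names.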
The other thing that really helped was understanding that some UI changes are predictable. If the site redesigns every few weeks, you can actually batch-test your selectors against staging environments before they go live. Catch the breakage before it hits production.
Have you considered that the real problem might be how you’re structuring your tests? I had the same issue, and I realized I was writing tests that were way too tightly coupled to the UI implementation.
What actually helped was breaking down my selectors by behavior rather than by visual structure. Like instead of “click the button with class xyz”, I write “click the button that does the login action”. It sounds subtle but it changes everything when the UI restructures.
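“Click the button that does the login action” maps naturally onto a role-plus-name lookup. Here’s that idea sketched against a plain list of element records with a hypothetical `ElementInfo` type; real Playwright expresses the same intent as `page.getByRole('button', { name: 'Log in' })` over the live accessibility tree.

```typescript
// Hypothetical: pick an element by what it does (role + accessible
// name), not by where it lives in the DOM or what classes it carries.
type ElementInfo = { role: string; name: string; selector: string };

function findByIntent(
  elements: ElementInfo[],
  role: string,
  name: RegExp,
): ElementInfo | undefined {
  return elements.find((el) => el.role === role && name.test(el.name));
}
```

The devs can rename classes and reshuffle the DOM all they want; as long as the login button is still a button labeled “Log in”, the lookup holds.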
Also, shadow DOM is a nightmare if you’re running into it. Some modern sites use web components, and it’s worth knowing exactly what breaks: Playwright’s CSS and text selectors pierce open shadow roots by default, but XPath doesn’t, and closed shadow roots aren’t reachable at all. If your XPath selectors are the ones failing, that may be why, and you might need to switch engines or get creative with JavaScript evaluation instead.
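For what it’s worth, shadow-piercing lookup is just a recursive walk that also descends into shadow roots. A DOM-free sketch with a hypothetical `FakeNode` shape (in a real `page.evaluate()` fallback you’d walk `element.shadowRoot` instead):

```typescript
// Hypothetical node shape mirroring the relevant bits of the DOM:
// ordinary children plus an optional open shadow root.
type FakeNode = {
  tag: string;
  children?: FakeNode[];
  shadowRoot?: { children: FakeNode[] };
};

// Depth-first search that pierces open shadow roots, the way a
// page.evaluate() fallback would via element.shadowRoot.
function pierceFind(node: FakeNode, tag: string): FakeNode | undefined {
  if (node.tag === tag) return node;
  const kids = [
    ...(node.children ?? []),
    ...(node.shadowRoot?.children ?? []),
  ];
  for (const child of kids) {
    const hit = pierceFind(child, tag);
    if (hit) return hit;
  }
  return undefined;
}
```

Closed shadow roots don’t expose `shadowRoot` at all, which is why no walk, Playwright’s or yours, can reach inside them.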
I had similar issues with a SPA that constantly evolved. The approach that actually worked was moving away from brittle DOM-based selectors and instead building tests around interactive elements that are less likely to change—like buttons with specific aria labels or data-testid attributes that the developers committed to maintaining.
The other critical piece was having AI assist in selector validation. Instead of me manually checking each selector every sprint, I set up a system where before any new UI changes go live, the selectors get tested against the new DOM structure. It’s like having an early warning system.
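The early-warning check itself can be tiny: run every registered selector against the candidate build and report the ones that no longer match. A sketch with a hypothetical `probe` callback standing in for a headless `locator.count() > 0` check against staging:

```typescript
// Hypothetical pre-deploy selector audit. `probe` answers whether a
// selector still matches anything in the new build (in real life: a
// headless browser run against staging checking locator.count()).
async function auditSelectors(
  selectors: string[],
  probe: (selector: string) => Promise<boolean>,
): Promise<string[]> {
  const broken: string[] = [];
  for (const sel of selectors) {
    if (!(await probe(sel))) broken.push(sel);
  }
  return broken; // empty array means the build is safe to ship
}
```

Wire that into CI on the staging branch and the breakage shows up as a failed check instead of a red test run after deploy.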
Eventually I realized the maintenance burden was coming from my approach, not from playwright itself. Once I shifted to semantic selectors and started coordinating with the dev team on test stability requirements, the breakage dropped dramatically.
Try data-testid attributes instead of class names. Way more stable, and you can work with the devs to keep them maintained during redesigns. Also use page objects to centralize selector logic so changes only need updates in one place.
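A page object is the standard way to get that single point of change. Minimal sketch, with a stripped-down `PageLike` interface standing in for Playwright’s real `Page`, and `LoginPage` method names that are mine:

```typescript
// Hypothetical minimal slice of Playwright's Page interface.
interface PageLike {
  click(selector: string): Promise<void>;
  fill(selector: string, value: string): Promise<void>;
}

// Page object: every selector lives here and nowhere else, so a
// redesign means editing this one class, not every test.
class LoginPage {
  private readonly user = '[data-testid="login-user"]';
  private readonly pass = '[data-testid="login-pass"]';
  private readonly submit = '[data-testid="login-submit"]';

  constructor(private readonly page: PageLike) {}

  async login(username: string, password: string): Promise<void> {
    await this.page.fill(this.user, username);
    await this.page.fill(this.pass, password);
    await this.page.click(this.submit);
  }
}
```

Tests then say `await new LoginPage(page).login('alice', 'secret')` and never mention a selector directly.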
Use data-testid attributes instead of classes. Page object model for centralized selector management. AI-powered selector validation to catch UI changes ahead of time.