Has anyone built self-healing POM workflows that fix broken locators?

I’m so tired of our test suites breaking every time the frontend team makes even tiny UI changes. Our Page Object Models keep failing because locators break, and it’s becoming a major bottleneck for our entire development process.

Last week, the frontend team changed some class names and our entire test suite failed. It took us 2 days to update all the locators, during which time we couldn’t release any new features. This is happening at least once a month now.

I’ve heard about self-healing test frameworks that can automatically detect and repair broken locators in real-time. Is this actually possible? Has anyone implemented something like this with AI?

Specifically, I’m looking at Latenode’s AI Copilot feature and wondering if it can:

  1. Detect when a test fails due to a locator issue (versus an actual bug)
  2. Analyze the current page structure to find the element using alternative means
  3. Generate a new, working locator on the fly
  4. Update the POM framework with the new locator for future runs

Has anyone built something like this? Did it actually work reliably in production, or is it more trouble than it’s worth?

I built exactly what you’re describing using Latenode’s AI Copilot, and it’s been a complete game-changer for our team.

We were in the same situation - tests breaking constantly due to UI changes, with days lost to maintenance every month. The self-healing workflow we built with Latenode not only works, it’s become our most valuable automation asset.

Here’s how we implemented it: When a test fails, our error handler captures the exception and the current page state (DOM snapshot and screenshot). It then calls the AI Copilot to analyze if this is a locator issue. If it is, the AI examines the page structure, comparing it to historical data of that page, and generates multiple alternative locator strategies.

The brilliant part is how it prioritizes the new locators. It first tries semantic approaches (aria-labels, data-testid), then visual recognition, then relative positioning to stable elements. It tests each strategy in real-time until it finds one that works.
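
To make that concrete, the priority loop can be sketched in a few lines of Python. Everything here is illustrative rather than Latenode's actual API: the tier names, the `heal` function, and the `find` callable (which stands in for a Selenium-style `driver.find_element` call that returns `None` on failure) are all placeholders.

```python
# Illustrative sketch of the strategy-priority loop described above.
# Tier names and the `find` callable are placeholders, not Latenode's API.

SEMANTIC = "semantic"   # aria-label, data-testid
VISUAL = "visual"       # visual recognition fallback
RELATIVE = "relative"   # position relative to a stable element

def heal(find, candidates):
    """Try alternative locators in priority order; return the first that works.

    `candidates` is a list of (tier, locator) pairs, e.g.
    [(SEMANTIC, '[data-testid="submit"]'), (RELATIVE, 'form > button')].
    `find` takes a locator and returns an element, or None on failure.
    """
    priority = {SEMANTIC: 0, VISUAL: 1, RELATIVE: 2}
    for tier, locator in sorted(candidates, key=lambda c: priority[c[0]]):
        element = find(locator)
        if element is not None:
            return locator, element
    return None, None
```

The point of the sort is simply that semantic locators are the most stable, so a healed semantic locator is least likely to break again.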

Once it finds a working locator, it automatically updates our POM repository with the new selector and creates a PR for review. This gives us visibility into the changes while still fixing the immediate issue.

The results have been incredible - our test maintenance time dropped by 87%, and our pipeline stability went from constantly breaking to 96% reliable. The AI has gotten smarter over time too - it now predicts potential locator issues before they even happen.

Check it out at https://latenode.com

We implemented a self-healing POM system last year and it’s been a major success. Here’s how we approached it:

First, we created a wrapper around our element finder that catches NoSuchElementExceptions. When an element can’t be found, instead of failing immediately, it triggers our healing process.

The healing process works in layers:

  1. Try alternative locator strategies we’ve pre-defined (we store 3-4 different ways to locate important elements)
  2. Use relative locators to find nearby stable elements and navigate from there
  3. Use a fuzzy matching algorithm that can handle slight changes to IDs and classes

When a healing strategy works, it records the new locator and its success rate. After a locator has proven reliable across multiple runs, it automatically updates our POM definitions.
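
In rough Python, the wrapper plus the promotion rule looks something like this. This is a sketch under assumptions, not our exact production code: the `NoSuchElementException` class stands in for Selenium's, and the promotion threshold and the `_promote` hook are illustrative.

```python
# Sketch of a healing wrapper around an element finder, assuming a
# Selenium-like driver whose find_element raises NoSuchElementException.
# The threshold and the _promote hook are illustrative choices.

class NoSuchElementException(Exception):
    """Stand-in for selenium.common.exceptions.NoSuchElementException."""

class HealingFinder:
    PROMOTE_AFTER = 5  # successful runs before a healed locator becomes primary

    def __init__(self, driver, alternates):
        self.driver = driver
        self.alternates = alternates  # {primary_locator: [alt1, alt2, ...]}
        self.stats = {}               # {alt_locator: success_count}

    def find(self, locator):
        try:
            return self.driver.find_element(locator)
        except NoSuchElementException:
            return self._heal(locator)

    def _heal(self, primary):
        # Layer 1: pre-defined alternates; relative and fuzzy layers would
        # follow the same pattern with generated candidates.
        for alt in self.alternates.get(primary, []):
            try:
                element = self.driver.find_element(alt)
            except NoSuchElementException:
                continue
            self.stats[alt] = self.stats.get(alt, 0) + 1
            if self.stats[alt] >= self.PROMOTE_AFTER:
                self._promote(primary, alt)
            return element
        raise NoSuchElementException(primary)

    def _promote(self, primary, alt):
        # In our real system this rewrites the POM definition; here it's a stub.
        print(f"promoting {alt} as new primary for {primary}")
```

The key design choice is that healing never updates the POM on a single success; the success count gates promotion so a flaky match can't overwrite a good locator.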

We also built an admin dashboard that shows which elements are frequently breaking and which healing strategies are most successful. This helps us identify patterns and improve our initial locators.

The system catches about 85% of locator issues automatically. For the remaining 15%, it at least provides detailed diagnostic information that makes manual fixes much faster.

The most surprising benefit was how it improved our relationship with the frontend team. Now we don’t constantly complain about their changes breaking our tests, which has led to better collaboration.

I built a self-healing POM framework for my company last year, and it’s been incredibly successful. Our approach combined multiple strategies to create a robust solution.

We started by implementing a proxy layer around our element finder functions. When a locator fails, instead of throwing an exception immediately, the proxy triggers our healing algorithm.

The healing process follows these steps:

  1. Capture the current DOM state and a screenshot of the page
  2. Analyze the failed locator to understand what type of element it was trying to find
  3. Apply a series of alternative location strategies, including attribute-based locators, relative position locators, and visual similarity matching
  4. When a new working locator is found, validate it by performing the originally intended action
  5. If the action succeeds, record the new locator in our healing database

After a new locator has proven reliable across multiple test runs and environments, our system automatically updates the source code in our POM classes.
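
A stripped-down sketch of that flow, with all names assumed rather than taken from our codebase: `driver` is any Selenium-like object (here one whose `find_element` returns `None` on failure rather than raising, to keep the sketch short), and `action` is the originally intended interaction, e.g. `lambda el: el.click()`.

```python
# Hedged sketch of the five-step healing flow; all names are assumptions.

def heal_and_validate(driver, failed_locator, alternatives, action, healing_db):
    snapshot = driver.page_source          # step 1: capture the DOM state
    # step 2 would analyze `failed_locator` to infer the element type and
    # generate `alternatives`; here they are passed in pre-generated.
    for candidate in alternatives:         # step 3: try alternative strategies
        element = driver.find_element(candidate)
        if element is None:
            continue
        try:
            action(element)                # step 4: validate via the real action
        except Exception:
            continue                       # found the wrong element; keep going
        healing_db[failed_locator] = candidate  # step 5: record the new locator
        return candidate
    return None
```

Step 4 is the part that matters most in practice: a locator that matches *something* but can't perform the intended action is treated as a failure, not a heal.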

The results have been remarkable - we’ve reduced locator-related test failures by approximately 78%, and our test maintenance time has decreased significantly.

I’ve implemented self-healing POM systems for three different organizations, and there are several patterns that have proven effective.

The most successful approach uses a multi-layered strategy:

  1. Preventive layer: Work with developers to implement stable locators (data-testid attributes) for critical elements
  2. Detection layer: Distinguish between locator failures and actual application bugs through context analysis
  3. Healing layer: Apply multiple alternative location strategies when primary locators fail
  4. Learning layer: Track successful healing strategies and improve future attempts

For the healing layer specifically, we implemented a weighted scoring system for alternative locators. Each potential new locator is scored based on its specificity, uniqueness, and stability factors. Higher-scoring candidates are tried first.

We also built a feedback loop where developers can review and approve automatic locator changes. This creates a virtuous cycle where the system learns from human expertise.

The most challenging aspect was handling dynamic content properly. We solved this by incorporating contextual awareness - the system understands that some elements may disappear legitimately versus breaking due to locator issues.
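
The core of that check is small; a hedged sketch, assuming a registry of elements that are allowed to be absent (the registry contents and function names are made up for illustration):

```python
# Sketch of the contextual-awareness gate: conditional elements that
# legitimately disappear should not trigger healing.

CONDITIONAL = {"#promo-banner", "#cookie-notice"}  # may legitimately be absent

def should_heal(locator, page_changed):
    """Heal only when a non-conditional element vanished after a page change."""
    if locator in CONDITIONAL:
        return False          # absence is expected behavior, not breakage
    return page_changed       # structure changed -> likely a locator issue
```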

With this approach, we reduced locator maintenance by approximately 85% while improving test reliability.

yes, built one last year. it catches broken locators and tries alternatives - first by attributes, then by text, then by position. works about 80% of the time, saving us tons of maintenance work.

Try a healing framework with AI fallbacks.
