I’ve been running some Puppeteer automations for a few months now, and it’s been a headache. Every time a website updates its layout or changes a class name, my scripts break, and I end up manually rewriting selectors and logic all over again. It’s getting really tedious to maintain.
I’ve heard about AI-powered workflow generation, but I’m not sure how it actually helps with the fragility problem. Has anyone here used something like that, or found another way to build more resilient automations that can adapt to UI changes without constant refactoring?
This is exactly what I deal with at work. The pain point you’re hitting is real, and most teams just accept it as part of the game.
What changed for me was switching to an approach where I describe the automation in plain language instead of hardcoding selectors. There’s a platform that uses AI to generate the entire workflow from your description. You just say something like “log in to this site, navigate to the dashboard, extract the revenue numbers” and the AI builds out all the logic.
The key difference is that when the UI changes, you can regenerate the workflow from the same plain language description. The AI adapts to the new layout because it’s working from your intent, not brittle CSS selectors.
I’ve found this cuts down on maintenance by a huge margin. Instead of manually tweaking code, I just regenerate when needed.
I’ve wrestled with this exact problem for years. The thing is, most people treat Puppeteer like a static tool where you write selectors once and hope they stick. That’s backwards.
What I started doing was building in a wait-and-retry layer that uses multiple selector strategies. If the primary selector fails, it tries a text-based lookup or a different attribute path. It’s not perfect, but it catches most UI changes.
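To make the multi-strategy idea concrete, here’s a rough sketch (assuming Node 18+; `findElement` and the strategy thunks are my own names for illustration, not a Puppeteer API):

```javascript
// Fallback selector layer: try a list of lookup strategies in order,
// and retry the whole list a few times with a delay, since UI changes
// often race with slow page loads.
async function findElement(strategies, { retries = 3, delayMs = 250 } = {}) {
  for (let attempt = 0; attempt < retries; attempt++) {
    for (const strategy of strategies) {
      // A failed lookup is not fatal; fall through to the next strategy.
      const el = await strategy().catch(() => null);
      if (el) return el;
    }
    // Give the page a moment before retrying the whole list.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("element not found by any strategy");
}
```

In a real script each strategy would be a thunk against the page, e.g. `() => page.$('#revenue')` as the primary, followed by a text-based lookup and an attribute-path fallback, ordered from most to least preferred.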
That said, if you can avoid writing the selectors manually in the first place, that’s even better. There are tools now that can generate your entire browser automation from a simple description. You’re not fighting with CSS anymore, you’re just telling the system what you need done.
UI changes are one of the biggest sources of automation failures I see. Most people don’t anticipate how frequently websites update their structure. The core issue is that we’re coupling our automation logic too tightly to the specific HTML structure of the moment.
Having worked through this multiple times, the most effective approach I’ve found is to decouple your automation logic from the UI completely. Instead of writing brittle selectors, you describe what you’re trying to accomplish. Then let AI handle the interpretation of the page and figure out how to interact with it based on the actual content and structure at runtime.
This shifts the maintenance burden from “update 50 selectors” to “regenerate the workflow once.”
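Here’s roughly what that decoupling can look like in code. This is only a sketch: `runStep` and `resolveIntent` are hypothetical names, and the resolver stands in for whatever AI service interprets the page; only `page.content()` and `page.click()` are actual Puppeteer calls.

```javascript
// Sketch of intent-based interaction. The script stores *intents*,
// not selectors; a resolver (here injected, in practice an AI model)
// maps an intent plus a snapshot of the live page to a selector at
// runtime, so a layout change just yields a different selector.
async function runStep(page, intent, resolveIntent) {
  // Snapshot whatever the resolver needs about the current page.
  const snapshot = await page.content();
  // The resolver interprets the intent against the page as it exists
  // right now, rather than against the HTML of the moment you wrote
  // the script.
  const selector = await resolveIntent(intent, snapshot);
  return page.click(selector);
}
```

The design choice that matters is injecting the resolver: the script itself never names a CSS path, so when the site changes, only the runtime interpretation changes, not your code.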
The fundamental challenge with Puppeteer maintenance is that you’re working at the wrong level of abstraction. You’re writing scripts against the DOM directly, which means every visual or structural change requires code changes.
A better pattern is to work with higher-level intent. Instead of maintaining selector-based scripts, describe the workflow outcome and let tooling that understands context and semantics handle the UI interpretation. When the site changes, you don’t touch code at all: regenerate from the same description and you’re done.
I’ve seen automation codebases shrink by 70% when teams make this shift.