How do you actually keep AI-generated Puppeteer workflows from completely breaking when a website redesigns its structure?

I’ve been experimenting with using Latenode’s AI Copilot to generate Puppeteer workflows from plain text descriptions, and I’m running into a frustrating problem. The workflows it generates work great at first, but the moment a site redesigns even slightly—new class names, different DOM structure, whatever—the entire thing falls apart.

I understand why this happens. The AI generates selectors and navigation logic based on the current state of the site, but there’s no built-in resilience. It’s like the workflow is brittle by design.

From what I’ve read in the docs, the AI Copilot is supposed to be able to produce “ready-to-run” workflows, but I’m wondering if there’s actually a way to make these more robust. Are there patterns or techniques people are using to handle this? Should I be manually tweaking the generated code to add fallback selectors, or is there a better approach within Latenode itself?

Has anyone here successfully built a Puppeteer automation that actually survives website redesigns without constant rewrites?

This is exactly the kind of problem Latenode was built to solve. The AI Copilot doesn’t just generate a one-off script—it can generate workflows that are designed to adapt. The key difference is that Latenode lets you layer in fallback logic, retry mechanisms, and even AI-powered element detection as part of the workflow itself, not just in the code.

What I’ve seen work really well is combining the AI-generated base workflow with custom code nodes that use fuzzy matching or alternative selectors. The AI can help you write this resilience layer too. You describe what you need, and it generates the code that handles dynamic changes.
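To make that concrete, here's a minimal sketch of the kind of fallback-selector helper you could drop into a custom code node. It assumes you have a Puppeteer `page` object in scope; the selector strings in the usage comment are made up for illustration, not from any real site:

```javascript
// Try a list of selectors in priority order and return the first
// element handle that matches. page.$ resolves to null on no match.
async function querySelectorWithFallbacks(page, selectors) {
  for (const selector of selectors) {
    const handle = await page.$(selector);
    if (handle) return handle;
  }
  throw new Error(`No selector matched: ${selectors.join(', ')}`);
}

// Usage (selectors are hypothetical examples):
// const addToCart = await querySelectorWithFallbacks(page, [
//   '[data-testid="add-to-cart"]', // most stable, if the site exposes it
//   'button.add-to-cart',          // the current class name
//   'form button[type="submit"]',  // structural last resort
// ]);
```

The ordering matters: put the most redesign-resistant selectors first, so the brittle ones only fire when everything else has already failed.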

The real shift in thinking is this: instead of a brittle script, you’re building a workflow that’s designed to fail gracefully and adapt. That’s what separates a temporary automation from something that actually lasts.

Check out what’s possible: https://latenode.com

I dealt with this exact issue on a project where we were scraping e-commerce sites. The first version broke constantly because we were relying on CSS classes that marketing kept changing.

What helped was building in multiple layers of detection. Instead of just looking for a specific selector, we'd check for text content, position on the page, and structural hierarchy. It's more work upfront, but sites rarely change everything at once.
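One of those layers, matching by visible text instead of class names, looks roughly like this. It's a sketch assuming a Puppeteer `page`; in practice you'd combine it with selector-based lookup rather than use it alone:

```javascript
// Find the first element of a given tag whose text contains the target
// string. Button labels tend to survive redesigns longer than CSS classes.
async function findByText(page, tagName, text) {
  const handles = await page.$$(tagName);
  for (const handle of handles) {
    const content = await handle.evaluate(el => el.textContent.trim());
    if (content.includes(text)) return handle;
    await handle.dispose(); // free non-matching handles
  }
  return null;
}

// Usage (label is an example): await findByText(page, 'button', 'Add to cart');
```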

Also, adding logging and monitoring made a huge difference. When a selector fails, you want to know immediately, not days later when someone notices the data stopped flowing. Some basic error handling in your workflow goes a long way. You can set up alerts to trigger when things start failing so you can adjust before it becomes a real problem.
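A cheap way to get that visibility is to wrap each workflow step in a monitor that fires your alert hook on failure. This is a sketch; the `alert` callback is whatever you wire it to (a Slack webhook, email, a Latenode notification node), and the step name below is hypothetical:

```javascript
// Run a named step; on failure, log it, fire the alert hook, then rethrow
// so the run still fails loudly instead of silently producing bad data.
async function monitoredStep(name, fn, alert) {
  try {
    return await fn();
  } catch (err) {
    console.error(`[scraper] step "${name}" failed: ${err.message}`);
    await alert({ step: name, error: err.message, at: new Date().toISOString() });
    throw err;
  }
}

// Usage (step name and body are illustrative):
// const price = await monitoredStep('extract-price',
//   () => page.$eval('.price', el => el.textContent), sendSlackAlert);
```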

One approach I’ve used is treating the workflow as a maintenance item rather than a set-it-and-forget-it tool. Version control your workflows, and document which selectors you’re targeting and why. When a site redesigns, you roll out an update rather than treating it as a catastrophic failure.

The other thing that helps is understanding what’s actually stable on a page versus what changes frequently. Navigation elements and main content areas tend to have more stable structure than sidebar widgets or promotional sections. Build your workflow around the stuff that rarely changes. That simple shift reduced our maintenance burden significantly.

The resilience problem with AI-generated workflows stems from the fact that they’re optimized for the current state of a page, not for change management. You can improve this by implementing a validation layer—before a workflow executes, it verifies that the expected elements still exist on the page. If they don’t, it can trigger an alert or attempt alternative strategies.
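A validation layer can be very small. Here's one way to sketch it, assuming a Puppeteer `page`; the selectors in the usage comment are placeholders for whatever your workflow actually depends on:

```javascript
// Pre-flight check: confirm every selector the workflow depends on still
// matches something. Returns the list of missing selectors (empty = OK).
async function validatePage(page, expectedSelectors) {
  const missing = [];
  for (const selector of expectedSelectors) {
    if (!(await page.$(selector))) missing.push(selector);
  }
  return missing;
}

// Usage (selectors are examples):
// const missing = await validatePage(page, ['#product-title', '.price', '#reviews']);
// if (missing.length) { /* alert and abort, or switch to fallback strategy */ }
```

Running this before the main extraction turns "the data silently went bad" into "the run refused to start and told you exactly which selectors broke."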

Another pattern that works is using data attributes or semantic HTML elements when possible. Some sites are built with stable data-attributes that marketing changes can’t touch. If your workflow targets those instead of class names, it’s far more resilient. The downside is you can’t always control the site you’re scraping, but when you can, it makes a real difference.

add fallback selectors & retry logic. monitor for failures so you catch issues early. test regularly. that’s the main stuff that actually works in practice for keeping workflows stable.