I’ve been skeptical about this whole AI Copilot workflow generation thing. We have some processes that are fairly straightforward, but I’ve always assumed that even if an AI could generate them, we’d end up rebuilding half of each one anyway.
The claim is basically that you can describe what you need in plain English and get back something that’s ready to run. But my experience with code generation tools has usually involved a lot of back-and-forth: tweaking outputs, fixing edge cases, adding error handling.
I’m wondering if anyone here has actually tested this with real workflows. Do you get something closer to 80% usable, or more like 30% and you’re just redoing it from scratch anyway? What kinds of automations actually work well with this approach, and where does it tend to break down?
I’m not looking for marketing speak—just honest feedback about whether this actually reduces development time or just shifts the work around.