I’ve been exploring how to accelerate our BPM migration timeline, and I keep seeing references to AI-assisted workflow generation—the idea that you can describe a process in plain English and get back something that’s actually runnable.
That sounds incredible if it works. Our bottleneck right now is that translating business requirements into executable workflows takes forever. We go through multiple rounds of clarification, documentation, and refinement. If we could compress that cycle by generating an initial draft from a plain language description, that could genuinely change our migration schedule.
But I’m skeptical about what happens after generation. I’ve done enough automation work to know that “generated” often means “good starting point that requires heavy customization.” The question is how much customization.
Say you feed the system a description like: “When a vendor invoice is received, validate it against our purchase orders, flag discrepancies, route to accounting if clean, or to procurement if there are issues.” Does it actually generate something close to production-ready? Or do you end up rewriting most of it anyway, which makes the time savings disappear?
I’m also curious about how this interacts with our existing workflows. We’re not starting from scratch—we have Camunda processes that we’re migrating away from. Can the generation handle that kind of context, or does it just create idealized workflows that don’t account for the messy reality of our current setup?
Has anyone actually used this approach for a real migration and seen it save time, or have you all found that the rework eats most of the gains?
We tried this approach when we migrated off our old workflow platform last year. The realistic answer lands somewhere between the two outcomes you're imagining: not production-ready, but not a full rewrite either.
When we fed in detailed process descriptions, the generation gave us workflows with the core logic correct but missing a lot of context. Error handling was minimal, edge cases weren’t covered, and integration specifics were vague. We didn’t rebuild everything, but we definitely spent time completing what was generated rather than just running it as-is.
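To make that concrete, here's an illustrative sketch of what "core logic correct, context missing" looked like for an invoice-routing process like the one described above. All the names here are hypothetical (this is not actual generated output, and not any real generator's API): the validation and routing decisions are the part generation typically got right, and the comments mark the kind of work we had to add ourselves.

```python
# Hypothetical sketch, not real generated output. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Invoice:
    vendor: str
    po_number: str
    amount: float
    discrepancies: list = field(default_factory=list)

def validate_against_po(invoice, purchase_orders):
    """Core logic the generation got right: match the invoice to a PO
    and flag discrepancies."""
    po = purchase_orders.get(invoice.po_number)
    if po is None:
        invoice.discrepancies.append("no matching purchase order")
    elif abs(po["amount"] - invoice.amount) > 0.01:
        invoice.discrepancies.append("amount mismatch")
    return invoice

def route(invoice):
    """Routing decision: clean invoices go to accounting,
    flagged ones go to procurement."""
    return "procurement" if invoice.discrepancies else "accounting"

# What we had to add ourselves (sketched as comments, since it was all custom):
# - retries and timeouts on the PO lookup (a remote ERP call, not a dict)
# - partial deliveries and invoices spanning multiple POs
#   (the generated draft assumed a 1:1 invoice-to-PO match)
# - escalation when procurement doesn't act within an SLA window
```

The happy path is maybe a third of the real process; the commented-out remainder is where the "completing what was generated" time went.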
The real time saving came from not starting at a blank canvas. Having the basic structure and flow already mapped out meant we could focus on refinement instead of initial design. That’s not nothing. For a complex process, it probably cut our design phase by 40 to 50 percent. But then we still needed to test, debug, and tune.
What mattered most was description quality. Vague requirements generated vague workflows. When we put detail into the description upfront, the output was proportionally better. It's not magic; it's more like having a really smart starting point.
The context question you raised is important. If you’re trying to replicate existing Camunda workflows, feeding those descriptions through generation works okay for the happy path but struggles with the custom logic and quirks your current setup has. We handled this by describing what the process should be in the new system, not trying to force the generation to understand our old system’s peculiarities.
That reframing actually helped. It forced us to think about whether we were migrating processes or just moving them as-is. Sometimes the old workflow had workarounds baked in that don’t make sense anymore.
Generated workflows are better than nothing but worse than hand-designed ones. The value isn't that they're perfect; it's that they're fast enough to iterate on quickly instead of debating design in meetings. We used generation to get a prototype in front of business stakeholders far faster than traditional design would have allowed. That feedback loop was worth more than the generation accuracy itself.