We’ve been doing detailed process mapping before implementing any workflow automation—swimlane diagrams, BPMN notation, the whole thing. It takes weeks and involves multiple stakeholder sessions to get the diagram right. Only then do our developers translate that diagram into actual workflow code.
I’ve been reading about AI copilot tools that claim you can just describe what you need in plain English and the system generates a runnable workflow. That sounds either incredibly powerful or completely impractical, depending on whether it actually works.
My skepticism: plain English descriptions are ambiguous. Business people describe what they want in very different ways. One person says “send an email when the deal closes,” and another says “initiate notification process upon deal closure.” They mean the same thing but describe it differently, and I’m not sure an AI system can reliably parse that variability and generate correct logic.
But I also realize we’re probably spending more time on process mapping than necessary. If a copilot could take even a rough English description and generate 80% of the workflow—and then we refine from there—that might actually accelerate things without adding risk.
The harder question is repeatability. If someone describes a workflow in English and the copilot generates it, can that workflow be understood and modified by someone else six months later? Or does natural language generation create workflows that are fragile and hard to maintain?
For teams that have actually tried this: does plain language generation actually save you time, or does it just shift the time burden to testing and fixing what the AI generated?
We tested this and it’s genuinely faster for straightforward workflows. Our copilot tool could generate a working automation from a plain English description in about 15 minutes. Traditional mapping would take us 2-3 weeks when you factor in stakeholder meetings and diagram iterations.
What worked: describing workflows that followed obvious patterns. Approval flows, notifications, data transfers. The AI understood those patterns.
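To make that concrete, here’s a rough sketch of the kind of thing a pattern-matched description like “send an email when the deal closes” tends to produce: a declarative trigger-plus-actions structure. Everything here is illustrative — the field names and the tiny interpreter are hypothetical, not any real tool’s output or API.

```python
# Hypothetical sketch: what a copilot might emit for
# "send an email when the deal closes". Names are illustrative.

def build_notification_workflow():
    """Return a simple trigger -> actions workflow definition."""
    return {
        "name": "deal-closed-notification",
        "trigger": {"event": "deal.closed"},
        "actions": [
            {"type": "send_email",
             "to": "{{deal.owner.email}}",
             "template": "deal_closed"},
        ],
    }

def run(workflow, event):
    """Tiny interpreter: fire actions when the event matches the trigger."""
    fired = []
    if event["event"] == workflow["trigger"]["event"]:
        for action in workflow["actions"]:
            fired.append(action["type"])  # a real engine would execute here
    return fired

print(run(build_notification_workflow(), {"event": "deal.closed"}))  # ['send_email']
```

The point is that approval flows, notifications, and data transfers all collapse into this trigger/action shape, which is exactly why the AI handles them well.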
What didn’t work: workflows with complex conditional logic or exception handling. If your process has six different paths based on different conditions, the plain English description gets confusing fast. The AI would generate something, but you’d need developers to validate and refine.
The time savings were real, but they came with a caveat: you need to test generated workflows more thoroughly. Plain English descriptions hide assumptions. You discover those during testing.
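A toy example of why the multi-path cases needed that extra testing: once a process has several conditional routes, rule ordering encodes precedence that the plain English description leaves implicit. The rules and routes below are made up for illustration.

```python
# Hypothetical sketch: multi-path routing generated from prose.
# First match wins, so the ORDER of rules is itself business logic --
# something an English description rarely states explicitly.

RULES = [
    (lambda d: d["region"] == "EMEA" and d["amount"] > 100_000, "emea_escalation"),
    (lambda d: d["partner"], "partner_desk"),
    (lambda d: d["amount"] > 100_000, "manager_approval"),
    (lambda d: d["amount"] > 10_000, "team_lead_approval"),
    (lambda d: d["risk"] == "high", "compliance_review"),
]

def route(deal):
    """Return the first matching route, or the default path."""
    for condition, destination in RULES:
        if condition(deal):
            return destination
    return "auto_approve"  # default path
```

A partner deal over 100k in EMEA hits `emea_escalation` here, but swap the first two rules and it silently goes to `partner_desk` instead. Both orderings are consistent with the English sentences “partner deals go to the partner desk” and “large EMEA deals escalate” — that’s the hidden assumption you only discover in testing.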
We found it faster for prototyping but not necessarily for production. Initial generation from a description was quick. But then you’d need to review what was generated, validate the logic, test edge cases, and make adjustments. That refinement phase took almost as long as traditional mapping.
The advantage wasn’t speed, honestly. It was accessibility. Business analysts could describe workflows and see a prototype immediately. That opened up conversation with stakeholders in a way process diagrams don’t. You could iterate on the prototype visually, which was faster than iterating on diagrams.
If you’re purely optimizing for time-to-development, plain language might not win. But if you’re optimizing for collaboration and iteration, it’s better.
Plain language generation works when your workflows match existing patterns the AI has learned. Novel processes with unusual logic? The AI struggles and generates something that needs heavy customization. For standard business processes, it accelerates initial development. For edge cases, it’s not much faster than traditional approaches. Biggest benefit is reducing the process mapping phase where stakeholders argue about diagram details.
Plain text is faster for template-based workflows; custom logic still needs traditional mapping. A hybrid approach works best: describe inputs and outputs, let the AI map the basics, then refine with diagrams.
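One cheap way to make that hybrid approach concrete is to pin down the inputs and outputs as a small contract first, then mechanically check whatever the AI generates against it before anyone reviews the logic. This is a minimal sketch under assumed names — no real tool exposes exactly this shape.

```python
# Hypothetical sketch of the hybrid approach: agree on an input/output
# contract up front, then flag gaps in the generated workflow.

SPEC = {
    "inputs": {"deal_id", "amount"},
    "outputs": {"approval_status", "notified"},
}

def validate(generated_workflow, spec):
    """Return (inputs the workflow ignores, outputs it never sets)."""
    missing_inputs = spec["inputs"] - generated_workflow["reads"]
    missing_outputs = spec["outputs"] - generated_workflow["writes"]
    return missing_inputs, missing_outputs

# Example: a generated draft that reads deal_id but ignores amount,
# and sets approval_status but never a notified flag.
draft = {"reads": {"deal_id"}, "writes": {"approval_status"}}
print(validate(draft, SPEC))  # ({'amount'}, {'notified'})
```

The check won’t catch wrong logic, but it catches the “the description forgot to mention X” class of gaps early, which is where most of our rework came from.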