We’ve been experimenting with using autonomous AI agents to model some of our internal workflows before we automate them, and I’m noticing that the simulations are catching things our process documentation completely missed.
Here’s the scenario: one of our teams owns a data validation workflow. On paper, it’s straightforward—check data, validate against rules, flag issues, send report. We had detailed documentation, we’d walked through it multiple times with the team. Seemed solid.
So we set up agents to model the workflow: one agent doing the validation, another handling escalation, another creating summaries. We ran it against historical data to see if it matched what actually happens.
It didn’t match. The agents surfaced three gaps between the documentation and reality:
First, the validation workflow loops. We’d documented it as a straight path, but actually, when data fails validation, the system retries with adjusted parameters. That happens multiple times before escalation. No one mentioned this in the meetings because it’s automatic—it just happens. But it accounts for about 40% of the processing time.
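That hidden retry loop can be sketched roughly like this. Everything here is illustrative — the record fields, the tolerance-doubling rule, and the retry cap are stand-ins, not the team's actual system:

```python
# Sketch of the undocumented retry loop: validation retries with
# adjusted parameters, and only escalates after the retries run out.
# All names and values here are hypothetical.

MAX_RETRIES = 3

def validate(record, tolerance):
    # Stand-in rule: a record passes if its value is within tolerance
    # of the expected value.
    return abs(record["value"] - record["expected"]) <= tolerance

def process(record, tolerance=1):
    """Retry with loosened parameters; escalate only after all retries fail."""
    for attempt in range(MAX_RETRIES):
        if validate(record, tolerance):
            return ("passed", attempt)
        tolerance *= 2  # the "adjusted parameters" step no one mentioned
    return ("escalated", MAX_RETRIES)
```

The point of modeling it explicitly is that each pass through the loop is visible and countable — which is exactly how the agents ended up attributing ~40% of processing time to it, while a straight-line flowchart hides it completely.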
Second, context switching between systems. The official process is: validate, flag, report. But in reality, someone has to cross-reference the flagged data against another system to understand why it failed. That wasn’t in the documented process at all, but it’s adding about 30 minutes to each cycle.
Third, exceptions and workarounds. The documentation describes the happy path. What actually happens is that experienced team members skip several steps when they recognize certain patterns. These shortcuts are valid and save time, but they’re not in any documentation.
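One way to make those tribal shortcuts explicit is a lookup from recognized failure patterns to the reduced step list. The pattern names and step names below are hypothetical placeholders, not anything from our actual workflow:

```python
# Sketch of the undocumented shortcuts: experienced reviewers skip steps
# when they recognize a known failure pattern. Pattern and step names
# are illustrative.

FULL_PATH = ["validate", "cross_reference", "flag", "report"]

KNOWN_PATTERNS = {
    "missing_timestamp": ["flag", "report"],  # skip cross-reference
    "stale_source_feed": ["report"],          # skip flag and cross-reference
}

def steps_for(failure_pattern=None):
    """Return the steps actually run: a shortcut if the pattern is known,
    otherwise the full documented path."""
    return KNOWN_PATTERNS.get(failure_pattern, FULL_PATH)
```

Writing the shortcuts down this way also forces the question of whether each one is actually valid — which is a review you never get while they live only in people's heads.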
If we had automated based on the documented process alone, we would have built something that didn’t match reality. Using agents to model first let us see the actual workflow, not the theoretical one.
The other interesting part: the agents themselves pointed out inefficiencies we probably wouldn’t have caught. One agent suggested that doing validation and cross-reference checks in a different order would eliminate a lot of loops. That’s a suggestion that came from an AI reasoning about the optimal flow, not from us reverse-engineering bad habits.
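The reordering the agent suggested amounts to this: consult the other system first, and route out records whose failure is already explained there, instead of burning retries on them. Again, a hypothetical sketch — the source check and thresholds are invented for illustration:

```python
# Sketch of the agent's suggested reordering: cross-reference before
# retrying, so records that can never pass skip the retry loop entirely.
# All names and values are illustrative.

def reordered_process(record, known_bad_sources, tolerance=1, max_retries=3):
    # Cross-reference first: if the upstream system already explains the
    # failure, escalate immediately instead of retrying.
    if record["source"] in known_bad_sources:
        return "escalated_no_retries"
    for _ in range(max_retries):
        if abs(record["value"] - record["expected"]) <= tolerance:
            return "passed"
        tolerance *= 2
    return "escalated"
```

Compared with the original flow, the doomed records cost one lookup instead of a full retry cycle plus a manual cross-reference afterward.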
Has anyone else used agent-based modeling before automation? I’m curious if others are seeing this gap between documented processes and what actually happens.