I’m working on a scraping and data enrichment pipeline that’s getting complicated. Right now it’s a single workflow that does everything sequentially, but I’m wondering if breaking it into autonomous agents would actually help or just create more problems.
The task breaks down like this: one agent collects data from the website, another validates and cleans it, and a third enriches it with external lookups. In theory, having specialized agents should make the logic cleaner. In practice, I’m worried about race conditions, missed handoffs, and one agent trying to process data before the previous one finishes.
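To make the handoff concern concrete, here's roughly what I'm picturing: each "agent" is just a worker, and bounded queues between stages enforce ordering so no stage touches a record before the previous one has emitted it. All the names and the sentinel convention are hypothetical stand-ins, not real code from my pipeline:

```python
import queue
import threading

SENTINEL = object()  # hypothetical "no more items" marker passed downstream

def collector(out_q):
    # stand-in for the real scraper
    for record in [{"url": "a"}, {"url": "b"}]:
        out_q.put(record)
    out_q.put(SENTINEL)

def validator(in_q, out_q):
    # blocks on in_q.get(), so it can never run ahead of the collector
    while (item := in_q.get()) is not SENTINEL:
        item["valid"] = True  # stand-in for real validation/cleaning
        out_q.put(item)
    out_q.put(SENTINEL)

def enricher(in_q, results):
    while (item := in_q.get()) is not SENTINEL:
        item["enriched"] = True  # stand-in for the external lookup
        results.append(item)

raw_q, clean_q, results = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=collector, args=(raw_q,)),
    threading.Thread(target=validator, args=(raw_q, clean_q)),
    threading.Thread(target=enricher, args=(clean_q, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # prints 2
```

The appeal is that the blocking `get()` calls make the "agent processes data too early" race structurally impossible, but I'm not sure whether that's still true once the agents are separate processes or LLM-driven instead of threads sharing in-memory queues.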
Has anyone actually orchestrated multiple agents on something like this? What went wrong? What worked? I’m trying to figure out if coordinating multiple agents is genuinely better or if I’m just adding complexity for no reason.