I've been researching whether it makes sense to split WebKit scraping work across multiple coordinated AI agents, e.g. one agent that handles login, another that extracts data, and a third that validates what was extracted. The pitch sounds nice: specialized agents should be more reliable and faster. My concern is that coordinating between agents adds complexity, debugging becomes a nightmare, and if one agent fails halfway through, the whole run breaks down. I've tried building multi-step workflows manually, and they tend to be fragile.

Has anyone actually used autonomous AI teams for WebKit scraping? What did coordination actually look like, and was it worth the setup overhead versus just building one solid workflow?