This is a weird problem to have, but I’m genuinely confused by the choice now. We’ve got access to a bunch of different AI models through a single subscription, and I’m finding myself paralyzed by options.
For our Puppeteer workflows, we need AI to make decisions at different points. Which pages to visit next? What data is relevant to extract? When should we flag something for manual review? Each decision point could theoretically use a different model, but I have no idea whether I should be choosing based on task fit, cost, speed, or something else entirely.
I used to manage separate APIs for different models, which was its own nightmare. At least then I knew I was paying for what I used directly. Now I’ve got one subscription and dozens of models available and I’m basically guessing which one to throw at each problem.
I know there are probably best practices here—maybe simpler models for routing decisions, bigger models for complex analysis? But I’m not finding clear guidance. In our old setup with separate APIs, you made a choice and lived with it. Now there’s analysis paralysis.
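To make the tiering idea concrete, here's the kind of thing I've been sketching: a routing table that maps each decision type to the cheapest model tier that seems adequate for it. All the model names, tiers, and decision labels below are placeholders I made up, not a recommendation:

```javascript
// Hypothetical routing sketch. Model names and tier assignments
// are placeholders, not real identifiers from any provider.
const MODEL_TIERS = {
  small: "small-fast-model",      // cheap, low-latency
  large: "large-reasoning-model", // slower, more capable
};

// Map each decision point in the workflow to a tier.
const DECISION_ROUTING = {
  nextPage: "small",     // "which link to follow" is usually simple
  extractData: "large",  // judging relevance needs more context
  flagForReview: "small", // roughly a yes/no confidence check
};

function pickModel(decision) {
  const tier = DECISION_ROUTING[decision];
  if (!tier) {
    throw new Error(`No routing rule for decision: ${decision}`);
  }
  return MODEL_TIERS[tier];
}
```

Then each Puppeteer step would call `pickModel("nextPage")` or similar before hitting the AI endpoint. But that just pushes the question down a level: I still don't know how to decide which decisions belong in which tier without testing each one.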
How are you folks approaching this? Are you just picking a model and sticking with it for consistency? Testing each decision point separately? Or is there a framework I’m missing that makes this less arbitrary?