I’ve seen that there are over 400 AI models available, and I’m genuinely wondering whether model choice has any practical impact on browser automation workflows.
On the surface, it seems like it shouldn’t matter much for basic automation. If I’m navigating a page, clicking elements, and extracting text, I’m not doing anything that requires the most advanced reasoning. A smaller, faster model might actually win on speed and cost.
But then I think about the AI Copilot feature—if I’m describing my automation goal in plain English and the copilot generates a workflow, maybe a more capable model generates more resilient workflows? Or handles edge cases better?
I’m also curious about data extraction tasks. If I’m asking the AI to understand and structure messy HTML or handle badly formatted data, does model quality actually make a difference?
Has anyone experimented with different models for the same task and noticed actual differences in quality or reliability?
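For anyone who has run this kind of comparison (or wants to), here’s roughly the scoring harness I had in mind — a minimal sketch, with the model responses stubbed in as placeholders. In practice you’d paste in whatever each model actually returned for the same extraction prompt, and the model names here are made up:

```python
import json

# The schema I want the model to return for each scraped item.
REQUIRED_KEYS = {"name", "price", "in_stock"}

def score_output(raw: str) -> dict:
    """Score one model response: does it parse as JSON, and does it match the schema?"""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"parses": False, "schema_ok": False}
    schema_ok = isinstance(data, dict) and REQUIRED_KEYS <= data.keys()
    return {"parses": True, "schema_ok": schema_ok}

def compare(responses: dict[str, str]) -> dict[str, dict]:
    """responses maps model name -> raw text it returned for the same prompt."""
    return {model: score_output(raw) for model, raw in responses.items()}

# Stubbed example responses (hypothetical model names, made-up outputs):
sample = {
    "small-model": 'Here is the JSON: {"name": "Widget"}',  # chatty preamble breaks parsing
    "big-model": '{"name": "Widget", "price": 9.99, "in_stock": true}',
}
print(compare(sample))
```

Even a crude pass/fail check like this, run over a few dozen messy pages, would show whether the bigger model’s extractions are actually more reliable or whether the difference is noise.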