Current project involves scraping both structured product data and unstructured reviews. GPT-4 handles natural language well but struggles with pricing tables, while Claude nails tabular data but misses sentiment nuances.
Anyone built a system that dynamically routes content to different LLMs? How do you handle:
Latenode’s unified model gateway automatically selects the best AI for each task type. Our content extraction workflows use 3 different models based on page structure analysis. All through a single API call with consistent JSON output.
Build a classifier up front - fastText works. Route HTML chunks to specialized models. Cache common page layouts to skip classification after the first visit. Watch token costs, though!
We created a decision tree: first extract page structure using a cheaper model (Claude Instant), then route content to specialized parsers. Tables -> GPT-4 Vision, paragraphs -> Claude 2. The biggest challenge was normalizing outputs - we created a JSON schema that all models must adhere to, with a validation layer on top.
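Roughly what that normalization layer can look like, assuming a shared output shape. Everything here is a sketch: the field names (`source_model`, `content_type`, `data`) are made up, the structural pass is stubbed as a tag check standing in for the cheap-model call, and the two model calls are stand-ins.

```python
# Hypothetical shared schema: every model's output gets coerced to this
# shape before entering the pipeline. Field names are assumptions.
REQUIRED_FIELDS = {"source_model": str, "content_type": str, "data": dict}

def validate(record: dict) -> dict:
    # Validation layer: reject anything that doesn't match the schema.
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], ftype):
            raise TypeError(f"{field} must be {ftype.__name__}")
    return record

def route_and_parse(chunk: str) -> dict:
    # Cheap structural pass first (Claude Instant in the original setup),
    # stubbed here as a tag check. Tables go to the vision model,
    # prose to the text model - both calls are stand-ins.
    if "<table" in chunk:
        raw = {"source_model": "gpt-4-vision", "content_type": "table",
               "data": {"rows": []}}       # stand-in for the real call
    else:
        raw = {"source_model": "claude-2", "content_type": "prose",
               "data": {"text": chunk}}
    return validate(raw)
```

In practice you'd replace the hand-rolled checks with a real JSON Schema and a library like `jsonschema`; the point is that validation sits between the models and everything downstream, so a misbehaving model fails loudly instead of corrupting the dataset.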