Working on a project that uses Claude, GPT-4, and three custom models, and npm install takes forever. Latenode's templates claim to optimize this, but which ones actually work for multi-model setups?
Tried the ‘AI Bundle Optimizer’ template but saw minimal gains. Anyone modified these successfully? The docs mention parallel package fetching but my bandwidth is maxed.
I modified the template to install CPU-intensive packages first and used Latenode's resource governor to prevent memory thrashing. It now handles 5 models without OOM errors.
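If you're not on Latenode, the same idea can be rebuilt by hand: cap how many heavy installs run at once so native builds don't pile up in memory. This is a generic concurrency limiter, not Latenode's actual resource governor API (which I haven't seen documented); the limit of 2 is an arbitrary example.

```javascript
// Minimal concurrency limiter: caps simultaneous heavy jobs (e.g. installs
// that trigger node-gyp builds) to avoid memory thrashing.
// NOTE: a generic stand-in for Latenode's resource governor, not its real API.
function limiter(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => { active--; next(); });
  };
  // Returns a scheduler: wrap any promise-returning job with it.
  return (fn) => new Promise((resolve, reject) => {
    queue.push({ fn, resolve, reject });
    next();
  });
}

// Usage sketch: sort packages heaviest-first, then schedule installs
// through the limiter so at most 2 build at a time.
const limit = limiter(2);
```

Sorting CPU-intensive packages to the front means the expensive builds start while the machine is still cool, and the limiter keeps them from overlapping past your memory budget.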
I implemented a phased installation process with custom JS modules: base packages download first, then model-specific deps install in parallel, with dependency-resolution caching on top.