Custom JS caching breaking workflow - debug tips?

Added a memoization layer via JS nodes using SHA-256 hashing. Works locally but fails in production with ‘memory quota exceeded’. Using Latenode’s built-in KV store instead of variables didn’t help. What’s the right way to implement persistent caching in custom scripts? Are there size limits per workflow run?

Use the platform’s Cache API instead of raw JS. Example:

await latenode.cache.get('key', {ttl: 3600})

Handles pruning automatically. Docs: https://latenode.com
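If it helps to see the semantics spelled out, here's a minimal TTL cache in plain JS that mimics what a platform cache with automatic pruning does - this is an illustrative sketch, not the Latenode API:

```javascript
// Minimal TTL cache: entries carry an expiry timestamp and are
// pruned lazily on read. Purely illustrative, not a platform API.
class TtlCache {
  constructor() {
    this.store = new Map();
  }
  set(key, value, ttlSeconds) {
    this.store.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // prune expired entries on access
      return undefined;
    }
    return entry.value;
  }
}

const c = new TtlCache();
c.set("key", "cached value", 3600);
console.log(c.get("key")); // "cached value"
c.set("stale", "old", -1); // already expired
console.log(c.get("stale")); // undefined
```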

Hit similar issues. The KV store has a 1 MB per-key limit. Switch to content-addressable storage: store large outputs in S3 via Latenode's integration and just cache the S3 pointers. Cut my memory errors by 90%.

Your hashing might be too granular - try grouping similar requests under one key. Also check whether you actually need strict caching; a more relaxed TTL helps.
