I’m working with Zapier and have a workflow that uses a scheduled trigger connected to a Code by Zapier action step. My goal is to make two HTTP calls within the code action - first retrieve data from one endpoint using GET, then send information to a different endpoint with POST. The problem I’m running into is performance-related. When I implement the fetch call for the initial GET request, it’s already consuming around 900 milliseconds of execution time. Adding the second POST request would push the total runtime over 1 second, which runs into Zapier’s execution time limits. Has anyone found a way to optimize multiple HTTP requests in Zapier code actions or worked around similar timing constraints?
Your timing issue is probably how you’re handling request dependencies. I hit the same problem with data sync workflows that needed sequential API calls. Instead of waiting for the first response to fully process, I fire the POST right after extracting the essential fields from the GET response. This cut processing time significantly, since the second call no longer waits on full response validation. I also set 400ms timeouts per call, so if either endpoint doesn’t respond fast, the workflow fails quickly instead of burning execution time. Check whether your endpoints support HTTP/2 multiplexing too - it can reduce connection overhead, though how much of Zapier’s per-fetch cost it removes will vary.
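Here’s roughly what that pattern looks like inside a Code step. The URLs, the `record_id` field, and the 400ms budget are placeholders I made up, and `AbortController` availability in your Code by Zapier runtime is an assumption to verify - treat this as a sketch, not the exact fix:

```javascript
// Wrap fetch with a hard per-call time budget so a slow endpoint
// fails fast instead of eating the whole execution limit.
async function fetchWithTimeout(url, options = {}, ms = 400) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { ...options, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always clear so the step can exit cleanly
  }
}

async function run() {
  // GET with a 400ms budget.
  const getRes = await fetchWithTimeout('https://api.example.com/data');
  const data = await getRes.json();

  // Fire the POST as soon as the essential field is parsed -
  // no extra validation pass over the full GET response first.
  const postRes = await fetchWithTimeout('https://api.example.com/sink', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: data.record_id }), // placeholder field
  });
  return { status: postRes.status };
}
```

Call `run()` (or return its result) however your Code step expects output to be produced.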
async calls might help, but zapier’s just slow no matter what. same thing happens with webhooks - the platform adds lag even when your apis are lightning fast. can you combine those endpoints? some apis let you chain operations in one call, which could cut your execution time in half.
That 900ms delay is probably Zapier’s infrastructure, not your actual API times. I’ve hit this before with complex integrations. What worked for me: use Webhooks by Zapier steps instead of cramming HTTP requests into code actions. Set up your first API call as a webhook step, then feed its response into a second step that makes the POST. This spreads the work across multiple steps instead of jamming everything into one action. You can also optimize your code - shrink payload sizes, add compression headers, or use simpler auth methods. Sometimes Zapier chokes on processing the request data, not the network call itself. If you’re stuck doing both calls in one action, add timeouts and retry logic so slow responses fail fast instead of hitting execution limits.
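For the timeout-and-retry part, a rough sketch of what I mean - the retry count and time budgets are illustrative, and `AbortController` support in the Code runtime is an assumption you should check:

```javascript
// Retry a request with a per-attempt timeout, so a slow or flaky
// endpoint fails fast instead of blowing the step's execution limit.
async function fetchWithRetry(url, options = {}, { retries = 1, timeoutMs = 300 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { ...options, signal: controller.signal });
      if (res.ok) return res;
      // Non-2xx status: fall through and retry if attempts remain.
    } catch (err) {
      // Timed out or network error: retry if attempts remain.
    } finally {
      clearTimeout(timer);
    }
  }
  throw new Error(`Request to ${url} failed after ${retries + 1} attempts`);
}
```

One retry at a 300ms budget caps the worst case around 600ms, which still fits under the limit alongside a second call.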
try using Promise.all() to handle the requests concurrently instead of sequentially - only works if your POST payload doesn’t depend on the GET response, though. also, see if the endpoints can do batch operations to minimize the calls. 900ms sounds like network delays on Zapier’s side, which can be a pain.
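quick sketch of the concurrent version - the URLs and payload are made up, and again this only helps when the POST doesn’t need data from the GET:

```javascript
// Run the GET and POST in parallel: total time becomes roughly
// max(GET, POST) instead of GET + POST.
async function runConcurrent() {
  const [getRes, postRes] = await Promise.all([
    fetch('https://api.example.com/data'),
    fetch('https://api.example.com/sink', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ source: 'zapier-schedule' }), // placeholder payload
    }),
  ]);
  return { getStatus: getRes.status, postStatus: postRes.status };
}
```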
Been there multiple times. Zapier’s execution limits suck when you’re chaining API calls.
The 900ms isn’t your only problem - Zapier’s code actions just weren’t built for complex workflows with multiple HTTP requests. You’re fighting the platform itself.
You need an automation platform that actually handles parallel processing without these timeout constraints. I switched all my multi-step API stuff to Latenode since it lets you chain HTTP requests without the execution limit headaches.
With Latenode, your GET request triggers automatically, processes the response, then fires the POST request in one workflow. No timeouts, no bottlenecks.
Better error handling too, and you can actually see what each request is doing instead of debugging blind in Zapier’s editor.
I hit this same bottleneck integrating CRM data with a reporting API through Zapier. The problem isn’t always network latency - Zapier’s JavaScript runtime adds overhead that piles up fast with fetch operations. I fixed it by caching the GET response temporarily and only running the POST when specific conditions changed, not on every trigger. You could also split the calls into separate Zapier steps with filters between them - spreads the execution time across multiple actions. Another thing that worked for me: use Zapier’s built-in HTTP request steps instead of fetch in code actions. They’re way better optimized performance-wise. Less flexible for processing responses, but you skip the JavaScript runtime penalties completely.
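Here’s roughly how the cache-and-skip part looks in a Code step. `StoreClient` is the key-value store Zapier exposes inside Code steps (confirm it’s available on your plan); the secret, URLs, and the hash helper are placeholders I’m using for illustration:

```javascript
// Cheap deterministic fingerprint (djb2 over the serialized response),
// used to detect whether the GET data actually changed since last run.
function fingerprint(obj) {
  const s = JSON.stringify(obj);
  let h = 5381;
  for (let i = 0; i < s.length; i++) h = ((h * 33) ^ s.charCodeAt(i)) >>> 0;
  return h.toString(16);
}

async function syncIfChanged() {
  const store = StoreClient('my-secret'); // placeholder secret
  const data = await (await fetch('https://api.example.com/data')).json();

  const current = fingerprint(data);
  const previous = await store.get('last_fingerprint');
  if (previous === current) return { skipped: true }; // unchanged: skip the POST entirely

  await fetch('https://api.example.com/sink', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  await store.set('last_fingerprint', current);
  return { skipped: false };
}
```

On runs where nothing changed you pay only for the GET and two store lookups, which keeps you well under the limit.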
Zapier creates these bottlenecks because everything runs through their shared JavaScript runtime. Even if your API calls only take 50ms, you’re still paying overhead on every fetch.
I hit this exact problem building automated data sync workflows. Don’t waste time optimizing within Zapier’s limits - switch to a platform built for this stuff.
Latenode handles multiple HTTP requests without execution constraints. Your GET request processes, transforms data however you want, then triggers the POST automatically. No 1-second limits killing your workflows.
The difference is massive in my workflows: what took 900ms+ in Zapier runs under 200ms in Latenode, with no shared runtime overhead. Plus you get way better debugging tools to see what each HTTP request is actually doing.
I moved all my multi-step API integrations there after hitting Zapier’s walls too many times. Much more reliable for chained requests.