Http client closed request, check your client timeouts

Hello everyone,

I want to migrate from n8n to Latenode, but I’m having problems when there are a lot of records to load: I get a timeout error whether I call the API through my system, through Postman, or through Latenode itself.

The scenario fetches data from Google Firestore, creates a PDF report, and returns a link to download the PDF. In this test it found 20 records, and it needs to make 4 more queries against different Firestore collections to get some data and do some processing before building the PDF. The scenario times out at one minute and forty seconds.

Does anyone have any idea what this could be?

I’ve already paid for the basic plan to decide whether to switch from n8n. So far I like it, but one thing I noticed is that it’s slower than n8n: some scenarios that take 4 seconds in n8n take more than 20 seconds in Latenode, and scenarios that take 40 seconds in n8n take more than 1 minute.

Is this slowness normal?




Hello! The error means that the client (the one that sent the webhook request) closed the connection before receiving a response.

At the end of your scenario, there’s a Webhook Response node that should send back a reply (such as a status update). However, if it only gets triggered after all actions in the scenario are completed, the client might time out and close the connection before the response is sent.

That said, the scenario itself still runs to completion: there are no errors on Latenode’s side. It’s simply a case where the client didn’t wait long enough for the response.

How to fix this:

  • Increase the timeout on the client side, if possible.
  • Optimize your scenario to complete faster.
  • Send an immediate Webhook Response with a quick acknowledgment, and continue executing the main logic in the background.
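The third option is the most robust, since it works regardless of the client’s timeout setting. As a rough illustration of the idea in plain Python (not Latenode-specific; `handle_webhook` and `build_report` are hypothetical names standing in for your webhook handler and your Firestore-plus-PDF logic):

```python
import concurrent.futures
import time

# Worker pool that carries out the slow part of the scenario
# after the client has already received its acknowledgment.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def build_report(payload):
    """Stands in for the slow work: Firestore queries + PDF generation."""
    time.sleep(0.1)  # placeholder for the long-running processing
    return {"pdf_url": f"https://example.com/reports/{payload['id']}.pdf"}

def handle_webhook(payload):
    # Hand the heavy work off to a background thread...
    future = executor.submit(build_report, payload)
    # ...and return a quick acknowledgment immediately, so the HTTP
    # client gets a response long before its timeout fires.
    ack = {"status": "accepted", "note": "report is being generated"}
    return ack, future

ack, future = handle_webhook({"id": 42})
print(ack["status"])               # returned immediately
print(future.result()["pdf_url"])  # available once the work finishes
```

In this pattern the client polls for the result (or receives it via a callback) instead of holding the connection open for the whole run, which is why it no longer matters how long the PDF generation takes.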

As for performance: did you use the cloud or self-hosted version of n8n?

In some cases, n8n may be faster now, but our developers are constantly working on improving the responsiveness and speed of the platform. With every update, we aim to deliver better performance.

Hi Raian, thank you so much for sharing this fix. I was suffering from this issue and was about to post it on the Latenode community, but luckily when I opened the community this was the first post, with the fix.
