I’m running into a timeout problem with my PDF generation code that only happens on my production server. The same code works perfectly fine on my development and test environments.
Here’s my PDF creation method:
public async Task<IActionResult> CreateDocument(string htmlContent)
{
    try
    {
        var chromeInstance = await Puppeteer.LaunchAsync(new LaunchOptions
        {
            Headless = true,
        });
        var newPage = await chromeInstance.NewPageAsync();
        // Wait up to 60 seconds for content to load
        await newPage.SetContentAsync(htmlContent, new NavigationOptions { Timeout = 60000 });
        var outputPath = $"D:/Reports/{Guid.NewGuid()}_{ClientId}.pdf";
        await newPage.PdfAsync(outputPath);
        // Clean up resources
        await chromeInstance.CloseAsync();
        return Json(outputPath);
    }
    catch (Exception error)
    {
        LogError(error);
        return Problem($"PDF creation failed: {error.Message}");
    }
}
I set up the browser download in my startup file:
// Initialize Chromium during app startup
await new BrowserFetcher().DownloadAsync();
The error I get shows a 180 second timeout being exceeded during the PDF generation process. The stack trace points to the PdfAsync call timing out. Has anyone else run into similar issues with Puppeteer behaving differently between development and production servers? What could be causing this timeout only in the live environment?
This timeout mess happens because you’re managing browser instances manually, which gets really messy in production with different resource constraints and concurrent requests.
I had the exact same headache until I switched to automating the whole PDF pipeline. Instead of fighting Puppeteer config and resource management on your server, just offload it to a dedicated automation workflow.
Set up a workflow that handles browser lifecycle, memory management, and PDF generation automatically. It spins up fresh browser instances, manages timeouts properly, and cleans up resources without you worrying about server configs.
The workflow handles multiple PDF requests at once, retries failed generations, and stores PDFs in cloud storage instead of your local drive. No more hardcoded paths or manual cleanup.
I’ve been running PDF generation this way for months - zero timeouts. The automation platform handles all the browser complexity while your app just sends HTML and gets back the PDF location.
Check out Latenode - handles Puppeteer workflows perfectly and kills these production headaches: https://latenode.com
Try resource pooling instead of tweaking individual browser instances. I hit similar production timeouts because spawning new Chrome processes for every PDF kills performance under load. What worked for me: keep 2-3 browser instances running and reuse pages rather than launching fresh browsers each time. Your production server’s handling multiple requests at once while dev isn’t under that pressure.

Add explicit memory limits to LaunchOptions and watch how many Chrome processes pile up during peak usage. Sometimes it’s not one slow PDF causing timeouts - it’s resource exhaustion from multiple requests hitting at once. Queue your PDF requests instead of processing them all simultaneously. Your server specs are probably way different between environments too - RAM and CPU allocation matters here.
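A rough sketch of that pooling-plus-queueing idea, assuming a recent PuppeteerSharp where `LaunchAsync` returns `IBrowser` and pages are async-disposable (the `BrowserPool` name and sizes are illustrative, not from the original post):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using PuppeteerSharp;

// Keeps a few long-lived browsers and gates concurrent PDF jobs with a
// semaphore, instead of launching a fresh Chrome process per request.
public sealed class BrowserPool : IAsyncDisposable
{
    private readonly ConcurrentBag<IBrowser> _browsers = new();
    private readonly SemaphoreSlim _gate;

    private BrowserPool(int size) => _gate = new SemaphoreSlim(size, size);

    public static async Task<BrowserPool> CreateAsync(int size = 3)
    {
        var pool = new BrowserPool(size);
        for (var i = 0; i < size; i++)
        {
            pool._browsers.Add(await Puppeteer.LaunchAsync(
                new LaunchOptions { Headless = true }));
        }
        return pool;
    }

    public async Task<byte[]> RenderPdfAsync(string html)
    {
        await _gate.WaitAsync();            // queue requests under load
        _browsers.TryTake(out var browser);
        try
        {
            await using var page = await browser.NewPageAsync();
            await page.SetContentAsync(html);
            return await page.PdfDataAsync(); // bytes, no local path needed
        }
        finally
        {
            _browsers.Add(browser);         // hand the browser back
            _gate.Release();
        }
    }

    public async ValueTask DisposeAsync()
    {
        while (_browsers.TryTake(out var b))
            await b.CloseAsync();
    }
}
```

Create one pool at startup and call `RenderPdfAsync` from the controller; the semaphore is what turns a burst of simultaneous requests into a bounded queue.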
Had this exact issue six months ago. Turns out it was resource constraints on the production server - Chromium couldn’t render PDFs because of memory limits. Production servers always have tighter resource limits than dev. What fixed it: added explicit resource management to LaunchOptions. Try these Args: "--no-sandbox", "--disable-setuid-sandbox", and "--disable-dev-shm-usage". You can also give V8 more heap via "--js-flags=--max_old_space_size=4096". Check if your production server has all the dependencies - Chrome/Chromium needs extra Linux libraries that aren’t always obvious. Bump up the timeout for PdfAsync too - it’s resource-heavy on slower hardware. I’d monitor CPU and memory usage while generating PDFs to see if you’re hitting resource bottlenecks.
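For reference, those flags wired into LaunchOptions would look roughly like this (a sketch, not the poster’s exact config; "--disable-dev-shm-usage" mainly matters on Linux containers where /dev/shm is small):

```csharp
using PuppeteerSharp;

// Suggested flags from the answer above, passed via LaunchOptions.Args.
// The V8 heap bump goes through --js-flags rather than as a bare Chromium flag.
var browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
    Headless = true,
    Args = new[]
    {
        "--no-sandbox",
        "--disable-setuid-sandbox",
        "--disable-dev-shm-usage",
        "--js-flags=--max_old_space_size=4096"
    }
});
```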
I’ve hit this annoying issue too. The timeout’s usually from different server setups, not your code. That hardcoded D:/Reports/ path screams Windows, but most production servers run Linux where Puppeteer acts totally different. First, check if your server has enough disk space - PDF creation fails silently when storage runs low. Add a proper timeout to PdfAsync itself, not just SetContentAsync. Default timeouts are often too short for production hardware. Wrap your PDF generation in a using statement or manually dispose the page before closing the browser. Undisposed pages leak memory and make future PDFs timeout. Last thing - check task manager to make sure the browser process actually dies after each request.
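The disposal advice above could look something like this (a minimal sketch reusing `htmlContent` and `outputPath` from the question; the try/finally guarantees the browser closes even when PdfAsync times out):

```csharp
using PuppeteerSharp;

// Dispose the page and always close the browser, even on exceptions,
// so Chrome processes don't linger and starve later requests.
var browser = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = true });
try
{
    await using var page = await browser.NewPageAsync();
    await page.SetContentAsync(htmlContent);
    await page.PdfAsync(outputPath);
}
finally
{
    await browser.CloseAsync(); // runs on success and on timeout/exception
}
```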
Sounds like network connectivity between your production server and external resources. I hit this same issue when my HTML referenced external stylesheets or fonts that worked fine in dev but got blocked by corporate firewalls in production. The browser just sits there waiting forever for these resources before generating the PDF. Set WaitUntilNavigation.NetworkIdle0 in your NavigationOptions so all network requests finish first. Also try pre-downloading or hosting critical assets locally. Another thing - antivirus software on production servers can be brutal. They’ll quarantine or scan Chrome processes, causing random delays. Check your security logs to see if Chrome.exe gets flagged during those PDF timeouts.
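In PuppeteerSharp that wait condition is spelled `WaitUntilNavigation.Networkidle0`; a sketch of plugging it into the question’s SetContentAsync call:

```csharp
using PuppeteerSharp;

// Wait until the network has been idle before considering the content
// loaded, so external stylesheets/fonts either finish or fail within
// the timeout instead of hanging PdfAsync later.
await page.SetContentAsync(htmlContent, new NavigationOptions
{
    Timeout = 60000,
    WaitUntil = new[] { WaitUntilNavigation.Networkidle0 }
});
```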