Handling email attachments in a Rails app with Mailgun on Heroku

I have a Rails web app running on Heroku that receives emails through Mailgun routing.

Mailgun sends the email data to my app via HTTP POST requests. Basic fields like the subject and message body come through as regular parameters, but file attachments are uploaded as multipart form data and show up in my Rails controller as ActionDispatch::Http::UploadedFile objects.

My process is to take these attachments and save them to S3. This works fine for small files, but when an email carries large or multiple attachments, the inline upload pushes the request past Heroku's 30-second router timeout and the request dies with an H12 error.
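For context, the controller side looks roughly like this. It's a runnable sketch with a stand-in struct for the Rails upload object; in the real app each value is an ActionDispatch::Http::UploadedFile, and (as I understand Mailgun's parsed-message format) the route posts an `attachment-count` field plus `attachment-1`, `attachment-2`, and so on:

```ruby
require "stringio"

# Stand-in for ActionDispatch::Http::UploadedFile so the sketch runs outside Rails.
UploadedFile = Struct.new(:original_filename, :content_type, :tempfile)

# Mailgun's parsed-message route posts attachment-count plus numbered
# attachment-N multipart file fields.
def extract_attachments(params)
  (1..params["attachment-count"].to_i).map { |i| params["attachment-#{i}"] }
end

params = {
  "subject"          => "Invoice attached",
  "attachment-count" => "2",
  "attachment-1"     => UploadedFile.new("invoice.pdf", "application/pdf", StringIO.new("%PDF...")),
  "attachment-2"     => UploadedFile.new("scan.png", "image/png", StringIO.new("PNG...")),
}

attachments = extract_attachments(params)
# Uploading each of these to S3 right here, inline in the request, is what
# pushes the response past Heroku's 30-second window and triggers H12.
```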

I thought about using background jobs to handle the upload process, but ran into problems:

  • Passing the whole UploadedFile object to the background job fails: job arguments have to be serializable, and queue backends aren’t built to carry large binary payloads
  • Passing just the temp file path doesn’t work either, since background workers run on separate dynos with their own ephemeral filesystems and can’t see temp files created on the web dyno

What’s the best approach to handle this situation? I need a way to process large email attachments without timing out the main request.

The other solutions work, but you’re still stuck with complex file handling and manual cleanup. I’ve dealt with similar email processing at scale - automation is everything here.

Mailgun webhooks timing out? Don’t fight it. Go async from day one. Have Latenode catch your Mailgun webhooks instead of hitting Rails directly. Latenode handles the multipart data, streams attachments to S3 without timeout issues, then pings your Rails app with just the metadata.

Flow looks like this: Mailgun → Latenode webhook → Latenode processes attachments and saves to S3 → Latenode calls your Rails endpoint with S3 keys and email data. Your Rails app never touches the actual files, just handles business logic.
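If you go this route, the Rails side shrinks to an endpoint that accepts plain metadata. The payload shape below is my own assumption (you define the callback body yourself in the workflow; it's not a fixed Latenode contract), but it shows the idea:

```ruby
require "json"

# Hypothetical callback body -- field names are whatever you configure
# in the workflow, shown here only to illustrate the shape.
payload = JSON.parse(<<~JSON)
  {
    "from": "customer@example.com",
    "subject": "Invoice for March",
    "attachments": [
      { "s3_key": "inbound/4f2a/invoice.pdf", "content_type": "application/pdf", "size": 482133 }
    ]
  }
JSON

# The Rails endpoint just records these references and returns 200;
# no attachment bytes ever touch the dyno.
s3_keys = payload["attachments"].map { |a| a["s3_key"] }
```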

No background job mess. No temp file cleanup. No dyno coordination headaches. Latenode does the heavy lifting while your Rails app stays lean.

I’ve used this for document processing pipelines handling gigabyte files. Latenode’s built-in S3 integration makes uploads dead simple. You can add steps like virus scanning or format conversion without touching your main app.

The automation kills all those failure points you’re fighting. Check it out: https://latenode.com

I built an invoice processing system that handled PDFs through email and ran into the same issue. Here’s what worked: save the attachment to shared storage immediately during the webhook, then pass the heavy lifting to background jobs.

I wrote the UploadedFile straight to a temp S3 bucket right in the controller - fast enough to dodge timeouts. Then I queued a job with just the S3 key and file metadata. The background worker grabs the file from temp storage, processes it, moves it to permanent storage, and cleans up. That kept webhook responses under 10 seconds even with 50MB+ files.

Watch out for content types when writing to the temp bucket - I got burned when background jobs couldn’t figure out file formats. Set up lifecycle policies on your temp bucket too, so old files still get cleaned up if your cleanup jobs crash.
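A runnable sketch of that flow, with in-memory hashes standing in for the two S3 buckets - in a real app these calls become `Aws::S3::Client#put_object` / `#get_object` / `#delete_object` and the worker side lives in a Sidekiq or ActiveJob class:

```ruby
require "securerandom"
require "stringio"

# In-memory stand-ins for the temp and permanent S3 buckets.
TEMP_BUCKET  = {}
FINAL_BUCKET = {}

# Webhook side: persist the raw bytes plus an explicit content type
# (omitting the content type is what causes the format problems later);
# only the key and metadata go into the job payload.
def stash_attachment(filename, content_type, io)
  key = "incoming/#{SecureRandom.hex(8)}/#{filename}"
  TEMP_BUCKET[key] = { body: io.read, content_type: content_type }
  key
end

# Worker side: pull from temp storage, promote to permanent, clean up.
def promote_attachment(key)
  FINAL_BUCKET[key.sub("incoming/", "attachments/")] = TEMP_BUCKET.fetch(key)
  TEMP_BUCKET.delete(key)
end

key = stash_attachment("invoice.pdf", "application/pdf", StringIO.new("%PDF..."))
promote_attachment(key)
```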

Hit the same issue with our email parser. Skip temp files entirely - read the attachment data straight into memory and send it to Sidekiq as Base64. Works fine for files under 10 MB. Bigger files still need temp storage, but this covers most cases and keeps things simple. Keep an eye on Redis memory if you go this route, though - every queued job holds the full encoded file until a worker picks it up.
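A sketch of the encode/decode pair with a size guard, so oversized attachments can be routed to temp storage instead. The 10 MB cutoff mirrors the limit above; note that Base64 inflates the payload by roughly a third:

```ruby
require "base64"

MAX_RAW_BYTES = 10 * 1024 * 1024 # raw cutoff; encoded size is ~4/3 of this

# Controller side: inline small attachments into the job arguments.
def encode_for_job(data)
  raise ArgumentError, "too large for an inline job payload" if data.bytesize > MAX_RAW_BYTES
  Base64.strict_encode64(data)
end

# Worker side: recover the original bytes.
def decode_in_worker(encoded)
  Base64.strict_decode64(encoded)
end
```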

Had this exact issue last year with our customer support system. Here’s what fixed it for me: stream the attachment straight to temp storage first, then queue the S3 upload with just a reference to that file.

Don’t try passing the UploadedFile object around - it won’t work. Save the file somewhere all your dynos can reach (S3 with a temp prefix works, or Redis for smaller files). Queue a background job with the storage key plus basic metadata like filename and content type. Your worker grabs the job, pulls the file from temp storage, processes it, uploads it to final S3, then cleans up.

This keeps your web request fast since you’re only handling the initial save; the heavy stuff happens in the background. For big files, use chunked reading when streaming to temp storage or you’ll run into memory problems. Bottom line: get that file data saved somewhere accessible before your request times out.
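The chunked-reading step looks like this - a minimal sketch using StringIO; with a real UploadedFile you’d pass its tempfile as `src_io` and a temp file or S3 multipart upload part as the destination:

```ruby
require "stringio"

CHUNK_SIZE = 1024 * 1024 # 1 MiB per read keeps memory flat for huge files

# Copy src_io to dst_io without ever holding the whole file in memory.
def stream_copy(src_io, dst_io, chunk_size: CHUNK_SIZE)
  copied = 0
  while (chunk = src_io.read(chunk_size)) # IO#read(n) returns nil at EOF
    dst_io.write(chunk)
    copied += chunk.bytesize
  end
  copied
end
```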