I’m trying to fetch a file from Amazon S3 within a Zapier workflow and make it available for subsequent steps in my automation. It seems odd that Zapier’s native S3 connector doesn’t include a download option, especially since other storage services like Google Drive and Dropbox have this feature built in.
I think I need to use Zapier’s Code step with JavaScript. Here’s what I’ve put together so far, but I’m stuck on two issues: how to properly configure the AWS region, and how to format the output so Zapier can use the downloaded file in later steps.
const aws = require('aws-sdk');

aws.config.update({
  accessKeyId: "your-access-key-here",
  secretAccessKey: "your-secret-here"
});

const s3Client = new aws.S3();

s3Client.getObject({
  Bucket: "example-bucket",
  Key: "document.pdf"
}, function(err, result) {
  if (err) {
    console.log("Error fetching file: " + err);
  } else {
    console.log("Successfully retrieved " + result.ContentLength + " bytes");
    // need to process result.Body somehow
  }
});

// not sure what to return here for Zapier
return {someOutput: "???"};
Any guidance would be helpful. I’m surprised this workflow isn’t more common, so maybe I’m missing an easier approach?
I ran into this exact limitation a few months ago and ended up taking a slightly different route. Instead of handling the base64 conversion directly in the Code step, I found it more reliable to use the AWS SDK’s getSignedUrl method to generate a temporary download link, then use Zapier’s built-in “Get File” action to actually retrieve it.
The advantage is that you avoid potential memory issues with large files and don’t have to worry about base64 encoding quirks. Your Code step would look something like this:
const aws = require('aws-sdk');

// Credentials and region come from the Code step's input fields
aws.config.update({
  accessKeyId: inputData.accessKey,
  secretAccessKey: inputData.secretKey,
  region: inputData.region
});

const s3 = new aws.S3();

// Generate a presigned GET URL that expires after one hour
const url = s3.getSignedUrl('getObject', {
  Bucket: inputData.bucket,
  Key: inputData.key,
  Expires: 3600 // seconds
});

return { downloadUrl: url };
Then use the downloadUrl output in a subsequent HTTP GET request or file download step. Works much better for larger files in my experience.
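If you want to sanity-check the link before the download step, you can hit the presigned URL from the same Code step. Here’s a minimal sketch, assuming fetch and top-level await are available in Zapier’s JavaScript runtime (they are in current versions) and reusing the url variable from above. It uses a ranged GET rather than HEAD, since the presigned signature is tied to the GET method:

// Fetch only the first byte to confirm the URL resolves
const check = await fetch(url, { headers: { Range: 'bytes=0-0' } });
if (!check.ok) { // 206 Partial Content counts as ok
  throw new Error('Presigned URL check failed with status ' + check.status);
}

return {
  downloadUrl: url,
  contentType: check.headers.get('content-type')
};

That way a bad key or expired credential fails loudly in this step instead of surfacing later in the Get File action.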
honestly i think you’re overcomplicating this… just use the s3.getObject().promise() approach but make sure you’re returning the right format. zapier expects a specific output structure. try returning {hydrate: {type: 'file', url: presignedUrl}} instead of raw base64 - works better with zapier’s file handling system imo
The main issue with your code is that you’re using the callback-based approach but not handling the asynchronous nature properly in Zapier. You need to wrap everything in a Promise or use async/await. Also, for the region configuration, add it directly in the aws.config.update() call.
Here’s the corrected approach:
const aws = require('aws-sdk');

aws.config.update({
  accessKeyId: inputData.accessKey,
  secretAccessKey: inputData.secretKey,
  region: 'us-east-1' // specify your region
});

const s3 = new aws.S3();

const params = {
  Bucket: inputData.bucket,
  Key: inputData.key
};

// Return the promise so Zapier waits for the download before ending the step
return s3.getObject(params).promise().then(data => {
  return {
    fileContent: data.Body.toString('base64'), // data.Body is a Buffer
    contentType: data.ContentType,
    size: data.ContentLength
  };
});
The key is converting the file content to base64 encoding so Zapier can handle it properly in subsequent steps. Make sure to pass your AWS credentials through input fields rather than hardcoding them.
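If you prefer async/await, the same logic reads a little more directly. This is a sketch assuming Zapier’s Code step runtime supports top-level await (current versions run your code inside an async function, so it does):

const aws = require('aws-sdk');

aws.config.update({
  accessKeyId: inputData.accessKey,
  secretAccessKey: inputData.secretKey,
  region: 'us-east-1' // specify your region
});

const s3 = new aws.S3();

try {
  const data = await s3.getObject({
    Bucket: inputData.bucket,
    Key: inputData.key
  }).promise();

  return {
    fileContent: data.Body.toString('base64'),
    contentType: data.ContentType,
    size: data.ContentLength
  };
} catch (err) {
  // Surface a readable error in the Zap history instead of a silent failure
  throw new Error('S3 download failed: ' + err.message);
}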