I’m having trouble getting document previews to work with ngx-doc-viewer when using files from Amazon S3 with temporary access URLs.
My Setup:
I created a temporary S3 URL and confirmed it works by opening it directly in my browser. Then I tried to show the document using different viewer services.
First Attempt - Google Docs Viewer:
const tempUrl = getS3TemporaryUrl();
const encodedLink = encodeURIComponent(tempUrl);
const googleViewerUrl = `https://docs.google.com/viewer?url=${encodedLink}&embedded=true`;
Result: Shows “No Preview Available” message
Second Attempt - Microsoft Office Online:
let officeViewerUrl = 'https://view.officeapps.live.com/op/view.aspx?src=' + encodedLink;
Result: Error saying “File is not publicly accessible”
What I Notice:
When I use ngx-doc-viewer’s `url` viewer option, the file downloads instead of rendering a preview. This tells me the URL itself works, but something prevents the external viewer services from accessing it.
I tried URL encoding as suggested in some tutorials, but it didn’t help.
My Questions:
Why do these viewer services fail with S3 temporary URLs even when the files are reachable?
Has anyone got this working with ngx-doc-viewer and private S3 files?
Are there other document viewers that handle private S3 URLs better?
Had this exact frustration when building document previews for a client portal. The problem is that Google and Microsoft viewers fetch your docs with server-side requests, and those requests can’t carry the authentication that S3 temp URLs require.

Here’s what worked for me: when I generate the temp S3 URL, I also copy the file to a public location (public S3 bucket or CDN) under a random filename. The public copy gets auto-deleted after a set time. This removes the auth barrier while staying reasonably secure through unguessable filenames and scheduled cleanup.

Performance is way better too, since viewers access files directly without proxy delays. For sensitive stuff, I watermark the cached copies with user info before making them public.
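A minimal sketch of the key-generation side of this approach (the bucket name, the `previews/` prefix, and the lifecycle-rule cleanup are assumptions; the actual copy would go through your S3 client, e.g. the AWS SDK’s CopyObject operation):

```typescript
import { randomBytes } from "crypto";

// Build an unguessable key for the temporary public copy.
// Keeping the original extension lets viewer services detect the file type.
export function randomPublicKey(originalKey: string): string {
  const token = randomBytes(16).toString("hex"); // 128 bits of randomness
  const dot = originalKey.lastIndexOf(".");
  const ext = dot >= 0 ? originalKey.slice(dot) : "";
  return `previews/${token}${ext}`;
}

// URL of the copy once it lands in the public bucket (name is hypothetical).
// An S3 lifecycle rule on the previews/ prefix expires objects after a day.
export function publicPreviewUrl(publicKey: string): string {
  return `https://my-public-previews.s3.amazonaws.com/${publicKey}`;
}
```

Hand `publicPreviewUrl(...)` to the Google/Office viewer instead of the presigned URL and both services can fetch the file.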
I’ve hit this exact problem at work multiple times. Google and Microsoft viewers can’t handle S3’s authentication when they try to fetch your files from their servers.
Skip the proxy endpoints and base64 stuff - I automated the whole thing instead. Build a system that converts documents to web-friendly formats and serves them through a simple API.
Here’s my setup: document uploads to S3 trigger automatic conversion to PDF or HTML, store the converted version somewhere publicly accessible, then return a viewer-ready URL. No more temporary URL or auth headaches.
The automation handles different file types, manages conversions, and cleans up old files automatically. Takes about an hour to set up and eliminates all the viewer service headaches.
I use Latenode for the automation since it connects S3, conversion services, and your frontend without the mess of custom proxy endpoints.
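To make the pipeline concrete, the dispatch step that picks a web-friendly target per upload might look like this (the mapping itself is my assumption, not anything Latenode-specific):

```typescript
// Map an uploaded file to the format the conversion pipeline produces.
// PDFs pass through untouched; spreadsheets become HTML; everything else
// (doc, docx, ppt, pptx, ...) becomes PDF.
function conversionTarget(filename: string): "pdf" | "html" | "none" {
  const dot = filename.lastIndexOf(".");
  const ext = dot >= 0 ? filename.slice(dot + 1).toLowerCase() : "";
  if (ext === "pdf") return "none"; // already viewer-ready
  if (["xls", "xlsx", "csv"].includes(ext)) return "html";
  return "pdf";
}
```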
same headache here with S3 temp URLs. google and microsoft’s viewers grab files from their servers, but your temp URL’s auth tokens expire too fast. set up a proxy endpoint on your backend that streams the S3 content directly - don’t expose the temp URL to those viewer services.
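a minimal sketch of that proxy idea (the types and injected fetcher are assumptions; in a real backend you’d generate a fresh presigned URL per request and stream through your framework of choice):

```typescript
// Thin stand-in for fetch / an S3 client, so the handler stays testable.
type Upstream = { ok: boolean; contentType: string | null; body: Uint8Array };
type Fetcher = (url: string) => Promise<Upstream>;

// Proxy the private object: the backend resolves the signed URL itself,
// so the browser only ever sees /documents/:key on your own domain.
async function proxyDocument(
  signedUrl: string,
  fetcher: Fetcher
): Promise<{ status: number; contentType: string; body: Uint8Array }> {
  const upstream = await fetcher(signedUrl);
  if (!upstream.ok) {
    return { status: 502, contentType: "text/plain", body: new Uint8Array() };
  }
  return {
    status: 200,
    contentType: upstream.contentType ?? "application/octet-stream",
    body: upstream.body, // stream in chunks in production instead of buffering
  };
}
```

note this still won’t help google/office viewers (their servers hit your proxy without your users’ session), but it works fine for ngx-doc-viewer’s `url` mode or PDF.js.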
The issue arises because Google Docs Viewer and Office Online fetch the file from their own servers, and those server-side requests cannot authenticate against S3’s temporary URLs. In my experience there are two workable approaches. The first is converting documents to base64 and embedding them directly in ngx-doc-viewer, which works well for smaller files, but larger ones create performance problems. The second is setting up an endpoint on your own server that checks permissions and streams the S3 content without ever revealing the temporary URL. For PDF documents, PDF.js may be your most effective option, since it renders content your own code fetches and therefore handles private files cleanly.
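If you go the base64 route, the encoding step might look like this (the MIME type and the ngx-doc-viewer wiring are up to you; this is just the helper):

```typescript
import { Buffer } from "buffer";

// Turn raw bytes into a data: URL suitable for embedding small documents.
// Avoid this for large files: the whole base64 string lives in memory.
function toDataUrl(bytes: Uint8Array, mimeType: string): string {
  const base64 = Buffer.from(bytes).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}
```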