How to set up Google OAuth credentials for n8n instance running locally without public domain access

I need help setting up Google authentication for my self-hosted n8n workflow automation tool. My setup runs on a local server that I want to keep private and not expose to the internet.

When I try to connect Google services like Gmail triggers, I run into issues with the OAuth setup. Google Cloud Console wants me to create a web application client ID, but this needs a publicly accessible callback URL.

I tried using n8n’s tunnel feature as a workaround to get a public URL temporarily. However, this creates new problems. The tunnel generates random URLs each time I restart n8n, even when I set the N8N_TUNNEL_SUBDOMAIN variable. This breaks my Google OAuth settings every time.

Plus, using tunnels defeats the purpose of keeping my n8n instance private and secure. I want to avoid exposing my automation server to the public internet.

Has anyone found a reliable way to authenticate Google services with n8n while keeping everything local? I’m looking for alternatives that don’t require public domain access.

Docker networking might fix this without any external services. I run n8n in Docker with custom network settings, and OAuth callbacks work through a local bridge network. Just set up a local DNS resolver that maps a custom domain to your container IP, then register that domain as the redirect URI in Google OAuth. The trick is making sure your local network resolves the callback URL every time - Google never contacts the callback directly; your own browser follows the redirect, so the domain only has to resolve on your machine. This worked for me when I needed Gmail integration but couldn’t use tunnels because of corporate firewall rules. Container restarts don’t break auth since the domain mapping stays put. Needs some Docker networking know-how upfront, but it cuts out both the security headaches and the flaky tunnel services.
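A rough sketch of that setup, assuming a hypothetical hostname `n8n.internal` and n8n’s standard `N8N_HOST` / `WEBHOOK_URL` environment variables (check Google Cloud Console’s redirect-URI rules, since it validates the domains you register):

```shell
# Sketch only - n8n.internal is a made-up hostname; adapt to your own network.
docker network create n8n-net

# Make the custom domain resolve locally (hosts file here; dnsmasq also works):
echo "127.0.0.1 n8n.internal" | sudo tee -a /etc/hosts

# Run n8n on the bridge network, telling it which host/base URL to use for callbacks:
docker run -d --name n8n --network n8n-net -p 5678:5678 \
  -e N8N_HOST=n8n.internal \
  -e WEBHOOK_URL="http://n8n.internal:5678/" \
  docker.n8n.io/n8nio/n8n
```

Because the hosts-file mapping survives container restarts, the redirect URI you register in Google stays valid.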

I set up a reverse proxy with a self-signed cert on my local network and it worked perfectly: configure nginx to handle the OAuth callback locally and add the domain to your hosts file. Even simpler, Google accepts plain localhost redirect URIs for development, so you can use http://localhost:5678/rest/oauth2-credential/callback as your redirect URI in Google Cloud Console. Just make sure n8n runs on the exact port you specify. This kills the tunnel dependency completely and you keep full local control. The auth flow works great, with no random URL changes breaking things. I’ve run this setup for 6+ months with zero auth problems.
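For reference, the path above is n8n’s standard OAuth2 redirect endpoint; a tiny sketch that builds the URI from the port (5678 is n8n’s default, so adjust if you changed it):

```shell
# Build the redirect URI to paste into Google Cloud Console.
N8N_PORT=5678
CALLBACK_URL="http://localhost:${N8N_PORT}/rest/oauth2-credential/callback"
echo "$CALLBACK_URL"
```

The port in this URI must match the port n8n actually listens on, or the browser redirect lands on nothing.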

I encountered a similar issue while setting up my n8n instance. The solution I found effective was to create OAuth credentials as a Desktop Application rather than as a Web Application in the Google Cloud Console. This approach eliminates the need for a public callback URL, thus bypassing the need for any tunneling service. When you authenticate, a browser window will open for the OAuth process, after which you can copy the authorization code back to n8n. This method keeps your instance secure and private while still allowing access to Google services. I’ve implemented this for several months without any issues, though be prepared to re-authenticate when tokens expire. It’s a significantly safer alternative to exposing your server online.
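If you go this route, the manual code-for-token exchange against Google’s token endpoint looks roughly like this. All three variables are placeholders you fill in yourself, and note that Google retired the old copy/paste (“oob”) flow in 2022 - desktop clients now redirect to a loopback address, so you grab the `?code=` parameter from the localhost redirect instead:

```shell
# Hedged sketch of the token exchange for a Desktop-app OAuth client.
CLIENT_ID="your-client-id.apps.googleusercontent.com"   # hypothetical
CLIENT_SECRET="your-client-secret"                      # hypothetical
AUTH_CODE="code-from-the-loopback-redirect"             # hypothetical

curl -s https://oauth2.googleapis.com/token \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET" \
  -d code="$AUTH_CODE" \
  -d redirect_uri="http://localhost:8080" \
  -d grant_type=authorization_code
# Response is JSON with an access_token, plus a refresh_token if the original
# authorization request asked for access_type=offline.
```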

Here’s what worked for me: I set up n8n locally but used SSH tunneling through my own server for OAuth callbacks. Unlike regular tunnels with random URLs that break constantly, you control the endpoint.

I run ssh -R 80:localhost:5678 [email protected] and set up a subdomain to handle callbacks. My n8n stays completely local, but Google can still hit the callback URL through my server proxy.

10 minutes to configure, been running solid for over a year. Auth works perfectly and tokens refresh without any problems.
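To keep a tunnel like that alive across drops and reboots, a sketch using autossh (a wrapper that restarts dead SSH sessions). The hostname is a placeholder, and forwarding straight to port 80 usually needs root or `GatewayPorts` on the server, so this forwards a high port and lets the server’s own proxy handle the rest:

```shell
# Persistent reverse tunnel sketch; your.server.example is hypothetical.
# -M 0 disables autossh's extra monitor port; keepalives detect dead links.
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 8080:localhost:5678 deploy@your.server.example
# On the server, point nginx (or similar) at 127.0.0.1:8080 for the
# subdomain you register as Google's redirect URI.
```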

ngrok is a good option! The paid version has a fixed subdomain which won’t change after restarts. You can also try using desktop app credentials for OAuth—this way, you can work without needing a callback URL.

The OAuth mess with n8n is exactly why I ditched it for Latenode. You’re basically fighting the platform’s design.

Latenode does Google auth completely differently - no tunnels, no localhost tricks, no desktop credential hacks. It just handles OAuth flows natively without exposing anything public.

I had the same tunnel issues when I used n8n. Every restart meant fixing OAuth credentials again. Total nightmare for production stuff.

With Latenode, Google connections just work. Platform handles all the OAuth mess behind the scenes. Authenticate once through their secure setup, then your workflows can hit Gmail, Sheets, Drive - whatever.

Auth stays solid through restarts and updates. No broken connections, no manual token fixes, no security holes.

Migrated my whole automation setup last year and never looked back. Same workflow power as n8n minus the OAuth headaches.

Check it out: https://latenode.com

Honestly, just use service account keys instead of the OAuth flow. Create a service account in Google Cloud Console, generate JSON credentials, then upload them directly to n8n. No callback URLs needed and no inbound access to your server, ever. A bit more setup but way more reliable than tunnels or localhost hacks imo.
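A sketch of those steps with the gcloud CLI; the account name is arbitrary and `PROJECT_ID` is a placeholder. One caveat worth knowing: a service account is its own identity, so to act on a personal Gmail or Drive account it needs domain-wide delegation, which is only available with Google Workspace - for plain Sheets/Drive access you can instead share the resource with the service account’s email:

```shell
# Create the service account (name is arbitrary).
gcloud iam service-accounts create n8n-automation \
  --display-name "n8n automation"

# Generate a JSON key file to upload into n8n's service-account credential.
gcloud iam service-accounts keys create n8n-key.json \
  --iam-account n8n-automation@PROJECT_ID.iam.gserviceaccount.com
```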