Customers can also connect their own domains by pointing DNS records to my proxy server (proxy.myplatform.com). This works fine, but right now I have to manually update my reverse proxy config every time someone adds a custom domain.
This can’t be how big platforms handle thousands of custom domains. There has to be some automated way to update the proxy configuration when new domains are added. I’m wondering if they use dynamic routing or some kind of automation script that generates config files on the fly. What’s the standard approach for this kind of setup without manually editing config files every time?
Cloudflare Workers is super effective! You can route dynamically based on the Host header, so there's no need for manual config changes. Just use Workers KV to store your domain-to-subdomain mappings and let the Worker take care of routing for you. It beats managing nginx configs any day!
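Something along these lines (just a sketch, assuming a KV namespace bound as `DOMAIN_MAP` where each key is a custom domain and the value is the customer's internal subdomain; the names are made up for illustration):

```typescript
// Minimal Cloudflare Worker sketch: route by Host header using a KV lookup.
// KVNamespace comes from @cloudflare/workers-types; DOMAIN_MAP is bound in wrangler.toml.

export interface Env {
  DOMAIN_MAP: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const host = url.hostname.toLowerCase();

    // Look up the internal subdomain this custom domain maps to.
    const target = await env.DOMAIN_MAP.get(host);
    if (!target) {
      return new Response("Unknown domain", { status: 404 });
    }

    // Proxy the request to the mapped origin, preserving path and query.
    url.hostname = target;
    return fetch(new Request(url.toString(), request));
  },
};
```

When a customer adds a domain, your backend just writes the new mapping into KV (via the API or Wrangler) and the Worker starts routing it immediately, no deploys or config edits.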
Database lookups work, but you still need something to handle routing logic and certificate management. I used to build custom proxy solutions until I realized I was wasting tons of time on plumbing.
The game changer is automating everything. When a customer adds a domain, you want automatic DNS verification, SSL certificate provisioning, routing rule updates, and renewal handling - all without touching code.
I built a workflow that monitors domain additions through webhooks, validates the DNS config, generates certificates via the Let’s Encrypt API, and updates the proxy configuration. The whole thing runs hands-off.
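Roughly, the intake side looks something like this (a simplified Node/TypeScript sketch, not the exact workflow I run; the webhook route, payload shape, and the two helper functions are placeholders for whatever your stack uses):

```typescript
import express from "express";
import { promises as dns } from "node:dns";

const app = express();
app.use(express.json());

// Stand-ins for the real pieces: an ACME client call and a write to the routing store.
async function provisionCertificate(domain: string): Promise<void> {
  /* e.g. order a cert with an ACME client and persist it */
}
async function updateProxyMapping(domain: string, target: string): Promise<void> {
  /* e.g. write the mapping to Redis/KV so the proxy picks it up */
}

// Hypothetical webhook fired by the dashboard when a customer adds a domain.
app.post("/webhooks/domain-added", async (req, res) => {
  const { domain, customerId } = req.body as { domain: string; customerId: string };
  try {
    // 1. Confirm the customer actually pointed the domain at the proxy.
    const cnames = await dns.resolveCname(domain).catch(() => [] as string[]);
    if (!cnames.includes("proxy.myplatform.com")) {
      res.status(422).json({ error: "DNS not pointed at the proxy yet" });
      return;
    }

    // 2. Provision the certificate, then 3. publish the routing mapping.
    await provisionCertificate(domain);
    await updateProxyMapping(domain, `${customerId}.myplatform.com`);

    res.json({ status: "active" });
  } catch (err) {
    // Fail loudly so the step can be retried instead of silently dropped.
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3000);
```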
For the proxy layer, use Cloudflare Workers or AWS CloudFront with origin rules that update programmatically. But honestly, automation is what makes or breaks these systems at scale.
Most people focus on technical proxy setup but miss the real problem: orchestrating all these moving parts reliably. You need proper error handling, retry logic, monitoring, and rollback capabilities.
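For the retry part specifically, even a tiny wrapper like this (generic sketch, nothing tool-specific) goes a long way around flaky steps like ACME orders or DNS propagation checks:

```typescript
// Generic retry helper with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 5,
  baseDelayMs = 1_000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 1s, 2s, 4s, 8s, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Example: keep retrying certificate provisioning before giving up and alerting.
// await withRetry(() => provisionCertificate("shop.customer.com"));
```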
Latenode handles this entire workflow beautifully. You can build the complete domain provisioning pipeline with visual automation that connects your domain API, DNS validation, certificate management, and proxy updates. No custom code needed.
The real challenge is handling the entire lifecycle, not just routing. I dealt with this exact issue on a platform with ~800 custom domains. Here’s what actually worked: I built a hybrid setup where the main proxy reads domain mappings from a shared cache that updates via API calls. User adds a domain through our dashboard → backend validates DNS → creates the mapping in our database → instantly updates Redis. The proxy checks Redis first for custom domains before falling back to subdomain routing. This killed config file regeneration and kept response times under 5ms.

Here’s what everyone screws up: DNS validation before enabling routing. You MUST verify the customer owns the domain and configured it right, or you’ll route legit traffic to the wrong places. We use a verification token system - customers add it as a TXT record before we activate their mapping.

For SSL: SNI-based certificate serving with auto Let’s Encrypt provisioning. Store certs in a database or distributed cache so any proxy instance can serve them. Once you treat everything as data instead of configuration, the automation becomes dead simple.
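The SNI part is really just a callback that pulls the right cert out of your store at handshake time - roughly like this (sketch only; getCertFromStore is a stand-in for whatever Redis/Postgres lookup you use):

```typescript
import tls from "node:tls";

// Stand-in for the real lookup against Redis/Postgres; returns PEM strings.
async function getCertFromStore(
  servername: string,
): Promise<{ key: string; cert: string }> {
  throw new Error(`no certificate stored for ${servername}`); // placeholder
}

// Terminate TLS for any custom domain by resolving its cert at handshake time.
const server = tls.createServer({
  SNICallback: (servername, cb) => {
    getCertFromStore(servername)
      .then(({ key, cert }) => cb(null, tls.createSecureContext({ key, cert })))
      .catch((err) => cb(err as Error));
  },
});

server.listen(443);
```

That keeps the proxy instances stateless - any of them can answer for any custom domain as long as they can reach the cert store.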
Most platforms skip static config files and use database-driven dynamic routing instead. I ditched nginx config generation for a custom proxy that queries a database for domain mappings in real time.

Here’s how it works: request hits your proxy → proxy checks the domain mapping table → routes to the right backend server. You can use Traefik with API-driven dynamic config, or build your own with Express/Node.js as a reverse proxy doing database lookups. The trick is treating domain mappings as data, not config.

For SSL, many platforms use wildcard certs for *.myplatform.com plus Let’s Encrypt automation for custom domains. Cache your domain lookups in Redis though - you don’t want to hit the database on every request.
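In case it helps, here’s a bare-bones sketch of that lookup-then-proxy flow in Node/TypeScript, assuming ioredis and the http-proxy package; the `domain:<host>` key scheme and the database helper are placeholders, not a prescribed design:

```typescript
import http from "node:http";
import httpProxy from "http-proxy";
import Redis from "ioredis";

// Dynamic reverse proxy: the backend for each Host header is data, not config.
const redis = new Redis();
const proxy = httpProxy.createProxyServer({});

// Stand-in for the real query, e.g. SELECT backend FROM domain_mappings WHERE domain = $1.
async function lookupMappingInDatabase(host: string): Promise<string | null> {
  return null;
}

async function resolveTarget(host: string): Promise<string | null> {
  // Check Redis first so the database isn't hit on every request.
  const cached = await redis.get(`domain:${host}`);
  if (cached) return cached;

  const fromDb = await lookupMappingInDatabase(host);
  if (fromDb) await redis.set(`domain:${host}`, fromDb, "EX", 300); // 5 min TTL
  return fromDb;
}

http
  .createServer(async (req, res) => {
    const host = (req.headers.host ?? "").split(":")[0].toLowerCase();
    const target = await resolveTarget(host);
    if (!target) {
      res.writeHead(404).end("Unknown domain");
      return;
    }
    proxy.web(req, res, { target: `http://${target}` });
  })
  .listen(8080);
```

A short cache TTL (300 seconds here) keeps stale mappings from lingering too long while still soaking up most of the read load.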