Jelastic JPS unable to resolve database IP address in N8N environment configuration

I’m working on deploying N8N using a Jelastic JPS template and facing an issue with environment variable resolution. The system can’t properly set the DATABASE_HOST variable (mapped to N8N’s DB_POSTGRESDB_HOST) with the PostgreSQL node’s IP address using ${nodes.database[0].address}.

type: install
id: workflow-automation
version: 1.1
name: Workflow Engine Setup
globals:
  database_user: workflow-${fn.random(999)}
  database_pass: ${fn.password}
  secret_key: engine${fn.password}

nodes:
- nodeType: nginx
  displayName: Proxy Server
  cloudlets: 16
  nodeGroup: proxy
- image: workflowengine/app:latest
  displayName: Workflow App
  cloudlets: 16
  nodeGroup: application
  links:
    - database:db
  env:
    APP_AUTH_ENABLED: true
    APP_USERNAME: ${globals.database_user}
    APP_PASSWORD: ${globals.database_pass}
    DATABASE_TYPE: postgresql
    DATABASE_NAME: workflow
    DATABASE_HOST: ${nodes.database[0].address}
    DATABASE_PORT: 5432
    DATABASE_USER: ${globals.database_user}
    DATABASE_PASS: ${globals.database_pass}
- image: postgres:15.3
  cloudlets: 16
  nodeGroup: database
  displayName: Database Server
  env:
    POSTGRES_USER: ${globals.database_user}
    POSTGRES_PASSWORD: ${globals.database_pass}
    POSTGRES_DB: workflow

The variable works fine when set manually through the web interface, but the JPS manifest doesn’t populate it correctly. I suspect this happens because the database node hasn’t been created yet when the environment variables are processed.

I’ve tested these approaches without success:

DATABASE_HOST: ${nodes.database[0].address}
DATABASE_HOST: ${nodes.database.first.address}
DATABASE_HOST: ${nodes.database.master.address}

How can I ensure the database IP gets assigned to the environment variable after node creation?

Been there. JPS manifest timing issues are the absolute worst with Jelastic deployments.

It’s not just IP resolution timing either. You can fix that with onAfterInstall actions, but you’re still stuck maintaining complex deployment scripts. Every provider handles node startup differently, so your fix breaks on other Jelastic environments.

I used to waste days debugging these manifest dependencies. Now I just use Latenode for workflow automation instead. Your N8N setup works the same but without the JPS headaches.

Latenode spins up PostgreSQL first, waits until it’s ready, then starts your workflow engine with the right connection details. No manual IP resolution or restart scripts.

You also get better monitoring and scaling than wrestling with cloudlet configurations. Less time debugging infrastructure, more time building workflows.

Check it out: https://latenode.com

This happens because Jelastic does variable substitution before it assigns node IPs. I hit the same issue with a Redis-PostgreSQL setup and found a better solution than those onAfterInstall handlers. Skip the JPS timing fixes entirely. Instead, build a health check loop into your app container’s startup script. Set DATABASE_HOST to a placeholder at first, then have your app poll for the database node using Jelastic’s internal DNS before starting the main process. You can query the database nodeGroup directly through Jelastic’s internal network once it’s up. This approach ditches the IP variable timing issues completely and makes your deployment way more reliable across different Jelastic providers.
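A minimal sketch of that startup health-check idea, assuming a bash entrypoint, Postgres on its default port 5432, and a DATABASE_HOST value that eventually resolves through Jelastic’s internal DNS (all names here are illustrative, not from the manifest above):

```shell
#!/bin/bash
# Sketch of a wait-for-database entrypoint wrapper: poll the DB host's
# TCP port until it accepts connections, then start the main process.
wait_for_db() {
  local host="$1" port="${2:-5432}" tries="${3:-60}"
  local i=0
  while (( i < tries )); do
    # bash's /dev/tcp opens a raw TCP connection, so no nc or
    # pg_isready binary is needed inside the container
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0   # port is accepting connections
    fi
    (( i += 1 ))
    sleep 1
  done
  return 1       # gave up after $tries attempts
}

# Hypothetical entrypoint usage:
# wait_for_db "${DATABASE_HOST:-database}" 5432 60 && exec n8n start
```

This only checks that the port is open, not that Postgres has finished initializing, so keeping retry logic in the app’s own DB connection is still worthwhile.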

Yeah, this is super common with Jelastic JPS deployments when nodes depend on each other. You’re right - environment variables get evaluated before all nodes are fully up and their IPs are available. I’ve hit this exact issue tons of times. onAfterServiceStart or onAfterCloneNodes actions work great here. Just separate the database IP config from the initial node creation - use a post-deployment script that updates the environment variables after everything’s running. Another trick that’s worked for me: use nodeGroup references instead of direct node addressing, plus a restart action. Sometimes ${nodes.database.master.intIP} works better than the address property, depending on your Jelastic provider. The cleanest fix I’ve used is creating the nodes first, then using api.Environment.Control.SetNodeGroupEnvVars in an onInstall action to populate the database connection details after PostgreSQL is fully operational. This way all IPs are properly resolved before the app tries to connect.
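For the nodeGroup-reference trick mentioned above, the env line from the question’s manifest would change to something like this (whether .master.intIP resolves at install time still varies by platform version, so treat it as a variant to test, not a guaranteed fix):

```yaml
env:
  # nodeGroup master reference instead of [0] index addressing
  DATABASE_HOST: ${nodes.database.master.intIP}
```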

Use the installDependsOn parameter in your app node config. Set it to installDependsOn: database and Jelastic will wait for postgres to fully start before launching your app container. Fixed the same IP resolution problem for me with Docker containers.
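If your platform release supports the installDependsOn parameter this answer describes (I couldn’t verify it is available everywhere, so check your provider’s JPS documentation first), the app node definition from the question would gain one line:

```yaml
- image: workflowengine/app:latest
  displayName: Workflow App
  cloudlets: 16
  nodeGroup: application
  installDependsOn: database   # claimed to delay this node until the database group is up
```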

Had this exact problem last year with a similar setup. It’s definitely a timing issue - Jelastic sets environment variables during node initialization before IP addresses get assigned. Here’s what worked for me: use a two-phase approach with the onAfterInstall action. Don’t set DATABASE_HOST during node creation, leave it empty. After all nodes are created, use api.Environment.Control.RestartNodes with updated environment variables. The trick is running a script after installation finishes: create an onAfterInstall action that calls SetContainerEnvVars with the resolved ${nodes.database[0].address} value, then restart your app node. This way PostgreSQL is fully up with its IP assigned before your workflow app tries to connect. I also added a small delay in the script - helps with consistency since some Jelastic providers take longer to assign stable IPs.
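A rough sketch of that two-phase approach as JPS actions. The event name, API method, and parameter keys below are assumptions based on this answer (Jelastic/Virtuozzo API naming differs between releases, so verify each against your provider’s docs before relying on it):

```yaml
# Phase 1: in the nodes section, leave the host unset
#   env:
#     DATABASE_HOST: ""

# Phase 2: after all nodes exist, the IP placeholder resolves
onAfterInstall:
  - api: environment.control.AddContainerEnvVars   # method name may differ by version
    nodeGroup: application
    vars:
      DATABASE_HOST: ${nodes.database.master.intIP}
  - restartNodes:
      nodeGroup: application
```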

This happens because Jelastic assigns environment variables before nodes get their final IP addresses. Hit this same issue last month with a multi-container setup. What worked for me: use the startService action with container linking instead of hardcoding IPs. Don’t try resolving the database IP in your environment variables - just set DATABASE_HOST to “db” since you’ve already got links: database:db in there. Docker linking creates a hostname alias that resolves properly no matter when IPs get assigned. If you absolutely need the actual IP for some reason, add retry logic to your app’s database connection code. Way more reliable than trying to get the JPS manifest timing perfect.
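Using the link alias already present in the question’s manifest, the env block reduces to the following (assuming the platform wires Docker links into the container’s hosts file the standard way):

```yaml
- image: workflowengine/app:latest
  cloudlets: 16
  nodeGroup: application
  links:
    - database:db      # alias "db" becomes a resolvable hostname
  env:
    DATABASE_HOST: db  # hostname alias instead of an IP placeholder
```

Since the hostname is fixed at manifest-authoring time, nothing depends on when IPs are assigned.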

Yeah, this timing issue is exactly why I ditched complex JPS deployments for workflow stuff. Wasted way too many hours debugging the same variable resolution headaches you’re dealing with.

Skip wrestling with Jelastic’s node creation order and IP timing - just use Latenode for the whole workflow deployment. You’ll get the same N8N setup with proper database connections, minus the JPS manifest headaches.

Latenode handles service dependencies automatically. Your database spins up completely before the workflow engine tries connecting. No messy onAfterInstall actions or environment variable juggling needed.

I switched our team over after hitting these infrastructure timing issues one too many times. Way cleaner than patching JPS templates with restart actions and delay scripts.

Check it out: https://latenode.com