Self-hosting n8n requires automating the owner-account initialization so the API can be used in both test and production environments. The relevant services from the docker-compose configuration:
database:
  image: postgres:14-alpine
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: admin
    POSTGRES_PASSWORD: secret123
    PGDATA: /var/lib/postgresql/data/pgdata
  volumes:
    - ./data/pg_db/:/var/lib/postgresql/data
    - ./scripts/pgsql_init:/docker-entrypoint-initdb.d:ro

workflow_engine:
  image: custom/n8n-instance
  ports:
    - "5678:5678"
  environment:
    - DB_ENGINE=postgres
    - DB_PREFIX=custom_
    - DB_NAME=n8nworkflow
    - DB_HOST=database
    - DB_PORT=5432
    - DB_USER=admin
    - DB_PASS=secret123
  volumes:
    - ./data/n8n_workflow:/home/user/.n8n
    - ./files/n8n_files:/files
Hey, I ended up using a startup script that calls the n8n CLI to create the owner automatically after the DB init. It's a bit flaky if the timing is off, but it works well once adjusted.
I solved this by integrating an initialization step directly into my container's startup routine. Before launching n8n, a simple wait-and-check loop confirms that the database is ready to accept connections; once it is, a custom script executes an SQL seed that creates the owner account automatically. It's a bit of a workaround, but it reliably bridges the timing gap between the post-init database setup and service startup, so the owner account is always configured. A rough sketch of the wait-and-seed step is below.
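A minimal sketch of that step, assuming the psycopg2 driver and the credentials from the compose file above. The seed file path and whatever table or column names the seed SQL touches are placeholders, not the actual script from this setup, so adjust them to the schema your n8n version creates.

import time
import psycopg2

# Connection settings taken from the compose file above; "database" is the
# compose service name, so use "localhost" instead when running from the host.
DB_SETTINGS = dict(
    host="database",
    port=5432,
    dbname="n8nworkflow",
    user="admin",
    password="secret123",
)

def wait_for_db(retries=30, delay=2):
    """Poll until Postgres accepts connections or the retries run out."""
    for _ in range(retries):
        try:
            psycopg2.connect(**DB_SETTINGS).close()
            return True
        except psycopg2.OperationalError:
            time.sleep(delay)
    return False

def seed_owner(seed_path="/scripts/seed_owner.sql"):
    """Run the owner-seed SQL once the database is reachable.

    The seed file is a placeholder; it must match the user table layout of
    your n8n release, including the expected password hash format.
    """
    with open(seed_path) as f:
        sql = f.read()
    with psycopg2.connect(**DB_SETTINGS) as conn:
        with conn.cursor() as cur:
            cur.execute(sql)

if __name__ == "__main__":
    if not wait_for_db():
        raise SystemExit("database never became ready")
    seed_owner()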
After some trial and error, I solved the issue with a customized entrypoint script in my container. Instead of relying solely on the database initialization scripts, the startup script polls until the API becomes responsive and only then sends a call to create the owner account, so the owner setup runs after the service is actually up. I found this approach much more reliable and flexible, particularly with the varying startup times across environments; a sketch of the polling step follows.
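For comparison, a rough sketch of the poll-then-call variant using only the Python standard library. The /healthz health endpoint and the /rest/owner/setup path reflect my understanding of recent n8n versions and are assumptions that may differ in yours; the owner credentials are placeholders.

import json
import time
import urllib.error
import urllib.request

BASE_URL = "http://localhost:5678"  # published port from the compose file

def wait_for_api(retries=60, delay=2):
    """Poll the health endpoint until n8n answers or the retries run out."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(f"{BASE_URL}/healthz", timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(delay)
    return False

def create_owner():
    """Send the owner-setup request once the service is up."""
    payload = json.dumps({
        "email": "owner@example.com",   # placeholder credentials
        "firstName": "Admin",
        "lastName": "User",
        "password": "ChangeMe!123",
    }).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/rest/owner/setup",  # path may differ between n8n versions
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("owner setup returned HTTP", resp.status)

if __name__ == "__main__":
    if not wait_for_api():
        raise SystemExit("n8n API never became ready")
    create_owner()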