Unexpected migration errors when moving n8n to new database

I’m having trouble with my n8n setup. I moved it to a cloud database and now it’s acting up. Here’s what happened:

1. Exported data from the old Postgres database
2. Imported it into a new cloud SQL instance
3. Deployed Docker with n8n 1.51.2
4. Connected n8n to the new database

But when I start it up, I get these weird errors:

Error: There was an error running database migrations
QueryFailedError: relation "migrations" already exists

I’m confused. Why is it trying to create a migrations table that’s already there? When does n8n usually run migrations?

I thought it would just start up normally without any migration attempts. Any ideas on what’s going on or how to fix this? Thanks for any help!

I’ve encountered similar issues when migrating n8n databases. The problem likely stems from a mismatch between the migration state in your new database and what n8n expects. Here’s what I’d suggest:

First, inspect (and, if necessary, clear) the migrations table in your new database; residual or partially imported migration records can cause conflicts. Keep in mind that n8n runs its migrations automatically on startup, typically applying new ones when you upgrade versions, so any mismatch between that table and what your n8n version expects will surface as soon as the container boots.
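
Before clearing anything, it’s worth seeing what’s actually recorded. A minimal sketch for inspecting the table, assuming n8n’s default TypeORM-style layout (columns id, timestamp, name) in the public schema:

-- Show the most recently applied migrations so you can compare
-- the recorded state against what your n8n version expects
SELECT id, "timestamp", name
FROM public.migrations
ORDER BY "timestamp" DESC
LIMIT 20;

If the names listed here look wrong for your n8n version, or rows are missing, that mismatch is a likely culprit.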

Another approach is to start with a fresh database and reimport your workflows and credentials separately, rather than migrating the entire database structure. This can sidestep migration issues altogether.

If these don’t resolve the problem, check your n8n version compatibility with the new database. Incompatibilities can sometimes manifest as migration errors.

Lastly, review your database connection settings. Ensure all parameters are correct, including any schema specifications.
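
As a quick sanity check on the connection side, this sketch (run with the same credentials n8n uses) confirms which user, database, and schema the session actually lands in:

-- Verify the session matches what you configured for n8n
SELECT current_user, current_database(), current_schema();
SHOW search_path;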

I’ve dealt with this exact scenario before, and it can be frustrating. In my experience, the issue often stems from n8n not recognizing the existing migration state in your new database. Here’s what worked for me:

Instead of importing the entire database, I started with a fresh install of n8n on the new system. Then, I used n8n’s built-in export/import features to move my workflows and credentials separately. This approach bypassed the migration issues entirely.

If you prefer keeping your current setup, you might try manually updating the migrations table to match n8n’s expected state. Check the n8n documentation for the correct schema version your installation expects.

Lastly, double-check your database connection string. Sometimes, small errors there can cause unexpected behavior during startup and migrations.

Hope this helps! Let us know how it goes.

Hey Ryan, I’ve run into this before. It’s a pain! Sounds like n8n is trying to create tables that already exist. You could try dropping the migrations table and letting n8n recreate it, or start fresh with an empty DB and import your stuff manually. Good luck, mate!
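
If you go the drop-and-recreate route, take a backup first. A hedged sketch; note that once the table is gone, n8n will try to re-run every migration on the next startup, which can itself fail if the other tables already exist:

-- Destructive: removes the record of which migrations have run.
-- On next startup n8n will attempt to re-run ALL migrations.
DROP TABLE IF EXISTS public.migrations;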

Hello everyone, I solved this problem by looking at the table ownership in Postgres. That turned out to be spot on.

Change the owner of the tables to the user that n8n uses to connect to the database:

DO $$
DECLARE
    r RECORD;
BEGIN
    -- Loop over every table in the public schema and hand
    -- ownership to the "workflow" user that n8n connects as
    FOR r IN
        SELECT tablename
        FROM pg_tables
        WHERE schemaname = 'public'
    LOOP
        EXECUTE format('ALTER TABLE public.%I OWNER TO workflow;', r.tablename);
    END LOOP;
END$$;
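
To confirm the change took effect, you can list the owners afterwards (this assumes everything n8n uses lives in the public schema):

-- Every table should now show "workflow" as its owner
SELECT tablename, tableowner
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;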

Looks like the issue was related to how the database users were defined.
In my case, I was using a Kubernetes deployment, and the first thing that came to mind was: why does the app have two user definitions?
Normally, an app should have a single database user with full privileges to perform all required operations.
So I decided to use the same database user for POSTGRES_USER and POSTGRES_NON_ROOT_USER, and that solved the problem:

stringData:
  POSTGRES_USER: n8n_admin
  POSTGRES_PASSWORD: <your-password>
  POSTGRES_DB: n8n
  POSTGRES_NON_ROOT_USER: n8n_admin
  POSTGRES_NON_ROOT_PASSWORD: <your-password>

It seems the person who created the original deployment for the n8n service just copied the parameters from the Postgres setup, which caused the mismatch.
That’s why the solution described above (using the same user, or granting the proper privileges to the non-root user) resolved the migration issue.
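
If you’d rather keep two separate users, here is a hedged sketch of the grants the non-root user would need (n8n_user is a placeholder name; the database name n8n comes from the example above):

-- Run as the admin user; adjust the user and database names to your setup
GRANT ALL PRIVILEGES ON DATABASE n8n TO n8n_user;
GRANT ALL PRIVILEGES ON SCHEMA public TO n8n_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO n8n_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO n8n_user;
-- Tables the admin creates later should also stay accessible
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO n8n_user;

Bear in mind that migrations which ALTER existing tables still require the n8n user to own them in Postgres, which is exactly what the ownership loop earlier in the thread addresses.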

Additional problems you might encounter when migrating to Aurora RDS relate to the SSL connection.
You’ll need to add a couple of parameters to the n8n-deployment.yaml file to resolve it:

            - name: DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED
              value: "false"
            - name: DB_POSTGRESDB_SSL_CA
              value: "/etc/ssl/certs/ca-certificates.crt"