Terraform Configuration Challenges for GKE Ingress with n8n Deployment

I’m attempting to deploy the n8n application to a GKE cluster using its Helm chart. The problem I’m running into is that when I route traffic to the application through the ingress, user sessions are frequently lost. I’ve determined the problem is tied to the ingress setup, since accessing the pod directly works correctly.

Currently, I’m trying to set up session affinity on the ingress, but I’m struggling to find resources that show how to do this with Terraform. Another option I’ve considered is an NGINX ingress, but I have no experience with it. I’d appreciate any help with this issue, or suggestions for a more effective ingress configuration.

Here’s the Terraform configuration I’m using for n8n:

resource "google_compute_managed_ssl_certificate" "n8n_ssl" {
  name = "${var.release_name}-ssl"
  managed {
    domains = ["n8n.${var.host}"]
  }
}

resource "helm_release" "n8n" {
  count           = 1
  depends_on      = [kubernetes_namespace.n8n, google_sql_database.n8n, google_sql_user.n8n, google_compute_managed_ssl_certificate.n8n_ssl]
  repository      = "https://8gears.container-registry.com/chartrepo/library"
  chart           = "n8n"
  version         = var.helm_version
  name            = var.release_name
  namespace       = var.namespace
  recreate_pods   = true
  values = [
    "${file("n8n_values.yaml")}" 
  ]
  set_sensitive {
    name  = "n8n.encryption_key"
    value = var.n8n_encryption_key
  }
  set {
    name  = "config.database.postgresdb.host"
    value = data.terraform_remote_state.cluster.outputs.database_connection
  }
  set {
    name  = "config.database.postgresdb.user"
    value = var.db_username
  }
  set_sensitive {
    name  = "secret.database.postgresdb.password"
    value = var.db_password
  }
  set {
    name  = "config.security.basicAuth.user"
    value = var.username
  }
  set_sensitive {
    name  = "config.security.basicAuth.password"
    value = var.password
  }
}

resource "kubernetes_ingress" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "ingress.kubernetes.io/compress-enable" = "false",
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }
  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}

And this is my n8n_values.yaml file:

config:
  port: 5678
  generic:
    timezone: Europe/London
  database:
    type: postgresdb
  security:
    basicAuth:
      active: true

secret:
  database:
    postgresdb:
      password: ""

extraEnv:
  VUE_APP_URL_BASE_API: https://n8n.***/
  WEBHOOK_TUNNEL_URL: https://n8n.***/

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: latest

service:
  type: ClusterIP
  port: 80

Switch to a BackendConfig for session affinity: create a BackendConfig resource in Terraform with sessionAffinity type CLIENT_IP and attach it to the n8n Service via the cloud.google.com/backend-config annotation (GKE picks up session affinity from the Service’s BackendConfig, not from an ingress annotation); a sketch follows below. But honestly, the real issue is probably n8n’s default session setup with multiple replicas. Check your Helm values: if you’re running multiple n8n pods without shared session storage, you’ll get random session drops no matter what you do with the ingress. Drop your n8n deployment to a single replica temporarily to confirm this is what’s causing it. Long term, configure n8n to use external session storage such as Redis or PostgreSQL-backed sessions, not just the database for workflows. Ingress session affinity is basically a band-aid that breaks whenever pods restart or scale anyway.
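For reference, a minimal Terraform sketch of that BackendConfig, assuming the kubernetes provider is already pointed at the cluster (kubernetes_manifest needs cluster access at plan time) and reusing the release_name/namespace variables from the question:

resource "kubernetes_manifest" "n8n_backend_config" {
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "${var.release_name}-backend-config"
      namespace = var.namespace
    }
    spec = {
      # GKE applies this to the load balancer backends of any Service
      # that references this BackendConfig via annotation.
      sessionAffinity = {
        affinityType = "CLIENT_IP"
      }
    }
  }
}

On its own this does nothing; it only takes effect once the n8n Service carries a cloud.google.com/backend-config annotation pointing at it.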

Had the same issue with GKE ingress dropping sessions. Add the cloud.google.com/backend-config annotation to the n8n Service (the value is JSON, e.g. {"default": "your-backend-config"}) and create a BackendConfig resource with sessionAffinity set to CLIENT_IP. Works way better than trying to do it with ingress annotations. Also check that your n8n pods aren’t restarting - that’ll kill sessions too.
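A hedged sketch of adding that Service annotation from Terraform, assuming the chart names its Service after the release (check with kubectl get svc first) and a kubernetes provider version that ships the kubernetes_annotations resource:

resource "kubernetes_annotations" "n8n_service_backend_config" {
  api_version = "v1"
  kind        = "Service"
  metadata {
    # Assumption: the chart's Service name equals the Helm release name.
    name      = helm_release.n8n[0].name
    namespace = var.namespace
  }
  annotations = {
    # GKE reads this JSON to attach the BackendConfig to the Service's backends.
    "cloud.google.com/backend-config" = jsonencode({ default = "${var.release_name}-backend-config" })
  }
  # Take ownership of the annotation even though Helm manages the Service.
  force = true
}

Alternatively, if the chart exposes Service annotations in its values, setting the same annotation there avoids having a second owner of the Service object.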

I’ve hit this exact issue with n8n on GKE. The problem is that n8n stores sessions in memory, so when requests bounce between pods, your session’s gone. Quick fix for your Terraform: create a BackendConfig with sessionAffinity type CLIENT_IP and attach it to the n8n Service as described above - the GKE ingress itself doesn’t take a session-affinity annotation. That keeps requests from the same IP hitting the same pod. But honestly? Ditch the GKE ingress for the nginx-ingress controller. Yeah, there’s a learning curve, but it handles sessions way better for apps like n8n. Use nginx.ingress.kubernetes.io/affinity: "cookie" - it’s more reliable than IP-based affinity; a rough sketch of that ingress is below. Best solution though: configure n8n to use Redis for session storage. No more session affinity headaches, and it scales better.
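If you do try the NGINX route, here is a rough Terraform sketch of cookie affinity. It assumes the ingress-nginx controller is already installed in the cluster and that the chart’s Service is named after the release; TLS is left out for brevity and the cookie name is arbitrary:

resource "kubernetes_ingress_v1" "n8n_nginx" {
  metadata {
    name      = "${var.release_name}-nginx-ingress"
    namespace = var.namespace
    annotations = {
      # Cookie-based affinity pins a browser session to a single pod.
      "nginx.ingress.kubernetes.io/affinity"            = "cookie"
      "nginx.ingress.kubernetes.io/session-cookie-name" = "n8n-session"
    }
  }
  spec {
    ingress_class_name = "nginx"
    rule {
      host = "n8n.${var.host}"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              # Assumption: Service name matches the Helm release name.
              name = helm_release.n8n[0].name
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}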