Assistance needed for configuring GKE ingress with Terraform for n8n deployment

I’m trying to set up the n8n workflow automation tool on a Google Kubernetes Engine cluster using its Helm chart. However, I’m running into a problem with the ingress: user sessions drop continuously. Accessing the pod directly works without issues, which suggests the ingress configuration is at fault.

Currently, I am attempting to implement session affinity for the ingress with Terraform, but I’m struggling to find resources that explain how to do this properly. Alternatively, I’m considering using an Nginx ingress controller, though I have limited expertise in that area. I would really appreciate any advice or suggestions regarding a better solution for the ingress setup. Thank you!

Below is my Terraform configuration for the n8n application:

resource "google_compute_managed_ssl_certificate" "n8n_ssl" {
  name = "${var.release_name}-ssl"
  managed {
    domains = ["n8n.${var.host}"]
  }
}

resource "helm_release" "n8n" {
  count           = 1
  depends_on      = [kubernetes_namespace.n8n, google_sql_database.n8n, google_sql_user.n8n, google_compute_managed_ssl_certificate.n8n_ssl]
  repository      = "https://8gears.container-registry.com/chartrepo/library"
  chart           = "n8n"
  version         = var.helm_version
  name            = var.release_name
  namespace       = var.namespace
  recreate_pods   = true
  values = [
    "${file("n8n_values.yaml")}" 
  ]
  set_sensitive {
    name  = "n8n.encryption_key"
    value = var.n8n_encryption_key
  }
  set {
    name  = "config.database.postgresdb.host"
    value = data.terraform_remote_state.cluster.outputs.database_connection
  }
  set {
    name  = "config.database.postgresdb.user"
    value = var.db_username
  }
  set_sensitive {
    name  = "secret.database.postgresdb.password"
    value = var.db_password
  }
  set {
    name  = "config.security.basicAuth.user"
    value = var.username
  }
  set_sensitive {
    name  = "config.security.basicAuth.password"
    value = var.password
  }
}

resource "kubernetes_ingress" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "ingress.kubernetes.io/compress-enable"         = "false",
      "ingress.gcp.kubernetes.io/pre-shared-cert"     = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }
  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}

And here is my n8n_values.yaml file:

config:
  port: 5678
  generic:
    timezone: Europe/London
  database:
    type: postgresdb
  security:
    basicAuth:
      active: true

secret:
  database:
    postgresdb:
      password: ""

extraEnv:
  VUE_APP_URL_BASE_API: https://n8n.***/
  WEBHOOK_TUNNEL_URL: https://n8n.***/

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: latest

service:
  type: ClusterIP
  port: 80

The problem isn’t session management; your Terraform config has a fundamental mismatch. Your ingress backend points to helm_release.n8n[0].name as the service name, but Helm doesn’t create a Service with the same name as the release. Run kubectl get svc -n your-namespace after deployment to see which Service actually got created; it’s probably something like release-name-n8n.

You’re also using the deprecated backend block format in your ingress spec. Switch to default_backend or, better yet, use the kubernetes_ingress_v1 resource instead. To troubleshoot right now, check whether the Service’s endpoints actually point at the n8n pods with kubectl describe svc service-name. Those session drops might just be 502/503 errors in disguise because the ingress can’t reach the backend service.
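A minimal sketch of what the kubernetes_ingress_v1 version could look like, assuming the chart-created Service turns out to be named "${var.release_name}-n8n" (confirm the real name with kubectl get svc before relying on this):

resource "kubernetes_ingress_v1" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on             = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name      = "${var.release_name}-ingress"
    namespace = var.namespace
    annotations = {
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }
  spec {
    default_backend {
      service {
        # Assumed Service name; confirm with kubectl get svc -n <namespace>
        name = "${var.release_name}-n8n"
        port {
          number = 80
        }
      }
    }
  }
}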

Session affinity is just a band-aid fix. I ran into the same problem with n8n and solved it by switching to Redis for session storage instead of keeping sessions in memory. Just add Redis to your cluster and set the N8N_REDIS_HOST environment variable. It’s far more reliable than sticky sessions, which break whenever you do a rolling update or restart pods.
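If you go this route, the change lives in the extraEnv block of n8n_values.yaml. A rough sketch, using the variable name mentioned above and a placeholder hostname (check the n8n docs for your version, since the Redis-related variable names have changed between releases):

extraEnv:
  # Variable name as suggested in this answer; verify it against the n8n docs for your version.
  # The hostname is a placeholder for whatever Redis Service you run in the cluster.
  N8N_REDIS_HOST: redis-master.redis.svc.cluster.local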

Your GKE ingress issues stem from a lack of session affinity, which n8n needs here. By default, GKE’s ingress does not maintain sticky sessions. To address this, add the annotation "ingress.gcp.kubernetes.io/affinity-type" = "client_ip" to your kubernetes_ingress metadata. This directs requests from the same client IP to the same backend pod, though bear in mind it can misbehave behind NAT, where many users share one IP.
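In Terraform that is just one more entry in the existing annotations map, roughly as below. The annotation key is the one proposed here, so verify it against the GKE ingress documentation for your cluster version; newer GKE releases configure session affinity through a BackendConfig attached to the Service instead:

# Inside kubernetes_ingress.n8n_ingress; the rest of the resource stays unchanged.
metadata {
  name      = "${var.release_name}-ingress"
  namespace = helm_release.n8n[0].namespace
  annotations = {
    "ingress.kubernetes.io/compress-enable"     = "false"
    "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.n8n_ssl.name
    # Annotation key as suggested in this answer; confirm it against current GKE docs.
    "ingress.gcp.kubernetes.io/affinity-type"   = "client_ip"
  }
}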

For a more robust solution, consider switching to an Nginx ingress controller. Install it via Helm and add the annotations "nginx.ingress.kubernetes.io/affinity" = "cookie" and "nginx.ingress.kubernetes.io/session-cookie-name" = "n8n-server". Cookie-based affinity is more reliable than client-IP affinity. Additionally, make sure your Service selector actually targets the n8n pods, and verify that the database settings support session persistence.
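A rough kubernetes_ingress_v1 sketch of the cookie-affinity setup. It assumes the ingress-nginx controller is already installed, that the chart’s Service is named "${var.release_name}-n8n" (confirm with kubectl get svc), and that TLS is handled separately (e.g. via cert-manager), since the Google-managed certificate above attaches to the GCE load balancer, not to Nginx:

resource "kubernetes_ingress_v1" "n8n_nginx" {
  metadata {
    name      = "${var.release_name}-nginx-ingress"
    namespace = var.namespace
    annotations = {
      "nginx.ingress.kubernetes.io/affinity"            = "cookie"
      "nginx.ingress.kubernetes.io/session-cookie-name" = "n8n-server"
    }
  }
  spec {
    ingress_class_name = "nginx"
    rule {
      host = "n8n.${var.host}"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              # Assumed Service name; confirm with kubectl get svc -n <namespace>
              name = "${var.release_name}-n8n"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}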

Session dropping happens all the time with n8n behind load balancers, and your setup is missing proper session-handling config. Others have already mentioned client-IP affinity, but there’s a bigger issue: your service config doesn’t have a targetPort that matches n8n’s actual port. You have n8n running on port 5678 in n8n_values.yaml, but your service maps port 80 without specifying the target. Add targetPort: 5678 to your service config, and put N8N_PROTOCOL=https and N8N_HOST=n8n.yourdomain.com into your extraEnv section; n8n needs to know its external URL for sessions to work correctly (see the values sketch below).

For production, ditch the old kubernetes_ingress resource and switch to kubernetes_ingress_v1. The newer resource gives you much better control over backend configs and health checks, which you definitely want if a workflow app like n8n is to stay stable.
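In n8n_values.yaml that would look roughly like the following. The domain is a placeholder, and whether the chart actually honours a service.targetPort key depends on the chart version, so check its Service template before relying on it:

service:
  type: ClusterIP
  port: 80
  targetPort: 5678   # only takes effect if the chart's Service template exposes this key

extraEnv:
  VUE_APP_URL_BASE_API: https://n8n.example.com/   # placeholder domain
  WEBHOOK_TUNNEL_URL: https://n8n.example.com/     # placeholder domain
  N8N_PROTOCOL: https
  N8N_HOST: n8n.example.com                        # placeholder domain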

Your session dropping happens because n8n stores sessions in memory by default. With multiple replicas, users bounce between pods that don’t share session data.

Don’t rely on ingress session affinity alone; configure n8n to persist sessions in your PostgreSQL database instead. In your extraEnv section, set N8N_USER_MANAGEMENT_JWT_SECRET to a consistent value and make sure N8N_DATABASE_TYPE is configured for session persistence.
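As a sketch, the extraEnv addition could look like this. The value shown is a placeholder; in practice, inject it from a Terraform variable or Kubernetes Secret so every replica and every restart sees the same value:

extraEnv:
  # Placeholder value; feed this in via set_sensitive in the helm_release
  # (like n8n.encryption_key above) so it stays identical across replicas and restarts.
  N8N_USER_MANAGEMENT_JWT_SECRET: "replace-with-a-long-random-string"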

Also, you’re using the deprecated kubernetes_ingress resource; switch to kubernetes_ingress_v1 for better backend configuration support. The session affinity annotation suggested above helps, but combining it with proper session storage gives much more reliable results. I’ve seen sticky sessions fail in production; this database approach actually works.