Configuring session persistence on a GKE ingress with Terraform for an n8n deployment

I’m trying to deploy n8n using its Helm chart on a Google Kubernetes Engine (GKE) cluster with Terraform. The deployment itself goes smoothly, but I’m running into an issue with the ingress configuration that causes users to lose their sessions frequently.

When I access the application through the ingress, it keeps logging users out. However, if I access the pod directly, the application behaves as expected. This leads me to believe the ingress might be the problem.

I’m currently attempting to set up session affinity on the ingress, but I can’t find clear guidance on how to accomplish this with Terraform. Alternatively, I’m considering switching to an NGINX ingress, but I have no experience with that setup. I’d appreciate any help or suggestions for a better approach to my ingress issue. Here’s my current Terraform configuration for n8n:

resource "google_compute_managed_ssl_certificate" "n8n_ssl" {
  name = "${var.release_name}-ssl"
  managed {
    domains = ["n8n.${var.host}"]
  }
}

resource "helm_release" "n8n" {
  count           = 1
  depends_on      = [kubernetes_namespace.n8n, google_sql_database.n8n, google_sql_user.n8n, google_compute_managed_ssl_certificate.n8n_ssl]
  repository      = "https://8gears.container-registry.com/chartrepo/library"
  chart           = "n8n"
  version         = var.helm_version
  name            = var.release_name
  namespace       = var.namespace
  recreate_pods   = true
  values = [
    file("n8n_values.yaml")
  ]
  set_sensitive {
    name  = "n8n.encryption_key"
    value = var.n8n_encryption_key
  }
  set {
    name  = "config.database.postgresdb.host"
    value = data.terraform_remote_state.cluster.outputs.database_connection
  }
  set {
    name  = "config.database.postgresdb.user"
    value = var.db_username
  }
  set_sensitive {
    name  = "secret.database.postgresdb.password"
    value = var.db_password
  }
  set {
    name  = "config.security.basicAuth.user"
    value = var.username
  }
  set_sensitive {
    name  = "config.security.basicAuth.password"
    value = var.password
  }
}

resource "kubernetes_ingress" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "ingress.kubernetes.io/compress-enable" = "false",
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }
  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}

I would greatly appreciate any guidance on adding session affinity to this ingress setup, or recommendations for a better approach.

Looking at your configuration, I’d say the session loss stems from the GKE load balancer not maintaining sticky sessions by default. I’ve dealt with this exact problem deploying workflow tools like n8n, where user sessions contain critical state information.

The most reliable fix I found was implementing session affinity through a BackendConfig. You create a kubernetes_manifest resource for the BackendConfig in Terraform, then attach it to the Service behind your ingress with the cloud.google.com/backend-config annotation (note that this annotation goes on the Service, not on the Ingress itself). Here’s what worked for me:

Create a BackendConfig with affinityType: "CLIENT_IP" and reference it from the backing Service’s cloud.google.com/backend-config annotation. However, be aware that CLIENT_IP affinity can cause issues if users are behind corporate NATs or proxies, since many clients then share a single source IP.
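A minimal sketch of that wiring in Terraform, assuming the Helm chart exposes a Service named after the release (as your existing ingress already assumes); the BackendConfig name and the kubernetes_annotations patch are illustrative, not part of your existing config:

# Hypothetical sketch: BackendConfig with client-IP affinity for the GKE ingress.
resource "kubernetes_manifest" "n8n_backend_config" {
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "${var.release_name}-backendconfig"
      namespace = var.namespace
    }
    spec = {
      sessionAffinity = {
        affinityType = "CLIENT_IP"
      }
    }
  }
}

# The backend-config annotation belongs on the Service, not the Ingress.
# kubernetes_annotations (kubernetes provider >= 2.10) patches the Service
# created by the Helm release without managing the whole object.
resource "kubernetes_annotations" "n8n_service_backend_config" {
  api_version = "v1"
  kind        = "Service"
  metadata {
    name      = helm_release.n8n[0].name
    namespace = var.namespace
  }
  annotations = {
    "cloud.google.com/backend-config" = jsonencode({ default = "${var.release_name}-backendconfig" })
  }
  depends_on = [kubernetes_manifest.n8n_backend_config, helm_release.n8n]
}

If the chart lets you set Service annotations in n8n_values.yaml, that is an equally valid place to put the backend-config annotation instead of patching it afterwards.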

Alternatively, if your n8n deployment supports external session storage like Redis, that’s often the cleanest solution for production environments. You can configure n8n to store sessions externally rather than relying on pod-local storage, which eliminates the need for sticky sessions entirely.

I noticed you’re using basic auth in your configuration - make sure that’s not conflicting with n8n’s own session management. Sometimes multiple authentication layers can cause unexpected session behavior.

I ran into a similar session persistence problem when deploying stateful applications on GKE. The issue you’re experiencing is likely due to the default round-robin load balancing behavior of the GKE ingress controller.

For the Google Cloud Load Balancer ingress, you can enable session affinity by creating a BackendConfig resource with sessionAffinity settings and attaching it to your Service via the cloud.google.com/backend-config annotation. The ingress controller applies it to the backend service it manages, so you don’t create a google_compute_backend_service resource yourself in Terraform.

However, I found switching to the nginx-ingress-controller much more straightforward for this use case. The configuration is simpler and you get better control over session handling: you can enable cookie-based session affinity with just a few annotations, such as nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/session-cookie-name.

Given that n8n maintains user state locally, the nginx approach worked better for me, since cookie-based affinity offers more granular session control than the GCP load balancer’s IP-based affinity, which can be problematic in NAT scenarios.
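A rough sketch of that in Terraform, assuming the nginx-ingress-controller is already installed in the cluster and that the chart’s Service is named after the release as in your existing config; the cookie name and max-age are illustrative, and TLS would need to be handled differently (for example with cert-manager), since the pre-shared GCE certificate annotation doesn’t apply to nginx:

resource "kubernetes_ingress" "n8n_nginx_ingress" {
  metadata {
    name      = "${var.release_name}-nginx-ingress"
    namespace = var.namespace
    annotations = {
      # Route through the nginx controller instead of the GCE load balancer.
      "kubernetes.io/ingress.class"                        = "nginx"
      # Cookie-based session affinity keeps each browser pinned to one pod.
      "nginx.ingress.kubernetes.io/affinity"               = "cookie"
      "nginx.ingress.kubernetes.io/session-cookie-name"    = "n8n-session"
      "nginx.ingress.kubernetes.io/session-cookie-max-age" = "172800"
    }
  }
  spec {
    rule {
      host = "n8n.${var.host}"
      http {
        path {
          path = "/"
          backend {
            service_name = helm_release.n8n[0].name
            service_port = 80
          }
        }
      }
    }
  }
}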

Had the same headache with n8n sessions getting dropped constantly. Try adding the annotation ingress.gcp.kubernetes.io/affinity-type: "client-ip" directly to your ingress metadata - that’s the quickest fix without needing extra BackendConfig resources. Also check whether your n8n pods are properly configured for sticky sessions in the values.yaml file, since some versions need explicit session handling enabled.
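For reference, that would just be one extra key in the annotations map of the existing kubernetes_ingress resource; whether this annotation is actually honored depends on the ingress-gce version your cluster runs, so treat it as something to verify rather than a guaranteed fix:

resource "kubernetes_ingress" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on             = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name      = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "ingress.kubernetes.io/compress-enable"     = "false",
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.n8n_ssl.name,
      # Annotation suggested above; confirm your ingress-gce version supports it.
      "ingress.gcp.kubernetes.io/affinity-type"   = "client-ip"
    }
  }
  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}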