Terraform Configuration for GKE Ingress of n8n and Issues with Session Management

I’m trying to deploy the n8n application using its helm chart within a Google Kubernetes Engine (GKE) cluster via Terraform. Currently, I’m facing challenges with session management due to the ingress setup.

The major problem is that user sessions keep dropping when accessing n8n through the ingress, while accessing the pod directly (e.g. via port-forward) works without issues, so the ingress configuration appears to be the culprit.

I’m attempting to enable session affinity on the ingress, but I’m struggling to find useful resources on how to accomplish this using Terraform. Alternatively, I could switch to an Nginx ingress, but I lack experience with that as well. If anyone has suggestions or strategies for better ingress handling, I’d appreciate the help. Thank you!

Here’s the Terraform code I’m using to deploy n8n:

resource "google_compute_managed_ssl_certificate" "n8n_ssl" {
  name = "${var.release_name}-ssl"
  managed {
    domains = ["n8n.${var.host}"]
  }
}

resource "helm_release" "n8n" {
  count           = 1
  depends_on      = [kubernetes_namespace.n8n, google_sql_database.n8n, google_sql_user.n8n, google_compute_managed_ssl_certificate.n8n_ssl]
  repository      = "https://8gears.container-registry.com/chartrepo/library"
  chart           = "n8n"
  version         = var.helm_version
  name            = var.release_name
  namespace       = var.namespace
  recreate_pods   = true
  values = [
    file("n8n_values.yaml")
  ]
  set_sensitive {
    name  = "n8n.encryption_key"
    value = var.n8n_encryption_key
  }
  set {
    name  = "config.database.postgresdb.host"
    value = data.terraform_remote_state.cluster.outputs.database_connection
  }
  set {
    name  = "config.database.postgresdb.user"
    value = var.db_username
  }
  set_sensitive {
    name  = "secret.database.postgresdb.password"
    value = var.db_password
  }
  set {
    name  = "config.security.basicAuth.user"
    value = var.username
  }
  set_sensitive {
    name  = "config.security.basicAuth.password"
    value = var.password
  }
}

resource "kubernetes_ingress" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "ingress.kubernetes.io/compress-enable"         = "false",
      "ingress.gcp.kubernetes.io/pre-shared-cert"     = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }
  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}

Also, here’s my n8n_values.yaml:

config:
  port: 5678
  generic:
    timezone: Europe/London
  database:
    type: postgresdb
  security:
    basicAuth:
      active: true

secret:
  database:
    postgresdb:
      password: ""

extraEnv:
  VUE_APP_URL_BASE_API: https://n8n.***/
  WEBHOOK_TUNNEL_URL: https://n8n.***/

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: latest

service:
  type: ClusterIP
  port: 80

Can someone provide guidance on how to correctly set session affinity, or suggest more effective methods for configuring this ingress?

The root cause is likely the GKE ingress load balancer routing requests to different pods, breaking n8n’s websocket connections. I ran into this exact scenario last year and found that GKE’s default ingress doesn’t handle websocket persistence well.

Your current ingress configuration is missing some crucial annotations for websocket support. Add these to your ingress annotations:

annotations = {
  "ingress.kubernetes.io/compress-enable"     = "false"
  "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.n8n_ssl.name
  "cloud.google.com/connection-draining-timeout" = "300"
  "kubernetes.io/ingress.class" = "gce"
}

However, switching to nginx-ingress-controller will save you considerable headaches. It handles websockets much more gracefully out of the box. You can deploy it via Terraform using the official helm chart and then use standard nginx annotations for session affinity. The performance difference for applications like n8n is noticeable, especially when dealing with long-running workflows that maintain persistent connections.
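If you go that route, a minimal sketch of installing the controller via Terraform could look like this (the chart repository and chart name are the upstream ingress-nginx defaults; the namespace is an assumption, so adjust to your setup):

```hcl
# Sketch: install the ingress-nginx controller from the official Helm chart.
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx" # assumed; pick a namespace that fits your cluster layout
  create_namespace = true
}
```

Once the controller is running, your ingress resources just need "kubernetes.io/ingress.class": "nginx" plus the standard nginx affinity annotations.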

The session dropping issue stems from n8n’s websocket connections being distributed across multiple pods without proper session persistence. I encountered this exact problem when deploying n8n on GKE. The solution involves modifying your service configuration to enable session affinity at the Kubernetes service level, not just the ingress. Add this to your n8n_values.yaml under the service section:

service:
  type: ClusterIP
  port: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800

Also, add the annotation "cloud.google.com/backend-config" = "{\"default\": \"n8n-backend-config\"}" to the n8n Service rather than the Ingress (GKE reads BackendConfig references from the Service), and create a BackendConfig resource with session affinity enabled. This ensures that requests from the same client are routed to the same pod, maintaining the websocket connections that n8n relies on for real-time updates.
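A minimal sketch of that BackendConfig, using the field names from GKE's cloud.google.com/v1 CRD (the namespace and TTL here are illustrative, not taken from your setup):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: n8n-backend-config
  namespace: n8n   # assumed; must match the namespace of the n8n Service
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"   # cookie-based affinity; "CLIENT_IP" is the other option
    affinityCookieTtlSec: 10800        # example TTL, tune to your session length
```

GENERATED_COOKIE is usually the safer choice over CLIENT_IP when clients sit behind shared NATs or proxies, since many users can share one source IP.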

i faced a similar thing with n8n sessions. gke ingress struggles with websockets. switch to the nginx-ingress controller and then add "nginx.ingress.kubernetes.io/affinity": "cookie" to the annotations (note that annotation only works under the nginx controller, not the gce ingress). it manages sticky sessions way better for n8n!
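for reference, a rough sketch of what the ingress could look like under nginx, reusing the variables from the question (the cookie name is just an example):

```hcl
# Sketch: nginx-based ingress for n8n with cookie-based sticky sessions.
resource "kubernetes_ingress" "n8n_nginx" {
  metadata {
    name      = "${var.release_name}-nginx-ingress"
    namespace = var.namespace
    annotations = {
      "kubernetes.io/ingress.class"                        = "nginx"
      "nginx.ingress.kubernetes.io/affinity"               = "cookie"
      "nginx.ingress.kubernetes.io/session-cookie-name"    = "n8n-session" # example name
      "nginx.ingress.kubernetes.io/session-cookie-max-age" = "10800"
    }
  }
  spec {
    rule {
      host = "n8n.${var.host}"
      http {
        path {
          backend {
            service_name = var.release_name
            service_port = 80
          }
        }
      }
    }
  }
}
```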