Seeking assistance with Terraform for GKE ingress in n8n deployment

I’m trying to set up n8n on a GKE cluster using its Helm chart. I’m having trouble with the ingress. The app keeps losing the session when I access it through the ingress. It works fine when I connect to the pod directly.

I want to add session affinity to the ingress but I’m not sure how to do it with Terraform. I’ve also thought about using an Nginx ingress but I’m new to that.

Here’s a simplified version of my Terraform config:

resource "helm_release" "workflow_app" {
  chart      = "workflow-automation"
  name       = "my-workflow"
  namespace  = "automation"
  
  set {
    name  = "config.database.type"
    value = "postgres"
  }
}

resource "kubernetes_ingress" "workflow_ingress" {
  metadata {
    name = "workflow-ingress"
    annotations = {
      "ingress.kubernetes.io/ssl-redirect" = "true"
    }
  }
  spec {
    backend {
      service_name = "my-workflow"
      service_port = 80
    }
  }
}

Can someone help me figure out how to fix the session issue or suggest a better way to set up the ingress? Thanks!

I’ve dealt with similar issues when deploying n8n on GKE. In my experience, using an Nginx ingress controller can provide more flexibility and better session management. Here’s what worked for me:

  1. Install the Nginx ingress controller using Helm.
  2. Configure your ingress resource to use the Nginx class.
  3. Add annotations for session affinity and timeouts.
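Step 1 can itself be managed from Terraform. Here's a minimal sketch assuming the official ingress-nginx chart (the repository URL and chart name follow the project's published Helm instructions; pin `version` to whatever you've tested):

```hcl
# Install the Nginx ingress controller via the official ingress-nginx chart.
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true

  # Expose the controller through a cloud load balancer so it gets
  # an external IP on GKE.
  set {
    name  = "controller.service.type"
    value = "LoadBalancer"
  }
}
```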

Here’s a rough example of how your ingress resource might look:

resource "kubernetes_ingress_v1" "workflow_ingress" {
  metadata {
    name = "workflow-ingress"
    annotations = {
      "nginx.ingress.kubernetes.io/affinity"               = "cookie"
      "nginx.ingress.kubernetes.io/session-cookie-name"    = "route"
      "nginx.ingress.kubernetes.io/session-cookie-expires" = "172800"
      "nginx.ingress.kubernetes.io/session-cookie-max-age" = "172800"
    }
  }
  spec {
    # Prefer ingress_class_name over the deprecated
    # "kubernetes.io/ingress.class" annotation.
    ingress_class_name = "nginx"

    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "my-workflow"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}

This approach has been more reliable for me than trying to manage sessions at the GKE ingress level. It also gives you more control over other aspects of your ingress configuration.

I’ve encountered similar challenges with n8n deployments on GKE. One approach that’s worked well for me is using a Google Cloud Load Balancer (GCLB) instead of a standard Kubernetes Ingress. GCLB provides built-in session affinity and can be managed through Terraform.

Here’s a simplified example of how you might set this up:

resource "google_compute_global_address" "default" {
  name = "n8n-ip-address"
}

resource "google_compute_global_forwarding_rule" "default" {
  name                  = "n8n-forwarding-rule"
  ip_address            = google_compute_global_address.default.address
  port_range            = "80"
  target                = google_compute_target_http_proxy.default.self_link
  load_balancing_scheme = "EXTERNAL"
}

resource "google_compute_target_http_proxy" "default" {
  name    = "n8n-target-proxy"
  url_map = google_compute_url_map.default.self_link
}

resource "google_compute_url_map" "default" {
  name            = "n8n-url-map"
  default_service = google_compute_backend_service.default.self_link
}

resource "google_compute_backend_service" "default" {
  name             = "n8n-backend"
  port_name        = "http"
  protocol         = "HTTP"
  timeout_sec      = 10
  enable_cdn       = false
  session_affinity = "CLIENT_IP"

  # A health check is required for instance-group backends.
  health_checks = [google_compute_health_check.default.self_link]

  backend {
    group = google_container_node_pool.default.instance_group_urls[0]
  }
}

resource "google_compute_health_check" "default" {
  name = "n8n-health-check"

  http_health_check {
    port = 80
  }
}

This setup should provide better session management for your n8n deployment. Remember to adjust the node pool and other specifics to match your GKE cluster configuration.
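One caveat: `port_name = "http"` in the backend service resolves against a named port on the node instance groups, and the pods must be reachable on the nodes themselves. A common way to arrange that is a NodePort Service in front of the n8n pods. A sketch, where the selector labels are placeholders for whatever your Helm chart actually applies:

```hcl
resource "kubernetes_service" "n8n_nodeport" {
  metadata {
    name      = "n8n-nodeport"
    namespace = "automation"
  }
  spec {
    type = "NodePort"

    # Placeholder selector -- match it to the labels your chart
    # actually sets on the n8n pods.
    selector = {
      "app.kubernetes.io/name" = "my-workflow"
    }

    port {
      name        = "http"
      port        = 80
      target_port = 5678 # n8n's default listen port
    }
  }
}
```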

Have you tried adding session affinity to the Service instead of the ingress? Something like this:

resource "kubernetes_service" "workflow_service" {
  metadata {
    name = "my-workflow"
  }
  spec {
    session_affinity = "ClientIP"
    # other service specs
  }
}

That might solve your issue without touching the ingress at all. Let me know if it helps!