Sessions are being dropped behind a GKE ingress, and I am trying to enable session affinity. Below is the current Terraform and YAML setup:
resource "google_compute_managed_ssl_certificate" "ssl_cert_app" {
  name = "${var.app_id}-ssl"

  managed {
    domains = ["n8n.${var.host_domain}"]
  }
}
resource "helm_release" "n8n_app_release" {
  depends_on = [kubernetes_namespace.app_ns]

  repository = "dummyRepoURI"
  chart      = "n8nChart"
  version    = var.chart_version
  name       = var.app_id
  namespace  = var.app_namespace
  values     = [file("app_config.yaml")]
}
resource "kubernetes_ingress" "n8n_ingress_route" {
  depends_on = [google_compute_managed_ssl_certificate.ssl_cert_app]

  metadata {
    name      = "${var.app_id}-ingress"
    namespace = helm_release.n8n_app_release.namespace
    annotations = {
      # GKE attaches a google_compute_managed_ssl_certificate to the
      # load balancer via the pre-shared-cert annotation.
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.ssl_cert_app.name
    }
  }

  spec {
    backend {
      service_name = helm_release.n8n_app_release.name
      service_port = 80
    }
  }
}
config:
  port: 5678
  timezone: 'UTC'
  database: postgres
  auth:
    enabled: true
secret:
  db_password: ""
extraEnv:
  BASE_API_URL: "https://n8n.example.com"
  WEBHOOK_URL: "https://n8n.example.com/api"
image:
  repository: n8nio/n8n
  tag: latest
service:
  type: ClusterIP
  port: 80
I ran into the same problem with GKE ingress and session persistence. The root cause is that the GKE HTTP(S) load balancer does not provide sticky sessions by default, so the ingress and the backend Service have to be configured together. In my case the fix was to enable session affinity on the backend (GKE exposes this through a BackendConfig attached to the Service by annotation) and then to verify that the load balancer's backend service actually picked the setting up. The GKE documentation on BackendConfig was the most useful reference.
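As a sketch of that approach (names like n8n-backendconfig are illustrative, and this assumes a recent GKE version where the BackendConfig CRD is available as cloud.google.com/v1):

```yaml
# BackendConfig enabling cookie-based session affinity on the
# load balancer backend that the GKE ingress creates.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: n8n-backendconfig
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 3600
---
# The Service references the BackendConfig by annotation; GKE then
# applies it to the backend service behind the ingress.
apiVersion: v1
kind: Service
metadata:
  name: n8n
  annotations:
    cloud.google.com/backend-config: '{"default": "n8n-backendconfig"}'
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5678
  selector:
    app: n8n
```

GENERATED_COOKIE is usually preferable to CLIENT_IP here, since clients behind a shared NAT would otherwise all land on one pod.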
In my own deployments, resolving session loss required changes at both layers. I set sessionAffinity to ClientIP on the Service, then adjusted the ingress-side configuration so the load balancer honored sticky sessions end to end. One thing worth knowing: ClientIP affinity on the Service only governs kube-proxy routing inside the cluster; the external GKE load balancer needs its own affinity setting on top of that. Testing the changes in a staging environment before rollout confirmed that sessions were actually being maintained, and the GKE documentation clarified which settings apply to which ingress controller.
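A minimal sketch of the Service-level setting described above (names are illustrative; the timeout value is the Kubernetes default of three hours):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: n8n
spec:
  type: ClusterIP
  # ClientIP affinity pins a given client to one pod at the
  # kube-proxy level. It does not configure the external GKE
  # load balancer, which needs its own affinity configuration.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - port: 80
      targetPort: 5678
  selector:
    app: n8n
```

On its own this is rarely enough for an ingress-fronted service, but combined with load balancer affinity it keeps in-cluster hops sticky as well.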
hey, i had a similar issue - i fixed session drops by tweaking the sessionAffinity on the svc and ensuring the ingress picked it up. sometimes even a little misconfig can break sticky sessions, so double-check both setups.