I’m trying to deploy n8n using its Helm chart on a Google Kubernetes Engine (GKE) cluster with Terraform. The deployment itself goes smoothly, but I’m running into an issue with the ingress configuration that causes users to lose their sessions frequently.
When I access the application through the ingress, it keeps logging users out. However, if I access the pod directly, the application behaves as expected. This leads me to believe the ingress might be the problem.
I’m currently attempting to set up session affinity on the ingress, but I can’t find clear guidance on how to accomplish this with Terraform. Alternatively, I’m considering switching to an Nginx ingress, but I have no experience with that setup. I would really appreciate any help or suggestions for a better solution to my ingress issue. Here’s my current Terraform configuration for n8n:
resource "google_compute_managed_ssl_certificate" "n8n_ssl" {
  name = "${var.release_name}-ssl"

  managed {
    domains = ["n8n.${var.host}"]
  }
}
resource "helm_release" "n8n" {
  count = 1
  depends_on = [
    kubernetes_namespace.n8n,
    google_sql_database.n8n,
    google_sql_user.n8n,
    google_compute_managed_ssl_certificate.n8n_ssl,
  ]

  repository    = "https://8gears.container-registry.com/chartrepo/library"
  chart         = "n8n"
  version       = var.helm_version
  name          = var.release_name
  namespace     = var.namespace
  recreate_pods = true

  values = [
    file("n8n_values.yaml")
  ]

  set_sensitive {
    name  = "n8n.encryption_key"
    value = var.n8n_encryption_key
  }

  set {
    name  = "config.database.postgresdb.host"
    value = data.terraform_remote_state.cluster.outputs.database_connection
  }

  set {
    name  = "config.database.postgresdb.user"
    value = var.db_username
  }

  set_sensitive {
    name  = "secret.database.postgresdb.password"
    value = var.db_password
  }

  set {
    name  = "config.security.basicAuth.user"
    value = var.username
  }

  set_sensitive {
    name  = "config.security.basicAuth.password"
    value = var.password
  }
}
resource "kubernetes_ingress" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on             = [google_compute_managed_ssl_certificate.n8n_ssl]

  metadata {
    name      = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "ingress.kubernetes.io/compress-enable"     = "false"
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }

  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}
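For session affinity, the approach I was considering is a GKE BackendConfig attached to the n8n Service, since that is how the GCE ingress exposes cookie-based affinity. This is an untested sketch; the resource name `n8n_backendconfig` and the one-hour cookie TTL are my own choices, and it assumes the chart's Service can be given the `cloud.google.com/backend-config` annotation (e.g. via the chart values):

```hcl
# Untested sketch: GKE BackendConfig enabling cookie-based session affinity.
resource "kubernetes_manifest" "n8n_backendconfig" {
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "${var.release_name}-affinity" # hypothetical name
      namespace = var.namespace
    }
    spec = {
      sessionAffinity = {
        affinityType         = "GENERATED_COOKIE" # "CLIENT_IP" is the other option
        affinityCookieTtlSec = 3600               # my guess at a reasonable TTL
      }
    }
  }
}
```

My understanding is that the Service then needs the annotation `cloud.google.com/backend-config: '{"default": "<backendconfig-name>"}'` for the GCE load balancer to pick up the config, but I'm not sure whether that's correct or whether this is the best way to do it.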
I would highly appreciate any guidance on adding session affinity to this ingress setup or recommendations for a better method.
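In case it helps frame the alternative: if I end up switching to ingress-nginx, my understanding is that cookie-based affinity is configured purely through annotations, roughly like this (untested; the cookie name and max-age are my own choices):

```hcl
# Untested sketch of an ingress-nginx variant with cookie affinity,
# keeping the same backend as my current GCE ingress.
resource "kubernetes_ingress" "n8n_nginx_ingress" {
  metadata {
    name      = "${var.release_name}-nginx-ingress" # hypothetical name
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "kubernetes.io/ingress.class"                        = "nginx"
      "nginx.ingress.kubernetes.io/affinity"               = "cookie"
      "nginx.ingress.kubernetes.io/session-cookie-name"    = "n8n-affinity"
      "nginx.ingress.kubernetes.io/session-cookie-max-age" = "3600"
    }
  }

  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}
```

I haven't tried this because it would also mean installing the ingress-nginx controller and handling TLS differently from the Google-managed certificate I use now.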