I’m trying to deploy a workflow automation application using Helm charts on a Google Kubernetes Engine (GKE) cluster. I keep encountering an issue where the ingress setup is causing user sessions to drop unexpectedly.
When I access the application directly from the pod, sessions remain stable, which suggests the problem lies with the ingress configuration. I’m currently seeking guidance on how to enable session affinity within Terraform, but I’m struggling to find clear resources on this topic. Alternatively, I’m considering switching to an Nginx ingress controller, though I lack experience with that option.
Below is my current Terraform setup:
resource "google_compute_managed_ssl_certificate" "workflow_ssl" {
  name = "${var.app_name}-certificate"

  managed {
    domains = ["automation.${var.domain}"]
  }
}
resource "helm_release" "workflow_app" {
  count = 1
  depends_on = [
    kubernetes_namespace.workflow,
    google_sql_database.workflow_db,
    google_sql_user.workflow_user,
    google_compute_managed_ssl_certificate.workflow_ssl,
  ]

  repository    = "https://charts.example.com/repository"
  chart         = "workflow-tool"
  version       = var.chart_version
  name          = var.app_name
  namespace     = var.target_namespace
  recreate_pods = true

  values = [
    file("workflow_config.yaml")
  ]

  set_sensitive {
    name  = "app.secret_key"
    value = var.application_secret
  }

  set {
    name  = "database.postgres.hostname"
    value = data.terraform_remote_state.infrastructure.outputs.db_endpoint
  }

  set {
    name  = "database.postgres.username"
    value = var.database_user
  }

  set_sensitive {
    name  = "database.postgres.password"
    value = var.database_pass
  }
}
resource "kubernetes_ingress" "workflow_ingress" {
  wait_for_load_balancer = true
  depends_on             = [google_compute_managed_ssl_certificate.workflow_ssl]

  metadata {
    name      = "${var.app_name}-ingress"
    namespace = helm_release.workflow_app[0].namespace
    annotations = {
      "kubernetes.io/ingress.compress-enable"     = "false"
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.workflow_ssl.name
    }
  }

  spec {
    backend {
      service_name = helm_release.workflow_app[0].name
      service_port = 80
    }
  }
}
Session drops in workflow automation apps are brutal - users lose everything mid-task. I’ve hit this wall too many times.
Sure, others mentioned annotations, but there’s a deeper issue. You’re patching session management at the infrastructure level when workflow automation needs much smarter session handling.
Traditional session affinity is fragile as hell. Pods restart, nodes die, sessions disappear. CLIENT_IP breaks behind corporate proxies anyway.
Skip the ingress band-aids. Move your workflow logic to a real automation platform. When I had similar session nightmares with internal tools, switching to external automation killed the problem completely. No drops, no infrastructure mess.
Workflow platforms handle state persistence right. Users refresh, switch devices - doesn’t matter. Their progress stays put. Way better than fighting sticky pods.
Keep your current app for simple stuff, offload complex workflows elsewhere. Clean split, rock-solid reliability.
Latenode handles workflow state much better than wrestling with session affinity: https://latenode.com
Session drops are probably happening because your service definition doesn’t have proper session persistence configured. You’re referencing the helm service by name in your Terraform setup, but not actually defining how that service handles sessions.
I hit the same issue when deploying automation tools on GKE. My service configuration wasn’t aligned with what the ingress expected. Make sure your Helm chart creates a service with proper session affinity settings, then reference those in your ingress annotations.
Looking at your values file - you’re missing explicit service session configuration. Add sessionAffinity and sessionAffinityConfig to your service block in the values yaml. Your ingress is also using a default backend configuration that might not preserve session data correctly.
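To make that concrete, here’s a rough sketch of what the service block in workflow_config.yaml could look like. The exact key paths are an assumption - the `workflow-tool` chart may nest these differently, so check its values reference first:

```yaml
# Hypothetical Helm values snippet -- key names depend on the chart.
service:
  type: NodePort             # GKE ingress backends typically need NodePort or NEGs
  port: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # 3h; keep this aligned with your app's session timeout
```

These Service-level settings only make kube-proxy sticky; they still need to be paired with affinity on the GCLB side for end-to-end stickiness.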
Before you add more annotations, verify your pods are actually getting the session data they need. Use kubectl to check if requests hit the same pod consistently. Sometimes it’s not session affinity but how your app handles session storage across multiple instances.
Had the same session drops with a GKE workflow app last year. Turned out to be cookie config, not just ingress settings. Your app needs proper session persistence at the app level.

Besides the session affinity annotations others mentioned, check if your workflow app’s configured for sticky sessions. Most workflow tools default to storing session data in memory instead of sharing it across pods. Look for a session store config in your Helm chart - you probably need Redis or database-backed sessions.

Also check your app’s session timeout settings. Sometimes apps expire sessions too fast when load balancer health checks mess with activity patterns. That auth.basicAuth.enabled: true in your values suggests custom auth that might not work with multi-pod deployments.

Debugging tip that saved me hours: turn on session logging and watch the logs while manually switching between pods. You’ll see exactly when and why sessions break.
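As a rough illustration of the shared-session-store idea, many charts expose values along these lines. The key names here are assumptions - your chart’s schema will differ, so treat this as a pattern, not copy-paste config:

```yaml
# Hypothetical chart values -- consult the workflow-tool chart docs
# for the real key names before using anything like this.
session:
  store: redis          # move session state out of pod memory
  timeoutMinutes: 60    # set comfortably above the LB health-check interval
redis:
  enabled: true         # some charts can deploy a bundled Redis for this
```

With a shared store, stickiness becomes a performance optimization rather than a correctness requirement - any pod can serve any user.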
Been there with workflow apps breaking user flows mid-task. Super frustrating when users lose progress.
Everyone’s suggesting session affinity fixes, but that’s not the real problem. You’re trying to build complex workflow automation inside a container that wasn’t designed for it. Fix the session drops today, and you’ll still hit scaling issues, state management problems, and maintenance headaches tomorrow.
Workflow automation needs persistent state, reliable execution, and solid session handling. Kubernetes pods restart, scale, and fail over. Your workflow state shouldn’t depend on sticky sessions to random pods.
Learned this the hard way on a similar project. Spent weeks wrestling with ingress configs and session stores. Finally moved the workflow logic to a dedicated automation platform. Now users can refresh, close browsers, switch devices - their workflows keep running.
Keep your current app for basic stuff. Route complex workflows to a platform built for this problem. No more session drops, no more infrastructure debugging.
Latenode handles workflow persistence way better than session affinity hacks: https://latenode.com
You’re missing session affinity settings for GKE’s load balancer. On GKE this is configured through a BackendConfig resource attached to your Service (via the cloud.google.com/backend-config annotation on the Service), not through an ingress annotation - set affinityType to CLIENT_IP or GENERATED_COOKIE. I’d go with GENERATED_COOKIE for workflow apps since it survives NAT and corporate proxies far better than client IP. You also need to configure the Kubernetes Service itself: in your Helm values, add sessionAffinity: ClientIP to the service config, plus sessionAffinityConfig with timeout values. The Service settings work together with the BackendConfig to keep sticky sessions working from the load balancer all the way to the pod.
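A minimal sketch of the BackendConfig side in Terraform, assuming a kubernetes provider recent enough to have kubernetes_manifest; the resource name and TTL are placeholders:

```hcl
# Sketch: GENERATED_COOKIE affinity for the GCLB backend on GKE.
# The Service must reference this object via the
# cloud.google.com/backend-config annotation (usually set through
# the chart's service annotations in the values file).
resource "kubernetes_manifest" "workflow_backendconfig" {
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "${var.app_name}-backendconfig"
      namespace = var.target_namespace
    }
    spec = {
      sessionAffinity = {
        affinityType         = "GENERATED_COOKIE"
        affinityCookieTtlSec = 3600 # placeholder; match your app's session lifetime
      }
    }
  }
}
```

Note that cookie-based affinity here works independently of client IPs, which is why it behaves better behind corporate NAT.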
Your kubernetes_ingress resource uses the old v1beta1 Ingress API, which is deprecated and removed in recent Kubernetes versions. Switch to kubernetes_ingress_v1 in Terraform and configure your session affinity settings there. Also, that compress-enable annotation isn’t a standard GKE ingress annotation and might interfere with cookie handling - try removing it temporarily to see if that fixes things.
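For reference, here’s roughly what the v1 conversion of the ingress in the question looks like (field names change: backend becomes default_backend, and service_name/service_port become a nested service block):

```hcl
# v1 equivalent of the original kubernetes_ingress resource.
resource "kubernetes_ingress_v1" "workflow_ingress" {
  wait_for_load_balancer = true
  depends_on             = [google_compute_managed_ssl_certificate.workflow_ssl]

  metadata {
    name      = "${var.app_name}-ingress"
    namespace = helm_release.workflow_app[0].namespace
    annotations = {
      "ingress.gcp.kubernetes.io/pre-shared-cert" = google_compute_managed_ssl_certificate.workflow_ssl.name
    }
  }

  spec {
    default_backend {
      service {
        name = helm_release.workflow_app[0].name
        port {
          number = 80
        }
      }
    }
  }
}
```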
Add session affinity on the GCLB side - on GKE the supported route is a BackendConfig with affinityType set to CLIENT_IP (or GENERATED_COOKIE), referenced from your Service. Also check that your Helm chart’s service has sessionAffinity: ClientIP configured - it might be missing from your values file.