I’m working with the default GitHub Actions template from Google for GKE deployments. The basic setup works fine, but now I need to add support for multiple environments. I want pushes to the main branch to deploy to production, while pushes to the development branch should go to staging.
# Original deployment step
- name: Deploy Application
  run: |-
    ./kustomize edit set image gcr.io/MY_PROJECT/APP_IMAGE:VERSION=gcr.io/$PROJECT_ID/$APP_NAME:$GITHUB_SHA
    ./kustomize build . | kubectl apply -f -
    kubectl rollout status deployment/$APP_DEPLOYMENT
    kubectl get services -o wide
I modified the kustomize build command to:
kubectl kustomize configs/environments/staging | kubectl apply -f -
My directory structure looks like this:
configs/
├── shared/
│   ├── app.yml
│   ├── kustomization.yml
│   └── svc.yml
└── environments/
    ├── production/
    │   ├── app.yml
    │   ├── kustomization.yml
    │   └── svc.yml
    └── staging/
        ├── app.yml
        ├── kustomization.yml
        └── svc.yml
The kustomization.yml files in environments contain:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../shared
patchesStrategicMerge:
- app.yml
- svc.yml
However, the GitHub Actions workflow fails with:
Error: Missing kustomization file 'kustomization.yaml'.
Running kubectl kustomize configs/environments/staging locally works perfectly and shows the expected output.
Your GitHub Actions runner probably has a different kubectl version or kustomize setup than your local machine; the hosted runners ship their own tool versions, so behavior that works locally isn't guaranteed in CI.
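If you want to confirm that, a throwaway debug step will show what the runner is actually using (the step name here is just an example):

```yaml
# Example debug step: print the tool versions available on the runner
- name: Show tool versions
  run: |-
    kubectl version --client      # newer kubectl also reports its embedded kustomize version
    ./kustomize version || true   # the standalone binary, if an earlier step downloaded one
```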
I hit this same issue months ago managing deployments across dev, staging, and prod. Rather than fight kubectl versions and kustomize inconsistencies in CI, I automated the whole deployment pipeline.
I built workflows that trigger on branch pushes and handle environment logic automatically. Code hits main? Deploys to production. Development branch? Goes to staging. No more manual kustomize commands or version headaches.
The automation covers:
- Branch detection
- Environment variable switching
- Image tagging and deployment
- Rollback when needed
Killed all those “works locally but breaks in CI” problems. I can easily add environments or change deployment logic without touching GitHub Actions yaml files.
Your kustomize structure looks good, but deployment orchestration is where automation really pays off. Way cleaner than debugging kubectl versions in runners.
Check out Latenode for a smooth automation experience: https://latenode.com
GitHub Actions can’t find your kustomization.yml file. I hit the same issue setting up our multi-environment pipeline. You’re mixing the original kustomize binary with kubectl kustomize, and the workflow isn’t navigating to the right directory first.

Pick one tool and stick with it - don’t switch between ./kustomize and kubectl kustomize. Also check that your checkout action grabs the full repo structure; the default checkout depth sometimes misses nested directories, and adding fetch-depth: 0 to the checkout step fixed it for me.

For branch-based deployments, use an environment variable to set the target directory dynamically. Something like ENVIRONMENT=$([ "$GITHUB_REF" == "refs/heads/main" ] && echo "production" || echo "staging"), then reference that in your kustomize path. Keeps things clean across environments.
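Roughly what that looks like as workflow steps - a sketch, not a drop-in (the step names are mine, and it assumes the image-tagging logic stays wherever you already have it):

```yaml
# Sketch: pick the overlay from the branch, export it, then build with a single tool
- name: Select environment
  run: |-
    ENVIRONMENT=$([ "$GITHUB_REF" == "refs/heads/main" ] && echo "production" || echo "staging")
    echo "ENVIRONMENT=$ENVIRONMENT" >> "$GITHUB_ENV"

- name: Deploy Application
  run: |-
    kubectl kustomize "configs/environments/$ENVIRONMENT" | kubectl apply -f -
    kubectl rollout status deployment/$APP_DEPLOYMENT
```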
The error happens because you’re mixing tools. Your workflow uses ./kustomize edit but then switches to kubectl kustomize - they behave differently. When you run ./kustomize edit set image, it’s looking for a kustomization file in the current directory, but yours are nested in environment folders.
I hit this same issue when I moved from single to multi-environment deployments. Fix it by restructuring your workflow steps. First figure out your target environment from the branch, then cd into that specific directory before running any kustomize commands. Set an environment variable early in your workflow, then use cd configs/environments/$ENVIRONMENT before your kustomize operations.
This way both the image editing and build happen in the right context where kustomization.yml actually lives. Your directory structure’s fine - just need to align where the workflow runs.
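A sketch of that shape, using your layout (it assumes the template's earlier setup step downloaded the ./kustomize binary into the repo root):

```yaml
- name: Deploy Application
  run: |-
    # pick the overlay directory from the branch
    ENVIRONMENT=$([ "$GITHUB_REF" = "refs/heads/main" ] && echo "production" || echo "staging")
    cd "configs/environments/$ENVIRONMENT"
    # both the image edit and the build now run where kustomization.yml lives;
    # ../../../ points back at the repo root where ./kustomize was downloaded
    ../../../kustomize edit set image gcr.io/MY_PROJECT/APP_IMAGE:VERSION=gcr.io/$PROJECT_ID/$APP_NAME:$GITHUB_SHA
    ../../../kustomize build . | kubectl apply -f -
    kubectl rollout status deployment/$APP_DEPLOYMENT
```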
This happens because GitHub Actions runners aren’t in the right directory when they try to find your kustomization.yml files. I hit the same issue last year with multi-environment deployments. You need to either add working-directory to your workflow steps or use full paths from your repo root.

Also double-check that your kustomization files are actually getting checked out - sometimes actions/checkout struggles with nested folders. Toss in a debug step that lists your directory contents so you can see what’s actually there. That’ll save you some headaches.

Another fix: ditch kubectl kustomize and download the standalone kustomize binary instead. Just grab it in your workflow and run it directly. This avoids version mismatches between your local kubectl and whatever’s on the runner. The standalone version works way better in CI anyway.
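For example (the step names are mine; kustomize's official install script drops a ./kustomize binary into the current directory):

```yaml
# Debug step: confirm the overlay directories actually exist on the runner
- name: List kustomize overlays
  run: ls -R configs

# Use the standalone kustomize binary instead of the one embedded in kubectl
- name: Set up kustomize
  run: |-
    curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
    ./kustomize version
```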
Been fighting the same deployment mess for years. The path issues are real, but CI inconsistencies will keep biting you.
I gave up on GitHub Actions for multi-environment deployments. Spent way too much time debugging runner problems and kustomize version mismatches.
Built automation that handles everything instead. It watches repo branches and auto-deploys based on pushes. Main goes to production, development hits staging - exactly what you need.
It handles kustomize builds, image tags, and kubectl commands without touching workflow yaml. Need a new environment? Configure once instead of debugging more Actions steps.
No more “works locally, breaks in CI” headaches. Deployment logic runs outside GitHub Actions, so you dodge all the runner issues.
Your kustomize setup looks good. Automation is where you’ll save real time on the deployment side.
Check out Latenode for smooth automation: https://latenode.com
looks like a pathing issue. add cd configs/environments/staging before running kustomize, or spell out the path from the repo root in your workflow. github actions runners start in the repo root, so every relative path has to be relative to that. i just hardcode the full path like kubectl kustomize ./configs/environments/staging to avoid the hassle.