Building a GitHub Actions Pipeline That Deploys to Kubernetes
Build a CI/CD pipeline from scratch: test on every pull request, build and push a Docker image on merge to main, then deploy to Kubernetes automatically. No third-party deployment tools required.
Before you begin
- A GitHub repository with your application
- A Kubernetes cluster (local or cloud)
- Docker Hub or GitHub Container Registry account
- kubectl configured for your cluster
You need two pipelines: one that validates pull requests (tests must pass before merge), and one that deploys after merge. This tutorial builds both using GitHub Actions and deploys to Kubernetes without any additional tooling.
What You'll Build
Push to feature branch → Run tests (PR check)
Merge to main → Build image → Push to registry → Update Kubernetes deployment
Step 1: Store Secrets in GitHub
Go to your repository → Settings → Secrets and variables → Actions → New repository secret.
Add:
- DOCKERHUB_USERNAME — your Docker Hub username
- DOCKERHUB_TOKEN — a Docker Hub access token (not your password — create one at hub.docker.com → Account Settings → Security)
- KUBE_CONFIG — base64-encoded kubeconfig for your cluster
Generate the kubeconfig secret:
cat ~/.kube/config | base64 | tr -d '\n'
Copy the output into the KUBE_CONFIG secret.
For production, use a restricted kubeconfig that only has access to the namespace you're deploying to. Don't paste your admin kubeconfig into GitHub secrets.
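One way to produce such a restricted kubeconfig is a namespace-scoped ServiceAccount. A sketch, with assumed placeholder names (`deployer`, namespace `production`); `kubectl create token` requires Kubernetes 1.24+, and depending on your cluster you may also need to embed the cluster CA via `kubectl config set-cluster --certificate-authority ... --embed-certs`:

```shell
# Create a ServiceAccount limited to the production namespace
kubectl create serviceaccount deployer -n production

# Allow it to edit workloads in that namespace only
kubectl create rolebinding deployer-edit \
  --clusterrole=edit \
  --serviceaccount=production:deployer \
  -n production

# Mint a token and build a standalone kubeconfig around it
TOKEN=$(kubectl create token deployer -n production --duration=8760h)
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

kubectl config --kubeconfig=deploy.kubeconfig set-cluster ci --server="$SERVER"
kubectl config --kubeconfig=deploy.kubeconfig set-credentials deployer --token="$TOKEN"
kubectl config --kubeconfig=deploy.kubeconfig set-context ci \
  --cluster=ci --user=deployer --namespace=production
kubectl config --kubeconfig=deploy.kubeconfig use-context ci
```

Base64-encode `deploy.kubeconfig` (not your admin config) into the KUBE_CONFIG secret.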
Step 2: Create the Test Workflow
mkdir -p .github/workflows
# .github/workflows/test.yml
name: Test

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Run linter
        run: npm run lint
Adapt the language steps to your stack (Python: setup-python + pip install -r requirements.txt + pytest; Go: setup-go + go test ./...).
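For example, the Node.js steps above might become the following for Python (a sketch, assuming a requirements.txt and a pytest suite):

```yaml
- name: Set up Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.12"
    cache: "pip"

- name: Install dependencies
  run: pip install -r requirements.txt

- name: Run tests
  run: pytest
```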
Step 3: Create the Deploy Workflow
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

env:
  IMAGE: ${{ secrets.DOCKERHUB_USERNAME }}/my-app
  DEPLOYMENT_NAME: my-app
  NAMESPACE: production

jobs:
  deploy:
    runs-on: ubuntu-latest
    # needs: [test]  # uncomment if you add a test job to this workflow and
    #                # want deploys gated on passing tests
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set image tag
        id: tag
        run: echo "TAG=${GITHUB_SHA::8}" >> "$GITHUB_OUTPUT"

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ env.IMAGE }}:${{ steps.tag.outputs.TAG }}
            ${{ env.IMAGE }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Configure kubectl
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > ~/.kube/config
          chmod 600 ~/.kube/config

      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/${{ env.DEPLOYMENT_NAME }} \
            app=${{ env.IMAGE }}:${{ steps.tag.outputs.TAG }} \
            -n ${{ env.NAMESPACE }}
          kubectl rollout status deployment/${{ env.DEPLOYMENT_NAME }} \
            -n ${{ env.NAMESPACE }} \
            --timeout=5m

      - name: Verify deployment
        run: |
          kubectl get deployment ${{ env.DEPLOYMENT_NAME }} \
            -n ${{ env.NAMESPACE }} \
            -o jsonpath='{.spec.template.spec.containers[0].image}'
The image tag uses the first 8 characters of the Git commit SHA — unique per commit, traceable back to the source.
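The `${GITHUB_SHA::8}` in the tag step is Bash substring expansion (offset 0, length 8), which you can try locally (the SHA below is a made-up example):

```shell
# Bash substring expansion: ${var::8} keeps the first 8 characters
GITHUB_SHA="a1b2c3d4e5f67890abcdef1234567890abcdef12"
TAG="${GITHUB_SHA::8}"
echo "$TAG"   # → a1b2c3d4
```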
Step 4: Create the Kubernetes Deployment
Make sure your Kubernetes deployment exists before the pipeline runs. The workflow uses kubectl set image, which updates an existing deployment — it doesn't create one.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: myusername/my-app:latest
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
EOF
Step 5: Add a Rollback on Failure
If kubectl rollout status fails (the new pods never become ready), roll back automatically:
- name: Deploy to Kubernetes
  run: |
    kubectl set image deployment/${{ env.DEPLOYMENT_NAME }} \
      app=${{ env.IMAGE }}:${{ steps.tag.outputs.TAG }} \
      -n ${{ env.NAMESPACE }}
    if ! kubectl rollout status deployment/${{ env.DEPLOYMENT_NAME }} \
      -n ${{ env.NAMESPACE }} --timeout=5m; then
      echo "Rollout failed, rolling back..."
      kubectl rollout undo deployment/${{ env.DEPLOYMENT_NAME }} \
        -n ${{ env.NAMESPACE }}
      exit 1
    fi
Step 6: Use GitHub Container Registry Instead of Docker Hub
GitHub Container Registry (ghcr.io) doesn't require a separate account and uses your GitHub token for auth:
- name: Log in to GitHub Container Registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/${{ github.repository }}:${{ steps.tag.outputs.TAG }}
GITHUB_TOKEN is automatically available in every workflow — no secret configuration needed.
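One caveat: GITHUB_TOKEN's default permissions can be read-only depending on your repository settings, and pushing to ghcr.io requires package write access. Grant it explicitly at the workflow (or job) level:

```yaml
permissions:
  contents: read
  packages: write
```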
Step 7: Validate the Pipeline
Push a commit to main and watch the Actions tab:
git add .github/workflows/
git commit -m "ci: add test and deploy workflows"
git push origin main
Check GitHub → Actions → the running workflow. When it completes:
# Confirm the new image is running
kubectl get deployment my-app -n production \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# myusername/my-app:a1b2c3d4
Production Improvements
Environment protection rules — in GitHub Settings → Environments, require a manual approval before deploying to production.
Separate staging and production workflows — trigger staging on merge to main, production on a tagged release (on: push: tags: ['v*']).
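A tag-triggered production workflow could start like this (a sketch; the jobs are the same as the deploy workflow above, and GITHUB_REF_NAME holds the tag name):

```yaml
# .github/workflows/deploy-production.yml
name: Deploy Production

on:
  push:
    tags: ["v*"]

# In the tag step, you can use the release tag itself as the image tag:
#   run: echo "TAG=${GITHUB_REF_NAME}" >> "$GITHUB_OUTPUT"
```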
Store image tag in git — instead of kubectl set image, commit the new tag to a values file and let ArgoCD or Flux detect the change. This gives you a git audit trail of every deployment.
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.