GitOps with Argo CD: A Production Setup Guide
Argo CD is the most widely adopted GitOps tool for Kubernetes. Getting it installed is easy. Getting it right — RBAC, multi-cluster, app-of-apps, secret management, and high availability — takes more thought. Here's the production setup guide.

Argo CD turns your Git repository into the source of truth for everything running in Kubernetes. When you push a change, Argo CD detects the diff between Git and the cluster, and syncs the cluster to match. Rollbacks are git reverts. Audit trails are commit history. Drift detection is continuous.
Getting a basic Argo CD installation working takes 15 minutes. Building a production setup that handles multiple teams, multiple clusters, secrets, RBAC, and HA takes considerably more thought. This guide covers the full production configuration.
Installation
Install Argo CD into its own namespace:
```shell
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Pin to a specific release tag in production — check
# https://github.com/argoproj/argo-cd/releases for the latest stable release
```

For production, use the HA manifest instead:

```shell
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml
```

The HA manifest deploys:
- 3 replicas of argocd-server
- 3 replicas of argocd-repo-server
- 3 replicas of argocd-application-controller (StatefulSet)
- Redis in HA mode (Sentinel)
For most clusters, the standard install is sufficient until you have 50+ applications or multiple teams. The HA overhead is real — 3× the pods, 3× the memory.
Install via Helm (Recommended for Production)
Managing Argo CD via its own Helm chart gives you version-pinned, reproducible installs:
```shell
helm repo add argo https://argoproj.github.io/argo-helm

helm upgrade --install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  --version 7.7.0 \
  --values argocd-values.yaml \
  --wait
```

```yaml
# argocd-values.yaml
global:
  domain: argocd.example.com

configs:
  params:
    server.insecure: false  # Require TLS
  cm:
    # Exclude noisy, frequently-churning resources from tracking
    resource.exclusions: |
      - apiGroups:
          - cilium.io
        kinds:
          - CiliumIdentity
        clusters:
          - "*"
    # How often Argo CD polls Git for changes
    timeout.reconciliation: 180s

server:
  replicas: 2
  ingress:
    enabled: true
    ingressClassName: nginx
    hostname: argocd.example.com
    tls: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod

repoServer:
  replicas: 2
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      memory: 512Mi

applicationSet:
  replicas: 2

redis-ha:
  enabled: true  # HA Redis for production
```

Repository Structure
Argo CD imposes no opinions on repository structure — the layout is yours to choose. Two common patterns:
Monorepo
All application manifests in a single repository, organised by environment:
```
gitops-repo/
├── apps/
│   ├── production/
│   │   ├── api/
│   │   │   ├── deployment.yaml
│   │   │   ├── service.yaml
│   │   │   └── kustomization.yaml
│   │   └── frontend/
│   └── staging/
│       ├── api/
│       └── frontend/
├── platform/
│   ├── cert-manager/
│   ├── karpenter/
│   └── monitoring/
└── argocd/
    ├── applications/
    └── projects/
```
Strengths: single place to audit everything, easy cross-app changes, simple permissions model. Weaknesses: teams step on each other in PRs, repository grows large over time, access control is coarse (read/write to the repo = access to all apps).
Multi-Repo
Each team or application has its own repository:
```
platform-repo/   # Platform team — Argo CD config, platform add-ons
team-a-repo/     # Team A's applications
team-b-repo/     # Team B's applications
```
Strengths: clear ownership, team autonomy, independent permissions per repo. Weaknesses: Argo CD credentials needed for each repo, cross-team changes require PRs to multiple repos, no single view of all changes.
For most organisations: monorepo for platform-level resources (Karpenter, cert-manager, monitoring), per-team repos for application manifests. Argo CD manages both through separate Applications.
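A sketch of that hybrid model (repository names and paths are illustrative, not prescriptive): a platform Application points at a path inside the monorepo, while a team Application points at the team's own repository.

```yaml
# Hypothetical example — repo names, paths, and projects are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://github.com/myorg/gitops-repo  # platform monorepo
    targetRevision: HEAD
    path: platform/monitoring
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-a-api
  namespace: argocd
spec:
  project: team-a
  source:
    repoURL: https://github.com/myorg/team-a-repo  # team-owned repo
    targetRevision: HEAD
    path: api/production
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a-api
```

Both Applications live in the argocd namespace, so the platform team retains a single pane of glass even though manifests are split across repositories.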
The App-of-Apps Pattern
Managing 50 Argo CD Application objects individually doesn't scale. The app-of-apps pattern uses a single root Application that manages all other Application objects as Kubernetes resources.
```
argocd/
├── root-app.yaml        # The root Application — manages everything else
└── applications/
    ├── cert-manager.yaml
    ├── karpenter.yaml
    ├── monitoring.yaml
    ├── team-a-prod.yaml
    └── team-b-prod.yaml
```
Root application:
```yaml
# root-app.yaml — applied once manually to bootstrap
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/gitops-repo
    targetRevision: HEAD
    path: argocd/applications
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true     # Delete Applications removed from Git
      selfHeal: true  # Re-sync if someone manually modifies an Application
```

Child application example:
```yaml
# argocd/applications/cert-manager.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: platform
  source:
    repoURL: oci://quay.io/jetstack/charts
    chart: cert-manager
    targetRevision: v1.16.2
    helm:
      values: |
        crds:
          enabled: true
        global:
          leaderElection:
            namespace: cert-manager
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: false    # Don't auto-delete cert-manager resources
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
```

Bootstrap: apply root-app.yaml manually once. Argo CD then manages itself and all child applications from Git.
ApplicationSet: Generating Applications at Scale
ApplicationSet generates multiple Application objects from a template. Use for per-team or per-environment application generation. For a deep-dive on all ApplicationSet generators and multi-cluster patterns, see Argo CD ApplicationSet: Multi-Cluster Deployment and Generator Patterns.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/myorg/gitops-repo
        revision: HEAD
        directories:
          - path: "apps/production/*"  # One Application per directory
  template:
    metadata:
      name: "{{path.basename}}-prod"
    spec:
      project: production
      source:
        repoURL: https://github.com/myorg/gitops-repo
        targetRevision: HEAD
        path: "{{path}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{path.basename}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

When a team adds a new directory under apps/production/, Argo CD automatically creates an Application for it. No manual Application creation required.
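Beyond the git generator, a cluster generator can stamp out one Application per registered cluster. A minimal sketch, assuming clusters are labelled env=production when registered (label and paths are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monitoring-all-clusters
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            env: production  # only clusters carrying this label
  template:
    metadata:
      name: "monitoring-{{name}}"
    spec:
      project: platform
      source:
        repoURL: https://github.com/myorg/gitops-repo
        targetRevision: HEAD
        path: platform/monitoring
      destination:
        server: "{{server}}"  # injected per matched cluster
        namespace: monitoring
```

Registering a new cluster with the matching label is all it takes to roll the stack out there.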
Projects: Multi-Team Access Control
Argo CD AppProject defines which repositories, clusters, and namespaces a set of applications can use. Teams are scoped to their project.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: "Team A applications"

  sourceRepos:
    - "https://github.com/myorg/team-a-repo"
    - "https://github.com/myorg/shared-charts"

  destinations:
    - namespace: "team-a-*"  # All team-a-* namespaces
      server: https://kubernetes.default.svc
    - namespace: "team-a-*"
      server: https://production-cluster.example.com

  clusterResourceWhitelist: []  # No cluster-scoped resource access
  namespaceResourceWhitelist:
    - group: "apps"
      kind: Deployment
    - group: ""
      kind: Service
    - group: ""
      kind: ConfigMap
    # Add resource types as needed

  roles:
    - name: developer
      description: "Team A developers"
      policies:
        - p, proj:team-a:developer, applications, get, team-a/*, allow
        - p, proj:team-a:developer, applications, sync, team-a/*, allow
      groups:
        - team-a-developers  # SSO group

    - name: readonly
      description: "Team A read-only"
      policies:
        - p, proj:team-a:readonly, applications, get, team-a/*, allow
      groups:
        - team-a-oncall
```

Developers can view and manually sync their applications. They cannot modify ClusterRole, ClusterRoleBinding, or other cluster-scoped resources — clusterResourceWhitelist: [] blocks all of them. Platform engineers work in the platform project with broader permissions.
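For contrast, a platform project might look like the following sketch — broader cluster-scoped access, but still pinned to known repositories (the repo name is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform
  namespace: argocd
spec:
  description: "Platform team — cluster add-ons"
  sourceRepos:
    - "https://github.com/myorg/gitops-repo"
  destinations:
    - namespace: "*"
      server: "*"
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"  # platform apps may create CRDs, ClusterRoles, etc.
```

The asymmetry is deliberate: team projects get a narrow allowlist, the platform project gets a wide one, and Git review is the gate on both.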
RBAC
Argo CD RBAC is configured in the argocd-rbac-cm ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly  # Default permission for authenticated users
  policy.csv: |
    # Platform engineers — full access
    p, role:platform-admin, *, *, */*, allow

    # Team leads — sync and manage their project
    p, role:team-lead, applications, *, */*, allow
    p, role:team-lead, repositories, get, *, allow

    # Readonly — view only
    p, role:readonly, applications, get, */*, allow
    p, role:readonly, clusters, get, *, allow

    # SSO group bindings
    g, platform-engineers, role:platform-admin
    g, team-a-leads, role:team-lead
  scopes: "[groups]"
```

Combined with SSO integration (Dex for OIDC, or direct GitHub/Google/Okta), users inherit permissions from their SSO group memberships. No Argo CD-specific user management required.
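As a sketch of the SSO side (the org name is a placeholder; the client ID and secret references point at keys stored in argocd-secret), a GitHub-backed Dex connector lives in argocd-cm:

```yaml
# argocd-cm (fragment) — "myorg" is a placeholder organisation
data:
  url: https://argocd.example.com
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          clientID: $dex.github.clientID          # key in argocd-secret
          clientSecret: $dex.github.clientSecret  # key in argocd-secret
          orgs:
            - name: myorg  # group claims arrive as "myorg:team-name"
```

Note that GitHub teams surface as org:team in the groups claim, so RBAC bindings must match that format.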
Secret Management
Argo CD stores repository credentials and cluster secrets in Kubernetes Secrets in the argocd namespace. Application secrets (the secrets your apps use) should not go through Argo CD — they belong in your external secret management layer (ESO, Vault).
For Helm values that contain secrets, two approaches:
Argo CD Vault Plugin (AVP): An Argo CD plugin that fetches secrets from Vault (or AWS Secrets Manager) and substitutes them into Helm values or raw manifests at sync time. The manifest in Git contains placeholders:
```yaml
# In Git — no secret value
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  annotations:
    avp.kubernetes.io/path: "secret/production/db"
type: Opaque
stringData:
  password: <password>  # AVP substitutes this at sync time
```

External Secrets Operator: Don't store secrets in your GitOps repo at all. Store ExternalSecret manifests (referencing the external secret manager) and let ESO sync the actual values. This is the cleaner separation — Argo CD manages ExternalSecret objects, ESO manages Secret objects.
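A sketch of what lands in Git under the ESO model (store name and Vault path are placeholders) — the ExternalSecret references the backend, never the value:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: team-a-api
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # ClusterSecretStore configured separately
    kind: ClusterSecretStore
  target:
    name: db-credentials       # the Kubernetes Secret ESO creates
  data:
    - secretKey: password
      remoteRef:
        key: secret/production/db
        property: password
```

Argo CD syncs this manifest like any other; ESO then materialises the actual Secret in-cluster, keeping the value out of Git entirely.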
Sync Policies
```yaml
syncPolicy:
  automated:
    prune: true        # Delete resources removed from Git
    selfHeal: true     # Re-sync if cluster drifts from Git
    allowEmpty: false  # Never auto-sync to an empty application
  syncOptions:
    - Validate=true          # Validate manifests before applying
    - CreateNamespace=true   # Create namespace if missing
    - PrunePropagationPolicy=foreground
    - ServerSideApply=true   # Use server-side apply for better conflict handling
    - RespectIgnoreDifferences=true
  retry:
    limit: 5
    backoff:
      duration: 5s
      factor: 2
      maxDuration: 3m
```

prune: true — enables garbage collection of resources deleted from Git. Without this, deleted resources accumulate in the cluster. Enable with caution on first rollout — it will delete anything in the cluster not represented in Git.
selfHeal: true — re-syncs if someone applies a change directly with kubectl. This is the GitOps enforcement mechanism. Disable it if your team isn't ready to commit to Git-only changes.
ServerSideApply=true — uses Kubernetes server-side apply instead of client-side apply. Handles large manifests (CRDs) and prevents field manager conflicts when multiple tools (Helm, Argo CD) touch the same resource.
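RespectIgnoreDifferences=true only takes effect when the Application also declares ignoreDifferences. A minimal sketch of the common HPA case, where replicas is owned by the autoscaler rather than Git:

```yaml
# Application spec fragment — ignore HPA-managed replica counts
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas  # HPA manages this; don't treat it as drift
  syncPolicy:
    syncOptions:
      - RespectIgnoreDifferences=true
```

Without this pairing, selfHeal would fight the HPA, resetting replicas on every reconciliation.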
Sync Windows
Block automated syncs during maintenance windows or release freezes:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
spec:
  syncWindows:
    - kind: deny
      schedule: "0 22 * * 1-5"  # Block weeknight deploys after 10pm
      duration: 8h
      applications: ["*"]
      namespaces: ["*"]
      clusters: ["*"]
      manualSync: true  # Allow manual sync even during this deny window
    - kind: allow
      schedule: "0 9 * * 1-5"   # Explicitly allow business hours
      duration: 13h
      applications: ["*"]
```

Manual syncs can be permitted even during deny windows — useful when you need to apply an emergency fix outside the normal window.
Notifications
Argo CD notifications send alerts to Slack, PagerDuty, email, or any webhook on application sync events, health degradations, and drift detection:
```yaml
# argocd-notifications-cm
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  trigger.on-sync-failed: |
    - when: app.status.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed]
  trigger.on-health-degraded: |
    - when: app.status.health.status == 'Degraded'
      send: [app-health-degraded]
  template.app-sync-failed: |
    message: |
      Application {{.app.metadata.name}} sync failed.
      Error: {{.app.status.operationState.message}}
      {{.context.argocdUrl}}/applications/{{.app.metadata.name}}
  service.slack: |
    token: $slack-token
    username: ArgoCD
    icon: ":argo:"
```

Applications opt in via a subscription annotation on their metadata, e.g. notifications.argoproj.io/subscribe.on-sync-failed.slack: team-channel.

Frequently Asked Questions
Should Argo CD manage itself?
Yes — the app-of-apps pattern includes an Application for Argo CD itself. Argo CD upgrades go through Git (update the Helm chart version in the app-of-apps), get reviewed in a PR, and are applied the same way as any other change. This gives you version control and rollback for Argo CD itself.
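A sketch of that self-management Application, using Argo CD's multi-source feature so the Helm chart and the values file can live in different places (repo, path, and version are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: platform
  sources:
    - repoURL: https://argoproj.github.io/argo-helm
      chart: argo-cd
      targetRevision: 7.7.0   # bump this in a PR to upgrade Argo CD
      helm:
        valueFiles:
          - $values/argocd/argocd-values.yaml
    - repoURL: https://github.com/myorg/gitops-repo
      targetRevision: HEAD
      ref: values  # exposes this repo's files as $values
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
```

Upgrading Argo CD then becomes a one-line diff to targetRevision, reviewed and rolled back like any other change.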
How do I handle Argo CD during a cluster bootstrap?
Chicken-and-egg: Argo CD manages cluster resources, but Argo CD itself needs to be installed before it can manage anything. Solutions:
- Install Argo CD via Terraform or a bootstrap script, then hand it off to manage itself
- Use the argocd CLI to bootstrap the root application after Argo CD is running
- Use Flux for Argo CD installation and Argo CD for everything else (unusual but valid)
Most teams use Terraform for the initial Argo CD install (see Layer 04 in Terraform for EKS), then add an Argo CD Application that manages Argo CD's own Helm release.
Argo CD vs Flux?
Both are CNCF GitOps tools. Argo CD has a richer UI, better multi-tenancy with Projects, and stronger adoption. Flux is more lightweight, has better Helm OCI support, and is more "Kubernetes-native" in its resource model. For teams that prioritise UI and multi-team features: Argo CD. For teams that prefer a CLI-first, minimal-UI approach: Flux. Both work well.
How do I do progressive delivery (canary deployments) with Argo CD?
Use Argo Rollouts alongside Argo CD. Argo Rollouts extends Kubernetes with a Rollout resource that supports canary, blue-green, and progressive delivery strategies with automatic analysis. Argo CD manages the Rollout manifest; Argo Rollouts executes the progressive deployment strategy. See Argo Rollouts: Progressive Delivery with Canary and Blue-Green Deployments for the full setup guide.
What's the right number of Argo CD instances?
One per cluster, or one managing multiple clusters. A single Argo CD instance can manage 100+ clusters — it connects to remote clusters via kubeconfig secrets. The multi-cluster model (one Argo CD managing everything) is simpler operationally but creates a single point of failure for GitOps. Large organisations often run one Argo CD per region or per environment tier.
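Remote clusters are registered declaratively as labelled Secrets in the argocd namespace. A sketch with placeholder endpoint and credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: production-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # marks this as a cluster credential
type: Opaque
stringData:
  name: production
  server: https://production-cluster.example.com
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "caData": "<base64-ca-cert>"
      }
    }
```

Because cluster registration is itself a manifest, new clusters can be added through the same Git workflow as everything else.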
For the infrastructure provisioning that precedes Argo CD setup, see Terraform for Kubernetes: Managing EKS with Infrastructure as Code. For secret management alongside Argo CD, see Secrets Management in Kubernetes: Vault vs ESO vs SOPS. For the complete Argo CD feature reference (ApplicationSet, Image Updater, notifications, sharding, and Crossplane integration), see Argo CD: GitOps Continuous Delivery for Kubernetes.
Setting up Argo CD for a multi-team platform? Talk to us at Coding Protocols — we help platform teams implement GitOps that scales to dozens of teams without becoming an operational bottleneck.


