CI/CD Supply Chain Security: How Source Code Leaks Happen
When source code leaks through CI/CD infrastructure, it exposes the exact failure modes that supply chain security controls are designed to prevent. Here's what supply chain incidents reveal about CI/CD hygiene and how to apply those lessons to your own pipelines.

In early 2025, a supply chain security incident exposed source code from a major AI company. The root causes — overly permissive CI/CD pipelines, unpinned actions, and broad COPY directives in Docker builds — are common patterns worth reviewing. The incident wasn't a sophisticated breach. It was the class of failure that supply chain security frameworks are explicitly designed to prevent: secrets in CI logs, over-permissioned pipeline tokens, and build artefacts that contained more than they should have.
What makes incidents like this worth analysing isn't the organisation involved — it's that the failure modes are generic. The same CI/CD misconfigurations that lead to partial source exposure exist in thousands of engineering organisations. The difference between "near miss" and "headline" is often just how interesting the exposed content is to an attacker or researcher.
This post breaks down the categories of failure, maps them to supply chain security controls, and gives you the checklist to verify your own pipelines aren't carrying the same risks.
The Failure Categories
Source code and build artefact leaks through CI/CD infrastructure almost always involve one or more of these failure modes. Understanding them as categories — rather than specific incidents — is what makes the analysis applicable to your stack.
1. Secrets in Build Logs
The most common CI/CD leak vector is build log output that captures secrets — tokens, API keys, source paths, internal URLs — through verbose logging, error backtraces, or debug output that was enabled temporarily and never disabled.
```yaml
# Dangerous — environment variables logged during verbose build
- name: Build
  run: |
    set -x  # This echoes every command, including ones that expand secrets
    npm run build
  env:
    API_KEY: ${{ secrets.API_KEY }}
```

With `set -x` enabled, every shell command is echoed to stdout before execution. A command like `curl -H "Authorization: Bearer $API_KEY"` becomes `curl -H "Authorization: Bearer sk-ant-..."` in the log output. If the build log is accessible to anyone with repository read access (the default for public repos and many private ones), the secret is exposed.
The supply chain angle: Build logs are often retained indefinitely and accessible to more principals than the secrets themselves. A secret in GitHub Actions secrets is scoped to authorised workflows. The same secret printed to a build log is accessible to anyone who can read the Actions run history.
Control: Secret scanning on log output. GitHub Actions automatically redacts registered secrets from logs, but only registered ones — dynamic values, derived tokens, and credentials fetched at runtime are not automatically redacted.
```yaml
# Safer — explicit redaction for runtime-fetched credentials
- name: Fetch and use token
  run: |
    TOKEN=$(aws secretsmanager get-secret-value --secret-id my-token --query SecretString --output text)
    echo "::add-mask::$TOKEN"  # Register for log redaction
    curl -H "Authorization: Bearer $TOKEN" https://api.example.com
```

2. Over-Permissioned Pipeline Tokens
GitHub Actions' `GITHUB_TOKEN`, GitLab's `CI_JOB_TOKEN`, and equivalent CI tokens are granted broader permissions than necessary in many configurations. Modern GitHub defaults the token to read-only for new repositories, but many organisations and legacy repositories still inherit `contents: write` as the default — and some workflow events (like `pull_request_target`) run in the base repository's context with a write-capable token even for pull requests from forks. A compromised workflow step with write permissions can push to the repository it's running in.
For private repositories holding proprietary source code, a workflow token that can read all repository contents is a credential that, if leaked or used by a malicious action, exfiltrates the codebase.
The minimal permissions model:
```yaml
# Declare minimal permissions at the workflow level
permissions:
  contents: read   # Read source only
  packages: write  # Publish to GHCR if needed
  id-token: write  # OIDC token for cloud auth (never long-lived keys)

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read  # Override at job level for even tighter scope
```

Locking `contents: read` at the workflow level means the workflow cannot push, cannot create releases, and cannot modify repository contents. If a malicious or compromised action step attempts to push code, it fails with a permissions error instead of silently succeeding.
3. Third-Party Actions Without Pinning
The GitHub Actions marketplace has thousands of community actions. Many are high-quality and widely used. Some are abandoned, some are compromised post-publication, and some were malicious from the start.
```yaml
# Dangerous — latest tag of a community action
- uses: some-org/some-action@main

# Safer — pinned to a specific commit SHA
- uses: some-org/some-action@a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2
```

When an action is referenced by tag or branch, the code it runs can change without your pipeline changing. An attacker who gains write access to `some-org/some-action` can push malicious code that exfiltrates `GITHUB_TOKEN`, environment variables, and repository contents to an external endpoint — and your pipeline will pick it up automatically on the next run.
Pinning to a commit SHA means you're running a specific, audited version. The SHA is immutable; if the upstream action is compromised, your pipeline is unaffected until you explicitly update the pin.
This is the supply chain attack vector that the tj-actions/changed-files incident in March 2025 exploited — a popular action was compromised to exfiltrate CI secrets from thousands of repositories. Pipelines pinned by SHA were unaffected.
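Keeping SHA pins current is tedious by hand, and automation closes the gap. As a sketch, a minimal Dependabot configuration that proposes pinned-action updates as reviewable pull requests (adjust the schedule to your needs):

```yaml
# .github/dependabot.yml — propose updates to pinned actions as pull requests
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

Each proposed update arrives as a diff against the pinned SHA, which preserves the review step the pinning strategy depends on.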
4. Build Artefacts Containing Source
Build pipelines that produce artefacts (Docker images, npm packages, binaries) can inadvertently include source code, build metadata, or internal configuration in the output.
Common patterns:
- Docker build context too broad: `COPY . /app` copies the entire repository into the image, including source files, test fixtures, `.env` files, and internal tooling
- Source maps in production bundles: JavaScript bundles with source maps embedded include the original TypeScript/JavaScript source
- Debug symbols in binaries: Go or C++ binaries compiled without stripping debug symbols contain function names, file paths, and sometimes string literals from source
```dockerfile
# Dangerous — copies everything
COPY . /app

# Safer — explicit copy of only what the runtime needs
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY src/ ./src/
COPY public/ ./public/
```

For Docker images specifically: scan the image layers after build to verify what's included. `docker history` shows what each layer added. Tools like dive provide a filesystem-level view of every layer.
For JavaScript bundles: strip source maps from production builds or host them in a separate, access-controlled location rather than embedding them in the public bundle.
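As a sketch, a pre-publish check along these lines can catch embedded source map references before they ship. The `dist/` path and `app.js` file are illustrative — the file is created here only to simulate a leaky bundle; point the grep at your real build output:

```shell
# Sketch: flag the release if any bundle references a source map.
# dist/ and app.js are placeholders — use your actual build output directory.
mkdir -p dist
printf '//# sourceMappingURL=app.js.map\n' > dist/app.js   # simulated leaky bundle
if grep -rl "sourceMappingURL" dist/; then
  echo "refusing to publish: source map reference found"
fi
```

In a real pipeline this step would exit non-zero on a match so the publish job fails rather than just logging a warning.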
5. Insecure Caching of Build Artefacts
CI caches (GitHub Actions actions/cache, GitLab pipeline caches) persist between runs and are often shared more broadly than developers expect. A cache key collision can cause one branch's build cache to be restored by another branch's workflow — potentially exposing data from a protected branch to a less-protected one.
```yaml
# Potentially unsafe — cache shared across branches
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('package-lock.json') }}

# Safer — include branch in cache key to prevent cross-branch leakage
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ github.ref_name }}-${{ hashFiles('package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-${{ github.ref_name }}-
```
13 ${{ runner.os }}-node-${{ github.ref_name }}-For sensitive builds (proprietary source, signed artefacts), disable caching entirely or use environment-specific cache isolation.
The Supply Chain Controls That Would Have Helped
Mapping the failure modes above to the supply chain security stack covered in Supply Chain Security Tools for Kubernetes:
Trivy / secret scanning: Most CI platforms offer secret scanning on repository contents, but this doesn't catch secrets in build log output or runtime environment. Tools like truffleHog and gitleaks can be run against repository history to find historical secret exposure.
SLSA provenance: If the build pipeline generates signed provenance (who built this artefact, from which source commit, on which infrastructure), a leaked artefact can be traced back to its exact build origin. This doesn't prevent the leak but is essential for incident response — knowing exactly which build produced a leaked artefact tells you what else might have been exposed.
Cosign image signing: A signed container image proves the image came from your pipeline and hasn't been tampered with. If source was unintentionally included in an image layer, signed provenance provides an audit trail — and Kyverno admission control ensures only signed images run in production, which limits the blast radius of a tampered image.
Kyverno / OPA policies: While policy engines focus on Kubernetes admission, the same policy-as-code approach applies to CI pipeline configuration. Tools like checkov and semgrep can scan GitHub Actions workflows for the misconfigurations above — unpinned actions, missing permission declarations, dangerous shell flags — before they reach production.
The CI/CD Security Checklist
Apply this to every pipeline that handles proprietary source, customer data, or deployment credentials:
Permissions:
- `permissions:` declared at workflow and job level, minimal for the job's purpose
- `contents: write` only on workflows that explicitly need to push (releases, changelog updates)
- No `GITHUB_TOKEN` with `actions: write` unless the workflow needs to trigger other workflows

Secrets handling:
- No `set -x` in shell steps that expand secret environment variables
- Runtime-fetched credentials registered with `::add-mask::` before use
- Secret rotation schedule exists and is enforced
- No long-lived tokens — OIDC federated auth (GitHub Actions → AWS/GCP/Azure) preferred

Third-party actions:
- All community actions pinned to commit SHA, not tag
- Pinned SHAs reviewed when updated (check the commit diff)
- `actions/checkout`, `actions/setup-node`, etc. pinned to official releases by SHA
- Automated tools (Dependabot, Renovate) configured to propose SHA-pinned updates

Build artefacts:
- Docker images built with explicit `COPY` instructions, not `COPY . /app`
- `.dockerignore` excludes `.env`, `.git`, `test/`, `docs/`, `*.md`, and local credentials
- No source maps in production JavaScript bundles (or hosted separately)
- Image layers reviewed post-build with `dive` or `docker history`
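A minimal `.dockerignore` covering those exclusions might look like the following — extend it with any repository-specific credential or tooling paths:

```
# .dockerignore — keep VCS history, secrets, and dev-only files out of the build context
.git
.env
test/
docs/
*.md
```

Note that `.dockerignore` trims the build context itself, so excluded files never reach the Docker daemon — a stronger guarantee than relying on careful `COPY` instructions alone.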
Caching:
- Cache keys include branch/environment to prevent cross-context leakage
- No sensitive build artefacts in cache paths
Monitoring:
- Build log retention policy reviewed — logs containing secrets should not be retained indefinitely
- Alert on unexpected network connections from CI runners (a compromised action exfiltrating data will make outbound HTTP calls)
The Broader Point
Supply chain security incidents, like most CI/CD security events, aren't failures of intent — the engineers building the pipeline weren't trying to expose source code. They're failures of default settings and accumulated assumptions: that build logs aren't that sensitive, that popular community actions are safe, that the CI token is scoped appropriately.
Supply chain security investments pay off exactly here. SLSA provenance, image signing, and CI permission minimisation don't require a security team or a compliance budget — they require treating the pipeline with the same scrutiny you apply to production infrastructure.
The pipeline that builds and deploys your production system is part of your production system. Its access to secrets, source code, and deployment credentials makes it a higher-value target than most of the services it deploys.
Frequently Asked Questions
Are GitHub Actions more or less secure than self-hosted CI runners?
GitHub-hosted runners are ephemeral (fresh VM per job) and isolated, which prevents credential persistence across runs. Self-hosted runners introduce the attack surface of the runner host itself — persistent state, network access from within your infrastructure, and credential files left on disk. For sensitive builds, GitHub-hosted runners with minimal permissions are often more secure despite the trust they require from GitHub as an infrastructure provider.
How do I audit which actions my pipelines use and whether they're pinned?
```shell
# Find all action references in your workflows
grep -r "uses:" .github/workflows/ | grep -v "#"

# Find unpinned actions (refs by tag, not SHA)
grep -r "uses:" .github/workflows/ | grep -v "@[a-f0-9]\{40\}"
```

Tools like zizmor and actionlint do deeper static analysis of GitHub Actions workflow files, flagging security issues including unpinned actions, dangerous permissions, and injection vulnerabilities.
What's the risk of a workflow that has contents: read but also id-token: write?
`id-token: write` allows the workflow to request an OIDC token from GitHub. If that token is used to authenticate to a cloud provider (AWS, GCP, Azure) with broad permissions, the workflow has significant cloud access even without repository write permissions. Scope the IAM role the OIDC token assumes to the minimum the workflow needs, just as you would for any service account.
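Concretely, that scoping lives in the cloud-side trust policy. For AWS, for example, the role's trust policy can restrict the OIDC `sub` claim to a single repository and branch — the account ID, org, repo, and branch names below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

With this condition in place, an OIDC token minted by any other repository or branch cannot assume the role, even if the workflow requesting it has `id-token: write`.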
Should I use Dependabot or Renovate to manage action pins?
Both work. Renovate is more configurable — you can set it to update action pins with semantic grouping, hold back updates until they've been available for N days, and require specific reviewers for security-sensitive dependencies. For teams already using Renovate for npm/Go/Python dependency management, adding GitHub Actions support is a small additional configuration.
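For Renovate, the built-in preset for pinning actions to digests is often all that's needed — shown here with the recommended base config as a sketch; repository-specific rules would layer on top:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:recommended",
    "helpers:pinGitHubActionDigests"
  ]
}
```

The preset rewrites tag references to commit SHAs (keeping the version as a trailing comment) and then proposes digest updates as PRs on the normal schedule.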
For the broader supply chain security toolset for Kubernetes, see Supply Chain Security Tools for Kubernetes: What to Use and When.
Reviewing your CI/CD pipeline for supply chain risks? Talk to us at Coding Protocols — we help platform teams build pipelines that are defensible by default, not just in principle.

