Beyond Cron Jobs: The Definitive Guide to CI/CD Pipeline Certificate Integration


Tim Henrich
February 03, 2026
7 min read


An expired certificate is no longer just an IT inconvenience; it's a public-facing failure that erodes customer trust and can halt business operations entirely. In 2023, a major UK broadcaster's services went off-air due to a single expired TLS certificate. This isn't a rare occurrence. It's a symptom of a systemic problem: treating certificate management as a manual, periodic task in an era of hyper-automated, continuous delivery.

The days of setting a calendar reminder to renew a certificate are over. In today's dynamic, cloud-native world, infrastructure is ephemeral, services scale in seconds, and developer velocity is paramount. The only way to keep pace is to embed security directly into the software delivery lifecycle.

This guide moves beyond simple renewal scripts and dives deep into modern, automated certificate integration within your CI/CD pipelines. We'll explore the tools, architectures, and best practices that transform certificate management from a high-risk bottleneck into a seamless, secure, and fully automated component of your DevSecOps strategy.

The Ticking Clock: Why Manual Certificate Management is Obsolete

The shift towards automation isn't just a trend; it's a necessity driven by fundamental changes in how we build and deploy software. The old ways of managing machine identities are cracking under the pressure of modern development paradigms.

The Unsustainable Burden of Short-Lived Certificates

The industry standard for TLS certificate validity has been shrinking for years. What was once a 3-year standard became 1 year, and now, thanks to widespread adoption of the ACME protocol, 90-day certificates from providers like Let's Encrypt are commonplace. For internal services, leading organizations push this window even further, issuing certificates valid for only 24-72 hours.

This acceleration is a massive security win. A shorter validity period drastically reduces the risk window if a private key is ever compromised. However, it creates an impossible situation for manual management. The 2023 Keyfactor State of Machine Identity Management Report revealed that a staggering 55% of organizations still use spreadsheets to track certificates. Trying to manage thousands of certificates with 90-day, 7-day, or 1-day lifespans in a spreadsheet is not just inefficient; it's a guaranteed recipe for an outage.

Secret Sprawl: Your CI/CD Variables Are Not a Vault

A common anti-pattern is to treat CI/CD platform variables as a secure vault. Developers, needing to get a feature shipped, might generate a key and certificate and paste them into environment variables in Jenkins, GitLab CI, or GitHub Actions.

This creates "secret sprawl," a dangerous situation where sensitive private keys are scattered across multiple systems with varying levels of access control. These keys are often long-lived, rarely rotated, and easily exposed through accidental logging or overly permissive access rights. A single compromised developer account could give an attacker access to keys for critical production services.

The Modern DevSecOps Toolkit: A Comparison of Automated Approaches

To solve these challenges, a new ecosystem of tools has emerged, each designed to automate a specific aspect of certificate and identity management within the pipeline. The modern approach isn't about finding a single tool but composing a toolchain that fits your architecture.

For Public Endpoints: The ACME Protocol and cert-manager

For any internet-facing service, the Automated Certificate Management Environment (ACME) protocol is the undisputed standard. It allows an automated client to prove ownership of a domain to a Certificate Authority (CA) and obtain a trusted TLS certificate.

In the Kubernetes ecosystem, cert-manager is the de facto tool for this job. It runs as a controller within your cluster, watching for specific Kubernetes resources (like Ingress objects) and automatically handling the entire ACME lifecycle—issuance, renewal, and cleanup—without any human intervention.
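As a sketch of how little configuration this takes, a minimal ClusterIssuer pointed at Let's Encrypt might look like the following (the email address and ingress class name are placeholders for your own values):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com
    # Secret where cert-manager stores the ACME account key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
```

Once this resource exists, any Ingress annotated with the issuer's name gets a certificate provisioned and renewed automatically.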

For Internal Services: The Self-Service PKI with HashiCorp Vault

While ACME is perfect for public services, it's not suitable for the thousands of internal microservices that need to communicate securely via mutual TLS (mTLS). For this, you need an internal Public Key Infrastructure (PKI).

Tools like HashiCorp Vault and Smallstep provide powerful API-driven PKI engines. You can configure your own internal Certificate Authority and establish policies that define who (or what) can request certificates, for which domains, and with what parameters. This enables a self-service model where a CI/CD pipeline can programmatically request a short-lived certificate for a new service on-demand, completely removing the security team as a manual gatekeeper.
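As a rough sketch, standing up such a policy-constrained PKI role in Vault takes only a few commands (the pki_int mount and my-app-role name here match the pipeline example later in this guide; the root and intermediate CA setup is omitted for brevity):

```shell
# Enable a PKI secrets engine for the intermediate CA, capped at short TTLs
vault secrets enable -path=pki_int pki
vault secrets tune -max-lease-ttl=72h pki_int

# Role that constrains what certificates callers can request:
# only cluster-internal names, valid for at most 72 hours
vault write pki_int/roles/my-app-role \
    allowed_domains="svc.cluster.local" \
    allow_subdomains=true \
    max_ttl="72h"
```

With this in place, any authenticated client with the right policy can mint its own short-lived certificate, and the constraints live in the role rather than in a human approval step.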

For Workload Identity: The Future with SPIFFE/SPIRE

In highly dynamic environments like Kubernetes, IP addresses are meaningless for identity. A pod's IP can change every time it's rescheduled. The modern solution is cryptographic workload identity.

The SPIFFE (Secure Production Identity Framework for Everyone) standard and its production-ready implementation, SPIRE, provide a framework for issuing strong, verifiable identities to every single workload. These identities are expressed as short-lived X.509 certificates (called SVIDs). A CI/CD pipeline doesn't just deploy a service; it bootstraps the service with an identity that it can use to securely communicate with other SPIFFE-aware services, moving beyond simple domain-based certificates to true application-layer identity.
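To illustrate what the workload actually sees: a SPIFFE ID takes the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;, and a container fetches its current SVID from the node-local SPIRE agent rather than managing key material itself (the socket path below is SPIRE's conventional default):

```shell
# Fetch this workload's X.509 SVID from the local SPIRE agent.
# SPIRE rotates SVIDs automatically, so workloads re-fetch
# rather than renew.
spire-agent api fetch x509 \
    -socketPath /run/spire/sockets/agent.sock \
    -write /tmp/svid/
```

The agent attests the calling process before answering, so the identity is tied to the workload itself, not to an IP address or a deployed secret.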

For Supply Chain Security: The Rise of Sigstore

Securing the application isn't enough; you must also secure the artifacts that build it. High-profile supply chain attacks have made code and artifact signing a non-negotiable stage in modern CI/CD.

The Linux Foundation's Sigstore project has emerged as the standard for solving this problem. Using its cosign tool, a pipeline can sign container images, binaries, and other artifacts. The magic of Sigstore is its use of a free, public CA called Fulcio, which issues short-lived signing certificates based on an OpenID Connect (OIDC) identity. This means your pipeline can prove it was, for example, "the GitHub Actions job for my-org/my-repo on the main branch" at the time of signing, creating a verifiable and transparent chain of custody without the hassle of managing long-lived signing keys.
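As a sketch of what keyless signing looks like in a pipeline, a cosign step might read as follows (the image reference and the verification identity values are placeholders for your own registry and CI provider):

```shell
# Keyless sign: cosign uses the job's OIDC identity to obtain a
# short-lived certificate from Fulcio, signs the image, and records
# the signature in the Rekor transparency log
cosign sign --yes registry.example.com/my-org/my-app:1.2.3

# Verification pins the expected CI identity instead of a public key
cosign verify \
    --certificate-identity-regexp "https://github.com/my-org/my-repo/.*" \
    --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
    registry.example.com/my-org/my-app:1.2.3
```

No signing key is ever generated, stored, or rotated; the proof of origin is the CI job's OIDC identity at signing time.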

Putting It All Together: A Real-World Implementation

Let's walk through a concrete example of a fully automated workflow using GitLab CI, Kubernetes, HashiCorp Vault, and cert-manager.

The Goal: When a developer merges code to the main branch, the CI/CD pipeline will build a container image, request a short-lived mTLS certificate for internal communication, and deploy the service to Kubernetes, where cert-manager will provision a public TLS certificate.

The Architecture Overview

  1. GitLab CI: Orchestrates the pipeline.
  2. Kubernetes: The target deployment environment.
  3. HashiCorp Vault: Acts as the internal CA and secrets manager. It's configured to trust GitLab's OIDC provider.
  4. cert-manager: Runs in Kubernetes to manage public certificates from Let's Encrypt.

Step 1: Authenticating Your CI/CD Job with JWT/OIDC

First, we eliminate static secrets. The GitLab Runner authenticates to Vault with a short-lived ID token — a JSON Web Token (JWT) that GitLab mints for each job via the id_tokens keyword. (The older CI_JOB_JWT variable was deprecated and removed in GitLab 17.0.)

In Vault, you configure a JWT auth role that trusts GitLab:

# Enable the JWT auth method
vault auth enable jwt

# Configure Vault to trust GitLab's OIDC provider
vault write auth/jwt/config \
    oidc_discovery_url="https://gitlab.com" \
    bound_issuer="https://gitlab.com"

# Create a role that maps a GitLab project to a Vault policy
vault write auth/jwt/role/my-app-role \
    role_type="jwt" \
    user_claim="user_email" \
    bound_audiences="https://vault.example.com" \
    bound_claims_type="glob" \
    bound_claims='{"project_path": "my-group/my-app", "ref_type": "branch", "ref": "main"}' \
    policies="pki-my-app-policy" \
    ttl="15m"

This role specifies that only jobs running on the main branch of the my-group/my-app project can assume it, and the Vault token those jobs receive is valid for just 15 minutes.

Step 2: Requesting an Internal Certificate from Vault

Now, your .gitlab-ci.yml can request an ID token and exchange it for a Vault token to fetch secrets and certificates. Note that the token's aud claim must match the bound_audiences value configured on the Vault role:

deploy:
  stage: deploy
  image: alpine:latest
  id_tokens:
    VAULT_ID_TOKEN:
      aud: "https://vault.example.com"
  before_script:
    - apk add --no-cache curl jq vault
  script:
    # 1. Authenticate to Vault using the job's OIDC ID token
    - export VAULT_ADDR="https://vault.example.com"
    - VAULT_TOKEN=$(vault write -field=token auth/jwt/login role=my-app-role jwt=$VAULT_ID_TOKEN)
    - export VAULT_TOKEN

    # 2. Request a short-lived mTLS certificate from Vault's PKI engine
    - CERT_DATA=$(vault write -format=json pki_int/issue/my-app-role common_name="my-app-internal.svc.cluster.local" ttl="72h")

    # 3. Extract the certificate, key, and CA chain
    - echo "$CERT_DATA" | jq -r '.data.certificate' > tls.crt
    - echo "$CERT_DATA" | jq -r '.data.private_key' > tls.key
    - echo "$CERT_DATA" | jq -r '.data.issuing_ca' > ca.crt

    # 4. Create (or update) the Kubernetes secret and deploy the application
    - kubectl create secret tls my-app-mtls-secret --cert=tls.crt --key=tls.key --dry-run=client -o yaml | kubectl apply -f -
    # ... rest of deployment script (e.g., kubectl apply -f deployment.yaml) ...

In this job, the private key for the mTLS certificate exists only for the duration of the script's execution within the secure runner environment. It is injected straight into a Kubernetes secret and is never logged or stored as a pipeline artifact.

Step 3: Automating Public Certificates with cert-manager

The final piece is the public-facing certificate. Your application's deployment manifest includes an Ingress resource with special annotations that cert-manager understands.

ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    # Tell cert-manager to use the 'letsencrypt-prod' issuer
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  rules:
  - host: "my-app.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80
  tls:
  - hosts:
    - "my-app.example.com"
    # cert-manager will create and populate this secret
    secretName: my-app-public-tls-secret

When the pipeline runs kubectl apply -f ingress.yaml, cert-manager sees the new Ingress, requests a certificate from Let's Encrypt, completes the ACME challenge, and stores the resulting key pair in my-app-public-tls-secret. From that point on, it renews the certificate automatically before expiry — with no pipeline or human involvement required.
