Automating Trust: A Guide to CI/CD Pipeline Certificate Integration in the Age of 90-Day Validity


Tim Henrich
February 16, 2026
7 min read

The era of the "set it and forget it" SSL certificate is officially over. With Google’s proposal to reduce public TLS certificate validity from 398 days to just 90 days, and internal microservices demanding Just-in-Time (JIT) identities, the manual management of machine identities has become a mathematical impossibility.

For DevOps engineers and security architects, this shift presents a stark choice: automate the certificate lifecycle within the CI/CD pipeline, or face a future of constant outages and "Shadow PKI."

In this guide, we will explore how to architect a "Zero-Touch" PKI integration for your pipelines. We will move beyond theory into practical implementation using industry-standard tools like HashiCorp Vault and Kubernetes, and discuss why external monitoring remains your critical safety net.

The "Wildcard Nightmare": A Case for Automation

To understand the necessity of pipeline integration, we must first look at the failure mode of the traditional approach. Consider the scenario of a hypothetical FinTech startup scaling its microservices architecture.

The Old Workflow:
The Operations team manually generates a wildcard certificate (*.fintech-internal.net) valid for two years. This .p12 file is emailed to developers or uploaded to a shared password manager. Developers then "bake" this certificate into their Docker images or mount it manually as a Kubernetes secret across 50 different microservices.

The Incident:
A developer’s laptop is compromised. The attacker extracts the wildcard private key. Because this single key underpins the trust model for the entire internal network, the Security team is forced to revoke it immediately.

The Consequence:
Revoking the wildcard cert triggers a catastrophic outage. Every service relying on that manual file injection fails simultaneously. The team spends 48 hours manually re-issuing and re-deploying certificates to every service.

This scenario highlights the two deadly sins of modern PKI: Long-lived certificates and poor isolation. The solution is to treat certificates not as static assets, but as ephemeral resources requested, used, and discarded by the pipeline.

The Architecture: The "Broker" Model

The gold standard for CI/CD certificate integration is the Broker Model. In this architecture, the CI runner (e.g., GitHub Actions, GitLab Runner) never generates a private key locally using OpenSSL. Instead, it authenticates against a centralized Secrets Manager which acts as an Intermediate CA.

The Workflow

  1. Authentication: The CI job authenticates to the Secrets Manager using its machine identity (OIDC).
  2. Request: The pipeline requests a certificate for a specific scope (e.g., payment-service).
  3. Generation: The Secrets Manager generates the key pair and signs the certificate.
  4. Injection: The certificate and key are injected into the build environment as environment variables (in memory).
  5. Deployment: The assets are deployed to the target environment (e.g., Kubernetes Secrets).
  6. Destruction: The CI environment is torn down; the private key never persists on the runner.

Implementation Guide: HashiCorp Vault & GitHub Actions

Let’s implement this using HashiCorp Vault, the industry standard for secrets management, and GitHub Actions.

Step 1: Configure the Vault PKI Engine

First, your security team configures Vault to issue certificates. This ensures that policy (key length, algorithm) is enforced centrally.

# Enable the PKI secrets engine
vault secrets enable pki

# Tune the engine for 1-year max TTL (internal CA)
vault secrets tune -max-lease-ttl=8760h pki

# Generate the root certificate (or import your own)
vault write pki/root/generate/internal \
    common_name=my-internal-ca.com \
    ttl=8760h

# Create a role for the web service
vault write pki/roles/web-service \
    allowed_domains="myservice.internal" \
    allow_subdomains=true \
    max_ttl="72h"

Note the max_ttl="72h". We are deliberately moving to short-lived certificates to reduce the blast radius of a compromise.

Step 2: The CI/CD Pipeline Configuration

In your GitHub Actions workflow, avoid using long-lived API tokens. Instead, use OIDC to authenticate the runner with Vault.

name: Deploy Service with Certs

on:
  push:
    branches: [ main ]

permissions:
  id-token: write # Required for OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Authenticate to Vault via OIDC
        uses: hashicorp/vault-action@v2
        with:
          url: https://vault.example.com
          method: jwt
          role: github-runner
          exportToken: true # Exposes VAULT_TOKEN to subsequent steps

      - name: Deploy to Kubernetes
        env:
          KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
        run: |
          # Issue a short-lived certificate. pki/issue is a write endpoint,
          # so we call the Vault HTTP API directly.
          RESP=$(curl -sf --request POST \
            --header "X-Vault-Token: $VAULT_TOKEN" \
            --data '{"common_name": "api.myservice.internal"}' \
            https://vault.example.com/v1/pki/issue/web-service)

          # Extract cert and key from the JSON response (in memory)
          echo "$RESP" | jq -r .data.certificate > tls.crt
          echo "$RESP" | jq -r .data.private_key > tls.key

          # Create/Update the Kubernetes Secret
          kubectl create secret tls app-tls \
            --cert=tls.crt \
            --key=tls.key \
            --dry-run=client -o yaml | kubectl apply -f -

          # Clean up immediately
          rm tls.crt tls.key

Why this works: The developer never sees the private key. It is generated on-the-fly, injected, deployed, and the pipeline cleans up after itself.
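For reference, Vault's pki/issue endpoint returns JSON in which the certificate and key live under a data object (a truncated mock is shown here for illustration; real responses also include fields such as issuing_ca, ca_chain, and serial_number):

```shell
# Abridged mock of Vault's response to POST /v1/pki/issue/<role>
RESP='{"data":{"certificate":"-----BEGIN CERTIFICATE-----\n...","private_key":"-----BEGIN RSA PRIVATE KEY-----\n..."}}'

# jq selectors that pull out each PEM blob:
echo "$RESP" | jq -r '.data.certificate' | head -n 1   # -----BEGIN CERTIFICATE-----
echo "$RESP" | jq -r '.data.private_key' | head -n 1   # -----BEGIN RSA PRIVATE KEY-----
```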

The Kubernetes Approach: Cert-Manager

If you are deploying exclusively to Kubernetes, you can offload the lifecycle management entirely from the CI pipeline to the cluster itself using cert-manager.

In this model, the CI pipeline deploys a Certificate manifest, and the cluster controller handles the request, issuance, and rotation.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payment-service-cert
  namespace: production
spec:
  secretName: payment-service-tls
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days
  subject:
    organizations:
      - My Company
  commonName: payments.company.internal
  dnsNames:
    - payments.company.internal
  issuerRef:
    name: vault-issuer
    kind: ClusterIssuer

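The issuerRef above points at a vault-issuer ClusterIssuer. A minimal sketch of that resource, assuming Vault's Kubernetes auth method is enabled (the server URL, role, and secret names below are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    server: https://vault.example.com
    path: pki/sign/web-service   # Vault signs CSRs generated in-cluster
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        secretRef:
          name: cert-manager-vault-token
          key: token
```

Because cert-manager generates the CSR inside the cluster, the private key never leaves it; Vault only ever sees the signing request.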
This is often preferred for "set and forget" operations. However, it introduces a new risk: Silent Failure.

The Safety Net: Why You Still Need External Monitoring

Automation is powerful, but it is not infallible. A common anti-pattern in DevOps is assuming that because a certificate is set to auto-rotate, it will auto-rotate.

The "Zombie Certificate" Problem

Consider these common failure scenarios in automated pipelines:
1. The Reload Failure: cert-manager successfully renews the secret in Kubernetes, but the Nginx Ingress Controller fails to reload the configuration. The application continues serving the old, expired certificate despite the new one existing on disk.
2. The Pipeline Break: An upstream change in the Vault policy breaks the CI pipeline's ability to request certs. The failure is buried in a log file nobody checks until the current cert expires.
3. The CDN Gap: Your origin server rotates correctly, but the CDN (Content Delivery Network) terminating TLS at the edge continues serving the old certificate.

This is where external validation becomes mandatory. You cannot rely solely on your internal tools to report their own health. You need an observer that mimics the user's perspective.
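A manual spot-check mirrors what an external monitor does on every scan. The sketch below generates a throwaway self-signed certificate so it runs anywhere; against a live endpoint you would instead pipe openssl s_client -connect host:443 </dev/null into the same x509 command:

```shell
# Create a throwaway self-signed cert (demo.local is a placeholder)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo.local" -keyout demo.key -out demo.crt 2>/dev/null

# What a monitor inspects: the expiry date and the fingerprint.
# A changed fingerprint means the cert was rotated -- or replaced.
openssl x509 -in demo.crt -noout -enddate -fingerprint -sha256
```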

Integrating Expiring.at

Tools like Expiring.at provide this critical layer of "trust but verify." By monitoring the public-facing (or internal) endpoints, you validate the actual certificate being presented to the client, not just the one sitting in your secrets store.

A robust monitoring strategy includes:
* Daily Scans: Checking all endpoints for expiry dates.
* Chain Validation: Ensuring the intermediate chain hasn't been broken during a rotation.
* Change Detection: Alerting when a certificate changes (fingerprint change), which helps detect unauthorized replacements or "Shadow PKI."

Best Practices for 2026

To wrap up, here are the actionable best practices for integrating certificates into your SDLC:

  1. Stop Baking Certs into Images: Never include .pem or .p12 files in your Docker images. It makes rotation impossible without a full redeploy and exposes keys if the registry is public. Use runtime injection (CSI drivers or Environment Variables).
  2. Lint for Secrets: Use pre-commit hooks like TruffleHog or GitLeaks to block any commit that looks like a private key.
  3. Standardize on ACME: Even for internal CAs, use the ACME protocol. It provides a uniform interface for developers, whether the backend is Let's Encrypt, Vault, or a private CA.
  4. Shorten Lifespans: Move internal certificate validity from years to days (or hours). If a key is stolen, it should be useless by the time the attacker figures out how to use it.
  5. Separate Duties: The developer writing the code should not have the ability to mint the production identity. The CI pipeline acts as the trusted gatekeeper.
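For practice #2, the pre-commit wiring can be as small as the following sketch (the pinned rev is an example; verify and pin whichever GitLeaks release you adopt):

```yaml
# .pre-commit-config.yaml -- block commits that contain private keys or secrets
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks
```

Run pre-commit install once per clone, and every commit is scanned before it ever reaches the remote.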

Conclusion

The transition to automated certificate management is no longer just a "nice to have"—it is a requirement for survival in a world of 90-day public certificates and ephemeral microservices. By leveraging the Broker Model with tools like Vault and cert-manager, you can achieve "Invisible Security," where developers focus on code, and the pipeline handles trust.

However, never confuse automation with infallibility. As you build these pipelines, ensure you have a robust monitoring solution like Expiring.at to verify that your automation is delivering the security promises you’ve made to your users.

Ready to secure your automated infrastructure? Start by auditing your current certificate inventory and identifying the "long-lived" liabilities in your stack today.
