The 90-Day Countdown: Unifying Certificate Management Across Multi-Cloud Environments


Tim Henrich
March 16, 2026


The landscape of Public Key Infrastructure (PKI) and certificate management is undergoing a seismic shift. For years, managing TLS/SSL certificates was treated as an annual operational chore—a task relegated to spreadsheet tracking and calendar reminders. Today, driven by shrinking lifespans, the threat of quantum computing, and the exponential explosion of machine identities across distributed clouds, manual certificate management is no longer just inefficient; it is a critical vulnerability.

According to the Ponemon Institute, the average cost of a certificate-related outage is now over $300,000 per hour. In a multi-cloud architecture where microservices span across AWS, Azure, and Google Cloud Platform (GCP), a single expired certificate can trigger a cascading failure that takes down entire production environments.

With Google’s Chrome Root Program proposing a reduction in the maximum validity of public TLS certificates from 398 days to just 90 days, the industry is facing a hard truth: automation is no longer optional. It is a mathematical necessity.

In this comprehensive guide, we will explore the unique challenges of multi-cloud certificate management, outline the three pillars of modern PKI architecture, and provide a technical deep dive into automating your certificate lifecycles using industry-standard tools.


The Multi-Cloud Chaos: Why Native Tools Are Breaking Your Infrastructure

Modern DevOps teams rarely operate in a single cloud. You might host your frontend on AWS, run your data analytics in GCP, and maintain legacy Active Directory infrastructure in Azure. While each cloud provider offers its own robust certificate management tools—such as AWS Certificate Manager (ACM), Azure Key Vault, and Google Certificate Authority Service (CAS)—relying strictly on these native tools introduces severe architectural flaws.

The "Swivel Chair" Problem

Cloud-native PKI tools do not communicate with each other. If your security team needs to audit all active certificates, they are forced into a "swivel chair" workflow: logging into the AWS console, exporting a list, logging into Azure, exporting another list, and attempting to reconcile them manually. This siloed visibility creates massive blind spots. An internal API certificate expiring in Azure might suddenly sever communication with a critical microservice hosted in AWS, and because the monitoring is fragmented, root-cause analysis becomes incredibly difficult.
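The reconciliation step itself is trivial once the data lives in one place, which is exactly the point: the hard part is the swivel-chair export, not the merge. A toy sketch of that merge, with made-up field names (real exports from ACM, Key Vault, and CAS each use different schemas):

```python
# Toy per-cloud exports; real field names differ between ACM,
# Azure Key Vault, and GCP CAS -- these are illustrative only.
aws_certs = [{"domain": "api.example.com", "expires": "2026-04-01"}]
azure_certs = [{"domain": "auth.example.com", "expires": "2026-03-20"}]
gcp_certs = [{"domain": "data.example.com", "expires": "2026-03-20"}]

def unify(*inventories):
    """Merge per-cloud certificate lists into one expiry-sorted view."""
    merged = [cert for inv in inventories for cert in inv]
    return sorted(merged, key=lambda c: c["expires"])  # ISO dates sort lexically

for cert in unify(aws_certs, azure_certs, gcp_certs):
    print(cert["domain"], cert["expires"])
```

Normalizing each cloud's export into this shape is where the real engineering effort goes; the unified view is what the native consoles cannot give you.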

Shadow IT and Rogue Certificates

In fast-paced CI/CD environments, developers need certificates instantly to test new services. When centralized PKI processes are too slow, engineers often spin up self-signed certificates or use unauthorized Certificate Authorities (CAs) for quick testing. These "rogue" certificates inevitably make their way into production, bypassing corporate security policies and creating unmonitored expiration time bombs.

Vendor Lock-in

Relying solely on a specific cloud provider's native PKI tools makes migrating workloads between clouds painful. The cryptographic trust anchors become tied to a specific vendor. If you want to move a Kubernetes cluster from Amazon EKS to Google GKE, you must completely re-architect your certificate issuance pipeline.


The Machine Identity Boom and the 90-Day Mandate

To understand the urgency of multi-cloud certificate management, we must look at two converging trends: machine identities and shrinking lifespans.

According to Gartner, machine identities—containers, microservices, APIs, and virtual machines—now outnumber human identities by a staggering ratio of 45:1. In a Zero Trust multi-cloud environment, every single one of these machines requires a cryptographic identity (a certificate) to authenticate via mutual TLS (mTLS).

Simultaneously, the lifespan of these certificates is plummeting. The pending Google Chrome proposal to cap public certificate lifespans at 90 days means that IT teams will have to renew certificates roughly four times as often. For a mid-sized enterprise managing 10,000 certificates, a 90-day lifespan translates to over 100 certificate renewals every single day.
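That arithmetic is easy to sanity-check. A minimal sketch, using the figures quoted in this section:

```python
# Back-of-the-envelope renewal load under shrinking lifespans.
def renewals_per_day(total_certs: int, lifespan_days: int) -> float:
    """Average daily renewals if every certificate renews once per lifespan."""
    return total_certs / lifespan_days

# 10,000 certificates at today's 398-day maximum vs. a 90-day maximum
print(round(renewals_per_day(10_000, 398), 1))  # roughly 25 per day
print(round(renewals_per_day(10_000, 90), 1))   # roughly 111 per day
```

And 111 renewals per day is a floor, not a ceiling: renewing ahead of expiry (as any sane policy requires) compresses the effective lifespan further.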

Manual renewal workflows, calendar reminders, and spreadsheet tracking are mathematically non-viable in this new reality. You must transition to a fully automated lifecycle.


The 3 Pillars of Multi-Cloud PKI

To build a robust, future-proof multi-cloud certificate management pipeline, engineering teams must architect their systems around three core pillars: Centralized Visibility, Automated Execution, and Crypto-Agility.

Pillar 1: Centralized Visibility (Decentralized Execution)

You cannot secure or renew what you cannot see. The first step is establishing a "single pane of glass" that discovers and monitors all certificates across AWS, Azure, GCP, and on-premises environments.

This is where dedicated monitoring tools become invaluable. Using a platform like Expiring.at allows teams to consolidate expiration tracking across the entire multi-cloud fleet. By decoupling the monitoring of certificates from the issuance of certificates, you empower your security team to maintain global oversight while allowing decentralized DevOps teams to continue using their preferred local automation tools.
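The core primitive behind any such monitoring is simple to sketch. The following illustrative example uses only Python's standard library; it is not Expiring.at's API, just the underlying idea of fetching a leaf certificate over TLS and computing days to expiry:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Days remaining, given a notAfter string as rendered by
    ssl.getpeercert(), e.g. 'Jun  1 12:00:00 2026 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_endpoint(host: str, port: int = 443) -> int:
    """Fetch the leaf certificate from a live endpoint and report
    days to expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_until_expiry(tls.getpeercert()["notAfter"])
```

A fleet-wide monitor is this loop run against every endpoint in every cloud, with alerting thresholds layered on top; the operational value of a dedicated platform is the discovery, deduplication, and alert routing around it.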

Pillar 2: Automated Execution via ACME

Proprietary APIs are the enemy of multi-cloud portability. Instead of writing custom scripts for different cloud providers, organizations must standardize on the ACME protocol (RFC 8555).

The Automated Certificate Management Environment (ACME), popularized by Let's Encrypt, allows for automated domain validation and certificate issuance. By deploying ACME clients across your multi-cloud infrastructure, you can fully automate the request, deployment, and renewal phases without human intervention.
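In Kubernetes, for example, cert-manager speaks ACME natively. A sketch of a ClusterIssuer pointed at the Let's Encrypt staging directory (the issuer name, email, and Secret name are placeholders, and the HTTP-01 solver assumes an nginx ingress controller):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint; switch to the production directory URL
    # once validation succeeds
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: pki-team@example.com
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
```

Because ACME is an open standard, this same issuer definition works identically whether the cluster runs in EKS, GKE, or AKS.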

Pillar 3: Crypto-Agility and PQC Readiness

In August 2024, the National Institute of Standards and Technology (NIST) finalized its first set of Post-Quantum Cryptography (PQC) standards (FIPS 203, 204, and 205). Quantum computers capable of breaking traditional RSA and ECC encryption are on the horizon.

Crypto-agility is the ability to rapidly swap out legacy cryptographic algorithms for quantum-resistant ones without breaking your multi-cloud infrastructure. By abstracting the certificate issuance process from the underlying CA, you ensure that when the time comes to migrate to PQC, you can rotate all certificates via a central policy engine rather than rewriting application code.
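Applied to the Vault setup described in the deep dive below, crypto-agility looks like a policy change rather than a code change. As an illustrative sketch, moving an issuance role from RSA to ECDSA P-384 is a matter of editing the role's key parameters; once CAs and clients support the NIST PQC algorithms, the migration would follow the same pattern:

```hcl
# Crypto-agility in practice: key parameters live in central policy,
# not in application code, so rotating the fleet to a new algorithm
# is a policy edit (illustrative values).
resource "vault_pki_secret_backend_role" "microservices" {
  backend          = vault_mount.pki.path
  name             = "microservices-role"
  ttl              = "2160h" # 90 days
  allowed_domains  = ["svc.cluster.local", "internal.example.com"]
  allow_subdomains = true

  key_type = "ec"  # was "rsa"; swapped centrally, no app changes
  key_bits = 384   # NIST P-384
}
```

Every certificate issued after this change uses the new algorithm, and short lifespans guarantee the old keys age out of the fleet within one renewal cycle.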


Technical Deep Dive: Automating Multi-Cloud PKI with Kubernetes

To see how these pillars work in practice, let's look at the industry-standard architecture for multi-cloud certificate automation: combining HashiCorp Vault as the centralized enterprise CA with cert-manager as the Kubernetes execution engine.

Step 1: Setting up the Centralized Trust Anchor (HashiCorp Vault)

Instead of relying on cloud-specific key vaults, HashiCorp Vault can be deployed as a cloud-agnostic PKI engine. Using Infrastructure as Code (IaC) tools like Terraform, you can provision your root and intermediate CAs consistently.

Here is an example of how to configure a Vault PKI secrets engine using Terraform:

# Enable the PKI secrets engine
resource "vault_mount" "pki" {
  path        = "pki"
  type        = "pki"
  description = "Multi-cloud PKI engine"

  # Set the maximum lease time to 90 days to enforce short lifespans
  max_lease_ttl_seconds = 7776000 
}

# Generate an internal Root CA
resource "vault_pki_secret_backend_root_cert" "root" {
  backend              = vault_mount.pki.path
  type                 = "internal"
  common_name          = "Multi-Cloud Enterprise Root CA"
  ttl                  = "87600h"
  format               = "pem"
  private_key_format   = "der"
  key_type             = "rsa"
  key_bits             = 4096
}

# Create a role that dictates what domains can be issued
resource "vault_pki_secret_backend_role" "microservices" {
  backend          = vault_mount.pki.path
  name             = "microservices-role"
  ttl              = "2160h" # 90 days
  allow_any_name   = false
  allowed_domains  = ["svc.cluster.local", "internal.example.com"]
  allow_subdomains = true
}

Step 2: Deploying cert-manager in Kubernetes

With the central CA established, we need a way to automatically inject certificates into our workloads running in Amazon EKS, Google GKE, or Azure AKS. cert-manager is the standard Kubernetes add-on for this task.

First, we define an Issuer in Kubernetes that tells cert-manager how to authenticate with HashiCorp Vault:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    server: https://vault.example.com
    path: pki/sign/microservices-role
    auth:
      kubernetes:
        role: cert-manager
        secretRef:
          name: vault-token
          key: token

Step 3: Automated Certificate Injection

Now, application developers do not need to know anything about PKI, Vault, or cloud-specific APIs. When they deploy a new microservice, they simply request a Certificate resource. cert-manager will automatically generate the private key locally in the cluster (ensuring private keys never cross the network), generate a Certificate Signing Request (CSR), send it to Vault, and mount the resulting certificate as a Kubernetes Secret.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payment-service-cert
  namespace: production
spec:
  # Enforce a 30-day lifespan, renewing automatically 15 days before expiry
  duration: 720h
  renewBefore: 360h
  secretName: payment-service-tls
  issuerRef:
    name: vault-issuer
    kind: ClusterIssuer
  commonName: payment.internal.example.com
  dnsNames:
  - payment.internal.example.com
  - payment.production.svc.cluster.local
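From there, the workload consumes the certificate like any other Secret. A sketch of the corresponding Deployment fragment (the image name is a placeholder; cert-manager writes the standard tls.crt, tls.key, and ca.crt keys into the Secret):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
  namespace: production
spec:
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      containers:
      - name: payment-service
        image: registry.example.com/payment-service:latest
        volumeMounts:
        - name: tls
          mountPath: /etc/tls   # app reads tls.crt / tls.key here
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: payment-service-tls
```

When cert-manager renews the certificate, the mounted files update in place; the application only needs to reload them (or be restarted by a tool such as Reloader) to pick up the new material.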
