Beyond Expiration Dates: A Modern Guide to Kubernetes Certificate Management

Tim Henrich
October 24, 2025

In the world of cloud-native infrastructure, TLS certificates are the bedrock of trust. They secure everything from customer-facing web applications to the internal API calls between microservices. Yet, for years, a simple, preventable error has remained a leading cause of catastrophic outages: an expired certificate. In a dynamic Kubernetes environment, where services and endpoints are ephemeral, manual certificate management isn't just inefficient—it's a critical operational risk.

The good news is that the industry has moved far beyond manual shell scripts and calendar reminders. Modern Kubernetes certificate management is an automated, identity-centric discipline. It's about building a resilient, observable, and secure lifecycle for every certificate in your cluster, from public-facing Ingress controllers to the ephemeral identities of every single pod.

This guide will walk you through the current best practices for managing certificates in Kubernetes. We'll cover the foundational tools, explore advanced Zero Trust patterns, and show you how to build a robust system that automates issuance, enforces policy, and provides complete visibility to prevent the next certificate-related outage.

The Kubernetes Certificate Landscape: More Than Just Ingress

When you first think of certificates in Kubernetes, you probably picture the lock icon in a browser, secured by a certificate on an Ingress controller. While crucial, this is just the tip of the iceberg. A mature Kubernetes environment relies on a diverse set of certificates:

  • North-South Traffic (Ingress): These are the public-facing certificates that secure traffic from outside the cluster to your services. They are typically issued by public Certificate Authorities (CAs) like Let's Encrypt.
  • East-West Traffic (mTLS): For a true Zero Trust posture, communication between microservices inside the cluster must be encrypted and authenticated. This is achieved with mutual TLS (mTLS), where each service presents a certificate to verify its identity. These are issued by a private, internal CA.
  • System Components: Kubernetes itself uses certificates extensively. Admission webhooks, API server extensions, and internal components like etcd all rely on TLS for secure communication.

Managing this "certificate sprawl" manually is an impossible task. It’s error-prone, doesn't scale, and creates significant security blind spots. The declarative, API-driven nature of Kubernetes demands an automated solution.

The De Facto Standard: Automating with cert-manager

In the Kubernetes ecosystem, cert-manager has emerged as the undisputed standard for automating the certificate lifecycle. It's a powerful controller that runs in your cluster and turns certificate management into a declarative process, just like deploying a ReplicaSet or a Service.

cert-manager introduces a few key Custom Resource Definitions (CRDs):

  • Issuer / ClusterIssuer: These resources represent a certificate authority capable of signing certificate requests. An Issuer is namespaced, while a ClusterIssuer is available across the entire cluster. You can configure issuers for Let's Encrypt, private CAs like HashiCorp Vault, or even a simple self-signed CA for development.
  • Certificate: This resource is a declarative request for a certificate. You define the domain names (Common Name and Subject Alternative Names), the desired issuer, and a reference to a Kubernetes Secret where the signed certificate and private key will be stored.

cert-manager continuously watches these resources. When it sees a Certificate object, it automatically generates a private key, creates a certificate signing request, and presents it to the configured Issuer. Once the certificate is signed, cert-manager saves it to the specified Secret and ensures it is automatically renewed before it expires.
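
For example, a minimal Certificate request looks like the manifest below (the names, namespace, and issuer reference are illustrative):

# example-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-cert
  namespace: my-app
spec:
  # cert-manager stores the signed certificate and private key in this Secret
  secretName: my-app-cert-tls
  dnsNames:
  - app.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer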

Practical Example: Securing an Ingress with Let's Encrypt DNS-01

The most common use case is automatically securing an Ingress. While the HTTP-01 challenge is simple, it requires exposing your service to the internet. The DNS-01 challenge is more powerful, allowing you to issue wildcard certificates and secure internal services without public exposure.

Here’s how to set it up using cert-manager with AWS Route 53 for the DNS-01 challenge.

Step 1: Install cert-manager

First, install cert-manager into your cluster using its Helm chart.

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager chart
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.4 \
  --set installCRDs=true
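
Before creating any issuers, you can confirm the control plane is healthy:

# The cert-manager, cainjector, and webhook pods should all be Running
kubectl get pods --namespace cert-manager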

Step 2: Create a ClusterIssuer

Next, create a ClusterIssuer to represent the Let's Encrypt production CA. This YAML configures it to solve DNS-01 challenges using AWS Route 53. You'll need an IAM role with permissions to modify Route 53 records, which you grant to the cert-manager service account.

# lets-encrypt-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-private-key
    solvers:
    - dns01:
        route53:
          region: us-east-1
          # Assumes you are using IAM Roles for Service Accounts (IRSA)
          # Or have otherwise configured credentials.
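
On EKS, one way to wire up those credentials is to annotate the cert-manager service account via the Helm chart's values. The snippet below is a sketch: the role ARN is a placeholder for an IAM role that permits route53:ChangeResourceRecordSets, route53:GetChange, and route53:ListHostedZonesByName.

# values-irsa.yaml (sketch; pass with --values when installing the chart)
serviceAccount:
  annotations:
    # Placeholder ARN -- substitute the IAM role you created for Route 53 access
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/cert-manager-route53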

Step 3: Annotate Your Ingress

Finally, simply annotate your Ingress resource to tell cert-manager which ClusterIssuer to use. cert-manager will handle the rest.

# my-app-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: my-app
  annotations:
    # Tell cert-manager to use our ClusterIssuer
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "app.example.com"
    # cert-manager will store the cert/key in this secret
    secretName: my-app-tls-secret
  rules:
  - host: "app.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80

Once you apply this Ingress, cert-manager automatically:
1. Detects the annotation and the tls block.
2. Creates a Certificate resource behind the scenes.
3. Communicates with Let's Encrypt to begin the challenge.
4. Temporarily creates the required TXT record in your Route 53 zone to prove ownership of app.example.com.
5. Receives the signed certificate, creates the my-app-tls-secret, and stores the certificate and private key there.
6. Monitors the certificate and renews it (by default, 30 days before expiration).

This declarative, hands-off approach eliminates manual work and the risk of human error.
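
You can watch this flow with kubectl; the names below correspond to the example Ingress above:

# The Certificate created from the Ingress annotation is named after its secret
kubectl get certificate -n my-app

# Drill into issuance progress: requests, ACME orders, and DNS-01 challenges
kubectl describe certificate my-app-tls-secret -n my-app
kubectl get certificaterequests,orders,challenges -n my-app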

Advanced Strategies for a Zero Trust Architecture

While cert-manager is the foundation, a mature strategy goes further to secure internal traffic and enforce strict governance.

Securing East-West Traffic with a Service Mesh

A service mesh like Istio or Linkerd is the gold standard for securing internal, service-to-service communication. A mesh injects a sidecar proxy next to each of your application pods. All traffic between pods is routed through these proxies.

The mesh automates mTLS by running its own internal CA. It automatically issues short-lived certificates to every single workload, rotating them as frequently as every few hours. This is done completely transparently to your applications. Developers don't need to modify their code to handle TLS; the mesh enforces it at the platform level. This ensures that all internal traffic is encrypted and that services can cryptographically verify the identity of their callers.
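
As a concrete illustration, Istio lets you require mTLS for every workload in a namespace with a single PeerAuthentication resource (the namespace here is illustrative; Linkerd enables mTLS by default for meshed pods):

# require-mtls.yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-app
spec:
  mtls:
    # Reject any plaintext traffic to pods in this namespace
    mode: STRICT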

Enforcing Certificate Policies with Policy-as-Code

How do you ensure that developers are requesting certificates that comply with your organization's security policies? You can use a policy engine like Kyverno or OPA Gatekeeper to enforce rules declaratively.

For example, you can write a policy that blocks any Certificate resource requesting a wildcard domain, since wildcard certificates are often considered a security risk.

Here is a sketch of a Kyverno ClusterPolicy that does exactly that; the metadata follows Kyverno's policy-library conventions, and the validate rule is one reasonable way to express the check:

```yaml
# disallow-wildcard-certificates.yaml (the validate rule below is one way to express the check)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-wildcard-certificates
  annotations:
    policies.kyverno.io/title: Disallow Wildcard Certificates
    policies.kyverno.io/category: Security
    policies.kyverno.io/severity: medium
    policies.kyverno.io/description: >-
      Wildcard certificates are a security risk. This policy prevents
      the creation of Certificate resources that request a wildcard domain
      in either the commonName or the dnsNames fields.
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-wildcard-domains
    match:
      any:
      - resources:
          kinds: [Certificate]
    validate:
      message: "Wildcard domains are not allowed in Certificate resources."
      deny:
        conditions:
          any:
          - key: "{{ contains(request.object.spec.commonName || '', '*') }}"
            operator: Equals
            value: true
          - key: "{{ length(request.object.spec.dnsNames[?contains(@, '*')] || `[]`) }}"
            operator: GreaterThan
            value: 0
```
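
With a policy like this running in Enforce mode, any Certificate that requests a wildcard name is rejected at admission time; setting validationFailureAction to Audit instead reports violations without blocking them, which is a sensible first step when rolling the policy out.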
