Ubuntu 24.04 “Cert verify failed”: Fix CA bundles and intermediate chains properly

You’re patching servers, your pipeline is green, and then—out of nowhere—cert verify failed shows up like a surprise audit.
apt can’t fetch packages, curl refuses to talk, Docker pulls die, and the app logs look like a medieval curse:
“unknown authority”, “unable to get local issuer certificate”, “self-signed certificate in certificate chain”.

Ubuntu 24.04 is not uniquely fragile here. It’s just honest. It verifies TLS properly, and it won’t pretend your certificate chain is fine when it’s not.
The fix is rarely “turn off verification.” The fix is: repair trust stores and serve correct intermediate chains. Do it once, do it correctly, and move on.

Fast diagnosis playbook

When TLS verification fails, people tend to thrash: reinstall random packages, flip environment variables, blame Ubuntu, blame “the network”.
Don’t. You’re debugging a chain of trust problem. Treat it like a production incident: isolate, reproduce, and determine which trust store is being used.

First: reproduce with one known tool

  • If the failure is in an app, reproduce with openssl s_client (ground truth) and curl -v (client behavior).
  • Capture: hostname, port, SNI, proxy/no-proxy, and whether the failing host is in a container/VM.
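
To capture all of that in one pass, here is a minimal sketch (assuming the failing endpoint is repo.example.internal; substitute your host):

cr0x@server:~$ { date -u; env | grep -iE 'https?_proxy|no_proxy'; curl -Iv https://repo.example.internal/; } 2>&1 | tee /tmp/tls-evidence.txt

The resulting file records time, proxy environment, and the exact verification error, which is the evidence the rest of this playbook consumes.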

Second: decide whether this is “server chain” or “client trust store”

  • If openssl s_client shows missing intermediates, fix the server first. That’s the clean fix.
  • If the server chain is fine but clients still fail, your client trust store is stale or bypassed (custom CA bundle, broken symlink, old image, weird Java store).

Third: check for interception

  • If the certificate issuer is your company, a proxy is terminating TLS. Add the corporate root CA to the OS trust store (and possibly to application-specific stores).
  • If the certificate subject doesn’t match the hostname, you’re not reaching the server you think you are (proxy, captive portal, DNS hijack, wrong VIP).

Fourth: scope blast radius

  • Does it fail only on one host? That’s local trust store, clock, or proxy config.
  • Does it fail across a fleet? That’s a CA rotation, a proxy policy change, or a broken server chain deployed widely.
  • Does it fail only in containers? That’s image CA bundle drift (or Alpine vs Debian trust differences).

One idea, paraphrased from W. Edwards Deming, that operations folks keep re-learning the hard way: without data, you’re just another person with an opinion.
Collect the handshake evidence before you “fix” anything.

What “cert verify failed” really means on Ubuntu 24.04

“Cert verify failed” is not one error. It’s a family of failures that all end up at the same boundary: the client couldn’t build a trusted path from the presented server certificate to a trusted root CA.
That can happen for a handful of concrete reasons:

  • Missing intermediate certificate(s) on the server (most common in the wild).
  • Client trust store missing the root CA (common with private PKI or corporate TLS inspection).
  • Wrong system time (cert not yet valid / expired).
  • Hostname mismatch (connecting to the wrong service, wrong SNI, or DNS/Proxy weirdness).
  • Application uses its own CA bundle and ignores the OS store (hello, Java; also some Python, Node, and containerized apps).
  • Broken CA bundle packaging (rare, but happens after partial upgrades, manual edits, or image layering accidents).
  • Revocation/CRL/OCSP policy in strict clients (less common on Ubuntu defaults, but common in enterprise tooling).

Ubuntu 24.04 uses OpenSSL 3 in the base system and ships ca-certificates providing Mozilla’s CA trust set as a bundle.
Most command-line tools ultimately consult that, unless they’re hardwired to something else.

The goal is not to “make the error go away.” The goal is to make the certificate chain correct and the trust anchor explicit.
If you do that, the error disappears as a side effect. That’s the correct causal direction.

Joke #1: Turning off TLS verification in production is like removing the smoke detector because it won’t stop beeping. The beeping was doing its job.

Facts and history that explain today’s mess

A little context makes today’s failures feel less random. Here are a few concrete, actually-useful facts:

  1. CA bundles on Linux largely track Mozilla’s root program. The ca-certificates package is essentially a distribution-friendly wrapper around that trust list.
  2. Intermediate certificates are not universally cached. Some clients cache intermediates, some don’t, and caches expire. A server that “works on my laptop” can still be broken.
  3. Let’s Encrypt’s early chain changes burned a lot of people. Cross-signing and chain selection issues caused intermittent failures on older clients and misconfigured servers.
  4. SNI (Server Name Indication) became mandatory in practice. Without SNI, you can get the wrong certificate from a multi-tenant endpoint and fail hostname verification.
  5. OpenSSL 3 tightened the ecosystem. It didn’t “break TLS”; it made weak, ambiguous, or misconfigured behavior more visible and less tolerated.
  6. Corporate TLS interception is older than most container platforms. It predates Kubernetes, Docker, and even some cloud providers; it just became more painful once everything went HTTPS.
  7. Java historically carried its own trust store. Many enterprise apps still do, which is why “works with curl, fails in app” keeps happening.
  8. Path building can differ by library. GnuTLS vs OpenSSL vs NSS can succeed or fail differently on the same chain, depending on how intermediates are handled.
  9. There’s no “standard location” that every program honors. Linux has conventions, but each stack can override them, and many do.

Trust stores on Ubuntu 24.04: who reads what

Before running commands, get one mental model straight: “the OS CA store” is not a single file. It’s a set of sources compiled into bundle files,
and different tools pick different entry points.

Key files and directories

  • /etc/ssl/certs/ca-certificates.crt — the main PEM bundle (common default for OpenSSL-based tools).
  • /etc/ssl/certs/ — hashed symlinks and individual certs used by OpenSSL lookups.
  • /usr/local/share/ca-certificates/ — where you drop local CAs (.crt) to be added to the system store.
  • /etc/ca-certificates.conf — which CAs are enabled/disabled when generating bundles.
  • /usr/share/ca-certificates/ — certificates shipped by the ca-certificates package (don’t hand-edit it; enable/disable entries via /etc/ca-certificates.conf).
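
Two quick sanity checks over these paths, as a minimal sketch (exact counts vary by release):

cr0x@server:~$ awk '/BEGIN CERTIFICATE/ {n++} END {print n " certificates in bundle"}' /etc/ssl/certs/ca-certificates.crt
cr0x@server:~$ find -L /etc/ssl/certs -maxdepth 1 -type l

The first counts certificates in the main bundle (expect a few hundred); the second lists broken hash symlinks and should print nothing on a healthy system.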

Who uses what (typical)

  • curl: usually uses the OpenSSL CA path/bundle; respects SSL_CERT_FILE/SSL_CERT_DIR (see the probe after this list).
  • apt: its HTTPS transport is built against GnuTLS on Ubuntu and relies on the system trust installed by ca-certificates.
  • Python requests: uses certifi by default in many virtualenvs unless configured to use system store.
  • Java: uses cacerts in the JRE unless explicitly configured otherwise.
  • Docker: uses OS trust for registry calls, but also has per-registry cert directories.
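
A quick way to prove which store a tool actually reads is to point it at an empty one and watch it fail. A sketch for curl, assuming the common OpenSSL-backed Ubuntu build:

cr0x@server:~$ SSL_CERT_FILE=/dev/null SSL_CERT_DIR=/nonexistent curl -sI https://example.com/; echo "exit=$?"

If curl honors those variables, this prints exit=60 (verification failure); a tool that ignores them and succeeds is telling you its trust anchor lives somewhere else.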

Translation into operator-speak: if only one app fails, don’t keep reinstalling ca-certificates. You’re probably patching the wrong trust store.

Practical tasks (commands, outputs, decisions)

Below are hands-on tasks you can run on Ubuntu 24.04. Each one includes: a command, what typical output means, and the decision you make next.
These are written for real incidents: servers, CI runners, containers, and corporate networks.

Task 1: Confirm system time (boring, first anyway)

cr0x@server:~$ timedatectl
               Local time: Mon 2025-12-30 11:22:43 UTC
           Universal time: Mon 2025-12-30 11:22:43 UTC
                 RTC time: Mon 2025-12-30 11:22:43
                Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

Meaning: If the clock isn’t synchronized, you can get “certificate not yet valid” or “expired” errors even with perfect chains.
Decision: If System clock synchronized is no, fix NTP first. Don’t touch certificates until time is sane.

Task 2: Reproduce with curl and capture the exact verification failure

cr0x@server:~$ curl -Iv https://repo.example.internal/
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
curl: (60) SSL certificate problem: unable to get local issuer certificate

Meaning: The client couldn’t build a chain to a trusted root. This often means either missing intermediates (server-side) or missing root CA (client-side).
Decision: Move to openssl s_client to see what the server actually presented.

Task 3: Inspect the server chain with OpenSSL (ground truth)

cr0x@server:~$ openssl s_client -connect repo.example.internal:443 -servername repo.example.internal -showcerts </dev/null
depth=0 CN = repo.example.internal
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = repo.example.internal
verify error:num=21:unable to verify the first certificate
verify return:1
Verification error: unable to verify the first certificate
---
Certificate chain
 0 s:CN = repo.example.internal
   i:CN = Example Issuing CA 01
-----BEGIN CERTIFICATE-----
...leaf...
-----END CERTIFICATE-----
---

Meaning: The server presented only the leaf certificate. No intermediate was sent. Clients that don’t have the intermediate cached will fail.
Decision: Fix the server to present the full chain (leaf + intermediate(s)). This is the preferred fix.

Task 4: Check which CA bundle curl is using

cr0x@server:~$ curl -v https://example.com/ 2>&1 | grep -E "CAfile|CApath"
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs

Meaning: curl is using the system bundle and hashed directory.
Decision: If your corporate CA isn’t trusted, you need to add it via /usr/local/share/ca-certificates and run update-ca-certificates.

Task 5: Verify the OS CA bundle exists and is not obviously broken

cr0x@server:~$ ls -l /etc/ssl/certs/ca-certificates.crt
-rw-r--r-- 1 root root 223541 Dec 30 10:58 /etc/ssl/certs/ca-certificates.crt

Meaning: Bundle exists and has non-trivial size.
Decision: If it’s missing or tiny (a few bytes), suspect broken package state or a botched configuration management run. Reinstall ca-certificates.

Task 6: Reinstall and regenerate CA certificates cleanly

cr0x@server:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Reading package lists... Done
cr0x@server:~$ sudo apt-get install --reinstall -y ca-certificates
Reading package lists... Done
Building dependency tree... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
Setting up ca-certificates (20240203) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.

Meaning: Package is present; bundle regeneration ran.
Decision: If errors show during “Updating certificates”, stop and read them. Permissions, disk full, or broken files in local CA directory can block updates.

Task 7: Check for local custom CAs and whether they were imported

cr0x@server:~$ ls -l /usr/local/share/ca-certificates
total 8
-rw-r--r-- 1 root root 2048 Dec 30 10:57 corp-proxy-root-ca.crt
-rw-r--r-- 1 root root 1876 Dec 30 10:57 corp-issuing-ca.crt

Meaning: Local CA files exist in the correct directory and are .crt.
Decision: If your CA files are .pem or have weird extensions, rename to .crt and run update-ca-certificates.
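
A minimal sketch, assuming a hypothetical corp-root.pem that landed with the wrong extension:

cr0x@server:~$ sudo mv /usr/local/share/ca-certificates/corp-root.pem /usr/local/share/ca-certificates/corp-root.crt

The content must still be PEM; update-ca-certificates selects files by the .crt extension, not by sniffing the format.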

Task 8: Import local CAs into Ubuntu trust store

cr0x@server:~$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
2 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

Meaning: Your two local CAs were added and hashed into /etc/ssl/certs.
Decision: Retest the failing tool. If it still fails, you’re likely dealing with an intermediate-chain issue on the server or an app using a different trust store.

Task 9: Verify a chain against the system store explicitly

cr0x@server:~$ openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt /tmp/server-leaf.pem
CN = repo.example.internal
error 20 at 0 depth lookup: unable to get local issuer certificate
error /tmp/server-leaf.pem: verification failed

Meaning: The leaf alone can’t be verified because OpenSSL can’t find its issuer certificate (intermediate) in the trust store.
Decision: If the intermediate is not supposed to be a trusted root, don’t add it to the trust store just to “make it work”. Fix the server chain instead.

Task 10: Fetch and verify the full chain from the server

cr0x@server:~$ openssl s_client -connect repo.example.internal:443 -servername repo.example.internal -showcerts </dev/null | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > /tmp/chain.pem
cr0x@server:~$ openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt -untrusted /tmp/chain.pem /tmp/chain.pem
/tmp/chain.pem: OK

Meaning: With the intermediates supplied as untrusted chain material (-untrusted), verification against the system store succeeds. If Task 9 failed on the leaf alone but this passes, the missing piece is the intermediate, not the root.
Decision: If openssl verify succeeds but your app fails, your app is likely using another CA bundle (Python certifi, Java cacerts, embedded bundle).

Task 11: Diagnose apt HTTPS failures with targeted output

cr0x@server:~$ sudo apt-get update -o Debug::Acquire::https=true
Ign:1 https://repo.example.internal noble InRelease
Err:1 https://repo.example.internal noble InRelease
  Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown.
Reading package lists... Done
W: Failed to fetch https://repo.example.internal/dists/noble/InRelease  Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown.

Meaning: apt can’t validate the repo’s certificate chain.
Decision: Either the repo server is missing intermediates or your system store doesn’t include the corporate root. Don’t use [trusted=yes] as a band-aid unless you enjoy being paged.

Task 12: Check whether a proxy is in play for apt and curl

cr0x@server:~$ env | grep -iE 'http_proxy|https_proxy|no_proxy'
https_proxy=http://proxy.corp.internal:3128
no_proxy=localhost,127.0.0.1,.corp.internal

Meaning: Requests may be going through a proxy, possibly doing TLS interception.
Decision: If the proxy is intercepting, install its root CA into the system trust store. If you intended direct access, fix environment and apt config.

Task 13: Confirm the certificate you receive matches the hostname (SNI correctness)

cr0x@server:~$ openssl s_client -connect 10.20.30.40:443 -servername repo.example.internal </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
subject=CN = repo.example.internal
issuer=CN = Example Issuing CA 01

Meaning: With SNI provided, you got the expected certificate.
Decision: If subject is wrong without SNI, your client/tool might not be sending SNI (rare today, but still possible in embedded stacks). Force SNI-capable tooling or correct the endpoint.

Task 14: Identify whether Python is using certifi instead of system CAs

cr0x@server:~$ python3 -c "import ssl; print(ssl.get_default_verify_paths())"
DefaultVerifyPaths(cafile='/etc/ssl/certs/ca-certificates.crt', capath='/etc/ssl/certs', openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/usr/lib/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='/usr/lib/ssl/certs')
cr0x@server:~$ python3 -c "import requests; import certifi; print(certifi.where())"
/usr/lib/python3/dist-packages/certifi/cacert.pem

Meaning: The SSL module defaults to the OS store, but requests may prefer certifi’s bundle depending on packaging and environment.
Decision: If your corporate CA is only in the OS store, configure your app to use it (or update certifi bundle in that environment, or vendor a proper CA path).
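
One way to test that hypothesis without touching code, as a sketch assuming requests is installed and the corporate CA is already in the OS store:

cr0x@server:~$ REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt python3 -c "import requests; print(requests.get('https://repo.example.internal/').status_code)"

requests honors the REQUESTS_CA_BUNDLE variable; if the call succeeds with it and fails without it, you’ve confirmed the app was pinned to certifi’s bundle.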

Task 15: Check Java trust store if only JVM apps fail

cr0x@server:~$ sudo keytool -list -keystore /etc/ssl/certs/java/cacerts -storepass changeit | head
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 141 entries

Meaning: Java has its own CA store.
Decision: If corporate roots aren’t present, import them into Java cacerts (or point the JVM at the OS bundle explicitly). Fixing only OS trust won’t save you here.
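
A minimal import sketch, assuming the corporate root from Task 7 and Ubuntu’s default Java store password (changeit):

cr0x@server:~$ sudo keytool -importcert -alias corp-proxy-root -file /usr/local/share/ca-certificates/corp-proxy-root-ca.crt -keystore /etc/ssl/certs/java/cacerts -storepass changeit -noprompt

Alternatively, configure the JVM with -Djavax.net.ssl.trustStore pointing at a store you manage centrally; pick one model per fleet and document it.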

Task 16: Docker registry trust path (if image pulls fail)

cr0x@server:~$ sudo ls -l /etc/docker/certs.d
total 4
drwxr-xr-x 2 root root 4096 Dec 30 11:01 registry.corp.internal:5000
cr0x@server:~$ sudo ls -l /etc/docker/certs.d/registry.corp.internal:5000
total 4
-rw-r--r-- 1 root root 2048 Dec 30 11:01 ca.crt

Meaning: Docker has per-registry trust anchors; it can work even if OS trust is incomplete (or vice versa).
Decision: Pick one model. In corporate environments, I prefer adding the corporate root to OS trust and keeping Docker overrides minimal and documented.
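
If you do need the per-registry override, here is a minimal sketch using the hypothetical registry above:

cr0x@server:~$ sudo mkdir -p /etc/docker/certs.d/registry.corp.internal:5000
cr0x@server:~$ sudo cp corp-proxy-root-ca.crt /etc/docker/certs.d/registry.corp.internal:5000/ca.crt

Docker reads certs.d on the next pull; changes to the OS store, by contrast, typically require restarting the daemon to take effect.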

Intermediate chains: the silent killer

Most “cert verify failed” incidents that look mysterious are just missing intermediates. Servers are supposed to present a chain
that lets clients link the leaf cert up to a trusted root. The root itself is usually not sent; clients already have it.
Intermediates must be sent by the server.

How this fails in practice

A common misconfiguration is installing only the leaf certificate on the TLS endpoint. Browsers might still succeed because:
(a) they cached the intermediate from elsewhere, (b) they fetched it via AIA (Authority Information Access), or (c) they behave more forgivingly.
Meanwhile, headless clients in production fail because they don’t fetch intermediates or they’re locked down from outbound fetches.

What to do instead

  • On the server side, configure the TLS service with the full chain file (leaf + intermediate(s), in correct order); a concrete sketch follows this list.
  • Confirm with openssl s_client -showcerts that the intermediate is actually being served.
  • Prefer fixing the server. Adding intermediates as trust anchors on clients is operational debt disguised as “a quick fix”.
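
Concretely, a minimal sketch of the fullchain fix for an nginx endpoint (hypothetical paths; adjust for your TLS terminator):

cr0x@server:~$ cat leaf.crt intermediate.crt | sudo tee /etc/ssl/private/fullchain.crt >/dev/null

# nginx server block: serve the concatenated chain, leaf first
ssl_certificate     /etc/ssl/private/fullchain.crt;
ssl_certificate_key /etc/ssl/private/leaf.key;

After reloading, rerun Task 3 and confirm the served chain now has length two or more.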

Joke #2: Certificate chains are like org charts—if you skip the middle managers, nothing gets approved and everyone blames the intern.

Corporate proxies and TLS interception

In many enterprises, outbound HTTPS is intercepted: the proxy terminates TLS, inspects traffic, then re-encrypts to the destination.
Your server certificate issuer becomes “Corporate Proxy Root CA” (or similar), and clients will fail unless that root is trusted.

How to confirm interception

Run openssl s_client to a public site from the affected host and inspect the issuer. If it’s not a public CA you expect,
you’re not talking directly to the internet. You’re talking to your company.
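
A minimal check, using any well-known public site (www.ubuntu.com here, purely as an example):

cr0x@server:~$ openssl s_client -connect www.ubuntu.com:443 -servername www.ubuntu.com </dev/null 2>/dev/null | openssl x509 -noout -issuer

A public CA in the issuer means a direct path; something like issuer=CN = Corporate Proxy Root CA means interception, and the fix below applies.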

How to fix it correctly

  • Get the corporate root CA (and any issuing intermediates used by the proxy) in PEM format.
  • Install it into /usr/local/share/ca-certificates and run update-ca-certificates (commands sketched after this list).
  • Then audit application-specific trust stores (Java, Python certifi, embedded). Decide whether to standardize on OS trust or maintain per-app stores.
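
A minimal sketch of the first two steps, assuming a hypothetical corp-proxy-root-ca.pem obtained from your security team:

cr0x@server:~$ sudo cp corp-proxy-root-ca.pem /usr/local/share/ca-certificates/corp-proxy-root-ca.crt
cr0x@server:~$ sudo update-ca-certificates
cr0x@server:~$ curl -sI https://repo.example.internal/ >/dev/null && echo "verify OK"

Then restart long-running services; most read the bundle once at startup (see the FAQ).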

What not to do

  • Do not disable verification in apt/curl globally.
  • Do not sprinkle “insecure” flags across CI scripts and call it a day.
  • Do not import leaf certificates as trusted roots. That’s not trust; that’s surrender.

Three corporate mini-stories from the trenches

1) The incident caused by a wrong assumption: “The load balancer handles the chain”

A mid-size company ran internal package repositories behind a managed load balancer. A team renewed the leaf certificate,
uploaded it, saw browsers succeed, and moved on. The next morning, patching windows started failing across a chunk of Ubuntu hosts:
apt couldn’t fetch updates. “Cert verify failed” everywhere.

The assumption was simple and wrong: they believed the load balancer automatically served the correct intermediate chain.
It didn’t. It served only the leaf. The reason it “worked in browsers” was cached intermediates and, on some networks,
opportunistic AIA fetching. Headless apt clients didn’t fetch. CI runners inside restricted networks also didn’t fetch.

Debugging took longer than it should have because everyone started by reinstalling ca-certificates on clients.
That changed nothing. The failure was server-side. Once someone ran openssl s_client -showcerts and saw a chain of length one,
the fix became obvious: configure the load balancer with a proper fullchain.

The lasting lesson wasn’t “use fullchain” (everyone already knew that). It was: trust nothing that you haven’t verified from the failing client’s vantage point.
The same endpoint can appear fine from a laptop and broken from a locked-down server.

2) The optimization that backfired: “Bake CA bundles into the container for speed”

Another org tried to speed up container builds by pinning a base image layer containing CA certificates and never touching it.
Their logic was tidy: CA bundles change slowly, and updating them “wastes build time.” The platform team even standardized a path,
and app teams were told to rely on it.

Months later, a corporate proxy policy rolled out. Outbound HTTPS interception became mandatory for certain subnets, and the proxy used a new root CA.
Hosts were updated with the corporate root via config management. Containers were not. Suddenly, containers could not call external APIs,
could not pull dependencies, and some couldn’t even ship logs.

The outage pattern was spicy: services worked on bare-metal hosts but failed in Kubernetes. Developers blamed Kubernetes.
SREs blamed “network changes.” It was neither. It was an image-level CA bundle that had fossilized.

The resolution was pragmatic: base images started rebuilding CA bundles on a schedule, and application images stopped overriding trust paths.
The “optimization” saved seconds per build and cost days of incident time. A very expensive trade.

3) The boring but correct practice that saved the day: “One trust pipeline, one source of truth”

A financial services shop had the opposite approach: relentless boredom. All corporate CAs lived in a versioned repo,
distributed via config management, and installed into OS trust stores with a small set of standard procedures.
They also had a simple daily check: run a TLS verification probe from representative hosts (including containers) to critical endpoints.

When an intermediate CA rotated on an internal service, a few hosts started failing verification. The probe caught it quickly.
The on-call didn’t need tribal knowledge. The runbook said: check served chain length; compare issuer; confirm local trust store state; fix server chain.
That’s what they did.

The fix was on the server side: a fullchain file was missing an intermediate after a certificate renewal automation change.
Because they had a central trust pipeline, nobody tried to “fix clients” by importing intermediates as roots.
The blast radius stayed small and the incident stayed short.

It wasn’t glamorous. It was correct. In production, “boring” is a compliment.

Common mistakes: symptom → root cause → fix

1) curl fails, browser works

Symptom: curl: (60) unable to get local issuer certificate; browsers load fine.

Root cause: Server missing intermediate chain; browser cached intermediate or fetched via AIA.

Fix: Configure server/load balancer with fullchain (leaf + intermediate(s)). Validate with openssl s_client -showcerts.

2) apt update fails only on some hosts

Symptom: apt HTTPS errors on a subset of fleet; others fine.

Root cause: Drift: some hosts missed CA updates, have broken ca-certificates state, or have wrong proxy env.

Fix: Check time, reinstall ca-certificates, run update-ca-certificates, and compare proxy variables/config across hosts.

3) “certificate signed by unknown authority” only in containers

Symptom: Host curl works; container curl fails; Docker pulls fail in cluster.

Root cause: Container image has stale CA bundle or uses different trust store than host.

Fix: Ensure CA bundle is updated inside image; avoid pinning CA layer forever; install corporate root CA during build or mount OS trust where appropriate.
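
A minimal build-time sketch, assuming an Ubuntu-based image and the hypothetical corporate root from earlier sitting in the build context:

# Dockerfile fragment
FROM ubuntu:24.04
COPY corp-proxy-root-ca.crt /usr/local/share/ca-certificates/
RUN apt-get update && apt-get install -y ca-certificates && update-ca-certificates

Rebuilding on a schedule keeps the bundle from fossilizing the way it did in mini-story #2.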

4) Java app fails, curl succeeds

Symptom: JVM logs show PKIX path building failed; curl to same endpoint works.

Root cause: Java trust store missing corporate root CA or not using OS store.

Fix: Import corporate CA into Java cacerts or configure JVM to use OS trust bundle.

5) “self-signed certificate in certificate chain” after proxy rollout

Symptom: Suddenly widespread failures; issuer looks like a corporate CA; chain includes “self-signed”.

Root cause: TLS interception with an untrusted corporate root on clients (or wrong root distributed).

Fix: Install the correct corporate root CA into OS trust (and app stores). Confirm issuer matches expected corporate root.

6) Verification fails after “cleaning up cert files”

Symptom: update-ca-certificates errors; bundle missing; tools fail across the board.

Root cause: Someone edited generated files in /etc/ssl/certs or deleted hashed links.

Fix: Reinstall ca-certificates, rerun update-ca-certificates, and stop managing generated paths with ad-hoc scripts.

7) Only one domain fails, everything else works

Symptom: One internal service errors; others OK.

Root cause: That service serves wrong chain, wrong certificate, or wrong SNI mapping on LB.

Fix: Use openssl s_client -servername to confirm correct cert; fix server config.

8) Sporadic failures that “heal themselves”

Symptom: Same client sometimes succeeds, sometimes fails, without config changes.

Root cause: Multiple backends behind a VIP; some serve correct chain, others don’t. Or multiple proxies with different cert policies.

Fix: Hit each backend directly; compare served chains; standardize TLS config across pool members.
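
A minimal loop to compare backends, assuming hypothetical pool member IPs:

cr0x@server:~$ for ip in 10.20.30.41 10.20.30.42; do echo "== $ip"; openssl s_client -connect "$ip:443" -servername repo.example.internal -showcerts </dev/null 2>/dev/null | grep -c 'BEGIN CERTIFICATE'; done

A backend printing 1 is serving only the leaf; healthy pool members should all report the same chain length.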

Checklists / step-by-step plan

Step-by-step plan for a single failing Ubuntu 24.04 host

  1. Confirm time and basic network (timedatectl, DNS resolution, route/proxy env). Fix time first.
  2. Reproduce with curl using -Iv and capture the exact error. If it’s hostname mismatch, stop and fix endpoint/SNI.
  3. Run OpenSSL chain inspection with -showcerts and -servername (as in Task 3). Determine whether intermediates are missing.
  4. If intermediates missing, fix the server/LB to serve fullchain. Retest from the same host.
  5. If issuer is corporate/proxy, install corporate root CA into /usr/local/share/ca-certificates and run update-ca-certificates.
  6. If only one app fails, identify its trust store (Python certifi, Java cacerts, container bundle, embedded CA path).
  7. Lock it down: remove any temporary insecure flags, document the CA distribution method, and add a basic probe check to catch future rotations.

Checklist for fixing the server-side chain (what “proper” looks like)

  • Leaf certificate matches hostname (SAN includes DNS name).
  • Leaf is signed by an intermediate CA (not self-signed, unless private root intentionally used).
  • Server presents leaf + intermediate(s) in correct order.
  • Root CA is not served (usually unnecessary and sometimes discouraged).
  • Verification succeeds from a clean client using OS trust store.

Checklist for client-side trust hygiene (how to avoid repeat incidents)

  • Corporate roots are distributed via config management, not hand-copied.
  • OS store is updated with update-ca-certificates after changes.
  • Containers rebuild CA bundles on a schedule or during build reliably.
  • App teams know whether they should use OS trust or an app-specific store.
  • Probes validate critical endpoints daily from representative environments (host + container + CI runner).
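
A minimal probe sketch to wire into cron or CI, assuming hypothetical endpoints; here -verify_return_error makes openssl exit non-zero when the chain doesn’t verify against the system store:

#!/usr/bin/env bash
# tls-probe.sh: alert when TLS verification breaks from this vantage point
set -u
status=0
for host in repo.example.internal registry.corp.internal; do
  if ! openssl s_client -connect "${host}:443" -servername "${host}" -verify_return_error </dev/null >/dev/null 2>&1; then
    echo "FAIL: ${host} does not verify against the system trust store" >&2
    status=1
  fi
done
exit "${status}"

Run it from every representative environment (host, container, CI runner), because trust stores drift independently.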

FAQ

1) Is reinstalling ca-certificates always the fix?

No. It fixes broken local bundle generation and stale packages, but it won’t fix a server that fails to send intermediates.
Use openssl s_client -showcerts to decide whether the problem is server chain or client trust.

2) Should I add the intermediate certificate to my client trust store?

Usually no. Intermediates are not trust anchors; they are part of the chain the server should send.
Adding intermediates to trust stores “works”, but it spreads a server misconfiguration into every client and container you own.

3) Why does it work on my laptop but fail on servers?

Cached intermediates, different trust stores, different proxies, or different DNS/SNI behavior.
Servers are often stricter and more isolated, which is exactly why they fail first.

4) Where do I put a corporate root CA on Ubuntu 24.04?

Put a PEM-encoded cert in /usr/local/share/ca-certificates/ with a .crt extension, then run sudo update-ca-certificates.
Verify it’s included by checking tools like curl again.

5) apt fails with certificate errors. Can I use [trusted=yes] in sources.list?

No. [trusted=yes] disables APT’s GPG signature checks for that repository; it does nothing to fix TLS verification, and it’s a security downgrade either way.
Fix the CA trust or the server chain. Only consider insecure shortcuts for short-lived break-glass scenarios, and remove them quickly.

6) My Python app fails but system curl works. Why?

Many Python environments use certifi’s CA bundle instead of the OS trust store.
Confirm with python3 -c "import certifi; print(certifi.where())" and decide whether to point requests at system CAs or update the bundle.

7) How can I tell if a proxy is intercepting TLS?

Inspect the issuer of a certificate when connecting to a known public site.
If the issuer is an internal corporate CA, you’re being intercepted. That’s not automatically evil; it’s just a trust distribution requirement.

8) Do I need to restart services after updating CAs?

Often yes for long-running processes. Many programs read CA bundles at startup and keep them in memory.
For system daemons and application servers, plan a restart or rolling deploy after CA updates.

9) What’s the difference between “unable to get local issuer certificate” and “self-signed certificate”?

“Unable to get local issuer” usually means missing intermediate or missing trusted root.
“Self-signed” often means the chain includes a cert that isn’t trusted as a root, or you’re hitting a proxy/mitm certificate without trusting its root.

10) How do I prevent this class of outage?

Fix server chains correctly, centralize CA distribution, avoid frozen CA bundles in images, and add a small daily TLS probe to critical endpoints.
The secret is consistency, not heroics.

Conclusion: next steps that actually stick

Ubuntu 24.04 isn’t being difficult when it says “cert verify failed.” It’s being accurate.
The fastest durable fix is almost always one of two things: serve the correct intermediate chain on the server, or install the correct root CA on the client.
Anything else is a detour that turns into policy, and policy turns into pain.

Next steps that pay rent:

  • Run the fast diagnosis playbook on one failing host and capture the OpenSSL chain output.
  • Fix server-side fullchain wherever you control the endpoint; treat it as the primary remediation.
  • Standardize corporate CA installation via /usr/local/share/ca-certificates + update-ca-certificates.
  • Audit app-specific trust stores (Java, Python certifi, containers) and pick a standard approach.
  • Add a simple probe that verifies TLS from representative environments, so you find the next break before your users do.

Do the chain right once. Future you will be slightly less tired, which is the closest thing operations has to a victory parade.
