You run curl on Ubuntu 24.04, it spits out SSL_connect, handshake failure, or the ever-helpful curl: (35), and your pipeline stalls. Someone suggests “it’s the network.” Someone else blames “OpenSSL being weird.” Meanwhile, you just need the command to work because prod isn’t going to deploy itself.
This is the field guide for that moment: fast checks first, then deeper diagnosis when the obvious stuff (time, trust, SNI) isn’t obvious anymore. It’s opinionated because your incident channel deserves fewer theories and more decisions.
Fast diagnosis playbook (what to check first/second/third)
If you only have five minutes and a pager that’s already angry, do this in order. This sequence is optimized for “most common” and “most destructive” causes on Ubuntu 24.04 in corporate networks.
First: Time + DNS + basic reachability (cheap checks, high payoff)
- Check system time and NTP sync. If time is wrong, TLS is wrong. Don’t negotiate with cryptography.
- Confirm you’re resolving the hostname you think you are. Wrong IP means wrong certificate, which looks like “TLS failure.”
- Confirm you’re hitting the right port and it’s actually TLS. A proxy, load balancer, or service mesh can put plain HTTP on 443 and call it “innovative.”
Second: Trust chain and CA store (the most common “it worked yesterday” failure)
- Verify the CA bundle exists and is current. On Ubuntu, that’s usually the ca-certificates package plus the generated bundle files.
- Check whether your company inserts a TLS inspection proxy. If so, the “real” issuer is your corporate root, not Let’s Encrypt / DigiCert / etc.
- Look for expired intermediates. These show up as “unable to get local issuer certificate” and ruin your morning.
Third: SNI/ALPN/TLS version (where “works in browser” lies to you)
- Test SNI explicitly. Without SNI, many servers send a default certificate that doesn’t match your hostname.
- Test ALPN negotiation (HTTP/2 vs HTTP/1.1). Some middleboxes fail on HTTP/2 and you get a handshake that dies mid-flight.
- Force TLS 1.2 or TLS 1.3 to isolate compatibility issues. This is a diagnostic move, not a permanent “solution.”
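If you want that sequence as a single read-only pass, here’s a minimal sketch. The hostname is a placeholder, the script only inspects state, and it’s a starting point rather than a monitoring tool:
#!/usr/bin/env bash
# tls-triage.sh -- read-only first pass; HOST is a placeholder endpoint.
set -u
HOST="api.example.com"
echo "== time =="
timedatectl | grep -E 'Local time|System clock synchronized|NTP service'
echo "== dns =="
getent ahosts "$HOST" | awk '{print $1}' | sort -u
echo "== trust store =="
ls -l /etc/ssl/certs/ca-certificates.crt 2>/dev/null || echo "system CA bundle missing"
echo "== proxy env =="
env | grep -iE 'https?_proxy|no_proxy' || echo "no proxy variables set"
echo "== certificate presented with SNI =="
echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates 2>/dev/null \
  || echo "no certificate retrieved -- wrong port, non-TLS listener, or blocked path"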
One quote worth keeping on your monitor:
Werner Vogels (paraphrased idea): “Everything fails all the time; design and operate assuming it will.”
Interesting facts and short history (why this keeps happening)
- SNI exists because IPs got expensive. As hosting consolidated, multiple domains needed to share one IP, but TLS needed to know which cert to present. SNI (Server Name Indication) solved that by putting the hostname in the ClientHello.
- TLS 1.3 changed the handshake shape. It removed legacy ciphers and altered negotiation details; some older proxies “support TLS” but not the reality of TLS 1.3 traffic patterns.
- ALPN is why HTTP/2 usually “just works.” The client and server agree on h2 vs http/1.1 during TLS negotiation; when ALPN breaks, you can see handshake weirdness, not just slower transfers.
- “CA trust store” is not one thing. Ubuntu maintains system trust via /etc/ssl/certs and generated bundles; applications sometimes ship their own bundle, causing delightful inconsistencies.
- Certificate chains are often “served” incorrectly. Servers must send intermediates; many misconfigs rely on clients having cached intermediates from prior visits, which is why “works in browser” can mislead.
- Clock skew has been a TLS problem since day one. Validity windows are simple and brutal: too early or too late and verification fails, regardless of your feelings.
- Let’s Encrypt pushed automation into the mainstream. Great for ops velocity. Also great for discovering which appliances can’t handle modern chains at 03:00.
- cURL error numbers are stable folklore. “(35) SSL connect error” is a catch-all that hides dozens of distinct handshake failures. You need verbose output to stop guessing.
Joke #1: TLS is like a VIP list—if your clock is wrong, you’re either not born yet or already dead. Neither gets you into the club.
A practical mental model of a curl TLS handshake
When curl https://api.example.com fails on Ubuntu 24.04, you’re usually failing one of four gates:
Gate 1: You reached the right peer
TCP connect succeeded and you are speaking to the system you intended. Failures here show up as timeouts, refused connections, or a TLS handshake that looks like nonsense because you didn’t actually hit a TLS endpoint.
Gate 2: You negotiated a common protocol
Client and server must agree on TLS version and cipher suite. With TLS 1.3, the “cipher suite list” semantics differ from TLS 1.2. Middleboxes that do traffic inspection sometimes mishandle this negotiation, and you’ll see alerts like protocol_version or handshake_failure.
Gate 3: You verified identity (SNI + certificate + SAN + chain)
The server presents a certificate chain. The hostname you requested must match a Subject Alternative Name entry. The chain must lead to a trusted root in your local trust store. If SNI is absent or wrong, you might get the wrong certificate entirely, and everything after that is just the client doing its job by refusing to proceed.
Gate 4: You verified time validity
The certificate must be valid now. This is where NTP being slightly broken becomes a full outage. Certificates are not forgiving; they don’t care that the VM resumed from a paused state and the clock woke up in last Tuesday.
Operational rule: Don’t “fix” TLS handshake failures by disabling verification (-k) unless you’re diagnosing in a safe sandbox. In production, it’s the security equivalent of pulling the smoke detector battery because it’s loud.
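The gates translate directly into commands that stop at the first failure. A minimal sketch, assuming a placeholder endpoint and the system trust store; it’s illustrative, not an exhaustive prober:
#!/usr/bin/env bash
# tls-gates.sh -- walk the four gates in order; exit at the first one that fails.
set -u
HOST="api.example.com"; PORT=443          # placeholders
OUT=$(mktemp); trap 'rm -f "$OUT"' EXIT

# Gate 1: right peer -- can we open a TCP connection at all?
timeout 5 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null \
  || { echo "gate 1 failed: no TCP connection to $HOST:$PORT"; exit 1; }

# Gate 2: common protocol -- does a TLS handshake complete?
echo | timeout 5 openssl s_client -connect "$HOST:$PORT" -servername "$HOST" >"$OUT" 2>&1 \
  || { echo "gate 2 failed: handshake did not complete"; exit 1; }

# Gate 3: identity -- does the chain verify against the system trust store?
grep -q 'Verify return code: 0 (ok)' "$OUT" \
  || { echo "gate 3 failed: $(grep 'Verify return code' "$OUT")"; exit 1; }

# Gate 4: time validity -- is the leaf already expired? (-checkend tests expiry only)
echo | openssl s_client -connect "$HOST:$PORT" -servername "$HOST" 2>/dev/null \
  | openssl x509 -noout -checkend 0 >/dev/null \
  || { echo "gate 4 failed: certificate expired (or your clock says so)"; exit 1; }

echo "all four gates passed"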
Hands-on tasks: commands, outputs, and decisions (12+)
These are real tasks you can run on Ubuntu 24.04. Each one includes: a command, an example output, what it means, and what decision you make next.
Task 1: Capture the real curl error with full TLS verbosity
cr0x@server:~$ curl -v https://api.example.com/health
* Trying 203.0.113.10:443...
* Connected to api.example.com (203.0.113.10) port 443 (#0)
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, handshake failure (552):
* OpenSSL SSL_connect: SSL_ERROR_SSL in connection to api.example.com:443
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SSL in connection to api.example.com:443
Meaning: TCP connect worked; TLS negotiation failed. The alert came from the peer (or a middlebox acting as peer).
Decision: Move to protocol/SNI/CA/time checks. Don’t waste time on “is the host reachable.” It is.
Task 2: Confirm system time and NTP state
cr0x@server:~$ timedatectl
Local time: Mon 2025-12-29 09:18:44 UTC
Universal time: Mon 2025-12-29 09:18:44 UTC
RTC time: Mon 2025-12-29 09:18:43
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
Meaning: Clock is synchronized. Time probably isn’t the culprit.
Decision: If System clock synchronized: no or time is off, fix NTP before anything else. If it’s correct, proceed.
Task 3: Check NTP sources (spot “synced to nonsense”)
cr0x@server:~$ chronyc sources -v
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* ntp1.corp.local 2 6 377 22 -153us[-401us] +/- 12ms
^+ ntp2.corp.local 2 6 377 19 +211us[ +19us] +/- 15ms
Meaning: You’re synced to sane sources with tiny offsets.
Decision: If you see a local clock source (#) or huge offsets, fix time first. TLS debugging with bad time is self-harm.
Task 4: Validate DNS resolution matches expectation
cr0x@server:~$ getent ahosts api.example.com
203.0.113.10 STREAM api.example.com
203.0.113.10 DGRAM api.example.com
203.0.113.10 RAW api.example.com
Meaning: Name resolves to one IP. Great—unless that IP is wrong for your environment.
Decision: If this IP is unexpected (wrong region/VIP), investigate split-horizon DNS, resolv.conf, or VPN DNS push.
Task 5: Verify the route isn’t doing something surprising
cr0x@server:~$ ip route get 203.0.113.10
203.0.113.10 via 192.0.2.1 dev eth0 src 192.0.2.55 uid 1000
cache
Meaning: Traffic goes via your expected gateway.
Decision: If it routes over a tunnel or unexpected interface, suspect proxies/VPNs and test from a clean network path.
Task 6: Confirm you’re actually speaking TLS on that port
cr0x@server:~$ timeout 5 openssl s_client -connect api.example.com:443 -brief
40F7E0A5D37F0000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:358:
Meaning: “Wrong version number” often means you hit a non-TLS service (plain HTTP on 443) or a proxy expecting CONNECT.
Decision: Check proxy variables, load balancer listeners, and whether you should be using https_proxy with CONNECT.
Task 7: Test SNI explicitly (the “right cert” test)
cr0x@server:~$ openssl s_client -connect 203.0.113.10:443 -servername api.example.com -showcerts </dev/null | head -n 20
CONNECTED(00000003)
---
Certificate chain
0 s:CN = api.example.com
i:C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
-----BEGIN CERTIFICATE-----
MIIF...
-----END CERTIFICATE-----
---
Server certificate
subject=CN = api.example.com
issuer=C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
Meaning: With SNI, you got a certificate for api.example.com. Good sign.
Decision: If the subject is some default cert (or unrelated hostname), fix SNI usage or verify your DNS/VIP maps to the right backend.
Task 8: See why verification fails (chain, hostname, expiry)
cr0x@server:~$ openssl s_client -connect api.example.com:443 -servername api.example.com -verify_return_error </dev/null
...
depth=0 CN = api.example.com
verify error:num=20:unable to get local issuer certificate
Verification error: unable to get local issuer certificate
Verify return code: 20 (unable to get local issuer certificate)
Meaning: The chain can’t be built to a trusted root locally. Either your CA store is missing the issuer, or a proxy is swapping certs to an untrusted corporate root.
Decision: Inspect the issuer and compare with your trust store. If you’re in a corporate network, confirm corporate root CA installation.
Task 9: Identify the issuer and SANs quickly
cr0x@server:~$ echo | openssl s_client -connect api.example.com:443 -servername api.example.com 2>/dev/null | openssl x509 -noout -issuer -subject -dates -ext subjectAltName
issuer=C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
subject=CN = api.example.com
notBefore=Dec 1 00:00:00 2025 GMT
notAfter=Mar 1 23:59:59 2026 GMT
X509v3 Subject Alternative Name:
DNS:api.example.com, DNS:api-internal.example.com
Meaning: Hostname is present in SAN; validity window looks fine.
Decision: If SAN doesn’t include your hostname, this is a server-side cert issue (or you’re using the wrong hostname). Fix the endpoint, not curl.
Task 10: Verify CA bundle and update it (system trust)
cr0x@server:~$ dpkg -l | grep -E '^ii\s+ca-certificates\s'
ii ca-certificates 20240203 all Common CA certificates
Meaning: CA bundle package is installed.
Decision: If missing, install it. If installed but stale, update and regenerate.
cr0x@server:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Get:2 http://security.ubuntu.com/ubuntu noble-security InRelease [110 kB]
Fetched 110 kB in 1s (168 kB/s)
Reading package lists... Done
cr0x@server:~$ sudo apt-get install --only-upgrade ca-certificates
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20240203).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
cr0x@server:~$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Meaning: Your system CA store is consistent.
Decision: If certificates were added/removed, re-test curl. If still failing, suspect corporate MITM CA not installed or app-specific CA bundle use.
Task 11: Detect proxy settings that silently change the handshake
cr0x@server:~$ env | grep -iE 'https?_proxy|no_proxy'
https_proxy=http://proxy.corp.local:3128
http_proxy=http://proxy.corp.local:3128
no_proxy=localhost,127.0.0.1,.corp.local
Meaning: curl will likely tunnel through a proxy, which can fail CONNECT, intercept TLS, or require auth.
Decision: If the target should be reached directly, unset proxy vars for the test or add hostname to no_proxy.
cr0x@server:~$ curl -v --noproxy api.example.com https://api.example.com/health
* Trying 203.0.113.10:443...
* Connected to api.example.com (203.0.113.10) port 443 (#0)
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
> GET /health HTTP/1.1
< HTTP/1.1 200 OK
ok
Meaning: Direct works; proxy path was the problem.
Decision: Fix proxy configuration (CONNECT allowlist, auth, or corporate CA). Don’t “solve” by disabling TLS verification.
Task 12: Force HTTP/1.1 to rule out ALPN/HTTP2 weirdness
cr0x@server:~$ curl -v --http1.1 https://api.example.com/health
* ALPN: curl offers http/1.1
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
> GET /health HTTP/1.1
< HTTP/1.1 200 OK
ok
Meaning: HTTP/1.1 works. If --http2 fails, you’ve got an HTTP/2 or ALPN intolerance in the path (often a proxy or old load balancer config).
Decision: Short-term: pin to HTTP/1.1 for that environment. Long-term: fix the middlebox or server ALPN config.
Task 13: Force TLS version to isolate compatibility issues
cr0x@server:~$ curl -v --tlsv1.2 https://api.example.com/health
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
< HTTP/1.1 200 OK
ok
Meaning: TLS 1.2 works. If TLS 1.3 fails, something in the path is allergic to TLS 1.3.
Decision: If you control the server, enable/patch TLS 1.3 properly. If you don’t, consider pinning TLS 1.2 as a temporary mitigation and file the proper ticket to fix the path.
Task 14: See exactly which CA file curl uses
cr0x@server:~$ curl -v https://api.example.com/health 2>&1 | grep -iE 'CAfile|CApath'
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
Meaning: curl is using the system CA bundle as expected.
Decision: If it points somewhere else (custom bundle), align it with system trust or update that bundle specifically.
Task 15: Validate a corporate root CA is installed (when TLS inspection is present)
cr0x@server:~$ ls -l /usr/local/share/ca-certificates
total 4
-rw-r--r-- 1 root root 1874 Dec 28 12:10 corp-root-ca.crt
cr0x@server:~$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Meaning: Your corporate root CA is now part of system trust.
Decision: Re-test the failing curl command. If it now works, your root cause was “untrusted MITM,” not “broken Internet.”
Task 16: Confirm the server returns the full certificate chain
cr0x@server:~$ openssl s_client -connect api.example.com:443 -servername api.example.com -showcerts </dev/null | awk '/BEGIN CERTIFICATE/{c++} END{print c}'
1
Meaning: Only one certificate was sent. That’s usually wrong unless the leaf is directly signed by a root (rare for public sites).
Decision: Fix server configuration to send intermediates. Don’t “fix” clients by shipping missing intermediates around like contraband.
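If you want this enforced before a TLS config ever reaches production (the first mini-story below does exactly that), a minimal CI sketch could look like this; the default host and the threshold of two certificates are assumptions to adapt:
#!/usr/bin/env bash
# chain-check.sh -- fail the pipeline if the endpoint serves fewer than 2 certificates
# (leaf plus at least one intermediate). Host/port defaults are placeholders.
set -euo pipefail
HOST="${1:-api.example.com}"; PORT="${2:-443}"

count=$(echo | openssl s_client -connect "$HOST:$PORT" -servername "$HOST" -showcerts 2>/dev/null \
  | grep -c 'BEGIN CERTIFICATE' || true)

if [ "${count:-0}" -lt 2 ]; then
  echo "FAIL: $HOST:$PORT presented $count certificate(s); intermediates are likely missing" >&2
  exit 1
fi
echo "OK: $HOST:$PORT presented $count certificates"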
Joke #2: If you ever “fix” TLS by copying random cert files from a coworker’s laptop, congratulations—you’ve reinvented malware distribution with better documentation.
Checklists / step-by-step plan (quick fixes that stick)
Checklist A: When curl fails with (35) or “handshake failure”
- Run curl -v and capture the first TLS error line and any CAfile/CApath lines.
- Check time with timedatectl. If not synchronized, fix NTP and retest.
- Check DNS with getent ahosts. If it resolves unexpectedly, fix resolver/VPN/split DNS.
- Check proxies by printing https_proxy/no_proxy. Retest using --noproxy where appropriate.
- Test SNI with openssl s_client -servername. Confirm the leaf cert subject and SAN contain your hostname.
- Check verification using -verify_return_error. If the issuer is untrusted, update the CA store or install the corporate root.
- Isolate ALPN/TLS version with --http1.1 and --tlsv1.2. If forcing works, you’ve found a compatibility fault.
Checklist B: Permanent fixes you should prefer (instead of “curl -k”)
- Time: keep NTP on, and monitor drift on VMs that suspend/resume. Fix the platform, not the symptom.
- Trust: manage CA roots via configuration management. If you need a corporate root, install it once and rotate it deliberately.
- SNI: always use correct hostnames; don’t curl by IP unless you also control --resolve/--connect-to and understand cert validation.
- Proxies: explicitly define no_proxy for internal names. Don’t let environment variables surprise you in systemd services and CI jobs (see the drop-in sketch after this list).
- Server hygiene: serve full chains, use modern ciphers, and test with OpenSSL from the same client OS your automation uses.
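For the systemd part of that, a drop-in keeps proxy variables explicit and reviewable instead of inherited by accident. A sketch; the unit name deploy-agent and the proxy host are hypothetical placeholders:
sudo mkdir -p /etc/systemd/system/deploy-agent.service.d
sudo tee /etc/systemd/system/deploy-agent.service.d/proxy.conf >/dev/null <<'EOF'
[Service]
Environment="https_proxy=http://proxy.corp.local:3128"
Environment="http_proxy=http://proxy.corp.local:3128"
Environment="no_proxy=localhost,127.0.0.1,.corp.local"
EOF
sudo systemctl daemon-reload
sudo systemctl restart deploy-agent
Now the service’s proxy behavior lives in one versionable file instead of whatever shell profile happened to export it.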
Checklist C: When you suspect an enterprise TLS inspection proxy
- Confirm proxy variables are set (or enforced by network policy).
- Fetch and inspect the certificate issuer in the failing path. If issuer looks “corporate,” you need the corporate root CA on the host.
- Install the corporate root CA to /usr/local/share/ca-certificates and run update-ca-certificates.
- Retest curl. If it works, update your golden images and CI runners so you don’t rediscover this next quarter.
Common mistakes: symptom → root cause → fix
1) Symptom: “certificate has expired” but the site is fine in a browser
Root cause: System time wrong (often VM resumed) or the client is seeing a different certificate chain (proxy interception) than the browser.
Fix: timedatectl + fix NTP; then inspect issuer with openssl s_client. If corporate issuer, install corporate root CA.
2) Symptom: “unable to get local issuer certificate” / “self-signed certificate in certificate chain”
Root cause: Missing CA root/intermediate locally, or TLS inspection proxy presenting a corporate-issued leaf without the corporate root installed.
Fix: Update ca-certificates; install required root CA under /usr/local/share/ca-certificates; run update-ca-certificates; retest.
3) Symptom: “wrong version number” from OpenSSL / curl
Root cause: You’re not speaking TLS to a TLS port. Common cases: hitting an HTTP endpoint on 443, proxy expects CONNECT, or load balancer listener mismatch.
Fix: Check proxy vars; use curl -v and look for proxy CONNECT; verify server listener config; test direct path with --noproxy.
4) Symptom: Works with --tlsv1.2 but fails with default
Root cause: TLS 1.3 intolerance in server/middlebox, or buggy inspection/IDS device.
Fix: Escalate to network/security teams to patch or bypass the device. Temporary mitigation: pin TLS 1.2 for that environment only (document it and put an expiry on the mitigation).
5) Symptom: Works with --http1.1 but fails normally
Root cause: ALPN/HTTP2 mishandling in proxy or load balancer.
Fix: Force HTTP/1.1 in the client as a stopgap; fix ALPN support in the path. Don’t globally disable HTTP/2 unless you enjoy slow regressions that nobody correlates.
6) Symptom: Handshake fails only for one hostname on the same IP
Root cause: SNI mismatch or wrong virtual host routing. Without correct SNI, server presents default cert.
Fix: Ensure clients use the hostname, not raw IP. Confirm with openssl s_client -servername. Fix server vhost configuration if needed.
7) Symptom: Works on one Ubuntu host but not another
Root cause: Different CA roots installed, different proxy environment, different OpenSSL/curl build features, or one host has stale CA store.
Fix: Compare curl -V, dpkg -l ca-certificates, proxy env, and update-ca-certificates output. Make your baseline consistent.
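A quick way to make that comparison mechanical: snapshot the same facts on both hosts and diff the files. A minimal sketch; nothing about it is specific to your environment except the paths it reads:
#!/usr/bin/env bash
# tls-baseline.sh -- capture the client-side facts that usually differ between hosts.
{
  echo "### curl build";       curl -V
  echo "### ca-certificates";  dpkg -l ca-certificates 2>/dev/null | tail -n 1
  echo "### system bundle";    ls -l /etc/ssl/certs/ca-certificates.crt 2>/dev/null
  echo "### local CAs";        ls /usr/local/share/ca-certificates/ 2>/dev/null
  echo "### proxy env";        env | grep -iE 'https?_proxy|no_proxy' | sort
  echo "### time sync";        timedatectl | grep -E 'synchronized|NTP service'
} > "tls-baseline.$(hostname).txt"
echo "wrote tls-baseline.$(hostname).txt -- diff it against the other host's copy"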
Three corporate mini-stories (anonymized, technically accurate)
Incident caused by a wrong assumption: “It’s just a cert renewal, harmless”
A team rotated a public-facing certificate on an API gateway late on a Friday. The change ticket said “renewal only,” and the on-call approved it because the CN stayed the same and the CA was reputable. Everything looked routine, and the smoke tests passed in a browser.
Within minutes, the Linux automation fleet started failing: curl-based health checks, package fetches from a private repo, and an internal deployment tool that used libcurl. The error was classic: unable to get local issuer certificate. Engineers assumed the CA bundle on clients was outdated. They started rolling apt-get install --reinstall ca-certificates across machines like it was a vaccine.
The real issue: the gateway was misconfigured to serve only the leaf certificate, not the intermediate. Browsers often cached intermediates, so it “worked on my laptop.” The headless clients did not have that intermediate cached and refused the chain.
The fix was boring and server-side: update the gateway TLS configuration to present the full chain. The lesson was sharper: “cert renewal” isn’t a semantic label; it’s a config change that can break chain presentation. Afterward, they added an OpenSSL-based chain count check in CI for the TLS config before promotion.
Optimization that backfired: “Let’s force HTTP/2 everywhere”
An enterprise platform group decided to standardize on HTTP/2 for internal service calls. The idea wasn’t bad: multiplexing reduces connections, improves latency under load, and plays well with modern TLS stacks. They pushed a change to a shared curl wrapper used in jobs and build agents: default to --http2 and celebrate.
Within a week, one region started seeing intermittent TLS handshake failures. Not timeouts—actual handshakes dying with alerts. Retries sometimes worked, sometimes didn’t. Everyone blamed “packet loss,” because that’s what we say when we don’t want to learn how ALPN works.
The culprit was a proxy cluster doing TLS interception and policy enforcement. It nominally supported HTTP/2 but had a configuration quirk: certain backend SNI values triggered a legacy path that couldn’t handle the h2 ALPN selection reliably. When the client offered h2, the proxy occasionally negotiated it and then broke the stream establishment, reporting a generic handshake failure upstream.
The practical mitigation was to force HTTP/1.1 for the affected domains via per-host configuration, not globally. The durable fix required the network team to patch and reconfigure the proxy. The postmortem action item was blunt: “Protocol upgrades are production changes. Roll them out like any other change, with canaries, metrics, and a rollback.”
Boring but correct practice that saved the day: “Golden images with managed trust”
A different org ran a mixed environment: public cloud, on-prem, and a handful of regulated networks with mandatory TLS inspection. They’d been burned before by “works on dev, fails in prod” handshake errors. So they did something unsexy: a single baseline for CA trust and proxy configuration, applied to every Ubuntu image and CI runner.
When Ubuntu 24.04 rolled out, they already had a pipeline step that validated TLS connectivity to critical endpoints using openssl s_client -servername and curl -v with known-good expectations: correct issuer, correct SAN, and system time sync. They also pinned where proxy variables could be set (systemd drop-ins for services, explicit CI environment configuration) rather than letting random shell profiles do it.
One morning, a security team rotated the corporate root CA used for inspection. That rotation can be a bloodbath if you discover it host-by-host. They didn’t. The new CA was already staged in the images and deployed via config management before the rotation completed, with monitoring on certificate verification errors to catch stragglers.
The save wasn’t magic; it was habit. The “boring” practice—managed trust store, reproducible images, and preflight TLS checks—turned what could have been a multi-team incident into a non-event plus one tidy change record.
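A preflight of that shape needs only a few assertions. A sketch under assumptions: the endpoint, and the issuer substring you expect in that environment, are placeholders you would set per image or pipeline:
#!/usr/bin/env bash
# tls-preflight.sh -- assert the things that actually break: time sync, issuer, SAN.
set -u
HOST="api.example.com"                  # placeholder endpoint
EXPECTED_ISSUER="DigiCert"              # placeholder substring expected in the issuer
fail=0

timedatectl show -p NTPSynchronized --value | grep -qx yes \
  || { echo "preflight: clock not synchronized"; fail=1; }

cert=$(echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null | openssl x509 2>/dev/null)

echo "$cert" | openssl x509 -noout -issuer | grep -q "$EXPECTED_ISSUER" \
  || { echo "preflight: unexpected issuer -- interception or wrong VIP?"; fail=1; }

echo "$cert" | openssl x509 -noout -ext subjectAltName | grep -q "DNS:$HOST" \
  || { echo "preflight: $HOST missing from SAN"; fail=1; }

exit "$fail"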
FAQ
1) Why does curl fail but my browser works?
Browsers cache intermediates, ship their own trust logic, and may use different proxy settings. Curl typically relies on system CA store and environment variables. Test the chain with openssl s_client -showcerts and count certs; verify what issuer you’re actually seeing.
2) Is curl -k ever acceptable?
As a temporary diagnostic on a non-production host to confirm “it’s a trust issue,” sure. As a fix, no. If you need to skip verification, you’re not using TLS for security—you’re using it for vibes.
3) How do I tell if SNI is the problem?
If connecting by IP works only with -servername (OpenSSL) or if you see a default/unrelated certificate without SNI, that’s your answer. Run openssl s_client -connect IP:443 -servername hostname and compare with the result without -servername.
4) What’s the fastest way to confirm a corporate TLS inspection proxy?
Check proxy environment variables, then inspect the issuer of the presented certificate. If the issuer is your company (or a security appliance CA) rather than a public CA, you’re being intercepted. Install the corporate root CA into system trust.
5) Why does forcing TLS 1.2 “fix” it?
It doesn’t fix it; it sidesteps incompatibility. Some middleboxes mishandle TLS 1.3. Forcing TLS 1.2 is a diagnostic and a stopgap while you get the path upgraded.
6) What files does Ubuntu 24.04 use for trusted CAs?
Typically /etc/ssl/certs/ca-certificates.crt (bundle) and the hashed cert directory /etc/ssl/certs. You add local CAs to /usr/local/share/ca-certificates and run update-ca-certificates.
7) Can DNS problems look like TLS handshake failures?
Absolutely. If you resolve a hostname to the wrong IP (wrong VIP, wrong region, captive portal, misrouted split DNS), the server will present a certificate that doesn’t match your hostname or will terminate TLS differently. Always verify resolution and routing early.
8) How do I debug when a proxy requires authentication?
With curl -v, look for proxy CONNECT responses (407 Proxy Authentication Required). Either configure proxy credentials securely (not in shell history), or ensure no_proxy excludes the internal destination.
9) What if the server doesn’t send intermediates?
Fix the server. Serving a complete chain is the server’s job. Client-side hacks (shipping intermediates) create inconsistent behavior and brittle automation.
10) Why do containers fail when the host works?
Containers may have a different CA bundle, no CA bundle at all, or different time sync behavior. Ensure the container image includes ca-certificates and that it uses the host’s correct time (most do, unless you’re doing something exotic).
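Two read-only checks inside the container usually settle it; if the bundle is missing, the fix is installing ca-certificates in the image. A minimal sketch (the container name is a placeholder):
# run inside the container (e.g. docker exec -it <container> bash); read-only checks
dpkg -l ca-certificates 2>/dev/null | grep '^ii' || echo "ca-certificates not installed in this image"
ls -l /etc/ssl/certs/ca-certificates.crt 2>/dev/null || echo "no system CA bundle -- curl has nothing to trust"
date -u    # compare with the host clock; containers share it unless you're doing something exotic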
Conclusion: next steps you can actually do today
If curl TLS handshakes are failing on Ubuntu 24.04, stop treating it like a mystery. Run the fast playbook:
- Confirm time sync (timedatectl, chronyc sources).
- Confirm DNS and path (getent ahosts, ip route get).
- Check proxy involvement (env | grep -i proxy, then curl --noproxy).
- Verify SNI and chain (openssl s_client -servername, inspect SAN/issuer/dates).
- Only then isolate protocol quirks (force HTTP/1.1, TLS 1.2) and escalate to the right team with evidence.
Then do the grown-up follow-through: standardize CA trust in images, keep NTP healthy, and treat protocol “optimizations” like production changes. Your future self will still be on call. Make their night quieter.