Ubuntu 24.04 APT is slow: cache/proxy tricks that speed up office updates

If you run Ubuntu desktops or servers in an office, you already know the particular flavor of pain: 50 machines all deciding to update at 9:02 AM, saturating the same Internet link, and then “APT is slow” becomes a helpdesk ticket category.

This isn’t mystical. It’s usually a boring bottleneck—DNS, proxy behavior, mirror selection, TLS overhead, or your WAN link being treated like an all-you-can-eat buffet. The fix is equally boring and wonderfully effective: caching, sane proxying, and a fast diagnosis routine that tells you what’s actually slow.

Fast diagnosis playbook

When people say “APT is slow,” they might mean any of these:

  • Metadata fetch is slow (apt update hangs at “Connecting”)
  • Package downloads are slow (apt upgrade crawls)
  • Unpack/configure is slow (CPU/disk constrained, nothing to do with network)
  • Everything is slow because a proxy is “helping”

First: decide if it’s network, mirror, or local I/O

  1. Measure download speed to the mirror from one client and from one “known good” network host. If both are bad, it’s WAN/mirror/proxy. If only clients are bad, it’s client/proxy/DNS.
  2. Check DNS latency. Slow DNS makes APT look like it’s stuck, because it is—waiting.
  3. Separate fetch vs install by downloading only (apt-get -d) and timing unpack separately.
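
A minimal sketch of steps 1 and 3, assuming pending upgrades exist and using the stock archive host (substitute your mirror):

# Rough latency/throughput probe to the mirror (the Release file is small;
# for a real throughput number, fetch a large .deb you know exists under pool/)
curl -o /dev/null -s -w 'dns:%{time_namelookup}s total:%{time_total}s speed:%{speed_download}B/s\n' http://archive.ubuntu.com/ubuntu/dists/noble/Release

# Separate fetch from install: download only, then install from the local cache
time sudo apt-get -y -d dist-upgrade
time sudo apt-get -y dist-upgrade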

Second: check proxy behavior

  1. If you’re behind a corporate proxy, verify whether it’s doing TLS interception, content scanning, or “caching” badly.
  2. Confirm APT is actually using the proxy you think it is. Half the time it’s not configured on servers, and the other half it’s configured twice.

Third: decide the fix class

  • Office with 10–300 Ubuntu machines: deploy apt-cacher-ng on the LAN. It’s the sweet spot.
  • Strict compliance + stable package set: consider a full mirror or repository manager (more ops, more control).
  • Single machine on bad Wi‑Fi: pick a better mirror and fix DNS; don’t build a caching empire.

Why APT “feels slow” in offices

APT performance is shaped by a stack of small latencies. In a data center, those latencies are low and predictable. In an office, they’re chaotic: Wi‑Fi retransmits, DNS forwarders, transparent proxies, “security” appliances, and a WAN link shared with video calls and someone syncing a photo library the size of a small moon.

APT also has a specific traffic profile:

  • Many small metadata files during apt update (Release files, index lists).
  • Then fewer, larger package downloads during install/upgrade.
  • Then local disk I/O and CPU during unpack and post-install scripts.

That first phase is where offices suffer. APT may fetch dozens of files across multiple repos. If each HTTPS connection has a slow handshake or each DNS lookup takes 200 ms, it adds up quickly: 40 fetches at 350 ms of combined lookup and handshake overhead each is 14 seconds gone before a single package byte moves. A cache/proxy on the LAN collapses that latency into one fast hop.

Opinionated guidance: If you have more than a handful of Ubuntu systems on the same site, stop letting each one download the same packages from the Internet. That’s not “decentralized.” It’s just waste.

One short joke, as a treat: APT isn’t slow; it’s just patiently waiting for your proxy to finish its thoughts.

Interesting facts and context

  • APT debuted in the late 1990s as a higher-level package tool for Debian, designed to handle dependencies reliably across repositories.
  • “Acquire” is a whole subsystem inside APT; most “APT is slow” complaints are really Acquire config, DNS, and transport issues—not dpkg.
  • APT historically used multiple connections carefully because mirrors and bandwidth used to be scarce; modern networks flipped the constraints, but APT still optimizes for correctness first.
  • Ubuntu’s mirror ecosystem is globally distributed; the “best” mirror for an office can change with ISP peering, not geography.
  • HTTPS became the default expectation over time; it improves integrity and privacy but increases handshake overhead and makes some “transparent caching” appliances useless.
  • Package indexes are compressed (often .gz), which is great for bandwidth but means CPU can matter on tiny systems or overloaded VMs.
  • apt-cacher-ng became popular because it’s low ceremony: it can cache without you running a full mirror and without rewriting repository structures.
  • In many offices, DNS is the hidden bottleneck because internal resolvers forward upstream with filtering, logging, and rate limits.

One paraphrased idea, because it’s worth remembering in operations: Werner Vogels has long pushed the principle that everything fails all the time, so you design for failure. APT slowness is usually a soft failure: not broken enough to page someone, just broken enough to waste everyone’s time.

Practical tasks: commands, outputs, what it means, and the decision

These are field tasks. Run them on one affected client first, then on your prospective cache/proxy host. Don’t guess. Measure.

Task 1: Confirm what part is slow (update vs download vs install)

cr0x@server:~$ time sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Get:2 http://security.ubuntu.com/ubuntu noble-security InRelease [110 kB]
Fetched 110 kB in 8s (13.2 kB/s)
Reading package lists... Done

real    0m9.214s
user    0m0.391s
sys     0m0.103s

What it means: You’re spending seconds fetching tiny metadata. That smells like DNS, proxy handshake, or per-request latency.

Decision: Focus on DNS/proxy/caching, not disk or CPU.

Task 2: Measure only download time for upgrades (skip install time)

cr0x@server:~$ time sudo apt-get -y -d dist-upgrade
Reading package lists... Done
Building dependency tree... Done
The following packages will be upgraded:
  openssl openssh-client
Need to get 4,812 kB of archives.
Get:1 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 openssl amd64 3.0.13-0ubuntu3.2 [1,402 kB]
Get:2 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 openssh-client amd64 1:9.6p1-3ubuntu13.4 [3,410 kB]
Fetched 4,812 kB in 2s (2,406 kB/s)

real    0m3.011s
user    0m0.233s
sys     0m0.116s

What it means: If this is fast but the full upgrade is slow, your bottleneck is local install/unpack/scripts.

Decision: If download is fast, stop blaming mirrors. Look at disk and dpkg locks.

Task 3: Check for dpkg/apt locks and stuck processes

cr0x@server:~$ ps aux | egrep 'apt|dpkg' | head
root        1421  0.0  0.1  18220  4800 ?        Ss   09:01   0:00 /usr/lib/apt/apt.systemd.daily
root        1507  0.2  0.3  56340 12420 ?        S    09:02   0:01 /usr/bin/dpkg --configure -a

What it means: systemd’s daily job may already be updating or configuring packages.

Decision: Don’t fight it. Let it finish, or schedule office updates away from business hours.

Task 4: Verify the configured repositories and transports (HTTP vs HTTPS)

cr0x@server:~$ grep -R --no-filename -E '^(deb|deb-src)\s' /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null | head -n 8
deb http://archive.ubuntu.com/ubuntu noble main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu noble-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu noble-security main restricted universe multiverse

What it means: You’re using HTTP here (common for Ubuntu main repos). HTTPS can be slower through inspection appliances; HTTP can be blocked in some environments.

Decision: Pick the transport that works cleanly with your network policy; caching works best when the path is stable.
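
Note for 24.04: fresh noble installs default to the deb822 format in /etc/apt/sources.list.d/ubuntu.sources, so the grep above can come back empty. A quick sketch that covers the deb822 files too:

grep -h -E '^(URIs|Suites|Components):' /etc/apt/sources.list.d/*.sources 2>/dev/null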

Task 5: Check which mirror you actually hit (and whether it’s being redirected)

cr0x@server:~$ apt-cache policy | sed -n '1,60p'
Package files:
 100 /var/lib/dpkg/status
     release a=now
 500 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 Packages
     release v=24.04,o=Ubuntu,a=noble-updates,n=24.04,l=Ubuntu,c=main,b=amd64
     origin archive.ubuntu.com

What it means: Confirms origin and repo priorities.

Decision: If you later deploy a cache, you’ll validate that origin changes to your cache host.

Task 6: Diagnose DNS latency (often the real culprit)

cr0x@server:~$ resolvectl status | sed -n '1,40p'
Global
       Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Current DNS Server: 10.20.0.10
       DNS Servers: 10.20.0.10 10.20.0.11

What it means: You’re using internal DNS. That’s fine—until it’s slow or filtering aggressively.

Decision: If internal DNS is overloaded, fix it or add a local caching resolver per site.

Task 7: Time DNS lookups for the mirrors

cr0x@server:~$ time getent ahosts archive.ubuntu.com | head -n 3
91.189.91.83    STREAM archive.ubuntu.com
91.189.91.83    DGRAM
91.189.91.83    RAW

real    0m0.215s
user    0m0.004s
sys     0m0.004s

What it means: 215 ms for a lookup is not catastrophic, but multiply by dozens of requests, plus retries, plus proxy lookups, and it gets ugly.

Decision: If you see >100 ms repeatedly inside the LAN, treat DNS as a production dependency and cache it locally.
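
To separate a slow resolver from a cold cache, repeat the lookup a few times against the resolver from Task 6 (10.20.0.10 in this example); the first query may be slow and the rest fast if the resolver caches properly:

for i in 1 2 3; do dig archive.ubuntu.com @10.20.0.10 | grep 'Query time'; done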

Task 8: Measure raw TCP/TLS connection setup time

cr0x@server:~$ curl -I -s -o /dev/null -w 'dns:%{time_namelookup} connect:%{time_connect} tls:%{time_appconnect} ttfb:%{time_starttransfer} total:%{time_total}\n' http://archive.ubuntu.com/ubuntu/
dns:0.021 connect:0.034 tls:0.000 ttfb:0.102 total:0.103

What it means: For HTTP, no TLS. If you run the same against an HTTPS repo and see huge tls, a proxy or inspection box may be involved.

Decision: This latency class is exactly what a LAN cache fixes: without one, every client pays the slow handshake for every object; with one, only the cache pays it.
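
To populate the tls field, run the same probe against an HTTPS endpoint (shown here against the archive host; substitute an HTTPS repo from your own sources if it doesn’t serve TLS in your environment):

curl -I -s -o /dev/null -w 'dns:%{time_namelookup} connect:%{time_connect} tls:%{time_appconnect} ttfb:%{time_starttransfer} total:%{time_total}\n' https://archive.ubuntu.com/ubuntu/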

Task 9: Check MTU issues (classic “works but slow”)

cr0x@server:~$ ip link show dev eth0 | sed -n '1,3p'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff

What it means: MTU 1500 is normal. If you’re on VPN/VXLAN/PPPoE and have mismatched MTU, you can get retransmits and stalls.

Decision: If downloads stall randomly, test path MTU and fix network edge, not APT.
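
A quick path-MTU probe, assuming a standard 1500-byte MTU (1472 bytes of payload plus 28 bytes of ICMP/IP headers):

ping -c 3 -M do -s 1472 archive.ubuntu.com   # "do" forbids fragmentation; errors mean the path MTU is below 1500
tracepath archive.ubuntu.com | tail -n 2     # reports the discovered pmtu on its final line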

Task 10: See if APT is using a proxy (and which one)

cr0x@server:~$ apt-config dump | egrep -i 'Acquire::(http|https)::Proxy' | head
Acquire::http::Proxy "http://proxy.office.local:3128/";
Acquire::https::Proxy "http://proxy.office.local:3128/";

What it means: APT is explicitly configured to use a proxy.

Decision: If that proxy lives at HQ and your office doesn’t, every request pays unnecessary RTT. Put caching closer, or bypass the proxy for Ubuntu repos (a per-host override example appears in the corporate proxy section below).

Task 11: Confirm proxy reachability and whether it’s the choke point

cr0x@server:~$ curl -s -o /dev/null -w 'proxy_connect:%{time_connect} total:%{time_total}\n' http://proxy.office.local:3128/
proxy_connect:0.003 total:0.004

What it means: LAN proxy is reachable quickly. If this takes seconds, your “office proxy” isn’t local or is overloaded.

Decision: Fix proxy locality/capacity before you start rewriting APT configs.

Task 12: Inspect APT’s own retry/timeout behavior (symptoms of loss)

cr0x@server:~$ sudo apt -o Debug::Acquire::http=true update 2>&1 | egrep -i 'Connecting|Waiting|Retrying' | head -n 20
Connecting to archive.ubuntu.com (91.189.91.83)
Waiting for headers
Connecting to security.ubuntu.com (185.125.190.39)
Waiting for headers

What it means: “Waiting for headers” delays indicate server/proxy latency, congestion, or packet loss more than raw bandwidth limits.

Decision: Deploy a cache on the LAN to reduce external round trips; if it persists, investigate WAN loss and proxy inspection overhead.

Task 13: Validate you have enough disk space for caching on the cache host

cr0x@server:~$ df -h /var/cache | tail -n 1
/dev/sda2        200G   38G  153G  20% /var

What it means: Plenty of headroom for a package cache. apt-cacher-ng is effective even with modest storage, but it should not be starved.

Decision: If space is tight, allocate a dedicated volume for cache and set eviction policy.
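
If you do carve out a dedicated volume, the data directory and the expiry window live in /etc/apt-cacher-ng/acng.conf. A minimal sketch (directive names taken from the stock config; verify them against the file your package shipped):

CacheDir: /srv/apt-cacher-ng   # hypothetical dedicated volume
ExThreshold: 4                 # days before unreferenced files become eligible for cleanup

Restart the service after editing: sudo systemctl restart apt-cacher-ng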

Task 14: Confirm your cache is actually serving hits once deployed

cr0x@server:~$ sudo tail -n 8 /var/log/apt-cacher-ng/apt-cacher.log
2025-12-29 10:12:41|O|171|archive.ubuntu.com|ubuntu|pool/main/o/openssl/libssl3_3.0.13-0ubuntu3.2_amd64.deb
2025-12-29 10:13:02|H|612|archive.ubuntu.com|ubuntu|pool/main/o/openssl/libssl3_3.0.13-0ubuntu3.2_amd64.deb

What it means: O is an origin fetch (miss). H is a cache hit. Hits are where the speedup and bandwidth savings happen.

Decision: If you only see origin fetches, your clients are not pointing to the cache, or caching is disabled for the repo path.
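
Given the pipe-separated log format above (fetch flag in the second field), a rough hit/miss tally is one awk away:

awk -F'|' '{n[$2]++} END {for (k in n) print k, n[k]}' /var/log/apt-cacher-ng/apt-cacher.log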

Caching and proxy options (what to deploy)

There are three main strategies for offices. Each one has a place. Two of them are easy to overcomplicate.

1) apt-cacher-ng on the LAN (best default)

What it is: An HTTP proxy that understands Debian/Ubuntu repository patterns and caches packages and index files as clients request them.

Why it works: The second client gets packages at LAN speed. The 50th client barely tickles the WAN.

Tradeoffs: It’s a shared dependency. If you make it fragile, you’ll hear about it. If you run it boringly, nobody will notice—which is the highest compliment in ops.

2) Generic HTTP proxy (Squid or corporate proxy)

What it is: A general proxy that may or may not cache content effectively, depending on HTTPS behavior and cache rules.

When it works: If most of your repo traffic is HTTP and your proxy caching policy is sane.

When it’s pain: HTTPS repos plus interception plus scanning can turn APT into a slow-motion documentary about TLS handshakes.

3) Full local mirror / repo management

What it is: You mirror Ubuntu repositories locally (or run a repository manager that syncs selected content).

When it’s worth it: Big fleets, strict change control, air-gapped-ish networks, or you need deterministic availability during upstream outages.

Downside: More storage, more ops, more “why is this repo not syncing” tickets.

Deploy apt-cacher-ng (recommended for most offices)

This is the office win: one small VM or mini-PC per site, a few config lines, and suddenly updates stop being a weekly network incident.

Server setup on Ubuntu 24.04

cr0x@server:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Reading package lists... Done

Meaning: Baseline is healthy enough to install packages.

cr0x@server:~$ sudo apt install -y apt-cacher-ng
Reading package lists... Done
Building dependency tree... Done
The following NEW packages will be installed:
  apt-cacher-ng
Setting up apt-cacher-ng (3.7.4-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/apt-cacher-ng.service → /usr/lib/systemd/system/apt-cacher-ng.service.

Meaning: Service is installed and enabled.

Decision: Decide where clients will reach it (DNS name), and whether you need it to listen on all interfaces.
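
The listen port and addresses are set in /etc/apt-cacher-ng/acng.conf. A sketch that binds to a single LAN interface instead of all of them (10.20.0.5 is a hypothetical cache-host address):

Port:3142
BindAddress: 10.20.0.5

Restart afterwards: sudo systemctl restart apt-cacher-ng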

Confirm it’s listening

cr0x@server:~$ sudo ss -lntp | egrep ':3142'
LISTEN 0      4096         0.0.0.0:3142      0.0.0.0:*    users:(("apt-cacher-ng",pid=1824,fd=7))

Meaning: Default port 3142 is open on all interfaces. If that’s too permissive for your network, restrict it with firewall rules.

Basic firewall (restrict to office subnets)

cr0x@server:~$ sudo ufw allow from 10.20.0.0/16 to any port 3142 proto tcp
Rule added

Meaning: Only that subnet can use the cache.

Decision: If you have multiple VLANs, add explicit rules per subnet; don’t just open it to the world and hope.

Client configuration: point APT at the cache

On each Ubuntu client, create a small config snippet. Use DNS for the cache host so you can move it later without visiting every machine again.

cr0x@server:~$ echo 'Acquire::http::Proxy "http://aptcache.office.local:3142";' | sudo tee /etc/apt/apt.conf.d/02proxy
Acquire::http::Proxy "http://aptcache.office.local:3142";

Meaning: HTTP acquisitions will use your cache.

Decision: If your sources are HTTPS, you can still proxy HTTPS via an HTTP proxy, but test with your network security stack; some environments block CONNECT.
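
If you go the CONNECT route through apt-cacher-ng, note that it restricts tunneling by default. The PassThroughPattern directive in /etc/apt-cacher-ng/acng.conf controls which hosts may be tunneled; a sketch that permits only the Ubuntu hosts (tunneled HTTPS is passed through, not cached):

PassThroughPattern: (archive|security)\.ubuntu\.com:443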

Validate that clients are hitting the cache

Run update on the client, then check logs on the cache server.

cr0x@server:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Get:2 http://archive.ubuntu.com/ubuntu noble-updates InRelease [126 kB]
Fetched 126 kB in 1s (148 kB/s)
Reading package lists... Done

Meaning: The client output won’t always show the proxy explicitly. The truth is in the cache logs.

cr0x@server:~$ sudo tail -n 5 /var/log/apt-cacher-ng/apt-cacher.log
2025-12-29 11:03:11|O|250|archive.ubuntu.com|ubuntu|dists/noble-updates/InRelease
2025-12-29 11:03:12|O|411|archive.ubuntu.com|ubuntu|dists/noble-updates/main/binary-amd64/Packages.gz
2025-12-29 11:04:02|H|7|archive.ubuntu.com|ubuntu|dists/noble-updates/InRelease

Meaning: First request fetched from origin, second client (or second run) hit cache.

Decision: If you never see hits, your clients may be bypassing the proxy due to existing config, environment variables, or network policy.
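
Two quick checks on a suspect client: what APT itself resolved, and whether environment variables are overriding things (unattended jobs don’t see your login shell’s environment, which is a classic source of split-brain proxy behavior):

apt-config dump | grep -i proxy
env | grep -iE 'https?_proxy|no_proxy'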

Operational hygiene that keeps it fast

  • Put cache data on a disk that isn’t contending with heavy logs or databases.
  • Monitor disk usage and inode usage (many small files happen).
  • Plan for reboots: make sure the service starts quickly and the firewall rules persist.
  • Keep it simple. The cache is not a place to express creativity.

Using a corporate HTTP(S) proxy correctly

Some offices already have a corporate proxy. If it’s local, healthy, and configured to not sabotage package downloads, you may be able to reuse it. If it’s remote (hairpin to HQ), it’s often the reason APT is slow in branch offices.

Decide if you should bypass the proxy for Ubuntu repos

If policy permits, bypassing the proxy for known update mirrors can be a big improvement. The correct approach depends on how your organization does egress control. If security requires proxy visibility, then deploy an on-site cache/proxy and let it egress in a controlled way.
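
APT supports per-host proxy overrides, so you can keep the corporate proxy as the default and send repo traffic direct (or to a site cache). A sketch, assuming policy allows direct egress to these hosts; "DIRECT" is the literal keyword APT recognizes for bypassing the configured proxy:

Acquire::http::Proxy::archive.ubuntu.com "DIRECT";
Acquire::http::Proxy::security.ubuntu.com "DIRECT";

Drop those lines into a snippet under /etc/apt/apt.conf.d/ alongside the rest of your proxy config.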

Configure proxy explicitly (don’t rely on environment variables)

cr0x@server:~$ sudo tee /etc/apt/apt.conf.d/02proxy >/dev/null <<'EOF'
Acquire::http::Proxy "http://proxy.office.local:3128/";
Acquire::https::Proxy "http://proxy.office.local:3128/";
EOF

Meaning: APT will use the proxy regardless of the user shell environment.

Decision: Centralize this via configuration management. Hand-edited proxy settings are how you get “works on my laptop” in sysadmin form.

Watch for TLS interception side-effects

If your proxy does TLS interception, you can see:

  • Long TLS handshake times (client waits for proxy inspection)
  • Random stalls on large downloads (buffering/content scanning)
  • Occasional hash mismatch errors if the path is tampered with (rare, but it happens)

APT is integrity-focused. That’s good. It also means “helpful” boxes that rewrite content are not your friends.

Second short joke (and the last one): The only thing worse than a slow proxy is a slow proxy that insists it’s accelerating your traffic.

Full local mirror: when it’s worth it (and when it’s just a hobby)

A full mirror is the grown-up option for large environments. It gives you:

  • Independence from upstream outages or mirror flakiness
  • Repeatable content (what you tested is what you deploy)
  • Better compliance stories (what was available when)

It also gives you chores:

  • Sync schedules, storage planning, and corruption detection
  • Key management and signing flows if you rehost or curate
  • More moving parts to page you at 2 AM

Use this rule: if the office problem is “too many machines download the same stuff,” apt-cacher-ng solves it with minimal ceremony. If the problem is “we need controlled, curated repositories,” then build a mirror/repo manager and accept the ops load.

Client-side tuning that actually helps

Client tuning is the part where people waste time. A cache/proxy gives the big win. Still, a few client adjustments can reduce pain.

Pick a better mirror (but don’t turn it into a sport)

Ubuntu’s defaults are usually okay, but “usually” is not a performance guarantee. The right mirror for your ISP and site can differ.

Practical method: test two or three mirrors from the office during business hours and after hours. If a mirror is fast at 1 AM and slow at 10 AM, it’s probably congestion/peering, not your config. In offices, you want consistent, not “best case.”

Limit surprises from unattended upgrades

Unattended upgrades are good. Uncoordinated unattended upgrades across a site are how you discover that your WAN link is not infinite.

Stagger them. Or have them pull through a site cache so the WAN hit happens once.
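
On Ubuntu the daily apt jobs run from systemd timers, so staggering is a drop-in override away. A sketch using a randomized delay (the timer name is the stock one on 24.04):

sudo systemctl edit apt-daily-upgrade.timer

# then, in the override editor:
[Timer]
RandomizedDelaySec=4h

And because unattended-upgrades goes through APT’s own Acquire configuration, the proxy snippet from earlier applies to it automatically; no separate cache config is needed.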

Don’t “optimize” by disabling signature checks

If anyone suggests bypassing verification to make updates “faster,” decline the suggestion the way you’d decline a used parachute. Integrity checks aren’t optional; they’re the entire point of a package manager.

Corporate mini-stories from production

Incident: the wrong assumption (“it’s the mirror”) that burned a week

A mid-size company had a branch office where Ubuntu updates took 20–40 minutes. The team assumed the Ubuntu mirror was the issue, because a traceroute looked “long” and a speed test to some random website was fine. They rotated mirrors three times and even tried pinning to a specific regional endpoint.

Nothing changed. Some mornings it was passable; afternoons were awful. Classic intermittent misery.

The real culprit was DNS. The branch’s resolver forwarded all queries back to HQ, through a security stack that logged and filtered. During peak hours, that path had latency spikes and occasional timeouts. APT’s “many small requests” pattern made it look like a mirror problem because the mirror name resolution kept stalling before any bytes moved.

They fixed it by adding a local caching DNS resolver in the branch and then deploying apt-cacher-ng. Mirror changes stopped being a topic. The ticket category disappeared. The lesson was painfully simple: don’t blame the service you can see until you’ve measured the dependency you can’t.

Optimization that backfired: “one proxy for all sites”

Another organization consolidated everything behind a single central proxy for “better control.” Branch offices were configured to use it for APT, browsers, everything. The proxy had plenty of CPU and memory, and its dashboard was green. So far, so corporate.

But branches were separated by high-latency links. APT traffic became a parade of CONNECT tunnels and small fetches, each paying the RTT tax. Worse, the proxy performed content scanning that buffered downloads before releasing them, which is fantastic for malware, less fantastic for a package manager pulling hundreds of megabytes.

They tried tuning APT concurrency and timeouts. That made it noisier, not faster. Some clients started timing out more often, because they were now more aggressive about establishing connections that were doomed to be slow.

The fix was to stop treating branch offices like thin clients of HQ. They deployed a small cache/proxy per site (still controlled, still logged at the egress), and allowed the site cache to talk outbound. WAN utilization dropped, user complaints dropped, and the central proxy got to do what it was good at: policy enforcement without being a long-distance performance anchor.

Boring but correct practice that saved the day: “cache host as cattle”

A team rolled out apt-cacher-ng to dozens of sites. They did something deeply unsexy: standardized the cache VM build, used the same port, the same firewall rules, the same disk layout, and managed the client proxy snippet via configuration management.

They also treated the cache as disposable. No artisanal tuning on the box. If it got weird, they redeployed it. Logs shipped centrally; metrics were basic (disk usage, service up, request rate). The cache content itself was not precious. It could be rebuilt by usage.

Then a site had a power event that corrupted the cache disk. Users started reporting slow updates again, but nothing else was broken. The cache VM was rebuilt from the standard template in under an hour, DNS pointed to the new instance, and clients carried on. No heroics, no overnight incident call. Just the reward for being boring.

Common mistakes: symptom → root cause → fix

1) apt update “hangs” on Connecting/Waiting for headers

Symptom: It looks stuck, then eventually continues.

Root cause: DNS slowness, proxy handshake overhead, packet loss, or a security box buffering responses.

Fix: Time DNS lookups; measure connect/TTFB with curl; deploy a LAN cache; fix WAN loss; avoid remote hairpin proxies.

2) Downloads start fast, then slow to a crawl

Symptom: First few MB are fine, then it stalls or becomes erratic.

Root cause: MTU/PMTUD problems, proxy content scanning/buffering, or Wi‑Fi retransmits.

Fix: Verify MTU and VPN overhead; test on wired; bypass or localize the proxy; keep cache on LAN to reduce WAN sensitivity.

3) “Hash Sum mismatch” or repository signature errors appear intermittently

Symptom: APT refuses to use downloaded metadata.

Root cause: Transparent proxy altering content, broken caching layer serving partial files, or mirror sync window inconsistencies.

Fix: Disable transparent caching for repo traffic; clear proxy caches for affected objects; switch to a stable mirror; ensure cache server disk is healthy.

4) Cache deployed, but no speedup

Symptom: Clients still download from the Internet; cache logs show mostly misses.

Root cause: Clients not configured, or HTTPS repos bypass cache, or another proxy overrides settings.

Fix: Check apt-config dump; standardize /etc/apt/apt.conf.d/ proxy snippet; validate with cache logs and a controlled test client.

5) apt-cacher-ng disk fills up

Symptom: Cache host runs out of space, then updates fail.

Root cause: No eviction/cleanup plan, too many repos (including PPAs), or tiny disk sizing.

Fix: Give it real disk; reduce repo sprawl; set retention/cleanup; monitor /var/cache/apt-cacher-ng usage.

6) “403 Forbidden” or “Proxy Authentication Required” from APT

Symptom: APT fails immediately via proxy.

Root cause: Proxy requires auth, blocks CONNECT, or blocks certain domains/paths.

Fix: Use an unauthenticated site cache; configure proxy auth in APT only if policy allows and you can manage credentials safely; whitelist repo domains.

7) Updates are fast, but installs are slow

Symptom: Downloads complete quickly; then “Unpacking…” takes forever.

Root cause: Slow disk, exhausted IOPS on shared storage, CPU-throttled VMs, or dpkg triggers/postinst scripts doing work.

Fix: Separate download vs install timing; move VMs to better storage; investigate heavy postinst scripts; avoid running updates during peak load.

Checklists / step-by-step plan

Step-by-step: go from “APT is slow” to fast updates in one office

  1. Pick one representative client (wired if possible) and run Task 1 and Task 2 to separate metadata/download/install slowness.
  2. Measure DNS latency (Task 6 and Task 7). If DNS is slow, fix DNS first; a cache won’t fix name resolution delays.
  3. Check if a proxy exists (Task 10). If it’s remote, plan to replace hairpin behavior with a local cache.
  4. Deploy apt-cacher-ng on a small VM at the site (Task: install, verify listening, firewall).
  5. Point 3–5 test clients to the cache via /etc/apt/apt.conf.d/02proxy.
  6. Validate cache hits from logs (Task 14). Don’t declare victory until you see hits.
  7. Roll out client config via management tooling (Ansible, Puppet, Chef, Intune scripts, whatever you use). Manual edits don’t scale and don’t revert cleanly.
  8. Stagger update schedules or keep unattended upgrades but ensure they pull through the cache.
  9. Add basic monitoring: service up, disk usage, and a simple log-based hit/miss ratio trend.
  10. Write the rollback plan: one-line removal of proxy snippet, DNS change back, and how to redeploy cache VM.

Checklist: what “good” looks like after the fix

  • apt update completes in seconds on repeat runs.
  • Cache logs show a healthy mix of origin fetches and hits; hits dominate after the first day.
  • WAN utilization during patch windows is flatter (not a sawtooth of repeated downloads).
  • No clients are hard-coded to random external mirrors without reason.
  • Proxy configuration is consistent and centrally managed.
  • Cache host has disk headroom and isn’t paging itself to death.

Checklist: what to avoid (because it creates new problems)

  • Running a single centralized proxy for all sites unless every site has low-latency links to it.
  • Transparent caching appliances for repo traffic, especially with HTTPS in the mix.
  • Dozens of PPAs across the fleet. Each one adds metadata requests and cache churn.
  • “Optimizations” that disable checks or relax APT security settings.

FAQ

1) Do I really need a cache server for just 20 machines?

If those 20 machines update regularly and share the same WAN link, yes—especially if the office has latency, limited bandwidth, or expensive egress. apt-cacher-ng is small and pays for itself in reduced waiting and fewer link spikes.

2) Will apt-cacher-ng work with HTTPS repositories?

Often, yes, via proxying (CONNECT). Whether it works well depends on your environment’s proxy policies and TLS interception. Test with a couple of clients and verify hits in logs. If your security stack breaks CONNECT or forces interception, consider a full mirror inside the network boundary.

3) Is a full mirror always faster than apt-cacher-ng?

Not always. A mirror can be very fast, but it’s heavier operationally. apt-cacher-ng is request-driven: it caches what your fleet actually uses. For most offices, that’s the right trade.

4) Why is apt update slow but apt upgrade download speed looks fine?

Because apt update is “lots of small files” and is sensitive to DNS and connection setup latency. A cache on the LAN helps a lot because it turns those remote fetches into local ones.

5) Should I increase APT parallel downloads?

Only after you’ve fixed caching and mirror selection. Increasing parallelism can make a congested proxy or flaky WAN behave worse, not better. Measure first; don’t spray connections at the problem.

6) Can I share one cache across multiple offices?

You can, but it’s rarely optimal unless inter-office latency is low and the network is reliable. The whole point is to remove WAN RTT from the hot path. Put caches close to clients.

7) What about snaps—do they benefit from APT caching?

No. Snap downloads are separate from APT and use different infrastructure. Solve APT first, then evaluate snap proxying/caching separately if snaps are also a pain point.

8) How do I know if the proxy is the bottleneck?

Compare timings with and without the proxy on a test host (or by temporarily moving proxy config aside). Also time proxy connect vs mirror connect, and look for “Waiting for headers” in APT debug output.

9) Is it safe to cache Ubuntu packages?

Yes, if you’re caching correctly and not altering content. Packages are signed and validated via repository metadata. Don’t use “transparent” devices that rewrite responses. Keep the cache host patched and restricted to your network.

10) What’s the minimum monitoring I should add for the cache server?

Service liveness (systemd), disk usage, and log volume. If you want one extra metric, track cache hit ratio over time; it tells you whether clients are really using it.

Conclusion: next steps you can do this week

If Ubuntu 24.04 updates are slow in your office, treat it like any other production performance problem: isolate the phase, measure the dependencies, then fix the highest-leverage bottleneck. For offices, that bottleneck is often per-request latency and duplicated downloads, not raw bandwidth.

Do this, in order:

  1. Run the fast diagnosis playbook on one client: separate metadata vs download vs install.
  2. Measure DNS and proxy behavior. If DNS is slow, fix DNS first.
  3. Deploy apt-cacher-ng on the LAN and point clients to it via a managed APT config snippet.
  4. Verify cache hits in logs and watch WAN usage flatten during patch windows.
  5. Keep it boring: standard build, basic monitoring, and a simple redeploy story.

The end state you want is not “APT is fast on my machine.” It’s “updates are a non-event for the whole office.” That’s the kind of performance win you can live with.
