You mounted a Windows share on Ubuntu 24.04, kicked off a copy, and now you’re watching 12 MB/s on a 10 GbE link while your patience leaks out of your ears. Meanwhile, CPU is bored, disks are bored, and your ticket queue is not bored.
This is the part where people blame “SMB being slow.” Sometimes it is. More often it’s the default client behavior, a security feature you didn’t mean to pay for, or a latency problem wearing a throughput costume.
A few facts that actually matter
Context doesn’t make packets move faster, but it stops you from trying the wrong knobs. Here are some short, concrete facts and historical bits that show up in real investigations:
- SMB is older than your last three “modern” storage stacks. It traces back to the 1980s; today’s SMB3 is the same family, with layers of security and performance features added over time.
- Linux CIFS isn’t a userspace FUSE toy. The modern Linux client is a kernel module (cifs.ko) with decades of tuning, but it inherits SMB semantics (chatty metadata, locks, ACLs).
- SMB1 is dead for performance and security reasons. SMB2/3 reduced chattiness and improved pipelining, but you can still make SMB3 slow by enabling costly features unintentionally.
- SMB signing and encryption are not free. They add per-packet CPU and can disable certain NIC offloads. Great for hostile networks. Terrible for “why is this 200 MB/s slower than yesterday?”
- Latency hurts SMB more than people expect. SMB can pipeline, but metadata-heavy workloads still get gated by round trips. A 2 ms jump can crush directory scans and small-file copies.
- Directory operations are a different sport than streaming reads. “I can read a big file at 900 MB/s” and “a million tiny files takes hours” can both be true.
- Mount defaults are conservative. Linux defaults tend to prefer correctness and compatibility. That’s fine—until you want performance and you know your environment.
- SMB Multichannel exists, but isn’t magic. It can use multiple NICs/queues, but only if the server, client, and network stack are aligned (and you’re not pinned to one TCP flow by accident).
- “Slow CIFS” is frequently “slow DNS/Kerberos.” Authentication delays and name resolution timeouts can make mounts and reconnects painful, even if steady-state throughput is okay.
One quote that still holds up in operations: “Hope is not a strategy.”
— General Gordon R. Sullivan
Fast diagnosis playbook (find the bottleneck fast)
If you do only one thing from this article, do this sequence. It’s designed to stop “random mount options bingo” and instead put a name to the bottleneck.
First: confirm what “slow” means
- Slow for large sequential I/O? Suspect signing/encryption, single TCP flow limits, MTU issues, NIC offloads, server limits.
- Slow for small files / metadata? Suspect latency, attribute caching policy, server-side AV/scanning, ACL lookups, change notify, directory enumeration behavior.
- Slow only sometimes? Suspect DFS referrals, reconnect behavior, Wi-Fi roaming, TCP congestion, background scanning, server contention, or a firewall doing “helpful inspection.”
Second: isolate network vs storage vs protocol overhead
- Network baseline: run an iperf3 test to the same server (or same subnet). If the network can’t do the speed, CIFS can’t either.
- Server local baseline: if you can, test disk throughput on the server (or at least CPU load during SMB transfers). If the server is pegged on crypto, you’ve found your villain.
- Protocol confirmation: check the negotiated SMB dialect, signing, encryption, and whether multichannel is in use.
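If you just want the protocol facts without reading the whole debug dump, a quick grep of the kernel's CIFS debug file is enough. This is a sketch; the exact field names vary a bit by kernel version, so treat it as a filter, not a parser.
cr0x@server:~$ sudo grep -iE 'dialect|sign|encrypt|channel' /proc/fs/cifs/DebugData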
Third: change one thing, measure, keep receipts
- Start by disabling expensive security features only if your security posture allows it (signing/encryption). If not, plan for CPU and consider AES-NI, NIC offloads, or faster cores.
- Then tune caching and attribute behavior based on workload (CI/CD artifact store vs shared spreadsheets vs home directories).
- Finally, address workload shape: parallelism, packaging small files, robocopy/rsync flags, and server-side indexing/AV exclusions.
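To make “keep receipts” concrete, here is a minimal sketch of a receipt-keeping loop: it appends a timestamp, the live mount options, and one timed direct-I/O write to a log you can diff between tuning attempts. The log path and test filename are illustrative; only the /mnt/eng mount comes from this article.
cr0x@server:~$ LOG=~/cifs-tuning.log; { date -Is; findmnt -no OPTIONS /mnt/eng; \
dd if=/dev/zero of=/mnt/eng/receipt-test.bin bs=16M count=64 oflag=direct 2>&1 | tail -n1; \
rm -f /mnt/eng/receipt-test.bin; echo '---'; } >> "$LOG"
One entry per change. When someone asks “what did actimeo=30 actually buy us,” you answer with the log, not with memory.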
Joke #1: SMB performance tuning is like dieting—most of the time the fix is “stop snacking on per-file metadata calls.”
Baseline first: prove it’s CIFS and not your network
Ubuntu 24.04 didn’t wake up and decide to ruin your day. It’s doing what you asked, with defaults that are safe but not always fast. Before we touch mount options, verify these three basics:
- Path: are you mounting the right target (DFS can send you somewhere else)?
- Transport: are you on wired Ethernet, the right VLAN, and the right MTU?
- Server: is the SMB server healthy, or is it quietly CPU-bound on signing/encryption/AV?
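A hedged set of quick checks for those three basics. The interface name eno1 comes from later in this article; substitute your own, and note that a clean ping only proves reachability, not server health.
cr0x@server:~$ findmnt --target /mnt/eng -o SOURCE,FSTYPE,OPTIONS   # path: what you actually mounted
cr0x@server:~$ ip -br addr show eno1; ip link show eno1 | grep mtu  # transport: link, IP, MTU
cr0x@server:~$ ping -c 3 fs01.corp.example                          # server: reachable, sane latency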
SMB throughput complaints are often cross-team: network says it’s storage, storage says it’s network, security says “working as designed,” and the application team is already writing a Slack thread titled “Linux is slow.” Your job is to turn vibes into metrics.
Mount options that usually fix throughput
Let’s talk about the options that repeatedly move the needle on Ubuntu 24.04 clients. Not “every flag I found in a forum,” but the ones that address common bottlenecks: dialect negotiation, signing/encryption overhead, caching, and request sizing.
Start with a sane, modern baseline
Use SMB3 unless you have a very specific legacy reason not to. On Ubuntu 24.04, your kernel and cifs-utils are modern enough to do this cleanly.
cr0x@server:~$ sudo mount -t cifs //fs01.corp.example/engineering /mnt/eng \
-o vers=3.1.1,username=alice,domain=CORP,uid=1000,gid=1000,dir_mode=0770,file_mode=0660,soft,nounix,serverino
This is not yet the “fast” mount. It’s the “predictable” mount:
- vers=3.1.1: pins dialect; avoids downgrade weirdness.
- nounix: prevents POSIX extensions assumptions that can bite interoperability.
- serverino: stable inode numbers from server; helps some tooling and reduces surprises.
- soft: returns errors faster on server issues (decide carefully; see mistakes section).
Throughput wins: reduce per-I/O overhead (when allowed)
If your environment currently enforces SMB signing or encryption, don’t break policy in production because you read a blog. But you still need to know the cost, because you’ll otherwise chase phantom mount flags for days.
Option: disable signing (only if policy allows)
cr0x@server:~$ sudo mount -t cifs //fs01.corp.example/engineering /mnt/eng \
-o vers=3.1.1,username=alice,domain=CORP,seal=no,sign=no,cache=none,actimeo=1
What this does: reduces CPU spent on message integrity and/or encryption. On some networks it’s a bad idea. On trusted internal storage networks it’s sometimes acceptable with compensating controls (segmentation, host firewalling).
Option: keep security, but ensure CPU isn’t the limit
If you must have signing/encryption, treat it like TLS: you need CPU headroom and ideally hardware acceleration (AES-NI). If a VM has weak vCPU allocation or noisy neighbors, throughput will look like a protocol problem.
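A quick, low-risk way to confirm the client CPU can accelerate AES at all (the server deserves the same check on its side):
cr0x@server:~$ grep -m1 -o 'aes' /proc/cpuinfo || echo "no AES-NI flag: expect SMB encryption to be expensive"
cr0x@server:~$ lscpu | grep -i 'model name'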
Metadata-heavy workloads: caching options that matter
The default caching semantics are cautious. That’s good for multi-client correctness. It can also make every build or unpack operation feel like it’s going through molasses.
cache=strict vs cache=none
- cache=none: fewer surprises; often slower for repeated reads and metadata.
- cache=strict: can speed up reads while remaining relatively correct, but can still be limited by metadata calls.
actimeo (attribute cache timeout)
For workloads that repeatedly stat() files (build systems, dependency scanners), bumping attribute caching can be transformative.
cr0x@server:~$ sudo mount -t cifs //fs01.corp.example/engineering /mnt/eng \
-o vers=3.1.1,username=alice,domain=CORP,cache=strict,actimeo=30
Trade-off: you’re accepting that attributes may be stale for up to 30 seconds. For “shared home directories” this can be annoying. For “read-mostly artifact cache” it’s usually fine and fast.
Small-file performance: you can’t mount-option your way out of physics
SMB has to perform multiple operations per file: create/open, query info, read/write, close, update metadata. Multiply by 500,000 files and you’ve built a latency benchmark.
Mount options help a bit (caching, fewer round trips), but the big wins are often: reduce file count (tar it), increase parallelism carefully, or move the workload to object storage / local SSD scratch then sync.
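A sketch of the “package it” approach: build one compressed archive locally, then move a single large object over SMB instead of half a million opens and closes. The source directory and the drops/ target are illustrative names, not from this article's environment.
cr0x@server:~$ tar -C /data/build-output -cf - . | zstd -T0 -3 -o /tmp/build-output.tar.zst
cr0x@server:~$ cp /tmp/build-output.tar.zst /mnt/eng/drops/
You pay one metadata round trip instead of one per file, and the compression usually pays for itself on CSV/log/artifact trees.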
“Big file” performance: watch the single-flow trap
Many “copy a big file” operations are effectively a single TCP flow. On high-bandwidth, moderate-latency links, a single flow can underutilize the pipe unless window scaling, congestion control, and buffer sizes are right. SMB Multichannel can help, but only when negotiated and supported.
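If your server and NICs support SMB Multichannel, the Linux client can at least request it at mount time. This is a hedged sketch: the kernel and the server must both support multichannel, and max_channels is a request, not a guarantee.
cr0x@server:~$ sudo mount -t cifs //fs01.corp.example/engineering /mnt/eng \
-o vers=3.1.1,username=alice,domain=CORP,multichannel,max_channels=4
Verify afterwards in /proc/fs/cifs/DebugData and compare against a multi-stream iperf3 run; if the server never advertises extra interfaces, you stay on a single channel no matter what the mount line says.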
Joke #2: The fastest SMB transfer is the one your security team didn’t make you encrypt—unfortunately, they usually notice.
Why your workload shape matters (large vs small files)
In production, “CIFS is slow” typically means one of these:
- Large sequential read/write is slow: you’re paying crypto/signing tax, you’re stuck on a single path, or the server’s storage is saturated.
- Small files are slow: latency and metadata operations are dominating; caching and server-side scanning matter more than raw throughput.
- Directory listing is slow: you’re enumerating huge directories, hitting ACL checks, or your client is doing extra lookups.
- Mount/connect is slow: DNS, Kerberos, time skew, or SMB session setup retries.
Pick the right tool. If you need to move millions of tiny files nightly, SMB can do it, but you need to treat it like a workload engineering problem, not a mount-line problem.
12+ practical tasks: commands, outputs, decisions
These are the real checks I run when someone says “CIFS is painfully slow.” Each task includes a command, realistic output, and the decision you make from it. Run them in order if you like; they also work à la carte.
Task 1: Confirm kernel, cifs-utils versions (you’re debugging a moving target)
cr0x@server:~$ uname -r
6.8.0-41-generic
cr0x@server:~$ dpkg -l | egrep 'cifs-utils|linux-modules-extra' | head
ii cifs-utils 2:7.0-2ubuntu0.1 amd64 Common Internet File System utilities
ii linux-modules-extra-6.8.0-41-generic 6.8.0-41.41 amd64 Linux kernel extra modules for version 6.8.0
Decision: If linux-modules-extra is missing, optional kernel modules (and the features that depend on them) may be unavailable. If the kernel is ancient (not your case on 24.04), upgrade before you tune.
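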
Task 2: Verify mount details and negotiated SMB features
cr0x@server:~$ mount | grep 'type cifs'
//fs01.corp.example/engineering on /mnt/eng type cifs (rw,relatime,vers=3.1.1,cache=strict,username=alice,uid=1000,gid=1000,soft,serverino,actimeo=1)
Decision: Confirm you’re actually using SMB3.1.1, and see caching/actimeo. If you expected a tuning flag and it’s not present, you’re not testing what you think you’re testing.
Task 3: Pull SMB session settings from the kernel (dialect, signing, encryption)
cr0x@server:~$ sudo cat /proc/fs/cifs/DebugData | sed -n '1,120p'
CIFS Version 2.48
Features: DFS,FSCACHE,STAT,SMB_DIRECT,INODE64,ACL,SMB2,SMB3,OPLOCKS,SECURITY
Active VFS Requests: 0
Servers:
1) Name: fs01.corp.example
TCP status: connected
Dialect: 3.1.1
Security: signing required
Encryption: yes
Session User: CORP\alice
Shares:
1) \\fs01.corp.example\engineering
Mounts: 1
UNC: \\fs01.corp.example\engineering
Options: rw,relatime,soft,serverino,actimeo=1,cache=strict
Decision: If signing is required and encryption is on, expect CPU overhead. If throughput is low and CPU is high on either side, you likely found your limiter.
Task 4: Network baseline throughput with iperf3
cr0x@server:~$ iperf3 -c fs01.corp.example -P 4 -t 10
Connecting to host fs01.corp.example, port 5201
[SUM] 0.00-10.00 sec 9.45 GBytes 8.12 Gbits/sec sender
[SUM] 0.00-10.00 sec 9.41 GBytes 8.08 Gbits/sec receiver
Decision: Network can do ~8 Gbit/s, so a 100–200 Mbit/s SMB copy is not “the network.” Now you can stop arguing with the network team and start arguing with evidence.
Task 5: Confirm path stability (DNS, reverse lookups, and referrals)
cr0x@server:~$ getent hosts fs01.corp.example
10.20.30.40 fs01.corp.example
cr0x@server:~$ smbclient -L fs01.corp.example -U 'CORP\alice' -m SMB3 | head
Sharename Type Comment
--------- ---- -------
engineering Disk
IPC$ IPC IPC Service (SMB Server)
SMB1 disabled -- no workgroup available
Decision: If name resolution is slow or flaps between IPs, fix that first. If SMB1 is in play, you’re in legacy land and performance will be unpredictable.
Task 6: Measure real copy throughput for a single large file
cr0x@server:~$ dd if=/dev/zero of=/mnt/eng/test-10g.bin bs=16M count=640 oflag=direct status=progress
10737418240 bytes (11 GB, 10 GiB) copied, 33 s, 325 MB/s
640+0 records in
640+0 records out
10737418240 bytes copied, 33.0589 s, 325 MB/s
Decision: If large-file throughput is low, focus on signing/encryption, multichannel, NIC offloads, server CPU, and storage backing. If large-file is fine but small-file is awful, skip those and go to metadata/caching and workload changes.
Task 7: Measure metadata pain (create/stat/unlink loop)
cr0x@server:~$ time bash -c 'd=/mnt/eng/meta-test; mkdir -p "$d"; for i in $(seq 1 20000); do echo x > "$d/f$i"; done; sync'
real 2m3.412s
user 0m8.941s
sys 0m47.228s
Decision: If this is minutes for 20k tiny files, you’re latency/metadata bound. Consider actimeo, server-side AV exclusions, packaging files, or moving the workflow off SMB.
Task 8: Check client-side CPU saturation during transfers
cr0x@server:~$ mpstat -P ALL 1 5
Linux 6.8.0-41-generic (cr0x) 12/30/2025 _x86_64_ (8 CPU)
12:10:01 PM CPU %usr %nice %sys %iowait %irq %soft %steal %idle
12:10:02 PM all 12.10 0.00 22.55 0.20 0.00 1.10 0.00 64.05
12:10:02 PM 3 70.00 0.00 25.00 0.00 0.00 2.00 0.00 3.00
Decision: One core pegged while others idle often means per-connection crypto/signing work, a single flow, or a single-threaded copy tool. Consider multichannel, parallelism, or reducing security overhead (if allowed).
Task 9: Check NIC speed/duplex and errors (because reality is cruel)
cr0x@server:~$ sudo ethtool eno1 | egrep 'Speed|Duplex|Auto-negotiation|Link detected'
Speed: 10000Mb/s
Duplex: Full
Auto-negotiation: on
Link detected: yes
cr0x@server:~$ ip -s link show eno1 | sed -n '1,12p'
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
RX: bytes packets errors dropped missed mcast
9812134321 8123123 0 12 0 112233
TX: bytes packets errors dropped carrier collsns
8741123988 7012231 0 0 0 0
Decision: Errors/drops mean retransmits, which SMB will interpret as “why is everything sticky?” Fix physical/network issues before tuning protocol flags.
Task 10: Validate MTU end-to-end (PMTUD issues look like SMB slowness)
cr0x@server:~$ ping -c 3 -M do -s 1472 fs01.corp.example
PING fs01.corp.example (10.20.30.40) 1472(1500) bytes of data.
1480 bytes from 10.20.30.40: icmp_seq=1 ttl=62 time=0.412 ms
1480 bytes from 10.20.30.40: icmp_seq=2 ttl=62 time=0.396 ms
1480 bytes from 10.20.30.40: icmp_seq=3 ttl=62 time=0.401 ms
--- fs01.corp.example ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2040ms
rtt min/avg/max/mdev = 0.396/0.403/0.412/0.007 ms
Decision: If you get “Frag needed” or timeouts, your jumbo/PMTUD situation is broken. Fix that; SMB will stop stalling.
Task 11: Check for SMB multichannel and RDMA (if you expected it)
cr0x@server:~$ sudo grep -iE 'SMB_DIRECT|RDMA|Multi' /proc/fs/cifs/DebugData
Features: DFS,FSCACHE,STAT,SMB_DIRECT,INODE64,ACL,SMB2,SMB3,OPLOCKS,SECURITY
Decision: Seeing SMB_DIRECT in features doesn’t mean you’re using it. If you need RDMA/Direct, verify server support and that your NIC/driver stack is configured. Otherwise assume classic TCP.
Task 12: Spot “security tax” by toggling a test share policy (in a lab)
cr0x@server:~$ sudo mount -t cifs //fs01.corp.example/lab-nosign /mnt/lab \
-o vers=3.1.1,username=alice,domain=CORP,sign=no,seal=no,cache=strict,actimeo=30
cr0x@server:~$ dd if=/dev/zero of=/mnt/lab/test-5g.bin bs=16M count=320 oflag=direct status=progress
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 9 s, 594 MB/s
Decision: If performance doubles in a controlled test when signing/encryption are off, your “mystery” is policy cost. Escalate with numbers, not complaints: “encryption reduces throughput by ~45% on this client class.”
Task 13: Confirm the server isn’t throttling by SMB share settings (Windows example from Linux)
cr0x@server:~$ smbclient //fs01.corp.example/engineering -U 'CORP\alice' -m SMB3 -c 'get test-10g.bin /dev/null; quit'
getting file \test-10g.bin of size 10737418240 as /dev/null (312.4 MB/s) (average 312.4 MB/s)
Decision: If smbclient is fast but kernel mount is slow, your mount options/caching/client behavior are the issue. If both are slow, look server/network.
Task 14: Inspect dmesg for CIFS warnings (silent retries are slow)
cr0x@server:~$ sudo dmesg -T | egrep -i 'cifs|smb3' | tail -n 12
[Mon Dec 30 12:02:11 2025] CIFS: VFS: \\fs01.corp.example has not responded in 180 seconds. Reconnecting...
[Mon Dec 30 12:02:12 2025] CIFS: VFS: cifs_reconnect: reconnect to server fs01.corp.example
Decision: Any reconnects, timeouts, or “server not responding” means you’re debugging stability first. Throughput tuning on an unstable session is performance cosplay.
Task 15: Confirm time sync (Kerberos/auth weirdness can slow mounts/reconnects)
cr0x@server:~$ timedatectl
Local time: Mon 2025-12-30 12:12:44 UTC
Universal time: Mon 2025-12-30 12:12:44 UTC
RTC time: Mon 2025-12-30 12:12:44
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
Decision: If time isn’t synchronized, fix it before you blame CIFS. Kerberos and secure session setup behave badly with skew.
Three corporate mini-stories from the trenches
Mini-story #1: The incident caused by a wrong assumption
The company: mid-sized SaaS, Windows-based file services for shared engineering artifacts. A team migrated build runners from Ubuntu 22.04 to 24.04. The first Monday after rollout, build times ballooned. People blamed “new Ubuntu kernel” and started pinning old images.
The wrong assumption was subtle: they assumed the share’s performance characteristics were stable because “the server didn’t change.” But the server had a security hardening change two weeks earlier—SMB encryption was turned on for that share only. The old runners were still hitting a different share path due to a stale DFS referral cache in their configs, effectively bypassing the encrypted path. The new runners resolved the referral correctly and landed on the encrypted target.
On paper this was “working as designed.” In reality it was a production incident: build capacity dropped, pipelines queued, and the on-call got to explain why “security improvements” had a cost no one benchmarked.
The fix wasn’t a magic mount option. It was a decision: either keep encryption and provision more CPU on the SMB servers (and ensure AES-NI on the VM class), or segment the network and allow unencrypted SMB on that internal path with compensating controls. They chose the former for sensitive repos, and carved out an unencrypted share for non-sensitive caches.
The lesson: performance baselines must include security posture. If you don’t measure it, you’ll “discover” it at 2 a.m.
Mini-story #2: The optimization that backfired
A finance org had an Ubuntu analytics cluster mounting a CIFS share for nightly CSV drops. Someone noticed the mount used cache=none and actimeo=1, and decided to “speed it up” by cranking actimeo=600 and enabling strict caching everywhere.
Throughput improved. Directory scans sped up. Everyone celebrated, briefly.
Then the data quality alerts started. A downstream job read files that had been replaced on the share, but the client kept cached attributes long enough that the pipeline thought “file unchanged” and skipped ingest. The caching change was correct for performance and wrong for semantics. The business didn’t care about SMB tuning; they cared that reports were stale.
They rolled it back and replaced it with a boring design: the producing system writes to a staging directory, then performs an atomic rename into a “ready” directory. Consumers only read from “ready” and use content checksums. With that workflow, actimeo=30 was safe and effective.
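A minimal sketch of that producer-side pattern, assuming staging/ and ready/ directories on the same share so the final rename is a single atomic server-side operation. The filenames are illustrative.
cr0x@server:~$ cp nightly.csv /mnt/eng/staging/nightly.csv.tmp && \
sha256sum /mnt/eng/staging/nightly.csv.tmp > /mnt/eng/staging/nightly.csv.sha256 && \
mv /mnt/eng/staging/nightly.csv.tmp /mnt/eng/ready/nightly.csv
Consumers only ever see a complete file in ready/, so attribute staleness on the client stops being a correctness problem.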
The lesson: don’t tune caching until you understand your correctness model. SMB isn’t your database. Your pipeline still needs rules.
Mini-story #3: The boring but correct practice that saved the day
A large enterprise had multiple Linux fleets mounting Windows shares for user profiles and application configs. Every quarter, someone would complain: “CIFS got slow again.” Instead of doing random tweaks, the SRE team had one boring habit: a repeatable benchmark and a single page of “known-good mount profiles.”
They maintained two profiles: Interactive (home dirs, correctness-first) and Bulk (read-mostly caches, throughput-first). Both were codified in config management, with a small canary that ran smbclient and a kernel-mounted dd test daily. Results were graphed next to server CPU, network drops, and authentication failures.
When a storage firmware update introduced intermittent packet loss on one top-of-rack switch, the SMB graphs dipped before users screamed. The canary also showed reconnect messages in dmesg. Networking initially shrugged because “links are up.” The data was annoyingly specific: errors on a single NIC port, on one rack, correlated with SMB reconnects and throughput collapse.
They fixed a physical layer issue and restored performance without touching a single mount option. The boring practice wasn’t heroics. It was instrumentation and a documented baseline.
The lesson: the fastest CIFS tuning is proving you don’t need CIFS tuning.
Common mistakes: symptom → root cause → fix
1) Symptom: Big files cap at 30–80 MB/s on 10 GbE
Root cause: SMB encryption/signing required; client or server CPU-bound; single-flow TCP limitations.
Fix: Verify signing/encryption via /proc/fs/cifs/DebugData. If policy allows, test seal=no,sign=no on a lab share. Otherwise add CPU, ensure AES-NI, consider SMB multichannel, and confirm NIC offloads are not disabled by policy.
2) Symptom: Millions of small files take forever, but big files are OK
Root cause: metadata round trips and latency; attribute caching too strict; server-side AV scanning each open/close.
Fix: Increase actimeo carefully (e.g., 10–30) for read-mostly workloads; package small files into archives; add parallelism; coordinate AV exclusions on the share for known-safe paths.
3) Symptom: Directory listing is painfully slow
Root cause: huge directories; ACL evaluation cost; client doing extra stat calls; DFS referral bouncing.
Fix: Avoid giant flat directories; shard by prefix/date/project. Confirm stable target and no DFS surprises. Use caching appropriate for your consistency needs.
4) Symptom: Mounts hang for 30–60 seconds, then work
Root cause: DNS timeouts, reverse lookup stalls, Kerberos retries, time skew, firewall inspection delays.
Fix: Fix name resolution; verify time sync; validate Kerberos config; check firewall logs; prefer stable FQDNs; avoid multi-A records unless you know how clients pick.
5) Symptom: Random “server not responding” and reconnects under load
Root cause: packet loss, MTU/PMTUD issues, flaky NIC/driver, or overloaded server hitting timeouts.
Fix: Check interface errors/drops; confirm MTU with ping -M do; look for CIFS reconnects in dmesg; fix transport stability before tuning.
6) Symptom: After “performance tuning,” apps see stale data
Root cause: aggressive attribute caching (actimeo) used on a correctness-sensitive share.
Fix: Reduce actimeo (1–5) or use the safe profile; redesign workflow using atomic renames and ready/staging directories.
7) Symptom: Linux clients are slower than Windows for the same share
Root cause: different negotiated features, different signing requirements per client policy, different caching defaults, or different path (DFS).
Fix: Compare negotiated settings on both sides. Don’t compare a Windows client on the same VLAN with a Linux VM across a firewall and call it science.
8) Symptom: Throughput collapses only on Wi‑Fi or VPN
Root cause: latency and loss; VPN MTU; single TCP flow; packet inspection.
Fix: Use split tunneling if allowed; correct MTU; use parallel transfers; consider moving bulk operations off SMB in remote scenarios.
Checklists / step-by-step plan
Step-by-step: from complaint to fix without guessing
- Classify the slowness: big file vs small file vs directory listing vs mount time.
- Prove network capacity with iperf3 (parallel streams).
- Confirm negotiated SMB features via /proc/fs/cifs/DebugData: dialect, signing, encryption.
- Measure a controlled large-file transfer (direct I/O) to remove page cache illusions.
- Measure a controlled metadata test (create/stat/unlink loop).
- Check stability: NIC errors, MTU, CIFS reconnect logs.
- If security tax is suspected: test in a lab share with signing/encryption disabled; quantify delta.
- Pick a mount profile:
  - Correctness-first: cache=none or a cautious actimeo.
  - Read-mostly/bulk: cache=strict, actimeo=30 (or higher with workflow guarantees).
- Address workload shape: tar small files, parallelize, stage locally, avoid huge directories.
- Codify: put mount options in /etc/fstab (or systemd mount units) with comments and an owner (sketch after this list).
- Regression-proof: keep a tiny benchmark/canary and alert on deltas.
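A hedged /etc/fstab sketch for the bulk profile described below. The credentials file path is illustrative; _netdev and nofail keep an unreachable file server from blocking boot.
# /etc/fstab entry (owner: platform team) - bulk read-mostly profile
//fs01.corp.example/artifacts  /mnt/artifacts  cifs  vers=3.1.1,credentials=/etc/cifs-cred/artifacts,cache=strict,actimeo=30,serverino,uid=1000,gid=1000,_netdev,nofail  0  0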
Two mount profiles you can actually live with
Profile A: Interactive / correctness-first (home dirs, shared edits)
cr0x@server:~$ sudo mount -t cifs //fs01.corp.example/homes /home \
-o vers=3.1.1,sec=krb5,cache=none,actimeo=1,soft,serverino,uid=1000,gid=1000
Use when: multiple writers, people editing the same files, correctness beats speed.
Profile B: Bulk read-mostly (artifact cache, datasets, media)
cr0x@server:~$ sudo mount -t cifs //fs01.corp.example/artifacts /mnt/artifacts \
-o vers=3.1.1,username=buildbot,domain=CORP,cache=strict,actimeo=30,serverino,uid=1000,gid=1000
Use when: read-heavy, few writers, you can tolerate some attribute staleness.
FAQ
1) Why did CIFS get slower on Ubuntu 24.04 compared to 22.04?
Usually it didn’t “get slower” in isolation. What changes is the negotiated SMB features (dialect, signing, encryption), kernel behavior, or your environment (security policy, DNS, network path). Confirm with /proc/fs/cifs/DebugData and compare.
2) What mount option gives the biggest throughput boost?
If you’re CPU-bound on security features, the biggest “boost” comes from disabling signing/encryption—only if policy allows. If you can’t, the next biggest wins are ensuring enough CPU and using appropriate caching (cache=strict, reasonable actimeo) for read-mostly workloads.
3) Should I always set actimeo=30?
No. It’s great for read-mostly workloads and terrible for correctness-sensitive shares with frequent changes. Treat it like a knob that changes semantics, not a free speed upgrade.
4) Does rsize/wsize tuning still matter?
Less than it used to, because SMB2/3 and the kernel client handle sizing dynamically. It can still matter in edge cases (appliances, weird middleboxes), but it’s not my first lever on Ubuntu 24.04.
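If you want to see what the client actually negotiated rather than guess, the live values show up in /proc/mounts (a quick check, not a tuning recommendation):
cr0x@server:~$ grep cifs /proc/mounts | tr ',' '\n' | grep -E 'rsize|wsize'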
5) Why is directory listing slow even when throughput is fine?
Because listing isn’t throughput work; it’s metadata work. Every entry may require attribute queries and ACL checks, and huge directories amplify latency. Fix with better directory layout, caching where safe, and server-side scanning policy.
6) Is soft safe to use?
It depends. soft can cause I/O to error out instead of hanging forever, which may be desirable for batch jobs. For database-like workloads, unexpected I/O errors can be worse than waiting. Choose intentionally, not by cargo cult.
7) Should I use sec=krb5 or username/password?
Kerberos (sec=krb5) is usually cleaner for corporate environments and avoids password sprawl, but it’s more sensitive to DNS and time sync. If mounts are slow or flaky, validate NTP and Kerberos tickets.
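Two quick checks before blaming the protocol, assuming Kerberos is in use:
cr0x@server:~$ klist -s && echo "ticket OK" || echo "no or expired Kerberos ticket"
cr0x@server:~$ timedatectl | grep -E 'synchronized|NTP service'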
8) Can SMB Multichannel fix my speed issues?
It can, especially on fast links where a single TCP flow underutilizes bandwidth. But it needs server support and correct network/NIC configuration. Don’t assume it’s enabled; verify and test.
9) Why does smbclient feel faster than the mounted filesystem?
smbclient is a different client path and workload. If it’s fast while the mount is slow, look at mount options, caching semantics, and the application’s metadata patterns (lots of stats/opens). If both are slow, look at server/network/policy.
10) What’s the fastest way to move a million tiny files?
Don’t. Package them (tar/zstd), move one large blob, then unpack locally. If you must keep them as individual files, use parallelism and accept that metadata will dominate. SMB is not a high-performance object store.
Conclusion: what to do next
When CIFS is painfully slow on Ubuntu 24.04, the fix is rarely “add random flags.” Do the disciplined thing:
- Run the fast diagnosis playbook. Identify whether you’re throughput-bound, metadata-bound, or stability-bound.
- Confirm negotiated SMB features. If signing/encryption are required, treat them as a capacity requirement, not a mystery.
- Pick a mount profile that matches the workload: correctness-first for interactive shares, caching for bulk read-mostly.
- Change workload shape if you’re moving tiny files: package, stage locally, parallelize carefully, avoid massive directories.
- Codify the mount options and keep a small benchmark canary so you notice regressions before users do.
If you want one practical next action: capture /proc/fs/cifs/DebugData, an iperf3 result, and a large-file dd test. With those three artifacts, you can have an adult conversation with network, storage, and security—without guessing.
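A minimal sketch for capturing those three artifacts in one place; the directory name and test file are illustrative.
cr0x@server:~$ mkdir -p ~/cifs-evidence && cd ~/cifs-evidence
cr0x@server:~$ sudo cat /proc/fs/cifs/DebugData > debugdata.txt
cr0x@server:~$ iperf3 -c fs01.corp.example -P 4 -t 10 > iperf3.txt
cr0x@server:~$ dd if=/dev/zero of=/mnt/eng/evidence-test.bin bs=16M count=320 oflag=direct 2> dd.txt; rm -f /mnt/eng/evidence-test.bin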