Everything looks locked down. Your IPv4 scan is clean. The change ticket says “firewall enabled.” Then a customer emails a screenshot of your login prompt—over IPv6. That’s not a theoretical gap; that’s a real one, and it shows up exactly when you least want it: during audits, incidents, or a sleepy Sunday maintenance window.
Ubuntu 24.04 makes it easy to end up half-protected. Not because it’s broken, but because the defaults, tooling layers, and cloud realities create a perfect “I swear we blocked that” trap. This is the practical guide to close the hole properly—nftables, UFW, services, and the network edge—so dual-stack means “secure on both stacks,” not “surprise, we left a side door open.”
What actually goes wrong: the IPv6 “shadow exposure” pattern
The failure mode is boring and consistent:
- You deploy firewall rules that clearly block inbound IPv4.
- The host has a globally routable IPv6 address (or a /64 on the interface).
- Your firewall tooling either doesn’t apply to IPv6, applies differently, or is bypassed by another layer.
- A service binds to :: (all IPv6 interfaces), which often also means it accepts IPv4 via IPv4-mapped addresses, depending on settings.
- You test from an IPv4-only vantage point and declare victory.
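That wildcard-bind detail is worth checking on your own hosts. A minimal check, assuming stock kernel settings: the net.ipv6.bindv6only sysctl controls whether a socket bound to :: also accepts IPv4 connections as IPv4-mapped addresses (applications can override this per socket with IPV6_V6ONLY).
cr0x@server:~$ sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
A value of 0 is the Linux default: one [::] listener can serve both stacks. Either way, the IPv6 path exists and needs policy.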
Ubuntu 24.04 is “modern Linux.” That’s good. It also means netfilter is nftables-first, UFW is a compatibility layer, iptables may be running as nft under the hood, and cloud metadata plus auto-config can add addresses you didn’t expect. If you’re only thinking “iptables rules = firewall,” you’re already late.
One short joke, as a palate cleanser: A firewall that blocks IPv4 but not IPv6 is like locking your front door and leaving the garage open—except the garage has a neon sign saying “new protocol, who dis.”
Also: “disable IPv6” is not a strategy. It’s a last-resort workaround that breaks real stuff (package mirrors, modern CDNs, some corporate networks) and often returns later with an even weirder failure. What you want is policy: default deny inbound on both stacks, explicit allow for what you need, validated from both stacks, and enforced at more than one layer.
Interesting facts and context (because history repeats)
- IPv6 has been standardized since the late 1990s. The protocol isn’t new; operational habits are what lag.
- Early Linux firewalls were mostly iptables-centric. Many teams built muscle memory around IPv4 tables and treated IPv6 as “later.”
- IPv6 eliminates NAT as a default crutch. With global addressing, “it’s behind NAT so it’s fine” stops being a thing.
- UFW historically leaned on iptables/ip6tables. In nftables era, translation layers can surprise you when you assume “same rules, same behavior.”
- Some scanners and compliance checks still default to IPv4. You can pass a report while still being wide open on IPv6.
- IPv6 neighbor discovery replaces ARP. Different control-plane traffic means different failure modes when people over-block “weird ICMP.”
- ICMPv6 is not optional. Block it indiscriminately and you’ll break path MTU discovery and basic connectivity in delightful, intermittent ways.
- Dual-stack is often “enabled by accident.” Cloud images, router advertisements, or DHCPv6 can give you a globally reachable address without anyone filing a ticket.
- “Listening on ::” is a common default. Many daemons bind IPv6 wildcard and become reachable on v6 even if you only tested v4.
A correct mental model: packets, hooks, and who gets to say “deny”
On Ubuntu 24.04, “the firewall” is not a single thing. It’s a pipeline:
- The network edge (cloud security groups, router ACLs, on-prem firewall, load balancer listeners).
- The host kernel packet filter (nftables ruleset, potentially managed by UFW, firewalld, or custom automation).
- Local accept (a service listening on a socket, systemd socket activation, and app-level ACLs).
nftables is the engine. UFW is a manager. iptables commands might be a compatibility frontend that writes nft rules. You can have multiple managers, and they can step on each other. If you’re unlucky, you have rules for IPv4, empty policy for IPv6, and a service on ::. That’s the whole movie.
Reliability people tend to have a quote for this. One that holds up, paraphrased because the exact wording varies: "Hope is not a strategy." It's often attributed to engineering and operations leadership circles.
Translate that into firewall work: don’t hope that “UFW enabled” implies “IPv6 blocked.” Verify the kernel ruleset and verify with an IPv6-capable scan from outside the box.
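Here's what that outside verification can look like, sketched with nmap from a separate IPv6-capable machine (the hostname and target address are illustrative; use your own):
cr0x@laptop:~$ nmap -6 -Pn -p 22,80,443,8080 2001:db8:10:20::123
Run the same scan over IPv4 and compare. Any port that is open on one stack and filtered on the other is exactly the gap this article is about.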
Fast diagnosis playbook
If you suspect IPv6 exposure on Ubuntu 24.04, do this in order. It’s designed to find the bottleneck (or the missing control) quickly, not to satisfy curiosity.
1) Confirm the host is actually reachable on IPv6
First question: do you have a global IPv6 address? If yes, assume you are reachable unless proven otherwise. If no, your immediate risk might be lower, but don’t relax—interfaces and routes change.
2) Identify what is listening on IPv6
Find services bound to :: or specific v6 addresses. Listening sockets are the “why.” Firewall rules are the “how.”
3) Inspect the real ruleset (nftables), not just the tool status
UFW status is a hint. The nftables ruleset is the truth. Look for ip6 chains and policies.
4) Verify enforcement: counters, logging, and a real IPv6 probe
Counters tell you if rules match. A probe from a different host tells you if your mental model is correct. “Local curl” is not a security test.
5) Check the edge controls (cloud SG / router ACL)
If the edge allows IPv6 broadly, your host firewall is your last line. If the edge blocks, the host firewall is still necessary (for lateral movement and misrouted traffic), but your exposure might be contained.
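What "check the edge" looks like depends on your provider. One common case, sketched for AWS security groups (the group ID is a placeholder; adapt the idea to your cloud or router ACLs): list the IPv6 ranges allowed inbound and look for ::/0.
cr0x@laptop:~$ aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query 'SecurityGroups[].IpPermissions[].Ipv6Ranges[]' --output table
If ::/0 appears next to ports you didn't intend to expose, the edge isn't helping; the host firewall is the last control standing.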
Practical tasks (commands, outputs, decisions)
These are not “run them for fun.” Each task has: a command, a realistic snippet of output, what it means, and the decision you make. Do them in a terminal. Save outputs in your incident ticket. Future-you will thank you.
Task 1 — See if IPv6 is enabled and what addresses you have
cr0x@server:~$ ip -6 addr show
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
inet6 2001:db8:10:20::123/64 scope global dynamic
valid_lft 86390sec preferred_lft 14390sec
inet6 fe80::a00:27ff:fe4e:66a1/64 scope link
valid_lft forever preferred_lft forever
What it means: You have a global IPv6 address (2001:db8:...). This host is potentially reachable from the internet (depending on routing and edge rules).
Decision: Treat IPv6 exposure as real. Continue with listener and firewall checks.
Task 2 — Check if IPv6 is disabled via sysctl (don’t assume)
cr0x@server:~$ sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
What it means: IPv6 is enabled at the kernel level.
Decision: Keep IPv6 enabled; fix firewall and services instead of trying to “turn off the internet.”
Task 3 — List listening sockets, including IPv6
cr0x@server:~$ ss -lntup
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
tcp LISTEN 0 4096 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1123,fd=3))
tcp LISTEN 0 4096 [::]:22 [::]:* users:(("sshd",pid=1123,fd=4))
tcp LISTEN 0 4096 [::]:8080 [::]:* users:(("java",pid=2210,fd=99))
What it means: SSH and a Java service are listening on IPv6 wildcard. Even if your IPv4 firewall is tight, IPv6 could still reach these ports.
Decision: You must either (a) block inbound IPv6 for these ports at nftables/UFW, or (b) bind services to specific interfaces, or both.
Task 4 — Check UFW IPv6 setting (this is where people get fooled)
cr0x@server:~$ sudo grep -n '^IPV6=' /etc/default/ufw
13:IPV6=no
What it means: UFW will not manage IPv6 rules. Your “UFW enabled” stance only applies to IPv4.
Decision: If you use UFW, set IPV6=yes and reload, then validate nftables rules. If you don’t use UFW, ensure nftables has explicit ip6 filtering.
Task 5 — See UFW status, but treat it as a dashboard light
cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
22/tcp ALLOW IN 10.0.0.0/8
What it means: UFW is active for what it manages. Note that nothing here says IPv6 is covered.
Decision: Verify nftables for ip6. If UFW IPv6 is off, this output can be dangerously reassuring.
Task 6 — Inspect the actual nftables ruleset
cr0x@server:~$ sudo nft list ruleset
table inet ufw {
chain input {
type filter hook input priority filter; policy drop;
ct state established,related accept
iif "lo" accept
ip protocol icmp accept
tcp dport 22 ip saddr 10.0.0.0/8 accept
counter packets 1840 bytes 220800 drop
}
}
What it means: This ruleset is in an inet table, which can match both IPv4 and IPv6. But look closely: it accepts ip protocol icmp (IPv4 ICMP), and it allows SSH only with ip saddr (IPv4 source match). There is no explicit ICMPv6 allowance and no explicit IPv6 SSH allowance. With a default drop, IPv6 might be blocked—or it might be handled elsewhere if there are other tables/chains you didn’t see in the snippet.
Decision: Confirm there is a single authoritative input chain hooked for inet and that it includes IPv6 essentials (ICMPv6 neighbor discovery) and explicit allows.
Task 7 — Confirm what hooks exist (are you filtering IPv6 input at all?)
cr0x@server:~$ sudo nft list chains
table inet ufw
chain input
chain forward
chain output
What it means: You have chains, but you still need to confirm they are actually hooked with type filter hook input for the right family.
Decision: If you don’t see a hooked input chain in an inet/ip6 table, you might be running with “no IPv6 firewall” despite tool claims.
Task 8 — Verify IPv6 neighbor discovery isn’t being murdered
cr0x@server:~$ ip -6 neigh show dev enp0s3
fe80::1 lladdr 52:54:00:12:34:56 router REACHABLE
What it means: Neighbor discovery is functioning at least to the router. If ND breaks, you can have “firewall is secure” and “network is mysteriously flaky” at the same time.
Decision: When tightening IPv6 rules, explicitly allow required ICMPv6 types (or allow ICMPv6 broadly inbound if you don’t have a strong reason not to).
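If you'd rather allow an explicit list than "all ICMPv6," here's a minimal sketch using nftables named types. It assumes a custom inet table called filter like the one built later in this article; if UFW owns your ruleset, express the same intent through UFW's IPv6 rules instead of adding rules behind its back.
cr0x@server:~$ sudo nft 'add rule inet filter input icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, echo-reply, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept'
Matching on icmpv6 type also catches ICMPv6 carried behind extension headers, which a plain ip6 nexthdr match can miss.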
Task 9 — Check for iptables legacy confusion (nft backend vs legacy)
cr0x@server:~$ sudo update-alternatives --display iptables | sed -n '1,12p'
iptables - auto mode
link best version is /usr/sbin/iptables-nft
link currently points to /usr/sbin/iptables-nft
link iptables is /usr/sbin/iptables
What it means: iptables commands are using the nft backend. That’s fine, but it means “iptables-save” output may not match what you expect from old playbooks.
Decision: Standardize: manage rules with nftables directly or with one tool (UFW) and stop mixing iptables scripts with nft rules unless you like debugging at 2 a.m.
Task 10 — Confirm there is no second firewall manager (firewalld) competing
cr0x@server:~$ systemctl is-active firewalld; systemctl is-active ufw
inactive
active
What it means: Good. One manager is active.
Decision: If both are active, pick one. Disable the other. Mixed ownership creates gaps and “random” rule loss.
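If you do find both active, the fix is mechanical. Shown here for keeping UFW and dropping firewalld; swap the unit names if you standardize the other way.
cr0x@server:~$ sudo systemctl disable --now firewalld
cr0x@server:~$ systemctl is-active firewalld
inactive
Then re-verify the nftables ruleset, because removing a manager can also remove rules you were silently relying on.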
Task 11 — Identify whether systemd socket activation opened something you forgot
cr0x@server:~$ systemctl list-sockets --all | sed -n '1,12p'
LISTEN UNIT ACTIVATES DESCRIPTION
[::]:22 ssh.socket ssh.service OpenSSH Server Socket
[::]:631 cups.socket cups.service CUPS Scheduler
What it means: systemd is listening on IPv6 addresses for services. Even if you “stopped the service,” the socket may still accept connections and spawn it.
Decision: Disable sockets you don’t need (systemctl disable --now cups.socket) and ensure firewall rules cover the port anyway.
Task 12 — Check SSH daemon bind behavior explicitly
cr0x@server:~$ sudo sshd -T | egrep '^(addressfamily|listenaddress|port)'
addressfamily any
port 22
What it means: SSH listens on both IPv4 and IPv6 (“any”) unless constrained.
Decision: If you intend SSH only on a management subnet, enforce it in firewall rules for both families and consider binding SSH to that interface/address.
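A minimal sshd_config sketch for that intent, assuming hypothetical management addresses of 10.0.5.10 and 2001:db8:100::10 (the drop-in path is an example; validate with sudo sshd -t before restarting):
# /etc/ssh/sshd_config.d/60-mgmt-bind.conf
AddressFamily any
ListenAddress 10.0.5.10
ListenAddress 2001:db8:100::10
Binding narrows where sshd listens; the firewall rules for both families still need to allow only the management prefixes.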
Task 13 — Confirm inbound path by probing from another machine (IPv6)
cr0x@laptop:~$ nc -6vz 2001:db8:10:20::123 22
Connection to 2001:db8:10:20::123 22 port [tcp/ssh] succeeded!
What it means: The port is reachable over IPv6. This is the “real hole” in plain text.
Decision: Block it at the edge and host firewall, or restrict to the right source ranges, then retest until it fails from unauthorized sources.
Task 14 — Watch nft counters while testing (proves which rule matched)
cr0x@server:~$ sudo nft -a list chain inet ufw input
table inet ufw {
chain input { # handle 3
type filter hook input priority filter; policy drop;
ct state established,related accept # handle 10
iif "lo" accept # handle 11
ip protocol icmp accept # handle 12
tcp dport 22 ip saddr 10.0.0.0/8 accept # handle 13
counter packets 1901 bytes 228120 drop # handle 14
}
}
What it means: You can see counters moving when traffic hits. If your IPv6 probe succeeds but counters don’t change here, your IPv6 traffic is not being filtered by this chain. That’s the smoking gun.
Decision: Find the chain that actually hooks IPv6 input, or create a proper inet filter chain covering both families.
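One quick way to enumerate every hooked base chain and the family it belongs to (illustrative; table and chain names vary by manager):
cr0x@server:~$ sudo nft list ruleset | grep -E 'table|hook input'
A hook input line under a table of family ip filters IPv4 only. You want one under inet (both families) or ip6 with a drop policy; otherwise IPv6 is effectively unfiltered on the host.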
UFW on Ubuntu 24.04: what it really does for IPv6
UFW is popular because it’s readable. That’s not the same as “complete.” The biggest operational trap is that IPv6 support is a toggle in /etc/default/ufw. If it’s off, you can have a clean, enforced IPv4 policy and an untouched IPv6 world.
Enable UFW IPv6 properly (if you’re using UFW)
Edit /etc/default/ufw and set IPV6=yes. Then reload.
cr0x@server:~$ sudo sed -i 's/^IPV6=.*/IPV6=yes/' /etc/default/ufw
cr0x@server:~$ sudo ufw reload
Firewall reloaded
What it means: UFW will now generate and apply IPv6 rules as well.
Decision: Immediately verify with sudo nft list ruleset and an external IPv6 probe. Do not trust the reload message.
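One low-effort sanity check after the reload: count rules that can match IPv6 at all. Exact table and chain names depend on the UFW version and backend, so treat a non-zero count as a hint, not proof; the external probe is the proof.
cr0x@server:~$ sudo nft list ruleset | grep -cE 'ip6|icmpv6'
If that prints 0, IPv6 is still not being filtered, whatever the reload message said.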
Set sane defaults for dual-stack
cr0x@server:~$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
cr0x@server:~$ sudo ufw default allow outgoing
Default outgoing policy changed to 'allow'
(be sure to update your rules accordingly)
What it means: You are default-denying inbound for both stacks (once IPv6 is enabled) and allowing outbound. That’s the baseline for servers.
Decision: Add explicit allows for required inbound ports, scoped by source where possible, in both IPv4 and IPv6 terms.
Allow SSH from a management prefix on IPv6 (example)
cr0x@server:~$ sudo ufw allow from 2001:db8:100::/56 to any port 22 proto tcp
Rule added
Rule added (v6)
What it means: You now have a v6-specific allow rule in addition to v4 rules. The “(v6)” line is what you want to see.
Decision: If you don’t see “(v6)”, your IPv6 toggle or rule syntax isn’t doing what you think.
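To double-check after the fact, list the rules with numbers and look for the (v6) markers next to your IPv6 prefixes (output omitted; it mirrors the status table above):
cr0x@server:~$ sudo ufw status numbered
If the management-prefix allow shows up only for IPv4, fix the rule before moving on.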
Two opinions that will save you time
- Don’t rely on “allow 22/tcp” without source limits on internet-exposed boxes. On IPv6, address space doesn’t stop brute force. Attackers don’t care that scanning is “harder” when they already have your address from DNS or logs.
- Don’t over-block ICMPv6. You will break IPv6 in ways that look like random packet loss. Keep it simple: allow essential ICMPv6 inbound, then tighten if you have evidence and tests.
nftables reality check: tables, chains, priorities
If you want fewer surprises, manage nftables directly and keep UFW out of it—or commit to UFW and verify its generated nft rules. What you must not do is have three systems “helping.” That’s how you get a server that’s both open and intermittently broken.
Use an inet table for dual-stack filtering
The inet family lets one chain match both IPv4 and IPv6. That’s usually the simplest way to avoid “we forgot ip6.” A minimal pattern:
- Default drop on input
- Allow established/related
- Allow loopback
- Allow ICMP + ICMPv6 (or at least the needed ICMPv6 types)
- Allow specific services from specific source ranges
Example: create a simple dual-stack input chain (illustrative—adapt to your environment, don’t paste blindly into prod):
cr0x@server:~$ sudo nft add table inet filter
cr0x@server:~$ sudo nft 'add chain inet filter input { type filter hook input priority 0; policy drop; }'
cr0x@server:~$ sudo nft add rule inet filter input ct state established,related accept
cr0x@server:~$ sudo nft add rule inet filter input iif "lo" accept
cr0x@server:~$ sudo nft add rule inet filter input ip protocol icmp accept
cr0x@server:~$ sudo nft add rule inet filter input ip6 nexthdr ipv6-icmp accept
cr0x@server:~$ sudo nft add rule inet filter input tcp dport 22 ip saddr 10.0.0.0/8 accept
cr0x@server:~$ sudo nft add rule inet filter input tcp dport 22 ip6 saddr 2001:db8:100::/56 accept
What it means: One chain covers both stacks. You explicitly allow IPv4 ICMP and IPv6 ICMP, and you scope SSH by source for each family.
Decision: Decide if you want to manage rules with nft directly. If yes, disable UFW to avoid dueling chains, then persist rules via your configuration management approach.
Persisting nftables rules: be explicit about ownership
How nftables persistence is handled varies across Ubuntu environments. The key is not the mechanism; it’s that the mechanism is singular and audited. If you write rules interactively and forget to persist, you’ll “fix” the issue until the next reboot—classic paper shield.
If you use UFW: let UFW persist them. If you use nftables directly: persist via your chosen method and test on reboot as part of change validation.
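For the nftables-direct path, a minimal persistence sketch assuming the stock nftables.service and /etc/nftables.conf layout on Ubuntu (verify the paths and service name in your image, and skip this entirely if UFW owns the firewall):
cr0x@server:~$ { echo 'flush ruleset'; sudo nft list ruleset; } | sudo tee /etc/nftables.conf > /dev/null
cr0x@server:~$ sudo nft -c -f /etc/nftables.conf
cr0x@server:~$ sudo systemctl enable --now nftables
The leading flush ruleset keeps reloads idempotent, nft -c checks the file without applying it, and the reboot test is still part of the change, not optional.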
Second short joke, and then back to work: IPv6 exposure is the kind of bug that hides so well you start thinking it’s a feature—until the auditor finds it and suddenly it’s “legacy behavior.”
It’s not only firewall: services that listen on IPv6 by default
Even with a perfect firewall, don’t let services listen broadly unless they need to. Defense in depth isn’t just a slogan; it’s how you survive misconfigurations, future changes, and the occasional “temporary” rule someone forgets to remove.
Bind to specific addresses where practical
Some services can be configured to bind to a management interface only, or to localhost behind a reverse proxy. Examples:
- Admin dashboards should bind to 127.0.0.1 and ::1, and be accessed via SSH tunnel or a VPN.
- Internal APIs should bind to an internal VLAN interface, not the public interface.
- Anything “temporary” should bind to localhost by default.
Beware systemd sockets
systemd can keep ports open even when you think a service is stopped. That’s not a bug; it’s a feature for on-demand services. It’s also a foot-gun if you’re not checking sockets.
If you find a socket listening on IPv6 that you don’t want, disable it:
cr0x@server:~$ sudo systemctl disable --now cups.socket
Removed "/etc/systemd/system/sockets.target.wants/cups.socket".
What it means: The socket unit will not listen after this, and it won’t be started on boot.
Decision: Do this for any unwanted socket-activated listener, then confirm with ss -lntup.
IPv6 and “localhost” are not the same thing
Engineers sometimes bind an app to “localhost” and assume it’s local. But “localhost” can mean IPv4 only (127.0.0.1), IPv6 only (::1), or both depending on configuration. Be explicit in service configs. Write both addresses if you mean both. Test both.
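You can see what "localhost" means on a given host with getent, which uses the same resolution path most services do (output illustrative; yours may list only one family):
cr0x@server:~$ getent ahosts localhost
::1             STREAM localhost
::1             DGRAM
::1             RAW
127.0.0.1       STREAM
127.0.0.1       DGRAM
127.0.0.1       RAW
If the daemon resolved only one of these, "local only" may not mean what the reviewer assumed. Binding explicitly to 127.0.0.1 and ::1 removes the ambiguity.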
Cloud and edge: security groups, NACLs, load balancers, and why host firewalls aren’t enough
In corporate systems, the most common IPv6 failure is split-brain policy:
- Network team locks down IPv4 security groups carefully.
- IPv6 gets turned on later (“we need it for compliance” / “modernization” / “because the provider checkbox exists”).
- IPv6 inbound rules are left permissive or default-allow in a place nobody checks daily.
- The host firewall is assumed to block it, but UFW IPv6 is off or nft rules don’t hook.
Host firewall is necessary, but not sufficient. If you have the ability to block inbound IPv6 at the edge, do it. Then still enforce host policy. The two layers catch different classes of mistakes:
- Edge controls protect against direct internet noise and reduce exposure surface.
- Host firewall protects against lateral movement, misrouted traffic, internal threats, and “someone opened a port on the instance” moments.
Also remember load balancers: you might have an IPv6 listener on the LB, forwarding to instances over IPv4. Or the reverse. Dual-stack at the edge does not guarantee dual-stack on the host. That mismatch is where monitoring lies to you.
Three corporate mini-stories from the trenches
1) Incident caused by a wrong assumption: “UFW is on, so we’re safe”
A mid-size company rolled out Ubuntu 24.04 images for internal tooling. They had a nice, consistent hardening script: install UFW, set default deny inbound, allow SSH from the corporate IPv4 ranges, enable logging. It passed their standard “port scan” step, which—predictably—ran from an IPv4-only runner inside a CI job.
Three months later, a red-team exercise flagged multiple hosts with reachable SSH over IPv6. The team’s first reaction was denial, then confusion, then a flurry of “but UFW is active” screenshots. The clue was in plain sight: IPV6=no in /etc/default/ufw. That setting was inherited from an older baseline created when their environment had no IPv6 routing. It persisted like a fossil.
The hosts had acquired global IPv6 addresses through a networking change that wasn’t framed as “this changes exposure.” Router advertisements made addresses appear automatically. No one thought to treat it as a security boundary change. That’s the wrong assumption: “address assignment is a networking detail.” In dual-stack, it’s a policy event.
Fixing it wasn’t just flipping IPV6=yes. They had to add proper ICMPv6 allowances, or connectivity broke in strange ways. They also discovered that some hosts had services bound to :: “temporarily.” Once they forced service binding and enforced dual-stack rules, the red-team results became boring. Boring is good.
2) Optimization that backfired: “Drop all ICMP to reduce noise”
A different organization ran a high-volume API platform and prided itself on “minimal attack surface.” An engineer noticed a lot of ICMP traffic in logs and decided it was “unnecessary.” They tightened rules to drop ICMP across the board, including ICMPv6, because “we don’t need ping in production.” This was packaged as an optimization: fewer packets, fewer logs, fewer distractions.
Within days, the platform developed intermittent timeouts. Not a clean outage—worse. Some clients saw large payload requests fail, others didn’t. Retries helped. Latency looked fine until it didn’t. The on-call rotation started collecting superstitions. Someone blamed the CDN. Someone blamed TLS.
The real cause was IPv6 path MTU discovery getting broken. ICMPv6 “Packet Too Big” messages weren’t making it back. Connections fell back to fragmentation behavior that didn’t match expectations, and certain paths black-holed packets. The optimization backfired by creating a reliability nightmare and a debugging tax.
They fixed it by allowing necessary ICMPv6 and tuning logging so the noise was manageable. The lesson stuck: ICMPv6 isn’t “ping.” It’s plumbing. Turning it off because it’s loud is like removing the oil light because it’s annoying.
3) The boring but correct practice that saved the day: dual-stack validation gates
A finance-adjacent company had been burned by a previous IPv6 exposure, so they institutionalized a practice nobody loved: every new service and every new base image had to pass dual-stack reachability tests. Not “we ran nmap once,” but a small automated suite: check listening sockets, check nftables hooks, probe from an IPv6-enabled runner, and assert “only these ports are reachable.”
It was boring. Engineers complained that it slowed down provisioning. Security liked it. SREs tolerated it. The payoff arrived quietly: during a cloud migration, a networking change added IPv6 to subnets by default. Many teams didn’t even notice, because nothing broke. But the validation gate did: suddenly builds failed because IPv6 reachability tests found open ports.
Instead of discovering the issue via an incident, they discovered it in CI. The fixes were small and repeatable: enable UFW IPv6 (or correct nftables inet rules), restrict SSH by v6 prefix, and disable unused systemd sockets. The migration stayed on schedule. Nobody wrote a postmortem. That’s the dream.
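If you want to steal the idea, here is a minimal sketch of such a gate as a shell script run from an IPv6-capable CI runner. Everything in it is a placeholder: the script name, the port lists, and the assumption that plain nc reachability is a good enough assertion for your environment.
#!/usr/bin/env bash
# dual-stack-gate.sh: fail the build if unexpected ports answer over IPv6.
# Usage: ./dual-stack-gate.sh <target-ipv6-address>
set -euo pipefail

target="$1"
allowed_ports="22"            # ports expected to answer from this runner
check_ports="22 80 443 8080"  # ports we probe; extend as services are added
fail=0

for port in $check_ports; do
  if nc -6 -z -w 3 "$target" "$port" 2>/dev/null; then
    if ! grep -qw "$port" <<<"$allowed_ports"; then
      echo "FAIL: port $port reachable over IPv6 on $target but not in allowlist"
      fail=1
    fi
  fi
done

exit "$fail"
The point isn't this exact script; it's that the assertion runs on every build, from a vantage point that actually speaks IPv6.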
Common mistakes: symptoms → root cause → fix
1) “IPv4 scan is clean, but auditors say ports are open”
Symptom: External report shows SSH/HTTP reachable; your internal scan shows closed.
Root cause: Auditor scanned IPv6; you validated only IPv4.
Fix: Perform dual-stack scanning. Confirm ip -6 addr, then probe with nc -6vz or equivalent from an IPv6 host. Enable IPv6 rules in UFW or nftables.
2) “UFW is active, but IPv6 connections still succeed”
Symptom: ufw status looks correct; IPv6 connections reach services anyway.
Root cause: IPV6=no in /etc/default/ufw, or UFW rules not hooked into nftables for inet/ip6 paths.
Fix: Set IPV6=yes, reload UFW, verify with nft list ruleset, and retest from an external IPv6 source.
3) “IPv6 dies randomly after hardening”
Symptom: Some connections hang, large transfers fail, weird intermittent drops.
Root cause: ICMPv6 blocked too aggressively (PMTUD and neighbor discovery issues).
Fix: Allow ICMPv6 (ip6 nexthdr ipv6-icmp accept) or at least required types. Test with real workloads, not just ping.
4) “Service is stopped, but port is still open”
Symptom: You stop a daemon; ss still shows listening.
Root cause: systemd socket activation keeps the port open via a .socket unit.
Fix: Disable the socket unit (systemctl disable --now name.socket) and recheck listeners.
5) “Rules exist, but IPv6 traffic isn’t hitting counters”
Symptom: IPv6 connections succeed; your expected nft chain counters don’t move.
Root cause: Your chain isn’t hooked for input, or a different table/chain has precedence, or you’re filtering ip family only.
Fix: Ensure you have an inet or ip6 input chain with a hook and correct priority. Simplify ownership: one manager.
6) “We allowed SSH from corp IPv4, but IPv6 still allows the world”
Symptom: SSH is restricted by IPv4 sources; IPv6 is open.
Root cause: Rule matches only ip saddr, not ip6 saddr.
Fix: Add IPv6 source-scoped rules (UFW allow from 2001:.../... or nft ip6 saddr rules).
7) “We disabled IPv6, but it keeps coming back / apps break”
Symptom: Someone toggles sysctl to disable IPv6; later it reappears or services fail.
Root cause: Disabling IPv6 is fragile across environments and can break dependencies; or settings applied inconsistently per interface.
Fix: Re-enable IPv6 and implement correct firewall + binding policy. If you must disable, do it consistently and document why, then schedule rework.
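If you genuinely must disable IPv6 for a while, do it in one place that's easy to find and revert, for example a single sysctl drop-in applied to all interfaces (the filename is an example):
cr0x@server:~$ printf 'net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\n' | sudo tee /etc/sysctl.d/90-disable-ipv6.conf
cr0x@server:~$ sudo sysctl --system
Even then, keep the dual-stack firewall rules in place; the day someone reverts that file, you want policy already waiting.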
Checklists / step-by-step plan
Step-by-step: close the real hole on a single Ubuntu 24.04 host
- Discover IPv6 addresses and routes. If you have a global address, assume exposure and proceed.
- Inventory listeners. Identify what binds to :: and what should not.
- Pick one firewall manager. UFW or nftables. Not both. Not “iptables scripts plus UFW.”
- Ensure default deny inbound for both stacks. In UFW, set IPV6=yes. In nftables, use an inet input chain with policy drop.
- Allow required ICMPv6. Keep IPv6 functional while securing it.
- Add explicit allows for required services. Prefer source scoping: management prefixes for SSH, LB subnets for app ports.
- Reduce listeners. Bind internal/admin services to localhost or internal interfaces. Disable unused systemd sockets.
- Validate from outside over IPv6. If you can’t test from outside, your change is unverified.
- Check counters/logging. Confirm your rules are actually being used.
- Persist configuration. Ensure rules survive reboot and config management drift.
Ops checklist: what to capture in the ticket (makes audits survivable)
- Output of ip -6 addr and ip -6 route (proves addressing and default route).
- Output of ss -lntup (proves what could be reached).
- Output of nft list ruleset (proves actual enforcement).
- UFW status plus the /etc/default/ufw IPv6 toggle (proves the manager scope).
- External IPv6 probe results before/after (proves closure).
FAQ
1) Why did this show up specifically on Ubuntu 24.04?
Ubuntu 24.04 sits firmly in the nftables era and dual-stack networking is common by default in many environments. The “gotcha” isn’t unique to 24.04, but the mix of tooling layers and modern defaults makes half-configurations more likely to slip through.
2) If my server has no AAAA DNS record, am I safe from IPv6 exposure?
No. DNS is a discovery mechanism, not an access control. IPv6 addresses leak through logs, configs, certificates, telemetry, and cloud inventories. Also, internal attackers don’t need DNS.
3) Is “IPv6 scanning is hard” a meaningful defense?
Not really. Attackers don’t need to scan the whole /64 if they already know your address, which they often do. And many services are on predictable addresses in cloud environments.
4) Can I just disable IPv6 to make this go away?
You can, but it’s usually a short-term bandage with collateral damage. It breaks some networks, complicates troubleshooting, and tends to be undone later. Proper dual-stack firewalling is the durable fix.
5) Do I need separate rules for IPv4 and IPv6?
You need coverage for both. With nftables, an inet table can apply one chain to both families. With UFW, you must enable IPv6 and confirm rules are generated for v6.
6) What ICMPv6 should I allow?
At minimum, allow the ICMPv6 traffic required for neighbor discovery and PMTUD. If you don’t have time to be clever, allow ICMPv6 inbound and tighten later with real tests and a reason.
7) Why do I see services listening on [::]:port when I only configured IPv4?
Many daemons bind to IPv6 wildcard by default because it can cover both stacks, or because the distribution defaults prefer it. That’s normal; it just means you must enforce firewall policy for IPv6 too.
8) How do I know whether UFW is actually enforcing IPv6?
Check /etc/default/ufw for IPV6=yes, then inspect the nftables ruleset and test from an external IPv6 host. Tool status alone isn’t proof.
9) What’s the quickest way to prove we fixed it?
From a host outside your allowed ranges, try connecting over IPv6 to the previously exposed port (for example nc -6vz). It must fail. Then confirm nft counters show drops for that traffic.
10) If we have cloud security groups, why bother with a host firewall?
Because edge policies drift, internal traffic exists, and misrouting happens. Host firewall protects you when the perimeter isn’t perfect—and it never is.
Conclusion: next steps you can actually schedule
Close the hole the way you close any real operational gap: prove it exists, fix it in one authoritative place, then add a guardrail so it doesn’t come back.
- Today: run ip -6 addr, ss -lntup, and sudo nft list ruleset on one representative host. If you see global IPv6 + open listeners + weak rules, you’ve found your issue.
- This week: pick a firewall ownership model (UFW with IPv6 enabled, or native nftables). Enforce default deny inbound on both stacks. Allow ICMPv6 correctly. Scope inbound allows by source.
- This month: add a dual-stack validation gate: external IPv6 probe + listener inventory + nft hook verification. Make it boring. Make it automatic.
Dual-stack is not optional anymore. The good news: fixing it is straightforward once you stop pretending IPv6 is “later.”