Ubuntu 24.04: IPv6 firewall forgotten — close the real hole (not just IPv4) (case #72)

You locked down the server. At least you think you did. IPv4 looks clean, UFW says “active,” and the change ticket is closed. Then a scanner hits your IPv6 address and finds SSH, a metrics endpoint, and something you forgot even existed.

This is the quiet failure mode of dual-stack: you harden the old internet and accidentally leave the new one on the porch with a spare key under the mat. Ubuntu 24.04 is not uniquely “bad” here. It’s just modern enough that IPv6 is routinely present, while the operational habits around it are still stuck in 2012.

The real problem: “the firewall is on” is not a meaningful statement

Firewalls aren’t binary. They’re a pile of rules across layers:

  • Kernel packet filter (nftables on Ubuntu 24.04; legacy iptables may still lurk).
  • Host firewall wrapper (UFW is common, but it’s not magic).
  • Service binding behavior (listening on 0.0.0.0 vs :: matters).
  • systemd sockets that start daemons on demand.
  • Cloud security groups / network ACLs / edge firewalls that may treat IPv6 differently.
  • Actual IPv6 addressing (global, temporary, privacy addresses, SLAAC, RA).

When teams say “we enabled UFW,” they often mean “we enabled IPv4 filtering for the ports we remembered.” Meanwhile, the host has a globally reachable IPv6 address, and your IPv6 policy is either missing, permissive, or simply not being applied to the correct table/hook.

Opinionated guidance: if you run a public-facing Ubuntu 24.04 host, you must treat IPv6 as first-class. Either explicitly secure it or explicitly disable it with a plan. “Ignore it” is just a slower version of “get owned.”

Facts & context: why IPv6 keeps surprising teams

Some history helps, because a lot of today’s operational mistakes are yesterday’s design decisions meeting today’s default settings.

  1. IPv6 was standardized in the late 1990s (RFC 2460, since superseded by RFC 8200), but broad operational rollout took decades, so many teams built muscle memory on IPv4-only controls.
  2. IPv6 restored end-to-end addressing as a “normal” pattern; NAT is not a design requirement. That means “NAT as a security boundary” disappears.
  3. Most Linux services treat :: as “bind all interfaces”, which includes IPv6 and often IPv4-mapped behavior depending on sysctls and app defaults.
  4. Dual-stack is not a transition detail; it’s the steady state in many enterprises. Disabling IPv6 often breaks internal services (SSO, telemetry, package mirrors) in surprising ways.
  5. nftables replaced iptables as the modern Linux firewall framework; wrappers and migrations can leave “rules that exist” but don’t match the traffic you think they do.
  6. Cloud providers commonly assign IPv6 by default or make it one checkbox away. Security groups may have separate IPv6 sections that nobody fills in.
  7. IPv6 has multiple address scopes (link-local, ULA, global). Link-local is always there; global may appear automatically via Router Advertisements or cloud config.
  8. Privacy/temporary IPv6 addresses exist and can rotate, which breaks “pin this host to a known IP” assumptions for clients and monitoring.
  9. Some scanning tools and compliance scripts still default to IPv4. That creates “green dashboards” while the real exposure lives on v6.

One quote that belongs on every operations wall, because it’s basically the IPv6 firewall story in one line: “Hope is not a strategy” — a line long attributed to NASA flight director Gene Kranz.

Threat model in plain English: how the hole happens

The most common setup I see in audits looks like this:

  • UFW enabled, rules added for IPv4 ports: 22, 80, 443, maybe 9100 locked down.
  • IPv6 enabled by default (because it is), and the host has a global IPv6 address.
  • Either:
    • UFW IPv6 is disabled, so it never writes v6 rules, or
    • UFW writes v6 rules but nftables is not actually enforcing them the way you expect (less common, but real), or
    • You have an allow-all “temporary” v6 rule that became permanent.
  • Services bind to :: and are reachable on v6 even if you tested only v4.

Attackers don’t need creativity. They need you to be predictable. And “we secured IPv4” is predictable.

Joke #1: IPv6 is like a second front door you didn’t know your house had—except it comes with a neon “WELCOME” sign.

Fast diagnosis playbook (do this first)

When you suspect “IPv6 firewall forgotten,” you want answers in minutes, not a philosophical debate about networking. Here’s the order that finds the bottleneck fastest.

First: prove the host actually has reachable IPv6

  • Does the server have a global IPv6 address?
  • Is there a default route for IPv6?
  • Can you reach the internet over IPv6?

Second: list what’s listening on IPv6

  • Identify sockets bound to :: or a global IPv6.
  • Check systemd socket units that can spawn listeners even when “service is stopped.”

Third: verify the packet filter is enforcing IPv6 policy

  • UFW status for v6, and whether its rules exist.
  • nftables ruleset: is there an ip6 family chain for input? Is default drop in place?
  • Look at counters while generating traffic (packets should increment on the expected rule).

Fourth: test externally, dual-stack, from a real network

  • Scan the IPv6 address from a host outside your network perimeter.
  • Confirm that “closed” actually means filtered/blocked, not just “no service on IPv4.”
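The four phases above compress into a quick triage sketch you can paste into a shell (standard iproute2/nftables/ss tools; read each output by eye rather than scripting decisions off it):

```shell
#!/bin/sh
# Quick dual-stack triage sketch; review each output manually.
ip -6 addr show scope global                 # 1. any global IPv6 address?
ip -6 route show default                     # 2. a default IPv6 route?
ss -lntup | grep -F '[::]'                   # 3. listeners bound to all v6 interfaces?
sudo nft list ruleset | grep -c 'ip6\|inet'  # 4. count of v6-aware rule lines
```

If check 1 and 3 both produce output while check 4 prints 0, you almost certainly have the exact hole this article is about.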

Practical tasks: commands, expected output, and decisions (dual-stack)

Below are real operator tasks. Each includes the command, what the output means, and what decision you make. Do them in order if you’re diagnosing; cherry-pick if you already know where the rot is.

Task 1: Confirm IPv6 addresses and scope

cr0x@server:~$ ip -6 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fe80::5054:ff:fe12:3456/64 scope link 
       valid_lft forever preferred_lft forever
    inet6 2001:db8:1234:5678:5054:ff:fe12:3456/64 scope global dynamic 
       valid_lft 86395sec preferred_lft 14395sec

Meaning: If you see a scope global address, you’re reachable on IPv6 assuming routing allows it. Link-local (fe80::) doesn’t mean internet exposure by itself.

Decision: If global exists and this is not intended, you either (a) lock down IPv6 firewall rules now, or (b) disable IPv6 properly (later section) with impact analysis.

Task 2: Check IPv6 routing and default route

cr0x@server:~$ ip -6 route show
2001:db8:1234:5678::/64 dev enp1s0 proto ra metric 100 pref medium
fe80::/64 dev enp1s0 proto kernel metric 256 pref medium
default via fe80::1 dev enp1s0 proto ra metric 100 pref medium

Meaning: A default via route means the host can send IPv6 traffic out. If there’s no default route, inbound might still work in some environments, but usually not from the internet.

Decision: Default route present → treat exposure as real and immediate.

Task 3: Verify IPv6 connectivity

cr0x@server:~$ ping -6 -c 2 ipv6.google.com
PING ipv6.google.com(2607:f8b0:4005:80a::200e) 56 data bytes
64 bytes from 2607:f8b0:4005:80a::200e: icmp_seq=1 ttl=117 time=12.3 ms
64 bytes from 2607:f8b0:4005:80a::200e: icmp_seq=2 ttl=117 time=12.1 ms

--- ipv6.google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms

Meaning: Outbound IPv6 works. In many real networks, if outbound works, inbound is only one firewall misconfig away.

Decision: Proceed to listener and firewall audits; don’t assume upstream blocks you.

Task 4: List listening sockets with IPv6 visibility

cr0x@server:~$ sudo ss -lntup
Netid State  Recv-Q Send-Q Local Address:Port    Peer Address:Port Process
tcp   LISTEN 0      4096   0.0.0.0:22           0.0.0.0:*       users:(("sshd",pid=1123,fd=3))
tcp   LISTEN 0      4096   [::]:22              [::]:*          users:(("sshd",pid=1123,fd=4))
tcp   LISTEN 0      4096   [::]:9100            [::]:*          users:(("node_exporter",pid=1337,fd=3))
tcp   LISTEN 0      4096   127.0.0.1:8080       0.0.0.0:*       users:(("grafana",pid=1450,fd=9))
tcp   LISTEN 0      4096   [::]:80              [::]:*          users:(("nginx",pid=1200,fd=6))

Meaning: Any [::]:PORT listener is reachable on IPv6 (subject to firewall). Note that 127.0.0.1:8080 is loopback-only and not exposed.

Decision: For each [::] listener, decide: should it be public, private, or local-only? If it’s not intended to be public, fix binding and firewall.
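When the full ss table is noisy, a small awk filter pulls out only the all-interfaces IPv6 listeners (column 5 is Local Address:Port and column 7 the process in ss’s default layout):

```shell
# Sketch: print only sockets bound to the IPv6 wildcard [::].
sudo ss -lntup | awk '$5 ~ /^\[::\]:/ {print $5, $7}'
```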

Task 5: Check UFW IPv6 setting

cr0x@server:~$ sudo grep -n '^IPV6=' /etc/default/ufw
7:IPV6=yes

Meaning: If IPV6=no, UFW won’t manage v6 rules. If it’s yes, UFW will generate both v4 and v6 rulesets.

Decision: If it’s no and you need IPv6: change to yes, then reload UFW and validate. If you want IPv6 disabled entirely, don’t just flip UFW; handle sysctls/netplan and application bindings.
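A hedged sketch of the flip-and-reload, assuming UFW is your only firewall manager (do it in a window where you can tolerate a rule reload):

```shell
# Turn on UFW's IPv6 rule generation, reload, and confirm (v6) rules appear.
sudo sed -i 's/^IPV6=no$/IPV6=yes/' /etc/default/ufw
sudo ufw reload
sudo ufw status verbose | grep '(v6)'   # no output here means no v6 rules yet
```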

Task 6: Inspect UFW status with verbosity

cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    203.0.113.0/24
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere

22/tcp (v6)                ALLOW IN    2001:db8:feed::/48
80/tcp (v6)                ALLOW IN    Anywhere (v6)
443/tcp (v6)               ALLOW IN    Anywhere (v6)

Meaning: The presence of “(v6)” rules is good; it means UFW is at least emitting IPv6 policy. But “Anywhere (v6)” is a real choice, not a default you should accept blindly.

Decision: If you didn’t intend public IPv6 on 80/443/22, tighten rules now. If UFW shows no v6 entries at all, treat that as a red alert.

Task 7: Verify nftables rules are present and have IPv6 coverage

cr0x@server:~$ sudo nft list ruleset
table inet ufw {
	chain input {
		type filter hook input priority filter; policy drop;
		iif "lo" accept
		ct state established,related accept
		ip protocol icmp accept
		ip6 nexthdr ipv6-icmp accept
		tcp dport 22 ip saddr 203.0.113.0/24 accept
		tcp dport 22 ip6 saddr 2001:db8:feed::/48 accept
		tcp dport { 80, 443 } accept
		counter packets 12345 bytes 987654
		reject with icmpx type admin-prohibited
	}
	chain forward {
		type filter hook forward priority filter; policy drop;
	}
	chain output {
		type filter hook output priority filter; policy accept;
	}
}

Meaning: A single table inet with both ip and ip6 matches is a good pattern. Note the default policy drop on input. Also note explicit ICMP/ICMPv6 allowances.

Decision: If you only see table ip rules and nothing for IPv6, you’re probably filtering IPv4 only. Fix by enabling IPv6 in UFW or writing explicit nftables rules for ip6 / inet.
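If you bypass UFW entirely, a minimal inet-family sketch mirroring the ruleset above looks like this (the addresses are the same documentation ranges used throughout; load it with nft -f and keep it in /etc/nftables.conf so it survives reboots):

```
table inet filter {
    chain input {
        type filter hook input priority filter; policy drop;
        iif "lo" accept
        ct state established,related accept
        ip protocol icmp accept
        ip6 nexthdr ipv6-icmp accept
        tcp dport 22 ip saddr 203.0.113.0/24 accept
        tcp dport 22 ip6 saddr 2001:db8:feed::/48 accept
        tcp dport { 80, 443 } counter accept
        counter reject with icmpx type admin-prohibited
    }
}
```

Because the table is family inet, the single chain filters IPv4 and IPv6 in one place, which is exactly the property that makes “forgot the v6 rules” structurally hard to repeat.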

Task 8: Check nftables counters while generating IPv6 traffic

cr0x@server:~$ sudo nft -a list chain inet ufw input
table inet ufw {
	chain input {
		type filter hook input priority filter; policy drop;
		iif "lo" accept
		ct state established,related accept
		ip6 nexthdr ipv6-icmp accept
		tcp dport 22 ip6 saddr 2001:db8:feed::/48 accept # handle 14
		tcp dport { 80, 443 } accept # handle 15
		reject with icmpx type admin-prohibited # handle 16
	}
}

Meaning: The # handle identifies each rule, so you can target one precisely (for example, to insert a counter statement right before it). If counters don’t increment when you test, you may be looking at the wrong ruleset, or the traffic is bypassing the chain (rare, but possible with policy routing or other hooks).

Decision: If counters don’t move during a known inbound attempt, confirm which firewall backend is active and whether another system manages nftables.

Task 9: Confirm which firewall backend UFW is using

cr0x@server:~$ iptables -V
iptables v1.8.10 (nf_tables)

Meaning: On Ubuntu 24.04 the iptables binary is the nf_tables shim, so anything that calls it (UFW included) ends up writing into the kernel’s nftables subsystem. If this prints (legacy), or if iptables-legacy still has rules loaded, you may have split-brain filtering.

Decision: Standardize on the nf_tables backend and remove/stop legacy scripts that still call iptables-legacy directly.

Task 10: Detect iptables legacy rules that give false confidence

cr0x@server:~$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

Meaning: If legacy iptables policies are ACCEPT while you think you “deny by default,” it’s not necessarily wrong—because nftables may be doing the real filtering. But it’s a clue that you’re one migration mistake away from an incident.

Decision: If you use nftables, stop relying on iptables output for assurance. If anything still installs iptables rules, either migrate it or remove it.

Task 11: Check IPv6 sysctls that affect binding and acceptance

cr0x@server:~$ sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6 net.ipv6.bindv6only
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.bindv6only = 0

Meaning: IPv6 is enabled. net.ipv6.bindv6only=0 means IPv6 sockets may accept IPv4-mapped traffic in some cases; it varies by application and socket options.

Decision: Don’t “fix” exposure by toggling sysctls unless you’ve tested app behavior. Prefer firewall + binding corrections.

Task 12: Identify services that bind to all interfaces and should not

cr0x@server:~$ systemctl status node_exporter.service --no-pager
● node_exporter.service - Prometheus Node Exporter
     Loaded: loaded (/lib/systemd/system/node_exporter.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-12-30 10:18:42 UTC; 2h 11min ago
   Main PID: 1337 (node_exporter)
      Tasks: 4 (limit: 9382)
     Memory: 9.8M
        CPU: 2.1s
     CGroup: /system.slice/node_exporter.service
             └─1337 /usr/bin/node_exporter --web.listen-address=:9100

Meaning: --web.listen-address=:9100 is “bind to all,” which includes IPv6 on many systems. That is frequently not what you want on an internet-facing host.

Decision: Bind exporters to a management VLAN IP, localhost with a reverse proxy, or firewall it to your scrapers—on both IPv4 and IPv6.
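One low-friction way to rebind without editing the packaged unit is a systemd drop-in. This is a sketch; the unit name and flag come from the status output above, and the override path is the standard drop-in location:

```
# /etc/systemd/system/node_exporter.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/node_exporter --web.listen-address=127.0.0.1:9100
```

After systemctl daemon-reload and a restart, ss should show 127.0.0.1:9100 only; the empty ExecStart= line is required to clear the packaged value before setting the new one.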

Task 13: Confirm systemd socket activation isn’t creating surprise listeners

cr0x@server:~$ systemctl list-sockets --all --no-pager
LISTEN                         UNIT                     ACTIVATES
[::]:22                        ssh.socket               ssh.service
/run/systemd/journal/stdout    systemd-journald.socket   systemd-journald.service
/var/run/dbus/system_bus_socket dbus.socket              dbus.service

Meaning: If you see a socket unit listening on [::], the kernel is accepting connections even if the service appears stopped; systemd will spawn it on demand.

Decision: If you intended to shut a port, disable the socket unit, not just the service.

Task 14: External IPv6 scan from a different host (operator reality check)

cr0x@scanner:~$ nmap -6 -Pn -p 22,80,443,9100 2001:db8:1234:5678:5054:ff:fe12:3456
Starting Nmap 7.94 ( https://nmap.org ) at 2025-12-30 12:35 UTC
Nmap scan report for 2001:db8:1234:5678:5054:ff:fe12:3456
Host is up (0.021s latency).

PORT     STATE    SERVICE
22/tcp   open     ssh
80/tcp   open     http
443/tcp  open     https
9100/tcp filtered jetdirect

Nmap done: 1 IP address (1 host up) scanned in 3.11 seconds

Meaning: open means reachable and answering. filtered means a firewall is blocking (good, if intended). If you expected 22 to be restricted, seeing it open is the whole problem.

Decision: Treat external scan results as truth. Now align firewall rules and service bindings until the scan matches your intended exposure.

Task 15: Quick check of listening processes and owning packages

cr0x@server:~$ sudo lsof -nP -iTCP -sTCP:LISTEN | head
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd       1123 root    3u  IPv4  32249      0t0  TCP *:22 (LISTEN)
sshd       1123 root    4u  IPv6  32251      0t0  TCP *:22 (LISTEN)
nginx      1200 root    6u  IPv6  32999      0t0  TCP *:80 (LISTEN)
node_expo  1337 node    3u  IPv6  33301      0t0  TCP *:9100 (LISTEN)

Meaning: lsof gives you process name and PID; no guessing which service is behind which port.

Decision: If a listener is unrecognized, don’t just firewall it. Identify it, remove it, or isolate it—mystery daemons are how “temporary debugging” becomes “permanent incident.”

UFW, nftables, and the Ubuntu 24.04 reality

Ubuntu 24.04 lives in the nftables world. That’s good. nftables is more consistent, supports the inet family (single ruleset for v4+v6), and is what you want for modern hosts.

UFW is still fine—if you treat it as a policy compiler and validate the output. Where people go wrong is assuming UFW status output equals enforcement. It usually does, but “usually” is not an SLO.

Make IPv6 policy explicit (even if you allow nothing)

If you use UFW, do not leave IPv6 as an implied default. In practice, you want three things:

  • IPV6=yes in /etc/default/ufw so UFW emits v6 rules.
  • Default deny incoming for both stacks.
  • Explicit allow rules for the ports and sources you intend on both stacks.

Prefer “allow from” rules over global allows

On IPv6, the temptation is to allow widely because “it’s complicated.” It’s not complicated; it’s just unfamiliar.

If SSH is for admins, then SSH is for admin networks. That applies to IPv6 too. If you don’t have stable admin IPv6 ranges, solve that by using a VPN/bastion, not by opening SSH to the planet.

ICMPv6 is not optional plumbing

People love blocking ICMP. They hear “ping” and think “recon.” With IPv6, blocking ICMPv6 breaks real protocol mechanisms: Path MTU discovery, Neighbor Discovery, Router Advertisements in some environments. Your “stealth” posture becomes “random outages.”

Joke #2: Blocking ICMPv6 to “be safe” is like cutting your brake lines to prevent speeding.

When to write nftables directly

If you have a complex environment—containers, multiple interfaces, policy routing, strict segmentation—you may outgrow UFW. That’s not a moral failure. It’s maturity.

But if you write nftables directly, own the whole lifecycle: rule deployment, atomic loads (an nft -f file is applied as a single transaction), backups, and CI checks. “Hand-editing live firewall rules at 2 a.m.” is a tradition we should end.

systemd sockets: the “service isn’t running” lie

On Ubuntu, systemd can listen on a port on behalf of a service and spawn that service only when a connection arrives. That’s socket activation. It’s efficient. It’s also a source of confusion during security audits.

Failure mode: someone stops ssh.service and thinks SSH is off. But ssh.socket is still listening. The port remains reachable. The scanner remains unimpressed.

Operational rule: if a port is open, it’s open. “But the service was stopped” is not a defense, it’s an admission you didn’t check the sockets layer.

Use systemctl list-sockets, identify listeners on [::], and disable them if the port should not exist.
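A quick filter for exactly that audit (column 1 is the listen address and column 2 the socket unit in list-sockets output):

```shell
# Sketch: show only socket units listening on the IPv6 wildcard.
systemctl list-sockets --all --no-pager | awk '$1 ~ /^\[::\]/ {print $1, $2}'
```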

Cloud and upstream filtering: don’t outsource your thinking

In the corporate world, IPv6 exposure often comes from upstream controls being split-brain:

  • Security group has IPv4 rules set correctly.
  • IPv6 rules are empty (which sometimes means “allow all” depending on platform defaults or attached policies), or set by a different team, or simply forgotten.
  • Host firewall assumes upstream blocks things; upstream assumes host firewall blocks things.

The only stable strategy is defense in depth with explicit v6 controls at each layer. Host firewall should be correct even if upstream is wrong, because upstream eventually will be wrong.

Also: if you use managed load balancers, check how they handle IPv6 termination and backend connectivity. You can have a perfectly hardened host and still expose an admin service through an IPv6 listener on the wrong interface if your internal network is dual-stack.

Three corporate mini-stories (anonymized, painfully plausible)

1) Incident caused by a wrong assumption: “We don’t use IPv6 here”

The company ran a fleet of Ubuntu VMs behind an edge firewall. Everything was “IPv4-only” according to the network diagram. The security posture was built around it: UFW rules were IPv4-centric, and compliance scans checked only A records.

During a routine pen test, the tester asked for the IPv6 ranges. The response was a shrug. “We don’t use IPv6.” The tester didn’t need ranges; they needed one hostname and a resolver that returned AAAA records.

It turned out the cloud VPC had IPv6 enabled months earlier for a separate project. Some subnets were assigning IPv6 addresses automatically. Nobody told the ops team because the change “didn’t affect production.” It didn’t—until someone looked.

The findings were mundane and brutal: SSH open to the world on IPv6, a staging admin panel bound to ::, and a metrics endpoint without auth. None of it was exploited; it didn’t need to be. The report was enough.

The fix was not heroic. They enabled UFW IPv6, mirrored the allowlists, tightened service bindings, and added dual-stack scanning to CI. The hard part was cultural: admitting that “we don’t use IPv6” was never a fact—just an untested belief.

2) Optimization that backfired: “Let’s simplify firewall rules”

A platform team wanted fewer moving parts. They were tired of UFW being “yet another abstraction,” so they migrated to a minimal nftables ruleset. One table, one chain, default accept. They’d rely on upstream security groups. Clean. Elegant. Fast.

Then they hit a production incident: sporadic connectivity failures for a subset of clients using IPv6. The on-call engineer assumed it was a load balancer problem. It wasn’t. The firewall was allowing inbound, but outbound return traffic was being impacted by a separate policy routing change, and ICMPv6 needed for PMTU discovery was being dropped upstream.

The “optimization” removed local logging and removed the habit of looking at counters. The team had no quick way to prove what the host was doing. They had to debug blind, from packet captures, under pressure.

They eventually reverted to a stricter host firewall: default drop inbound, explicit allows, explicit ICMPv6 handling, and logging at a sane rate. The ironic lesson: fewer rules didn’t mean less complexity. It just moved complexity to a place with worse visibility.

3) Boring but correct practice that saved the day: “We scan both stacks, every time”

A different org had a boring habit: every build pipeline ran an external scan against the candidate host, on IPv4 and IPv6, from a runner outside the production VPC. It wasn’t fancy. It was consistent.

One day, a base image update changed a service’s default bind address from 127.0.0.1 to ::. The service was supposed to be internal-only, scraped through a sidecar proxy. On IPv4, it still looked okay because of internal routing. On IPv6, it was suddenly reachable from places it shouldn’t be.

The dual-stack scan failed the build. The ticket never reached production. Nobody had to write an incident report or explain to leadership why “internal-only” was on the public internet.

That’s the value of boring practices: they don’t prevent every failure, but they make failures cheap. You want cheap failures.

Common mistakes: symptoms → root cause → fix

1) Symptom: “IPv4 ports are closed, but IPv6 scan shows them open”

Root cause: IPv6 rules missing or permissive (UFW IPv6 disabled, or nftables lacks ip6/inet coverage).

Fix: Enable IPv6 in UFW (IPV6=yes), reload, and verify nftables has table inet or explicit ip6 chains with default drop. Re-scan externally over IPv6.

2) Symptom: “UFW is active but IPv6 still reachable”

Root cause: Split-brain firewall tooling; UFW manages nftables but another tool loads a permissive ruleset later, or you’re checking iptables while nftables is enforcing.

Fix: Inspect nft list ruleset. Ensure your rules load at boot and nothing overrides them. Standardize on one manager.

3) Symptom: “I stopped the service, but the port is still open”

Root cause: systemd socket activation; the socket unit is still listening.

Fix: Disable the .socket unit: systemctl disable --now name.socket. Verify with ss and external scan.

4) Symptom: “After tightening IPv6 firewall, random connections hang”

Root cause: ICMPv6 blocked too aggressively, breaking PMTU discovery or Neighbor Discovery.

Fix: Allow essential ICMPv6 types. If using UFW, ensure it permits IPv6-ICMP. If writing nftables, explicitly accept ip6 nexthdr ipv6-icmp and consider more granular rules later.

5) Symptom: “Service is bound to 0.0.0.0, why is it on IPv6?”

Root cause: The service separately binds to IPv6 :: or uses a single dual-stack socket depending on runtime and sysctls.

Fix: Confirm with ss -lntup. Set explicit listen addresses in the service config (v4 and v6 as needed), or bind to a specific interface/IP.

6) Symptom: “Compliance scan is green, but external researcher emailed us about IPv6 exposure”

Root cause: Scanning tooling only tested IPv4 (or only DNS A records).

Fix: Add AAAA-aware asset discovery and IPv6 scanning to the control. Treat dual-stack as baseline.

7) Symptom: “We disabled IPv6, and now apt or internal services break”

Root cause: Your environment uses IPv6 for some paths (mirrors, proxies, SSO, service discovery). Disabling IPv6 is not always “safe.”

Fix: Prefer firewalling over disabling. If you must disable, test in staging with the same DNS and proxy configuration.

Checklists / step-by-step plan

Checklist A: Emergency response (you found IPv6 exposure today)

  1. Identify the exposed IPv6 address(es): ip -6 addr. Decide which are global.
  2. List listeners: ss -lntup. Note [::]:PORT.
  3. Block inbound IPv6 at host firewall immediately: if you’re unsure, default deny inbound and allow only SSH from your admin ranges.
  4. Confirm enforcement: nft list ruleset and external scan with nmap -6.
  5. Fix bindings: reconfigure services to listen only on intended interfaces.
  6. Leave logging on (moderate): enough to see if you’re blocking legitimate traffic.

Checklist B: Correct hardening (make it stay fixed after reboots and updates)

  1. Pick one firewall manager: UFW on nftables, or nftables directly. Don’t run two “owners.”
  2. Make dual-stack rules symmetrical: every inbound allow should exist for IPv4 and IPv6 unless you intentionally diverge (rare).
  3. Keep ICMPv6 functional: allow it appropriately.
  4. Audit systemd sockets: systemctl list-sockets; disable unexpected listeners.
  5. Add dual-stack scanning to your pipeline: external vantage point, store results, fail builds on unexpected opens.
  6. Document intended exposure: port, protocol, source ranges, and justification. Treat it like an API contract.

Checklist C: If you insist on disabling IPv6 (do it like an adult)

  1. Inventory dependencies: DNS AAAA usage, internal services, proxies, monitoring. Do not guess.
  2. Disable at sysctl with persistence: set net.ipv6.conf.all.disable_ipv6=1 and default in /etc/sysctl.d/, then reboot-test.
  3. Verify it’s actually disabled: ip -6 addr should show only ::1 or none on interfaces.
  4. Re-verify services: some apps behave differently when IPv6 disappears; test health checks and client connectivity.
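If the inventory still says “disable,” persistence looks like this. A sketch: the filename is arbitrary, but must sort after anything else in sysctl.d that might re-enable IPv6:

```
# /etc/sysctl.d/99-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

Apply with sysctl --system, then reboot-test; netplan or NetworkManager can still configure IPv6 per interface, which is why the ip -6 addr verification step above is not optional.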

My bias: firewalling is usually safer than disabling. Disabling IPv6 is a big hammer that tends to hit your own thumbs later.

FAQ

1) Why does my server have IPv6 at all? I never configured it.

Because modern networks often provide it automatically (cloud settings, Router Advertisements, DHCPv6). Ubuntu will happily use it when available.

2) If I don’t publish AAAA records, am I safe?

No. Attackers can discover IPv6 addresses via many paths: logs, certificates, neighbor discovery on internal networks, cloud metadata, or simply scanning known prefixes in some environments.

3) Does UFW block IPv6 by default?

Only if IPv6 support is enabled in UFW and rules are generated for v6. Check /etc/default/ufw and ufw status verbose for “(v6)” rules.

4) Should I mirror my IPv4 rules exactly for IPv6?

Almost always, yes. Divergence is a conscious architecture choice. If you can’t explain why IPv6 is more open than IPv4, it’s an accident.

5) Why do some ports show “filtered” on IPv6 scans?

That typically indicates the firewall is dropping packets (good for “blocked”). It can also mean upstream filtering. Validate by checking host firewall counters and logs.

6) Can I just set IPV6=no in UFW to fix exposure?

That usually makes it worse: you’re telling UFW to stop managing IPv6, not disabling IPv6 networking. You’ll still have IPv6 addresses and listeners; you’ll just have fewer protections.

7) What’s the simplest safe IPv6 posture for a public web server?

Default deny inbound. Allow 80/443 from anywhere on both stacks. Allow SSH only from admin ranges or via a VPN/bastion. Allow essential ICMPv6.
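As UFW commands, that posture is roughly the following (the admin ranges are examples; substitute your own):

```shell
# Sketch of the baseline posture; run once, then verify with an external scan.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
sudo ufw allow from 2001:db8:feed::/48 to any port 22 proto tcp
```

UFW’s stock before6.rules typically handle the essential ICMPv6 allowances, which covers the last requirement; verify rather than assume.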

8) I’m behind a cloud load balancer. Do I still need a host firewall?

Yes. Load balancers get misconfigured, and internal paths bypass them. Host firewalls are cheap insurance, and they catch “oops we bound to ::” mistakes.

9) How do I know if an app is listening on IPv6?

Use ss -lntup and look for [::]:PORT or a specific IPv6 address. Don’t trust application docs; trust the socket list.

10) Is blocking ICMPv6 ever acceptable?

Not as a blanket policy. You can restrict some types later with care, but you must preserve core functionality or you’ll create outages that look like “random network flakiness.”

Next steps you can actually execute

Do three things this week, and you’ll eliminate most “forgotten IPv6 firewall” incidents:

  1. Audit listeners with ss -lntup and kill or rebind anything that shouldn’t be public on [::].
  2. Make firewall policy dual-stack: confirm UFW IPv6 is enabled (or write nftables rules in inet), and ensure inbound default drop exists for both IPv4 and IPv6.
  3. Validate from outside with a real IPv6 scan and store the result. If you don’t test it externally, you don’t know it.

Ubuntu 24.04 isn’t trying to trick you. It’s just doing what modern systems do: shipping with IPv6 ready. Your job is to make “ready” mean “secure,” not “surprisingly reachable.”
