Debian 13 minimal firewall profile: what to allow and what to drop (no paranoia)

Firewalls don’t usually fail loudly. They fail like a polite saboteur: your deploy “succeeds,” your monitoring goes gray, and your team starts blaming DNS out of sheer habit.

Debian 13 gives you all the plumbing you need for a clean, minimal, boring firewall posture. This guide is about letting the right traffic through, dropping the rest, and diagnosing breakage fast—without turning your server into a museum exhibit that no one can operate.

Principles: minimal, stateful, and observable

A “minimal firewall profile” is not “block everything and pray.” It’s an explicit policy that matches how the machine is used, and it’s implemented with a stateful firewall so you don’t handcraft every response packet like it’s 1999.

1) Default-drop inbound, default-allow established

Inbound traffic should be denied unless there’s a reason to accept it. But once you accept a connection, you keep accepting the related response traffic. That’s what state tracking is for. If you try to be clever and block half of a TCP handshake, you’ll just learn new ways to suffer.

2) Outbound is where “no paranoia” becomes “no surprises”

Most minimal profiles default-allow outbound. That’s fine for many servers—until you’re cleaning up a compromised process that can happily exfiltrate over 443 to anywhere. A sane middle ground is: allow outbound broadly, but log and optionally restrict “weird” destinations or ports once you know the box’s job.

3) Logging should answer a question, not generate art

If you log every dropped packet, your logs become an expensive way to store botnet weather. Log with rate limits, and only on the edges you actually investigate: new inbound drops, unexpected outbound, and blocked ICMP types that break path MTU discovery.

4) If it can’t be tested, it’s not a policy

A ruleset that only exists in someone’s head is a bedtime story. Your firewall should be testable from the host and from the network. You need commands that verify “SSH works from admin subnet,” “monitoring can scrape,” and “DNS resolution isn’t silently broken.”
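
For example, a hedged set of checks you can script, assuming an admin host on the allowed subnet, a monitoring host that scrapes node exporter on 9100, and the resolver address used later in this guide (the hostnames here are placeholders):

cr0x@admin-host:~$ ssh -o ConnectTimeout=5 cr0x@203.0.113.20 true && echo "ssh from admin subnet: ok"
cr0x@monitoring:~$ curl -s -o /dev/null -w 'scrape http code: %{http_code}\n' http://203.0.113.20:9100/metrics
cr0x@server:~$ dig +short debian.org @192.0.2.53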

One quote, because it still holds up: “Hope is not a strategy,” a line usually credited to John C. Maxwell. Operations people repeat it because the alternative is writing incident reports about “we assumed.”

Short joke #1: A firewall is like a bouncer—if it can’t tell who’s on the guest list, it will eventually deny the DJ and admit the guy carrying a router.

Interesting facts and historical context (short, useful)

  • Netfilter landed in Linux 2.4 (early 2001 era), replacing older packet filtering code and enabling the modern conntrack model that makes “established/related” work.
  • iptables is an interface, not the engine. The underlying kernel hooks are netfilter; tools changed over time because the interface got messy and hard to extend.
  • nftables was introduced to unify iptables variants (iptables, ip6tables, arptables, ebtables) into one framework with better rule evaluation and data structures.
  • conntrack is shared state, not magic. It consumes memory; under load or attack, poor tuning can cause dropped connections that look like “random network flakiness.”
  • ICMP isn’t optional for healthy TCP. Blocking all ICMP breaks PMTUD, causing stalls on certain paths and MTUs. People rediscover this every few years.
  • Stateful firewalls aren’t just for inbound. Outbound policies often rely on state to allow return traffic without opening inbound ports.
  • Debian has long favored “do one thing well.” It doesn’t force a firewall manager; you get nftables, iptables-nft compatibility, and you choose how opinionated your wrapper is.
  • Cloud security groups changed expectations. Many teams now treat host firewalls as secondary. That works until someone migrates a VM to a different network boundary or misconfigures a security group.

Threat model you can actually use

A minimal firewall profile isn’t a compliance checklist. It’s a map of what traffic should exist. Start with what you’re defending against:

  • Internet scanning and opportunistic exploitation. If you expose SSH or a web port, expect constant background noise and a steady stream of bad ideas.
  • Lateral movement inside your own network. Internal networks are not magically trustworthy. They’re just where the attacks are quieter.
  • Accidental exposure. A developer binds a debug server to 0.0.0.0, or a package opens a port you didn’t remember you installed.
  • Data exfiltration and command-and-control. Outbound rules are the only place you can meaningfully slow this down at the host level.
  • Self-inflicted outages. Most firewall incidents are not hackers. They’re you, at 2am, applying “one small change.”

Minimal doesn’t mean fragile. The goal is a ruleset that’s small enough to reason about and strict enough to remove surprises.

What to allow vs what to drop (typical Debian 13 server)

Inbound: default drop, then add only what you can name

Inbound is where you win by being boring. Start with a default-drop stance, then explicitly allow:

  • SSH (TCP/22) from an admin subnet, bastion, or VPN range. If you must allow from anywhere, add rate limiting and strong auth.
  • HTTP/HTTPS (TCP/80, TCP/443) only if the host is actually a web endpoint. Many boxes don’t need these ports open at all.
  • Monitoring ingress (varies): e.g., node exporter (9100), SNMP (161/udp), custom agents. Prefer “pull from monitoring subnet” rather than global exposure (a sketch follows after this list).
  • Service-specific ports for databases and brokers only to the application subnets that need them. Databases should almost never accept connections from “anywhere.”
  • ICMP selectively: allow echo-request from trusted networks, and allow ICMP errors needed for PMTUD (at least “fragmentation needed” style messages).
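
A sketch of that monitoring-ingress pattern in nftables terms, assuming an illustrative set name and subnet; the full profile later in this guide uses the same set-based approach for SSH:

# hypothetical set, defined alongside admin_v4 in the same table
set monitoring_v4 {
  type ipv4_addr
  flags interval
  elements = { 10.20.0.0/24 }
}

# hypothetical rules, placed in the input chain
ip saddr @monitoring_v4 tcp dport 9100 ct state new accept
ip saddr @monitoring_v4 udp dport 161 accept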

Outbound: start permissive, then tighten based on reality

Outbound is where people either do nothing or go full bunker. The pragmatic approach:

  • Allow DNS (UDP/TCP 53) to your resolvers only, or allow to local stub resolver if you run one. Random outbound DNS is often a sign of trouble.
  • Allow NTP (UDP/123) to your time source(s). Time drift causes authentication failures and confusing logs.
  • Allow package updates (TCP/80/443) to the world if you must, or to your corporate proxy/mirror if you’re mature enough to have one.
  • Allow your business egress: databases, message buses, object stores, APIs. Document them; otherwise you’ll “optimize” yourself into an outage later.

What to drop without guilt

  • Everything inbound that is not explicitly permitted.
  • All inbound UDP by default unless you run a UDP service (DNS server, NTP server, VPN endpoint, syslog receiver, etc.). UDP exposure invites weirdness.
  • All inbound SMB/NFS/RPC unless the host is a file server and you’ve scoped it tightly. These protocols are great—inside the right boundary.
  • All “management ports” (Docker API, kubelet, random dashboards) unless you can state who needs them and from where.

Short joke #2: If you “temporarily open all ports,” your firewall becomes a motivational poster: it looks protective, but it mostly inspires optimism.

A minimal nftables profile (with rationale)

Debian 13 will happily run nftables as your primary firewall. Keep the ruleset readable, comment it, and resist the urge to build a second networking stack inside your firewall.

Design choices

  • Single inet table for IPv4+IPv6 symmetry. If you only write v4 rules, v6 becomes the side door you didn’t mean to install.
  • Input default drop, forward default drop. Output default accept (initially), with options to lock down later.
  • Allow loopback always. Breaking localhost traffic is a great way to invent brand new failure modes.
  • Allow established/related early for performance and sanity.
  • Allow ICMP/ICMPv6 essentials rather than “all” or “none.”
  • Log drops with rate limit. You want signal, not an aquarium of packets.

Minimal ruleset example

cr0x@server:~$ sudo tee /etc/nftables.conf > /dev/null <<'EOF'
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
  set admin_v4 {
    type ipv4_addr
    flags interval
    elements = { 192.0.2.0/24, 198.51.100.10 }
  }

  chain input {
    type filter hook input priority 0; policy drop;

    # Always allow loopback
    iif "lo" accept

    # Allow established/related traffic
    ct state established,related accept

    # Drop invalid packets early
    ct state invalid drop

    # ICMP/ICMPv6: keep the network healthy (including IPv6 neighbor discovery)
    ip protocol icmp icmp type { echo-request, echo-reply, destination-unreachable, time-exceeded, parameter-problem } accept
    ip6 nexthdr icmpv6 icmpv6 type { echo-request, echo-reply, destination-unreachable, time-exceeded, packet-too-big, parameter-problem, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept

    # SSH from admin networks only
    ip saddr @admin_v4 tcp dport 22 ct state new accept

    # Example: HTTPS if this host is a web endpoint
    tcp dport 443 ct state new accept

    # Log & drop everything else (rate-limited)
    limit rate 10/second burst 20 packets log prefix "nft-in-drop " level info
    drop
  }

  chain forward {
    type filter hook forward priority 0; policy drop;
  }

  chain output {
    type filter hook output priority 0; policy accept;

    # Optional: tighten later once you know your egress
    # Example patterns shown in the tasks section.
  }
}
EOF

This is intentionally plain. It doesn’t attempt to solve DDoS. It does not micro-manage every outbound connection. It does stop accidental exposure, reduce the blast radius of “oops I installed a daemon,” and make inbound access auditable.

Two operational notes that save pain:

  • IPv6 is real. If your host has IPv6 enabled (common), but your firewall only covers IPv4, you have an unintentional bypass.
  • Don’t forget persistence. A ruleset applied manually at 3pm and gone after reboot is not a firewall; it’s a performance art piece.

Practical tasks (commands, outputs, decisions)

These are the things you actually do on a Debian 13 box when you want a minimal firewall that doesn’t generate tickets. Each task includes: command, what output means, and the decision you make.

Task 1: Confirm what firewall backend you’re really using

cr0x@server:~$ sudo update-alternatives --display iptables
iptables - auto mode
  link best version is /usr/sbin/iptables-nft
  link currently points to /usr/sbin/iptables-nft
  link iptables is /usr/sbin/iptables
/usr/sbin/iptables-legacy - priority 10
/usr/sbin/iptables-nft - priority 20

What it means: You’re using the nftables backend via iptables compatibility. That’s fine, but mixing raw iptables rules and nftables rulesets can get confusing fast.

Decision: Prefer native nftables rules in /etc/nftables.conf. If legacy iptables is in use, plan a migration window.
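
If you want to confirm nothing is still programmed through the legacy backend, dump it; empty output (or just default ACCEPT policies) means you’re clean. This assumes the iptables package is installed, which provides both variants:

cr0x@server:~$ sudo iptables-legacy-save
cr0x@server:~$ sudo ip6tables-legacy-save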

Task 2: See if nftables is active and persistent

cr0x@server:~$ systemctl status nftables
● nftables.service - nftables
     Loaded: loaded (/lib/systemd/system/nftables.service; enabled)
     Active: active (exited) since Tue 2025-12-30 09:11:10 UTC; 2min ago
       Docs: man:nft(8)

What it means: The service is enabled and has loaded your ruleset. “active (exited)” is normal; it loads rules then exits.

Decision: If it’s disabled/inactive, enable it after validating rules: sudo systemctl enable --now nftables.

Task 3: List the active ruleset (trust, but verify)

cr0x@server:~$ sudo nft list ruleset
table inet filter {
	set admin_v4 {
		type ipv4_addr
		flags interval
		elements = { 192.0.2.0/24, 198.51.100.10 }
	}

	chain input {
		type filter hook input priority filter; policy drop;
		iif "lo" accept
		ct state established,related accept
		ct state invalid drop
		ip protocol icmp icmp type { echo-request, echo-reply, destination-unreachable, time-exceeded, parameter-problem } accept
		ip6 nexthdr icmpv6 icmpv6 type { echo-request, echo-reply, destination-unreachable, time-exceeded, packet-too-big, parameter-problem, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept
		ip saddr @admin_v4 tcp dport 22 ct state new accept
		tcp dport 443 ct state new accept
		limit rate 10/second burst 20 packets log prefix "nft-in-drop " level info
		drop
	}

	chain forward {
		type filter hook forward priority filter; policy drop;
	}

	chain output {
		type filter hook output priority filter; policy accept;
	}
}

What it means: The kernel currently has exactly what you think you loaded. That’s the ground truth.

Decision: If you see unexpected tables/chains, you have other tooling writing rules (cloud agent, container platform, old scripts). Decide who owns the firewall.

Task 4: Validate the config syntax before applying (avoid self-lockout)

cr0x@server:~$ sudo nft -c -f /etc/nftables.conf

What it means: No output is good; syntax check passed. If it prints an error, it will tell you the line number and token.

Decision: Never apply unvalidated firewall configs over SSH unless you have an out-of-band console or a safety net.

Task 5: Find what services are actually listening (stop guessing)

cr0x@server:~$ sudo ss -lntu
Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
tcp   LISTEN 0      4096   0.0.0.0:22         0.0.0.0:*
tcp   LISTEN 0      4096   0.0.0.0:443        0.0.0.0:*
udp   UNCONN 0      0      127.0.0.53:53      0.0.0.0:*

What it means: Only SSH and HTTPS are exposed on all interfaces; DNS is bound to the local stub resolver only. Good baseline.

Decision: If you see a surprise port listening on 0.0.0.0 or [::], either close it at the service (preferred) or explicitly firewall it until you understand.

Task 6: Identify which process owns a port

cr0x@server:~$ sudo ss -lntup 'sport = :443'
Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process
tcp   LISTEN 0      4096   0.0.0.0:443       0.0.0.0:*     users:(("nginx",pid=1187,fd=6))

What it means: nginx is the listener. That’s useful when you’re deciding whether to open 443 at all.

Decision: If it’s not a service you want internet-facing, fix the service bind address or disable it.

Task 7: Check default route and interfaces (firewall rules are interface-sensitive)

cr0x@server:~$ ip -br addr
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens3             UP             203.0.113.20/24 2001:db8:1234::20/64

cr0x@server:~$ ip route
default via 203.0.113.1 dev ens3
203.0.113.0/24 dev ens3 proto kernel scope link src 203.0.113.20

What it means: ens3 is the real interface; you have IPv6 too. If your firewall rules live in an IPv4-only (ip family) table instead of inet, you’re missing half the story.

Decision: Use table inet rules unless you have a deliberate reason not to.

Task 8: Confirm you’re not accidentally forwarding packets

cr0x@server:~$ sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0

What it means: This host is not acting as a router. Good for typical servers.

Decision: If forwarding is enabled unintentionally, disable it or ensure your forward chain policy is explicit and safe.

Task 9: Watch for drops live (prove the firewall is the problem)

cr0x@server:~$ sudo journalctl -f -k | grep -E 'nft-in-drop|IN='
Dec 30 09:15:22 server kernel: nft-in-drop IN=ens3 OUT= MAC=... SRC=198.51.100.77 DST=203.0.113.20 LEN=60 TOS=0x00 PREC=0x00 TTL=49 ID=4242 DF PROTO=TCP SPT=51234 DPT=22 WINDOW=64240 SYN

What it means: Someone at 198.51.100.77 tried to SYN to SSH and got dropped by your rules (not accepted by admin_v4 set).

Decision: If it’s legitimate admin access, add the source IP/range to the allowlist. If not, ignore and keep logs rate-limited.
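
If the source turns out to be a legitimate admin network, the runtime change is a one-liner; the address below is an example, and the change won’t survive a reload unless you also add it to /etc/nftables.conf:

cr0x@server:~$ sudo nft add element inet filter admin_v4 '{ 198.51.100.77 }'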

Task 10: Verify conntrack state is working (stateful rules depend on it)

cr0x@server:~$ sudo conntrack -S
cpu=0 found=412 new=37 invalid=2 ignore=0 delete=12 delete_list=12 insert=37 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0

What it means: conntrack is functioning; low drop/early_drop is healthy.

Decision: If you see many drop or early_drop, you may be exhausting conntrack table size under load or attack. Consider tuning and reducing exposure.

Task 11: Check conntrack table capacity (classic “random connection failures” culprit)

cr0x@server:~$ sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_count = 1832
net.netfilter.nf_conntrack_max = 262144

What it means: You’re nowhere near capacity. If count approaches max, state tracking becomes a bottleneck.

Decision: If you’re near max during normal load, raise nf_conntrack_max and ensure you have memory headroom; also reduce unnecessary inbound exposure.
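
A sketch of the tuning step, assuming you’ve confirmed memory headroom; the value and the drop-in filename are illustrative, not recommendations:

cr0x@server:~$ sudo sysctl -w net.netfilter.nf_conntrack_max=524288
net.netfilter.nf_conntrack_max = 524288
cr0x@server:~$ echo 'net.netfilter.nf_conntrack_max = 524288' | sudo tee /etc/sysctl.d/99-conntrack.conf
net.netfilter.nf_conntrack_max = 524288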

Task 12: Confirm DNS works with your firewall posture

cr0x@server:~$ resolvectl status
Global
       Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Current DNS Server: 192.0.2.53
       DNS Servers: 192.0.2.53 192.0.2.54

cr0x@server:~$ dig +short debian.org
151.101.2.132

What it means: The system resolver is using your internal DNS, and lookups succeed.

Decision: If DNS fails after tightening outbound rules, allow UDP/TCP 53 to the listed resolvers (not “anywhere”).

Task 13: Test inbound reachability from the host itself (sanity check)

cr0x@server:~$ nc -vz 127.0.0.1 22
Connection to 127.0.0.1 22 port [tcp/ssh] succeeded!

cr0x@server:~$ nc -vz 127.0.0.1 443
Connection to 127.0.0.1 443 port [tcp/https] succeeded!

What it means: Services are up locally. If remote access fails, it’s firewall/routing/security-group territory, not the daemon.

Decision: Don’t “fix” nginx when the problem is a dropped SYN.

Task 14: Apply rules safely with an automatic rollback guard

cr0x@server:~$ sudo bash -c 'nft -c -f /etc/nftables.conf && (sleep 20 && nft flush ruleset) & nft -f /etc/nftables.conf'

What it means: This applies the rules, but schedules a flush in 20 seconds. If you lock yourself out, the rules self-destruct.

Decision: Use a guard like this when editing over SSH. If everything still works, cancel the rollback by killing the backgrounded sleep; because the flush is chained with &&, a killed sleep exits nonzero and the flush never runs. Then save the config and restart nftables properly.
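
One hedged way to cancel the guard is to find and kill that sleep; this assumes nothing else matching “sleep 20” is running on the host. Restarting nftables afterwards re-applies the saved config cleanly:

cr0x@server:~$ pgrep -a -f 'sleep 20'
cr0x@server:~$ sudo pkill -f 'sleep 20'
cr0x@server:~$ sudo systemctl restart nftables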

Task 15: Make the rules persistent and controlled by systemd

cr0x@server:~$ sudo systemctl enable --now nftables
Created symlink /etc/systemd/system/sysinit.target.wants/nftables.service → /lib/systemd/system/nftables.service.

What it means: Rules will load on boot; you’ve moved from “handcrafted runtime state” to a managed service.

Decision: From here on, changes go through config management or at least version control.

Task 16: Tighten outbound without breaking updates (example)

This is where teams get scared and do nothing. Here’s a workable pattern: keep output policy accept, but explicitly allow the “known necessary” ports and log the rest for a while before flipping to drop.

cr0x@server:~$ sudo nft add rule inet filter output ct state established,related accept
cr0x@server:~$ sudo nft add rule inet filter output oif "lo" accept
cr0x@server:~$ sudo nft add rule inet filter output udp dport 53 ip daddr { 192.0.2.53, 192.0.2.54 } accept
cr0x@server:~$ sudo nft add rule inet filter output tcp dport 53 ip daddr { 192.0.2.53, 192.0.2.54 } accept
cr0x@server:~$ sudo nft add rule inet filter output udp dport 123 ip daddr 192.0.2.123 accept
cr0x@server:~$ sudo nft add rule inet filter output tcp dport { 80, 443 } accept

What it means: You’ve enumerated DNS, NTP, and web egress. If you later change output policy to drop, these rules keep the box functional.

Decision: Collect what else you need (metrics backends, object storage endpoints, mail relays) before enforcing drop.
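
When you’re ready to move from “allow and observe” to enforcement, one workable pattern (using the table and chain names from this guide) is to log whatever the allow rules didn’t match, and only flip the policy once that log goes quiet:

cr0x@server:~$ sudo nft add rule inet filter output limit rate 5/second burst 10 packets log prefix "nft-out-unmatched " level info
cr0x@server:~$ sudo nft add chain inet filter output '{ type filter hook output priority 0 ; policy drop ; }'

The second command re-declares the existing base chain with a drop policy; run it only after the unmatched log has been quiet long enough to trust it. Both are runtime changes: mirror them into /etc/nftables.conf (and validate with nft -c) or they vanish on the next reload.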

Three corporate mini-stories from the trenches

Incident: a wrong assumption (IPv6 was “not in use”)

A team ran a neat little Debian fleet behind a cloud load balancer. They wrote a clean IPv4 firewall: default drop inbound, allow 443, allow SSH from the VPN range. They congratulated themselves and moved on. The change window ended early, which is always suspicious.

A week later, security flagged inbound traffic that shouldn’t have been possible. Not because the firewall was “broken,” but because it was irrelevant. The hosts had global IPv6 addresses assigned automatically, and the rules only covered IPv4. The service was reachable over IPv6 directly, bypassing the load balancer path and bypassing the intended policy perimeter.

It wasn’t an exotic attack. It was the internet doing what it does: scanning anything with a routable address. The logs were thin because they were logging only IPv4 drops. Meanwhile, the application team chased ghosts in the load balancer config.

The fix was not heroic. They moved the ruleset to an inet table, explicitly allowed what they meant to allow for v6, and dropped the rest. Then they audited addressing: either keep IPv6 and own it, or disable it intentionally. The real lesson wasn’t “IPv6 is scary.” It was “if you didn’t write a rule for it, you don’t control it.”

Optimization that backfired (dropping ICMP to “reduce noise”)

An enterprise platform team got tired of “ping storms” and random ICMP logs. Someone proposed a tidy policy: drop all ICMP inbound and outbound. Less noise, fewer “attack surface” arguments, more time for important things—like meetings.

For a while it looked fine. Then some clients started reporting intermittent timeouts on large responses. Not all clients. Not all paths. Just enough to make it a rotating headache. The first response was to blame the application. The second was to blame the network. The third was to increase timeouts, which is a time-honored way to hide a problem until it becomes unhideable.

The actual issue: path MTU discovery. Certain routes required fragmentation handling, and the servers needed to receive ICMP “packet too big” (IPv6) or related unreachable messages (IPv4). With ICMP blocked, connections would stall with black-holed packets. Small requests worked. Big ones died slowly.

They rolled back the ICMP blanket drop and replaced it with “allow essentials, rate-limit echo.” The incident wasn’t expensive in hardware or vendor support. It was expensive in attention. The backfire wasn’t the optimization itself; it was optimizing the wrong thing.

Boring but correct practice that saved the day (staged rollout + logged drops)

A company with a moderately sane SRE team wanted host firewalls across hundreds of Debian servers. The key phrase here is “moderately sane”: they didn’t try to do it perfectly in one go. They wrote a minimal inbound default-drop policy, then deployed it in observe-first mode with rate-limited drop logs and a dashboard that summarized top blocked ports by subnet.

Week one was boring. Bots hit SSH; the firewall dropped them. A handful of legitimate admin IPs were missing from the allowlist; they fixed it. A few legacy monitoring checks were still scraping from old networks; they migrated those checks. Nobody panicked because the rollout was staged, and because they had a way to see what was being dropped.

Week two was where the “boring” work paid off. During a separate incident—an app misconfiguration that caused outbound connection floods—the team could quickly prove it wasn’t an inbound security event. The firewall logs were calm. The conntrack table was stable. The network was fine. They focused on the right layer and resolved it faster.

This is the operational cheat code: a minimal firewall plus observability, rolled out gradually, is less exciting than a grand redesign. It also works.

Fast diagnosis playbook

When “the network is broken,” you want a sequence that narrows the search in minutes, not an hour-long tour of everyone’s opinions.

First: is the service actually listening?

  • Run ss -lntu. If nothing is listening on the expected port, stop blaming the firewall.
  • Run systemctl status <service>. If it’s crashed or bound to localhost, the firewall is a bystander.

Second: is traffic reaching the host?

  • Check interface/address: ip -br addr.
  • From a remote client, attempt a TCP connect. On the server, watch with tcpdump on the interface. If SYNs never arrive, it’s routing/security group/NACL upstream.
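
A minimal capture for that check, assuming the interface name from earlier and HTTPS as the service under test:

cr0x@server:~$ sudo tcpdump -ni ens3 'tcp dst port 443 and tcp[tcpflags] & tcp-syn != 0'

If SYNs show up here but the connection still fails, the host firewall is the prime suspect; if they never arrive, the problem is upstream.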

Third: is the firewall dropping it?

  • Check logs: journalctl -k | grep nft-in-drop.
  • List rules and counters (if you use them): nft list ruleset. If you’re not counting, start; counters turn guessing into math.
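
If you want counters without rewriting the whole file, you can replace a rule in place by its handle; the handle number below is hypothetical, read yours from the listing first:

cr0x@server:~$ sudo nft --handle list chain inet filter input
cr0x@server:~$ sudo nft replace rule inet filter input handle 12 tcp dport 443 ct state new counter accept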

Fourth: if it’s “intermittent,” check conntrack and ICMP

  • conntrack -S and sysctl net.netfilter.nf_conntrack_count for capacity symptoms.
  • Check ICMP policy. If you blocked packet-too-big/time-exceeded, you bought yourself a weird outage.

Fifth: validate outbound dependencies

  • DNS: resolvectl status and dig.
  • Time: timedatectl and your NTP client status.
  • Updates/proxies: attempt apt-get update and watch drops.

Common mistakes: symptom → root cause → fix

1) “SSH worked before, now I’m locked out”

Symptom: SSH times out after applying rules.

Root cause: You allowed SSH only from a subnet that doesn’t include your current IP, or you forgot IPv6 SSH access paths.

Fix: Use an out-of-band console or rollback guard. Add your admin source range to an nft set; apply via nft -c first. Ensure rules are in table inet so v6 is handled.

2) “HTTPS is open but clients hang on large responses”

Symptom: Small requests succeed; large downloads hang or reset unpredictably.

Root cause: ICMP “packet too big” (IPv6) or related PMTUD messages blocked; fragmentation issues get black-holed.

Fix: Allow essential ICMP/ICMPv6 types, not just echo. Keep logs rate-limited.

3) “Monitoring suddenly sees nothing”

Symptom: Metrics scrape fails or agents can’t connect after firewall changes.

Root cause: Monitoring source IPs changed (new collectors, NAT, new subnet) and weren’t in allowlists.

Fix: Use nft sets for monitoring subnets and keep them updated. Prefer allowing from monitoring networks, not “any.”

4) “DNS randomly fails only on this host”

Symptom: Some resolves time out; others succeed.

Root cause: Outbound UDP/53 allowed, but TCP/53 blocked; large DNS responses or DNSSEC fallback needs TCP.

Fix: Allow both UDP and TCP 53 to your resolvers.

5) “Connections are flaky under load”

Symptom: Random outbound or inbound connection failures during spikes.

Root cause: conntrack table exhaustion or heavy churn; stateful firewall can’t allocate entries.

Fix: Tune nf_conntrack_max and consider reducing exposure or using SYN proxying upstream. Also verify you’re not tracking traffic unnecessarily (e.g., for local-only interfaces).

6) “I allowed port 443, but it’s still blocked”

Symptom: Remote clients can’t connect; local connect works.

Root cause: Upstream security group/NACL blocks it, or the service is bound to localhost, or you’re testing IPv6 while only allowing IPv4.

Fix: Confirm bind address via ss -lntup. Confirm packets arrive via tcpdump. Use inet rules for v4/v6 parity.

7) “Docker/Kubernetes changes my firewall behind my back”

Symptom: Ruleset differs from what’s in /etc/nftables.conf; unexpected accept rules appear.

Root cause: Container tooling manipulates netfilter to implement NAT/forwarding.

Fix: Decide ownership: either let the platform manage specific tables/chains, or isolate your host policy in a dedicated table and avoid clobbering platform chains. Validate after deploy.
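
A quick ownership check after any platform deploy is to list which tables exist and compare against what you expect; the extra ip tables in this illustrative output are the kind of thing Docker’s iptables-nft compatibility layer creates:

cr0x@server:~$ sudo nft list tables
table inet filter
table ip nat
table ip filter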

Checklists / step-by-step plan

Checklist A: Minimal inbound policy for a typical server

  1. Inventory listeners: run ss -lntu. If a service doesn’t need to be public, fix the service first (bind to localhost or disable).
  2. Decide admin access path: VPN/bastion/subnet. Write it down. No, really.
  3. Write nftables rules in an inet table: loopback accept, established/related accept, invalid drop.
  4. Allow SSH only from admin networks: use a set so changes don’t require rewriting rules.
  5. Allow only required service ports: 443 for web, maybe nothing else.
  6. Allow essential ICMP/ICMPv6 types: keep PMTUD functional.
  7. Rate-limit drop logs: otherwise you’ll turn off logging entirely later.
  8. Validate syntax: nft -c -f /etc/nftables.conf.
  9. Apply with rollback guard if remote: schedule a flush if you’re nervous.
  10. Enable nftables service: persistent across reboots.

Checklist B: Sane outbound posture without self-sabotage

  1. Measure first: log unexpected outbound for a week before enforcing drop.
  2. Allow DNS to resolvers: UDP/TCP 53 to your known servers.
  3. Allow NTP to time sources: don’t let time drift invent auth issues.
  4. Allow updates: TCP 80/443 to proxy or mirror; otherwise you’ll “temporarily open it” forever.
  5. Allow business dependencies: databases, message queues, object storage endpoints.
  6. Only then consider output policy drop: and keep an emergency break-glass path.

Checklist C: Change management that won’t create an incident

  1. Stage the rollout: canary a small percentage of hosts.
  2. Have an out-of-band path: console access, serial, or at least a rollback timer.
  3. Use counters and logs: otherwise you’re debugging via vibes.
  4. Record the intent: a comment per rule is cheaper than a postmortem.
  5. Test from the right place: admin network, monitoring network, and a hostile external network (or at least a different subnet).

FAQ

1) Should I use nftables directly or a wrapper tool?

If you run production servers and you want predictable behavior, use nftables directly. Wrappers can be fine, but they often hide the real rules and complicate debugging.

2) Is “default deny inbound” enough security?

It’s a strong baseline, not a full security program. It reduces accidental exposure and limits opportunistic scans. You still need patching, sane auth, least privilege, and monitoring.

3) Do I need to firewall outbound traffic?

Not always on day one. But you should at least understand your outbound dependencies and log unusual egress. Tight outbound rules are most valuable once the host’s role is stable.

4) Why allow any ICMP at all?

Because networks use ICMP for operational correctness, not just “ping.” PMTUD and error signaling rely on it. Block echo if you must, but don’t break packet-too-big and friends.

5) What’s the minimal inbound set for a headless server?

Loopback, established/related, essential ICMP/ICMPv6, and SSH from an admin network. That’s it unless the host serves traffic.

6) How do I avoid locking myself out over SSH?

Validate with nft -c, apply with a rollback timer, and keep an out-of-band console option. Also, don’t test new rules for the first time on the only host you can’t afford to reboot.

7) What about IPv6—disable it or firewall it?

Firewall it. Disabling IPv6 can be acceptable in some environments, but it’s often a whack-a-mole game with tooling and defaults. If the host has IPv6, your policy should cover it.

8) Should I open database ports to the world if the password is strong?

No. Strong auth is good; unnecessary exposure is still bad. Scope DB ports to application subnets or private networks. If you need remote access, use a bastion or VPN.

9) Can I rely on cloud security groups instead of host firewalls?

You can, until you can’t. Host firewalls are a local safety net and a guardrail against accidental listeners. Use both when possible; at least use host firewalls on high-value systems.

10) How much logging should I enable for dropped packets?

Enough to diagnose: rate-limited logs for new inbound drops, and optionally unexpected outbound. If your logs turn into a botnet diary, you’ll stop reading them—and then they’re useless.

Next steps that won’t ruin your weekend

Do three things in this order, and you’ll get 80% of the value with 20% of the risk:

  1. Inventory listeners with ss -lntu and shut down anything that shouldn’t be exposed. This reduces the amount of firewall policy you even need.
  2. Deploy the minimal inbound nftables profile (inet table, loopback, established/related, essential ICMP/ICMPv6, SSH from admin subnet, service ports you actually serve). Enable persistence with systemd.
  3. Add observability: rate-limited drop logs, and a fast diagnosis habit (listen state → packet arrival → firewall decision → conntrack/ICMP).

Once that’s stable, you can decide how far to go on outbound restrictions. Start with logging and allowlisting DNS/NTP/update egress. Tighten only when you’ve observed real traffic. “No paranoia” doesn’t mean “no policy.” It means your policy is grounded in what your servers do, not what your anxiety imagines.
