You flipped UFW on (or “tightened it up”), your SSH session dropped, and now port 22 looks like it’s on a silent retreat. You can still reach the server via a hypervisor console, cloud serial console, KVM/iDRAC, or whatever out-of-band lifeline you’ve got. Good. That’s enough.
This is the console-only, production-safe way to get SSH back without turning your firewall into wet tissue paper. We’ll diagnose first, change one thing at a time, and leave the system more reliable than when you started. Because getting back in is only half the job; not repeating the incident is the other half.
Ground rules: don’t make it worse
When you’re locked out, the urge is to “just disable the firewall.” Sometimes that’s the right emergency lever. But in production you want to be deliberate, because the same misstep that blocked SSH can also accidentally open databases, admin panels, or internal-only services to the internet.
Here’s the rule set I use:
- Console is your control plane. Keep one console session open until you have a confirmed new SSH session. Don’t close your only parachute mid-jump.
- Prefer additive changes. Add a narrowly-scoped allow rule before removing deny rules or resetting UFW.
- Validate the network path. The firewall might be innocent. Wrong interface, wrong IP, sshd down, routing issues, cloud security groups—plenty of suspects.
- Make changes reversible. Use comments, note the time, keep a copy of before/after output.
One quote I keep taped to the mental dashboard: “Hope is not a strategy.” — General Gordon R. Sullivan. It applies to firewall work more than anyone wants to admit.
Interesting facts and historical context (yes, it matters)
Firewalls are old tech with new wrappers. Knowing what’s underneath UFW helps you avoid superstition-based debugging.
- UFW is a front-end, not a firewall. It writes rules to the kernel packet filter—historically iptables, increasingly nftables on modern Ubuntu.
- iptables used to be the default for years. Many “classic” tutorials assume iptables chains and outputs that don’t match nft-backed systems.
- nftables arrived to unify IPv4/IPv6 and simplify rule management. It’s not “new” anymore, but plenty of ops runbooks still pretend it is.
- SSH’s default port (22) is a convention, not a law. Security-through-port-hopping is mostly a log-reduction tactic, not real access control.
- UFW default policies are a loaded gun. “Default deny incoming” is correct for servers, but it will drop existing traffic patterns you forgot you had.
- IPv6 lockouts are common. People allow IPv4 on 22 and forget IPv6; clients that prefer IPv6 will fail “mysteriously.”
- Cloud firewalls predate UFW in your request path. Security groups/NACLs can block you long before the packet reaches your kernel.
- Stateful filtering is why existing sessions sometimes survive. Conntrack can keep an established SSH session alive while new ones fail—right up until you disconnect and regret it.
- UFW can be enabled with rules missing. The tool doesn’t read your mind; it will happily enforce “deny” while you’re still typing “allow.”
Fast diagnosis playbook
This is the quickest route to the bottleneck. Follow it in order. The point is to avoid changing the wrong layer.
First: confirm sshd is alive and listening where you think
- Is sshd running?
- Is it listening on the expected port and on the expected addresses (0.0.0.0 vs a single IP)?
- Did you accidentally bind it to localhost or an internal interface?
Second: confirm the host network is sane
- Correct IP on the correct interface?
- Default route present?
- Can you reach your gateway?
Third: confirm the firewall layer that’s actually dropping packets
- Cloud security group / provider firewall changes?
- UFW status and active rules?
- Underlying nftables/iptables rules consistent with UFW output?
Fourth: fix with minimal blast radius
- Add a temporary allow from your management IP (or subnet) to SSH.
- Verify you can open a new SSH session.
- Then clean up and harden.
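If you want the whole triage on one screen before changing anything, here is a read-only sketch that follows the same order (nothing below modifies state):
# 1) Is sshd up and listening where you expect?
systemctl is-active ssh
ss -ltnp | grep sshd
# 2) Is the host network sane?
ip -br addr && ip route
# 3) Which firewall layer is actually dropping packets?
sudo ufw status verbose
sudo journalctl -k -g 'UFW BLOCK' --since '10 minutes ago' --no-pager | tail -n 3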
Dry-funny truth #1: The firewall never “hates you personally”; it just enforces what you told it, including the part you didn’t mean.
Console recovery: restore SSH access safely
We’re assuming you have console access (physical, IPMI/iDRAC, hypervisor console, cloud serial console). You’re logged in as a user with sudo or as root.
The safest approach is:
- Confirm sshd is running and listening.
- Confirm what address/port you intend to allow (often port 22, sometimes a non-standard port).
- Add the narrowest possible UFW allow rule for SSH from a trusted source.
- Verify with logs and a real SSH attempt.
- Only then remove emergency allowances or broader rules.
If you don’t know your management IP (it happens), use a temporary broad allow on your internal network or VPN range, not 0.0.0.0/0, and set a reminder to tighten it. “Temporary” without follow-up is how security tickets are born.
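As a sketch, a bounded temporary allow might look like this; 10.0.0.0/8 is a placeholder for your real internal or VPN range, and the comment is what reminds you (or a colleague) to remove it:
# TEMP: bounded emergency allow so you can get back in; remove after recovery
sudo ufw allow from 10.0.0.0/8 to any port 22 proto tcp comment 'TEMP: emergency SSH allow, remove after recovery'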
Practical tasks (commands, outputs, decisions)
Below are real, runnable commands. Each task includes: what you run, what the output means, and what decision you make next. Run them from the console on the affected host.
Task 1: Confirm you are on the host you think you’re on
cr0x@server:~$ hostnamectl
Static hostname: server
Icon name: computer-server
Chassis: server
Machine ID: 7e3c0f0d3a1c4b3a9c1d2f3a4b5c6d7e
Boot ID: 2b1d2c3e4f5a6b7c8d9e0f1a2b3c4d5e
Operating System: Ubuntu 24.04.1 LTS
Kernel: Linux 6.8.0-36-generic
Architecture: x86-64
Meaning: Confirms OS version and kernel; on Ubuntu 24.04, the iptables tooling UFW drives is typically nft-backed, though packaging and local config can vary.
Decision: Proceed assuming modern nft tooling might be relevant; don’t blindly paste 2016 iptables incantations.
Task 2: Check whether SSH is running
cr0x@server:~$ systemctl status ssh --no-pager
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/usr/lib/systemd/system/ssh.service; enabled; preset: enabled)
Active: active (running) since Sat 2025-12-28 09:07:12 UTC; 21min ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 1123 (sshd)
Tasks: 1 (limit: 18763)
Memory: 3.6M (peak: 4.2M)
CPU: 72ms
CGroup: /system.slice/ssh.service
└─1123 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
Meaning: sshd is running. If it was inactive/failed, the firewall wouldn’t be your primary problem.
Decision: If inactive, fix sshd first (config, keys, port). If active, continue to listening sockets.
Task 3: Verify what port/address sshd is listening on
cr0x@server:~$ ss -ltnp | grep -E '(:22|:2222)\s'
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1123,fd=3))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1123,fd=4))
Meaning: SSH listens on port 22 on both IPv4 and IPv6. If you see only 127.0.0.1:22 or a specific interface IP, remote access may fail regardless of UFW.
Decision: If listening looks correct, focus on firewall and upstream filtering. If not, fix /etc/ssh/sshd_config (e.g., ListenAddress, Port) and reload.
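If the listener is wrong, a minimal check-and-reload sketch (adjust the port and addresses to your policy) looks like this:
# Show the listen settings sshd is actually enforcing, not just what the file says
sudo sshd -T | grep -Ei '^(port|listenaddress) '
# Validate the config before reloading so a typo doesn't take sshd down with it
sudo sshd -t && sudo systemctl reload ssh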
Task 4: Confirm your IP and default route
cr0x@server:~$ ip -br addr
lo UNKNOWN 127.0.0.1/8 ::1/128
ens3 UP 203.0.113.10/24 2001:db8:1234:5678::10/64
cr0x@server:~$ ip route
default via 203.0.113.1 dev ens3 proto static
203.0.113.0/24 dev ens3 proto kernel scope link src 203.0.113.10
Meaning: Interface is up, IP exists, default route exists.
Decision: If you’re missing a default route, fix networking first; UFW won’t fix “no route to host.”
Task 5: See whether UFW is even enabled
cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp DENY IN Anywhere
22/tcp (v6) DENY IN Anywhere (v6)
Meaning: UFW is active and explicitly denying SSH. That’s your lockout.
Decision: Don’t reset everything. Add a narrow allow rule above this deny, or delete the deny if it was accidental (after you’ve validated what created it).
Task 6: Inspect numbered rules so you can surgically fix them
cr0x@server:~$ sudo ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] 22/tcp DENY IN Anywhere
[ 2] 22/tcp (v6) DENY IN Anywhere (v6)
Meaning: You can delete by rule number without guessing syntax.
Decision: If the deny is obviously wrong, delete it. If you’re not sure, add a management-IP allow first, then delete.
Task 7: Add a temporary allow from your management IP (recommended)
cr0x@server:~$ sudo ufw allow from 198.51.100.25 to any port 22 proto tcp comment 'TEMP: restore SSH from mgmt IP'
Rule added
Meaning: This adds a narrowly scoped allow for your management IP. UFW evaluates rules in order (first match wins), so the allow only helps if it ends up above the deny; verify the ordering in the next task.
Decision: Attempt SSH from 198.51.100.25. If it works, you’ve restored access with minimal exposure.
Task 8: Verify rule ordering and effective policy
cr0x@server:~$ sudo ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] 22/tcp ALLOW IN 198.51.100.25
[ 2] 22/tcp DENY IN Anywhere
[ 3] 22/tcp (v6) DENY IN Anywhere (v6)
Meaning: The allow is evaluated before the deny for IPv4. IPv6 is still denied globally, which is fine if your management path is IPv4-only.
Decision: If the allow had landed below the deny, move it to the top with sudo ufw insert 1 allow from 198.51.100.25 to any port 22 proto tcp. If your clients connect over IPv6, you must also allow IPv6 from your management prefix, or temporarily force IPv4 on the client (ssh -4) while you fix the rules.
Task 9: Check UFW logging for dropped SSH packets
cr0x@server:~$ sudo journalctl -k -g 'UFW BLOCK' --since '10 minutes ago' --no-pager | tail -n 5
Dec 28 09:26:03 server kernel: [UFW BLOCK] IN=ens3 OUT= MAC=aa:bb:cc:dd:ee:ff SRC=203.0.113.50 DST=203.0.113.10 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=54321 DF PROTO=TCP SPT=51514 DPT=22 WINDOW=64240 RES=0x00 SYN URGP=0
Meaning: Packets are reaching the host and being blocked by UFW. That’s great news: upstream routing and provider firewalls likely aren’t the bottleneck.
Decision: Fix UFW rules, not cloud security groups.
Task 10: Confirm what backend is active (nftables vs iptables) and inspect raw rules
cr0x@server:~$ sudo ufw show raw | head -n 25
*filter
:ufw-user-input - [0:0]
:ufw-user-output - [0:0]
:ufw-user-forward - [0:0]
:ufw-before-input - [0:0]
:ufw-before-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-after-input - [0:0]
:ufw-after-output - [0:0]
:ufw-after-forward - [0:0]
:ufw-logging-deny - [0:0]
:ufw-logging-allow - [0:0]
:ufw-reject-input - [0:0]
:ufw-reject-output - [0:0]
:ufw-reject-forward - [0:0]
-A ufw-user-input -p tcp --dport 22 -s 198.51.100.25 -j ACCEPT
-A ufw-user-input -p tcp --dport 22 -j DROP
COMMIT
Meaning: You see the actual accept-then-drop logic as it is applied, not just UFW’s summary view.
Decision: If the raw view doesn’t match ufw status, you may have partial application or another firewall manager interfering. Investigate before adding more rules.
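A few quick, read-only checks for the backend and for competing firewall managers (a sketch; some of these tools may not be present on a minimal host):
# Which iptables flavor is in play? Modern Ubuntu usually reports (nf_tables).
sudo iptables -V
# The kernel's own view of the ruleset, if the nft tool is installed
sudo nft list ruleset | head -n 20
# Other managers that could be rewriting rules behind UFW's back
systemctl is-active firewalld nftables 2>/dev/null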
Task 11: Remove the accidental deny rule (surgical fix)
cr0x@server:~$ sudo ufw delete 2
Deleting:
deny 22/tcp
Proceed with operation (y|n)? y
Rule deleted
Meaning: You deleted rule #2 from the current numbering. (Yes, numbering changes as you add/remove. Always re-check.)
Decision: If you intend to allow SSH generally, replace the deny with an allow to your admin CIDR, VPN, or bastion—not “Anywhere” unless you have a good reason and other controls.
Task 12: Set a sane SSH policy (choose your poison, but choose one)
Option A: allow SSH only from a management subnet (best for most production servers):
cr0x@server:~$ sudo ufw allow from 198.51.100.0/24 to any port 22 proto tcp comment 'SSH from mgmt subnet'
Rule added
Option B: allow SSH from anywhere (acceptable for bastions with strong auth and monitoring):
cr0x@server:~$ sudo ufw allow 22/tcp comment 'SSH'
Rule added
Meaning: You’re expressing intent. UFW is happiest when rules reflect real access patterns.
Decision: If you pick “Anywhere,” you must compensate with strong SSH configuration (keys only, no root login, possibly fail2ban-like controls, and good logs).
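A quick sketch for verifying that those compensating controls are actually in effect, using sshd’s effective-config dump:
# These are the values sshd is enforcing right now, after includes and defaults
sudo sshd -T | grep -Ei '^(permitrootlogin|passwordauthentication|pubkeyauthentication) '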
Task 13: Handle IPv6 explicitly (because your client will)
cr0x@server:~$ sudo ufw status verbose | grep -E 'IPv6|v6'
22/tcp (v6) DENY IN Anywhere (v6)
cr0x@server:~$ sudo ufw allow from 2001:db8:9999::/48 to any port 22 proto tcp comment 'SSH from mgmt IPv6'
Rule added
Meaning: IPv6 rules are separate entries. If you use IPv6, treat it as first-class.
Decision: If you don’t use IPv6 at all, don’t leave half-configured policies. Either disable IPv6 at the OS/network level intentionally (with a rollout plan) or lock it down consistently.
Task 14: Validate with a real SSH connection attempt, then verify logs
cr0x@server:~$ sudo journalctl -u ssh --since '10 minutes ago' --no-pager | tail -n 8
Dec 28 09:31:44 server sshd[1890]: Connection from 198.51.100.25 port 55312 on 203.0.113.10 port 22 rdomain ""
Dec 28 09:31:45 server sshd[1890]: Accepted publickey for admin from 198.51.100.25 port 55312 ssh2: ED25519 SHA256:Zk1...
Dec 28 09:31:45 server sshd[1890]: pam_unix(sshd:session): session opened for user admin(uid=1000) by (uid=0)
Meaning: The connection reached sshd and authenticated. You’re back in business.
Decision: Only now should you remove temporary exceptions and finalize policy.
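From the management host (198.51.100.25 in this walkthrough), a verbose test connection is a cheap way to confirm both the network path and the auth method before declaring victory:
# -v shows exactly where a failure happens: TCP connect, key exchange, or auth
ssh -v -o ConnectTimeout=5 admin@203.0.113.10 'echo ok'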
Task 15: If you must, do a controlled UFW disable/re-enable
cr0x@server:~$ sudo ufw disable
Firewall stopped and disabled on system startup
cr0x@server:~$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
Meaning: This is the “big lever.” It can get you out of jail, but it can also open the entire host briefly, depending on other layers.
Decision: Use only when you understand the exposure and have compensating controls (cloud firewall, private network, maintenance window). Prefer targeted rule fixes.
Task 16: Confirm persistence and systemd integration
cr0x@server:~$ systemctl is-enabled ufw
enabled
cr0x@server:~$ sudo ufw status
Status: active
Meaning: UFW will be applied on boot and is active now.
Decision: If it’s disabled, decide whether that’s an intentional posture. “We disabled it to recover and forgot” is not a strategy, it’s a future incident.
Three corporate mini-stories from the trenches
1) Incident caused by a wrong assumption: “Allow SSH” didn’t mean what they thought
A mid-sized company ran a fleet of Ubuntu servers split across public and private subnets. The team had a habit: keep SSH open only to a corporate VPN range. Reasonable. One day, during a routine OS upgrade wave, an engineer decided to “standardize firewall rules” and rolled out UFW changes through automation.
The change looked safe in code review: it added ufw allow 22/tcp and then a deny-all rule. The assumption was that “deny-all” would only affect unlisted ports, and SSH would remain open. What was missed: an older role also added a deny 22/tcp rule to hosts that were meant to be accessed only through a bastion. Two sources of truth. One friendly, one hostile.
During the rollout, existing SSH sessions stayed alive because of state tracking, so no one noticed. Then a maintenance script closed idle sessions. Suddenly, new sessions failed across a chunk of the fleet. The NOC saw “SSH down” and treated it like a network outage. It wasn’t. It was perfectly functioning software enforcing contradictory policy.
The recovery was slow because the team tried to fix it from their laptops first. They couldn’t get in. Eventually, they used cloud provider consoles to add an allow rule from a management IP, exactly like we did above. The actual fix was boring: remove the duplicate deny rule, add comments, and make a single module own SSH rules. The postmortem’s lesson was even more boring: “assumptions are not tests.”
2) Optimization that backfired: shrinking the ruleset without understanding evaluation
A different org had a security team that disliked “messy rules.” They asked the platform team to “reduce the number of firewall entries” and remove “redundant” IPv6 allowances. The platform team complied quickly. Too quickly.
They removed explicit IPv6 SSH allows on the idea that “nobody uses IPv6” and to reduce audit noise. But a subset of corporate laptops had IPv6 preferred, and their VPN client leaked IPv6 routes in a way that made the clients attempt v6 first. The result: intermittent SSH failures depending on client OS, network, and DNS response ordering. The tickets were a masterpiece of confusion.
Then came the “optimization”: to keep things simple, someone replaced a specific allow-from rule with a broad allow and relied on SSH rate-limiting at the application layer. It reduced rule count. It also increased exposure and log volume, and it made brute-force attempts more visible to everyone else downstream.
The eventual fix was also not glamorous: restore explicit IPv6 rules, keep allow-lists tight, and use UFW’s logging at low/medium plus centralized log aggregation. The team learned that “fewer rules” is not inherently better. Correct rules are better.
3) Boring but correct practice that saved the day: console access and staged rollouts
A financial services shop had a habit I wish everyone copied: every production server had tested out-of-band access, and every firewall change was staged in three steps. Step one: add the new allow rule. Step two: verify connectivity from at least two networks. Step three: remove the old rule. The sequencing sounds pedantic. It’s also why they slept.
One night, an engineer tightened UFW defaults on a group of hosts and accidentally removed a management subnet from the allow-list. They noticed within minutes because their smoke test (a simple SSH connect probe) failed in staging. The same automation would have hit production next.
Even if it had hit production, the playbook required keeping the console session open and having a rollback command ready. They didn’t have to use it, because the staged rollout stopped the blast radius at “a few test boxes.” No heroics. No panic bridge. No executive asking why “a firewall change can take down revenue.”
They fixed the config, reran the test, and moved on. The most impressive part was how uninteresting the incident was. That’s the goal.
Checklists / step-by-step plan
Recovery checklist (from console, minimal risk)
- Keep console open. Don’t close it until you have a new SSH session.
- Confirm sshd running: systemctl status ssh.
- Confirm listening: ss -ltnp | grep sshd.
- Confirm host network: ip -br addr, ip route.
- Check UFW status: ufw status verbose.
- Add narrow allow: ufw allow from <mgmt-ip> to any port 22 proto tcp.
- Verify rule ordering: ufw status numbered.
- Attempt SSH from mgmt IP.
- Confirm in logs: journalctl -u ssh and optionally kernel UFW blocks.
- Remove accidental deny rules.
- Decide final SSH policy: allow-list vs anywhere; include IPv6 decision.
- Remove TEMP allowances and add comments to permanent rules (see the cleanup sketch after this list).
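A minimal cleanup sketch for those TEMP rules (rule numbers shift after every delete, so re-list each time):
# Anything still marked TEMP? Comments show up in status output.
sudo ufw status numbered | grep -i 'TEMP'
# Delete by current number, then re-list to confirm nothing else moved
sudo ufw delete <rule-number>
sudo ufw status numbered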
Stabilization checklist (after you’re back in)
- Record “before/after” UFW status output in your ticket (a capture sketch follows this list).
- Confirm cloud firewall/security group rules match your intent.
- Confirm SSH hardening: keys-only, disable root login, sensible auth methods.
- Ensure you can reach the box via console out-of-band (and that credentials work).
- Schedule a follow-up to tighten any temporary broad allows.
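One low-effort way to capture that before/after evidence (a sketch; the file path is just an example):
# Snapshot the effective policy, with rule numbers and comments, into the ticket
sudo ufw status verbose | tee /root/ufw-after-recovery.txt
sudo ufw status numbered | tee -a /root/ufw-after-recovery.txt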
Dry-funny truth #2: Nothing says “enterprise” like a firewall rule named TEMP from two quarters ago that everyone is afraid to delete.
Common mistakes: symptoms → root cause → fix
- Symptom: SSH times out; UFW shows “active” but “allow 22/tcp” exists.
Root cause: You allowed IPv4 but are connecting via IPv6 (or vice versa), or you allowed the wrong interface/IP range.
Fix: Add explicit v6 allow rules for your management prefix, or confirm the client uses IPv4; verify with ss and ufw status verbose.
- Symptom: SSH “No route to host” or immediate failure; UFW logs show nothing.
Root cause: Upstream block (cloud security group, NACL), routing issue, or wrong public IP/DNS.
Fix: Verify provider firewall and instance IP; check ip route and provider console networking. Don’t keep editing UFW when packets never arrive.
- Symptom: Existing SSH sessions worked until they disconnected; new sessions fail.
Root cause: Stateful firewall allowed established connections; new inbound is blocked by policy.
Fix: Add an allow rule for new connections. Don’t confuse “I’m still connected” with “it’s fine.”
- Symptom: UFW says rule is present, but behavior doesn’t change.
Root cause: Another firewall manager overwrote rules, or UFW didn’t apply cleanly, or you’re debugging the wrong host/namespace.
Fix: Inspect ufw show raw, check for other services (containers, orchestration), and confirm you’re on the right machine (hostnamectl).
- Symptom: SSH reaches host but auth fails after UFW changes.
Root cause: Coincidental sshd config changes, key permissions, or PAM issues; UFW is blamed because it was the last change.
Fix: Look at journalctl -u ssh, validate /etc/ssh/sshd_config, and test locally with ssh localhost.
- Symptom: You allowed port 22, but SSH is on 2222 (or vice versa).
Root cause: Mismatch between sshd configuration and firewall rules; sometimes caused by hardening guides.
Fix: Confirm the listening port with ss -ltnp; update UFW accordingly: ufw allow 2222/tcp.
- Symptom: You’re certain you allowed your office IP, but you still can’t connect.
Root cause: Your source IP changed (home ISP, mobile hotspot, VPN egress).
Fix: Confirm your current egress IP from a known service or from corporate tooling; then allow that IP temporarily. Prefer VPN/bastion for stable egress.
- Symptom: UFW enable warns about disrupting SSH, and then it does.
Root cause: You enabled UFW before adding the allow rule, or default incoming deny kicked in.
Fix: From console, add allow rules first; then enable. For remote changes, schedule a maintenance window or use a timed rollback mechanism.
Prevent the next lockout (without getting cute)
Once you’ve recovered access, take five minutes to reduce the chance of a repeat. The goal isn’t maximal security theater; it’s controlled access that doesn’t break during routine work.
1) Make SSH access explicit and narrow
Pick one of these patterns:
- Bastion model: Only bastion hosts accept SSH from the internet; all other servers allow SSH only from bastion subnet.
- VPN model: Servers allow SSH only from VPN address pool.
- Management subnet model: Dedicated admin network (on-prem or cloud) has access; everything else doesn’t.
Don’t mix patterns casually. Mixed patterns are how you end up with contradictory allow/deny rules and a late-night console session.
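Expressed as UFW rules, the three patterns look roughly like this (a sketch; every range below is a placeholder for your real bastion, VPN, or admin network):
# Bastion model: only the bastion subnet may reach SSH
sudo ufw allow from 10.10.0.0/24 to any port 22 proto tcp comment 'SSH via bastion subnet'
# VPN model: only the VPN address pool may reach SSH
sudo ufw allow from 10.8.0.0/24 to any port 22 proto tcp comment 'SSH from VPN pool'
# Management subnet model: a dedicated admin network
sudo ufw allow from 192.0.2.0/24 to any port 22 proto tcp comment 'SSH from admin network'
Pick one rule, not all three; keeping every variant around is exactly the pattern soup described above.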
2) Treat IPv6 as real
If your host has a global IPv6 address, the internet can reach it over IPv6 unless upstream filters say otherwise. Decide deliberately:
- If you use IPv6, write v6 allow rules for the same sources as IPv4.
- If you don’t use IPv6, either disable it intentionally (with testing) or keep it default-deny and ensure your clients don’t prefer it unexpectedly.
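A quick way to see whether the host is reachable over global IPv6 at all, and to mirror the IPv4 policy if it is (the prefix is the same example used in Task 13):
# Any global IPv6 addresses configured? If not, there's nothing to reach over v6.
ip -6 addr show scope global
# If v6 is in use, mirror the IPv4 allow for the same management prefix
sudo ufw allow from 2001:db8:9999::/48 to any port 22 proto tcp comment 'SSH from mgmt IPv6'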
3) Keep UFW rules readable
Use comments. Future-you is a stranger who doesn’t deserve suffering.
cr0x@server:~$ sudo ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] 22/tcp ALLOW IN 198.51.100.0/24 # SSH from mgmt subnet
[ 2] 80/tcp ALLOW IN Anywhere # HTTP
[ 3] 443/tcp ALLOW IN Anywhere # HTTPS
Decision: If your rules don’t tell a story, rewrite them until they do.
4) Add a rollback method for remote firewall changes
In mature setups, firewall changes are applied with an automatic rollback if connectivity tests fail. You can approximate this even without fancy tooling by staging and verifying from a separate session, or by using job schedulers to revert if not canceled. If you’re running production, invest in a real change mechanism rather than relying on courage.
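A crude approximation with the at scheduler (assuming the at package is installed): queue a rule that restores SSH before you touch anything, and cancel it only after a fresh SSH session succeeds.
# Queue an automatic "reopen SSH" rollback 10 minutes out BEFORE changing rules
echo 'ufw insert 1 allow 22/tcp' | sudo at now + 10 minutes
# ...apply the firewall change, then open a brand-new SSH session to verify...
# If the new session works, cancel the pending rollback job
sudo atq
sudo atrm <job-id>
This is less drastic than scheduling ufw disable, and it fails safe: if you lock yourself out, SSH reopens on its own a few minutes later.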
FAQ
1) Should I just run ufw reset?
Only if you’re okay losing your entire firewall policy and rebuilding it. It’s a blunt instrument. Prefer deleting or overriding the specific rule blocking SSH.
2) Why did my existing SSH session stay up when UFW blocked SSH?
Stateful filtering. Established connections can remain allowed while new inbound connections are denied. It’s helpful until you disconnect and discover you can’t get back in.
3) I allowed 22/tcp but still can’t connect. What’s the usual culprit?
IPv6 mismatch, wrong source IP, upstream cloud firewall rules, or sshd not listening on the interface you’re using. Validate in that order.
4) How do I know if packets are reaching the server at all?
Check kernel logs for UFW blocks. If you see blocks, packets are arriving and UFW is dropping them. If you see nothing, suspect upstream filtering or routing before UFW.
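If you’d rather watch the wire directly, tcpdump (if installed) settles it in seconds; ens3 is the interface from the examples above, so substitute your own:
# Inbound SYNs showing up here mean packets reach the host; silence points upstream
sudo tcpdump -ni ens3 'tcp port 22 and tcp[tcpflags] & tcp-syn != 0'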
5) Is it safe to allow SSH from anywhere temporarily?
Sometimes it’s the only practical move, but it’s not “safe.” If you do it, compensate: keys-only auth, no root login, and remove the broad allow immediately after recovery.
6) Does Ubuntu 24.04 use nftables or iptables with UFW?
UFW is an interface; the underlying mechanism may vary. Don’t assume. Use ufw show raw and system tooling to understand what is actually applied.
7) Why does UFW warn that enabling it may disrupt existing SSH connections?
Because it can. If default incoming becomes deny and you haven’t allowed SSH properly, UFW will happily block new connections and potentially disrupt existing flows depending on rule changes.
8) What’s the safest permanent setup for SSH on internet-facing servers?
A bastion or VPN model with allow-listed source ranges, keys-only authentication, and tight logging. If SSH must be public, treat it like a public API and monitor it accordingly.
9) I fixed IPv4 but IPv6 still fails. Is that a problem?
If your users or automation prefer IPv6, yes. Many clients will try IPv6 first. Either allow it correctly or disable IPv6 intentionally with a tested plan.
Conclusion: next steps you should actually do
You recovered SSH the right way: from console, with narrow allowances, and with evidence from logs instead of vibes. Now finish the job.
- Remove any TEMP allow rules you added during recovery and replace them with your intended policy (VPN/bastion/mgmt subnet).
- Make IPv6 a deliberate decision—either support it properly or lock it down consistently.
- Write down the minimal runbook for your team: the fast diagnosis order, the exact UFW commands, and where console access lives.
- Stage firewall changes in the future: add allow → verify from two networks → remove old rules. Boring works.