You changed the SSH port. It felt responsible. Then your terminal froze on connect, your monitoring lit up, and you realized you might have just
taught a server to ignore you. The worst part: everything looks “fine” from the inside.
Case #87 is a classic: sshd is configured for the new port, but the firewall and boot ordering disagree. Or sshd never bound the socket you think it did.
The fix isn’t “restart and pray.” It’s a controlled migration with verifiable checkpoints and a rollback plan that survives latency, human error, and systemd.
What actually changed (and what didn’t)
Changing the SSH port is two separate problems that people love to blend into one:
- sshd must listen on the new port (and not fail its config validation, bind to the wrong interface, or get blocked by another service).
- the network must permit the new port (local firewall rules, cloud security groups, upstream ACLs, jump hosts, NAT, and any “helpful” corporate edge gear).
Debian 13 doesn’t “magically” block your new port, but it does ship with modern defaults: systemd socket activation in some setups, nftables as the
default firewall backend for many admin tools, and increasingly opinionated hardening policies. The order services start at boot can matter more than
you’d like, especially if firewall rules are applied late, flushed unexpectedly, or replaced by a tool that thinks it owns the world.
Here’s the core truth: if you can’t prove what is listening and what is allowed, you’re guessing. Guessing is how people end up opening tickets
that begin with “It was working yesterday” and end with “We attached a rescue ISO.”
Interesting facts and short history (the kind that changes how you troubleshoot)
- Port 22 was a deliberate choice, not an accident: SSH's author requested it in 1995 because it sat between ftp (21) and telnet (23), the protocols it replaced. Decades of tooling and documentation now quietly assume it.
- Changing the port does not “secure SSH” in the cryptographic sense. It mostly reduces noise from commodity scans. It can still be operationally useful.
- systemd made service startup deterministic… and also easier to accidentally reorder. If you add drop-ins casually, you can create boot-time races that only appear after reboots.
- nftables replaced iptables as the Linux firewall direction of travel. Many systems still carry iptables compatibility layers; mixed tooling can create “rules look right but aren’t active” situations.
- OpenSSH has long supported multiple listening ports. You can run port 22 and 2222 in parallel for a migration window and remove the old port later.
- Cloud firewalls are separate from host firewalls. You can open nftables and still be blocked by a security group, or vice versa.
- sshd can fail silently from the perspective of remote clients. A bad config can prevent binding; the service may restart-loop while clients see “Connection refused” or timeouts.
- Some environments use systemd socket activation for ssh. In that case, the “listening port” is controlled by a socket unit, not just sshd_config.
One quote I keep taped to my monitor, because it’s the whole job in a sentence:
“Hope is not a strategy.”
— Gene Kranz
Fast diagnosis playbook
When SSH breaks after a port change, you don’t need a philosophy. You need a sequence that finds the bottleneck fast.
First: determine what failure you’re seeing
- “Connection refused”: nothing is listening on that IP:port, or a firewall is actively rejecting.
- Timeout / hangs: a firewall is dropping packets, routing/NAT is wrong, or you’re hitting the wrong IP.
- Handshake but auth fails: sshd is reachable, but config (users/keys/AllowUsers/Match blocks) changed.
Second: prove sshd is listening where you think it is
- Check ss -lntp locally.
- Check systemctl status ssh and logs.
- If socket activation is involved, check systemctl status ssh.socket.
Third: prove the port is allowed end-to-end
- Host firewall: nftables/UFW/iptables state.
- Upstream firewall: cloud security group, NACL, corporate ACL, or router.
- From a remote host: nc -vz or ssh -vvv to see whether SYNs return.
Fourth: confirm boot ordering if it only fails after reboot
- Look for firewall services that flush/reapply rules after sshd starts.
- Check dependencies: sshd should not start “successfully” and then lose the port because rules were replaced.
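A quick way to check that last point is to compare timestamps from the most recent boot, assuming nftables.service owns your rules (substitute your firewall unit if it doesn't):
cr0x@server:~$ journalctl -b -u nftables -u ssh --no-pager | head -n 20
If the firewall unit logs a reload or ruleset replacement after sshd reports "Server listening," you've found your boot-time race.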
Joke #1: Changing SSH ports at 4:59 PM is a great way to ensure you’re “available” all weekend.
How to change SSH ports without self-inflicted outages
The safest port change is a migration, not a flip. Run both ports temporarily, validate from at least two independent client networks, then remove the old port.
Yes, it’s boring. Yes, boring is the point.
Principle 1: Keep a lifeline
Before you touch anything: ensure you have an out-of-band path. That might be a cloud serial console, IPMI/iDRAC/iLO, a hypervisor console,
or a screened “break glass” bastion path. If you don’t have it, your change window is already a gamble.
Principle 2: Validate config before restarting
sshd is strict for a reason. A typo in sshd_config can drop the service. Validate config with sshd -t and only then reload.
Reload beats restart when you’re remote—restarts close existing sessions sooner.
Principle 3: Do not rely on a single firewall tool
Pick one authority: nftables, or UFW (which manages nftables/iptables behind the scenes), or a configuration management policy that owns the rules.
Mixing iptables commands with nftables-managed systems is how you create rules that appear but don’t apply, or apply until the next reload.
Principle 4: Prove reachability from the outside
Local checks lie by omission. Your server can be listening happily while a security group blocks the port. Validate from a separate host on the internet
or at least from a different segment than your admin workstation.
Principle 5: Make it reversible within 60 seconds
When you edit the port, stage a rollback: keep an open root shell, schedule an automatic firewall rollback with at or systemd-run,
and only cancel it after you confirm you can reconnect on the new port.
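A minimal sketch of that parachute using systemd-run, assuming you've saved a known-good ruleset to /root/nftables.known-good.conf (the same file used in the tasks below):
cr0x@server:~$ sudo systemd-run --on-active=5min --unit=ssh-fw-rollback nft -f /root/nftables.known-good.conf
cr0x@server:~$ sudo systemctl stop ssh-fw-rollback.timer
The second command cancels the rollback; run it only after you've reconnected on the new port. An at-based variant appears in Task 17.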
Joke #2: Firewalls are like corporate org charts—everyone thinks they understand them until they need one tiny exception.
Practical tasks (commands, outputs, decisions)
These are the tasks I actually run in production when someone says “SSH died after a port change.” Each one includes what the output means and
what decision you make next. Do them in order when possible.
Task 1: Confirm what port sshd thinks it should use
cr0x@server:~$ sudo sshd -T | grep -E '^(port|listenaddress)\b'
port 2222
listenaddress 0.0.0.0
listenaddress ::
Meaning: This is sshd’s effective configuration after includes and Match blocks. If port still shows 22, your edit didn’t apply or is overridden.
If listenaddress is restricted, you may be binding only on a management interface.
Decision: If the port is wrong, fix config first. If port is right, move on to listening sockets.
Task 2: Validate sshd config syntax before you reload anything
cr0x@server:~$ sudo sshd -t
cr0x@server:~$ echo $?
0
Meaning: Exit code 0 means syntax is OK. Non-zero means sshd will not start with that config.
Decision: If non-zero, do not restart. Fix the config and rerun sshd -t.
Task 3: Check what’s actually listening on the box
cr0x@server:~$ sudo ss -lntp | grep -E '(:22|:2222)\b' || true
LISTEN 0 128 0.0.0.0:2222 0.0.0.0:* users:(("sshd",pid=944,fd=3))
LISTEN 0 128 [::]:2222 [::]:* users:(("sshd",pid=944,fd=4))
Meaning: sshd is bound on IPv4 and IPv6 for port 2222. If you see only IPv6, IPv4-only clients will fail.
If you see nothing, sshd didn’t bind—either it’s down, socket activation is in play, or it crashed.
Decision: If not listening, inspect systemd status and logs next.
Task 4: Check sshd service state and recent errors
cr0x@server:~$ systemctl status ssh --no-pager
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-12-31 10:11:22 UTC; 2min 9s ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 944 (sshd)
Tasks: 1 (limit: 19000)
Memory: 6.4M
CPU: 72ms
CGroup: /system.slice/ssh.service
└─944 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
Meaning: Running is good, but doesn’t prove reachability. If you see restart loops, config or bind issues are likely.
Decision: If active, proceed to firewall. If failing, jump into logs.
Task 5: Read sshd logs for bind and auth problems
cr0x@server:~$ sudo journalctl -u ssh -n 50 --no-pager
Dec 31 10:11:21 server sshd[944]: Server listening on 0.0.0.0 port 2222.
Dec 31 10:11:21 server sshd[944]: Server listening on :: port 2222.
Meaning: Binding succeeded. If you instead see “Bad configuration option” or “Could not load host key,” sshd may not accept connections.
Decision: If bind is OK, move outward: firewall and upstream.
Task 6: Detect whether systemd socket activation is involved
cr0x@server:~$ systemctl list-unit-files | grep -E '^ssh\.socket'
ssh.socket disabled enabled
Meaning: If ssh.socket is enabled and active, it may define the listening port(s). If it’s disabled, sshd binds directly.
Decision: If socket activation is enabled, inspect and edit the socket unit, not just sshd_config.
Task 7: Inspect socket unit listening ports (if applicable)
cr0x@server:~$ systemctl cat ssh.socket
# /lib/systemd/system/ssh.socket
[Unit]
Description=OpenBSD Secure Shell server socket
Before=sockets.target
[Socket]
ListenStream=22
Accept=no
[Install]
WantedBy=sockets.target
Meaning: This socket is still on 22. If it’s active, clients hitting 2222 will fail.
Decision: Either update ListenStream (or add another) and restart the socket, or disable socket activation and run ssh.service normally.
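If you keep socket activation, a drop-in is the cleanest way to make that change. A minimal sketch for a migration window (the empty ListenStream= line clears the vendor value before both ports are added):
cr0x@server:~$ sudo systemctl edit ssh.socket
Put this in the drop-in that opens, then save:
[Socket]
ListenStream=
ListenStream=22
ListenStream=2222
cr0x@server:~$ sudo systemctl daemon-reload && sudo systemctl restart ssh.socket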
Task 8: Confirm your firewall backend and active ruleset
cr0x@server:~$ sudo update-alternatives --display iptables | sed -n '1,12p'
iptables - auto mode
link best version is /usr/sbin/iptables-nft
link currently points to /usr/sbin/iptables-nft
link iptables is /usr/sbin/iptables
slave iptables-restore is /usr/sbin/iptables-restore
slave iptables-save is /usr/sbin/iptables-save
Meaning: This host uses iptables-nft compatibility. If you run “classic” iptables commands assuming legacy behavior, your mental model can drift.
Decision: Prefer inspecting nftables directly with nft list ruleset.
Task 9: List nftables rules and search for your SSH port
cr0x@server:~$ sudo nft list ruleset | sed -n '1,120p'
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
ct state established,related accept
iif "lo" accept
tcp dport 2222 accept
icmp type echo-request accept
}
}
Meaning: Policy drop means “everything not explicitly accepted dies.” You do have an allow rule for tcp/2222.
Decision: If the rule is missing, add it before you move sshd to the new port (or keep both ports temporarily).
Task 10: If using UFW, verify it actually allows the new port
cr0x@server:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
2222/tcp ALLOW IN Anywhere
2222/tcp (v6) ALLOW IN Anywhere (v6)
Meaning: UFW is allowing tcp/2222 for IPv4 and IPv6. If only v4 exists, v6 clients will fail (or the reverse).
Decision: Add both families if you use dual-stack. Then test from the outside.
Task 11: Test from the server itself (sanity check, not proof)
cr0x@server:~$ nc -vz 127.0.0.1 2222
Connection to 127.0.0.1 2222 port [tcp/*] succeeded!
Meaning: Local loopback connect works. This proves “listening” but not “reachable remotely.”
Decision: If this fails, sshd isn’t listening or local policy is blocking. Fix locally before touching upstream.
Task 12: Test from a remote host with verbose SSH
cr0x@workstation:~$ ssh -vvv -p 2222 admin@203.0.113.10
OpenSSH_9.7p1 Debian-2, OpenSSL 3.0.14 4 Jun 2024
debug1: Connecting to 203.0.113.10 [203.0.113.10] port 2222.
debug1: Connection established.
debug1: Remote protocol version 2.0, remote software version OpenSSH_9.7p1 Debian-2
debug1: Authenticating to 203.0.113.10:2222 as 'admin'
debug1: Offering public key: /home/cr0x/.ssh/id_ed25519
debug1: Server accepts key: /home/cr0x/.ssh/id_ed25519
Authenticated to 203.0.113.10 ([203.0.113.10]:2222).
Meaning: This shows the critical stages: TCP connect, handshake, auth. If it stalls before “Connection established,” it’s likely firewall/routing.
If handshake happens but auth fails, focus on sshd auth controls.
Decision: If remote connect works, keep session open and proceed to remove old port later. If not, do not remove port 22 yet.
Task 13: Confirm the service is reachable on the public interface (listening address mistakes)
cr0x@server:~$ ip -br a
lo UNKNOWN 127.0.0.1/8 ::1/128
ens3 UP 203.0.113.10/24 2001:db8:10::10/64
Meaning: If sshd is bound to 127.0.0.1 or a private interface only, remote access fails.
Decision: Ensure ListenAddress in sshd_config covers the correct interface(s), or remove it to bind to all.
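For reference, a minimal sshd_config sketch that binds explicitly to the public addresses shown above (or simply omit ListenAddress to bind to all interfaces):
Port 2222
ListenAddress 203.0.113.10
ListenAddress 2001:db8:10::10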
Task 14: Catch “it works until reboot” ordering bugs
cr0x@server:~$ systemd-analyze critical-chain ssh.service
ssh.service +42ms
└─network-online.target @3.211s
└─systemd-networkd-wait-online.service @1.902s +1.292s
└─systemd-networkd.service @1.501s +386ms
└─systemd-udevd.service @1.221s +265ms
└─systemd-tmpfiles-setup-dev.service @1.159s +54ms
└─kmod-static-nodes.service @1.085s +58ms
└─systemd-journald.socket @1.010s
Meaning: This tells you what ssh waited for. If your firewall service comes later and flushes rules, ssh might be reachable briefly then not.
Decision: Inspect firewall unit order and ensure firewall is applied before ssh is considered “ready,” or ensure firewall doesn’t flush established policies unexpectedly.
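One way to express that ordering is a drop-in on ssh.service, assuming nftables.service is what loads your rules (ordering alone won't save you if a later unit flushes them):
cr0x@server:~$ sudo systemctl edit ssh.service
Contents of the drop-in:
[Unit]
Wants=nftables.service
After=nftables.service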
Task 15: Verify the firewall service ordering and whether it flushes rules
cr0x@server:~$ systemctl status nftables --no-pager
● nftables.service - nftables
Loaded: loaded (/lib/systemd/system/nftables.service; enabled; preset: enabled)
Active: active (exited) since Wed 2025-12-31 10:10:59 UTC; 2min 50s ago
Docs: man:nft(8)
Process: 611 ExecStart=/usr/sbin/nft -f /etc/nftables.conf (code=exited, status=0/SUCCESS)
Meaning: nftables loaded from a file and exited (normal). If you have a different firewall manager, check its unit and scripts for “flush rules.”
Decision: If the firewall is not enabled, rules might not persist after reboot. Enable it or manage persistence another way.
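Assuming /etc/nftables.conf is your single source of truth, enabling the unit is the persistence mechanism:
cr0x@server:~$ sudo systemctl enable --now nftables
cr0x@server:~$ systemctl is-enabled nftables
enabled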
Task 16: Prove persistence: reboot-safe config check without rebooting
cr0x@server:~$ sudo nft -c -f /etc/nftables.conf && echo "nftables.conf parses OK"
nftables.conf parses OK
Meaning: -c checks syntax and semantics without applying. If it fails, a reboot could apply a broken ruleset and block SSH.
Decision: Fix parse errors now. Then confirm the unit is enabled so it loads the correct rules at boot.
Task 17: Stage a timed rollback of firewall changes
cr0x@server:~$ echo "sudo nft flush ruleset; sudo nft -f /root/nftables.known-good.conf" | at now + 5 minutes
warning: commands will be executed using /bin/sh
job 12 at Wed Dec 31 10:20:00 2025
Meaning: You just created a parachute. If you lock yourself out, the server will revert in 5 minutes (assuming it can run the job).
Decision: Only cancel the job after you’ve confirmed remote SSH on the new port and you’ve verified the firewall rules are persistent.
Task 18: Cancel the rollback once the new port is verified
cr0x@server:~$ sudo atq
12 Wed Dec 31 10:20:00 2025 a root
cr0x@server:~$ sudo atrm 12
cr0x@server:~$ sudo atq
Meaning: No queued rollback remains.
Decision: Now it’s safe to proceed with removing port 22 (later), because you’ve proven access and removed the training wheels.
Systemd ordering: sshd vs firewall at boot
Most “I changed the port and it worked until reboot” incidents are not mystical. They’re ordering and ownership problems:
a firewall loads after sshd and replaces rules, a tool flushes the ruleset, or a config management agent writes an older policy at boot.
What “ordering” really means in this context
systemd ordering is not only about which unit starts first. It’s also about what unit is considered “ready” and what it pulls in via dependencies.
sshd being “active (running)” doesn’t mean “reachable from your laptop.” It just means the daemon is alive.
Make firewall rules apply early, consistently, and from one source
On Debian, if you manage firewall via nftables.service, the rules are typically loaded from /etc/nftables.conf.
If you manage via UFW, UFW has its own systemd service and will generate rules.
If you manage via a corporate agent, it might rewrite both.
The goal is simple:
- Firewall policy is applied before you rely on it for access control.
- Firewall policy does not get flushed or replaced unexpectedly after sshd is already in use.
- The SSH port you need is allowed in the same policy that actually loads at boot.
A practical approach: keep sshd reachable during transitions
When you’re migrating ports, allow both old and new ports at the firewall, and configure sshd to listen on both ports temporarily.
Then remove port 22 from the firewall only after you’ve validated new access from multiple vantage points and survived at least one reboot.
Editing sshd_config safely: multiple ports
OpenSSH supports multiple Port directives. For a migration window, do this:
cr0x@server:~$ sudo sh -c 'printf "\n# Migration window: keep 22 temporarily\nPort 22\nPort 2222\n" >> /etc/ssh/sshd_config'
Then validate and reload (not restart) when remote:
cr0x@server:~$ sudo sshd -t && sudo systemctl reload ssh
Decision: Only remove Port 22 after you can connect reliably on the new port and your firewall persistence is confirmed.
When ssh.socket changes the game
If ssh.socket is enabled, systemd owns the listening socket. In that case:
- Changing Port in sshd_config may not do what you think.
- You should change ListenStream in the socket unit (ideally via a drop-in).
- Then restart the socket unit, not just the service.
If you don’t want socket activation, disable it intentionally—don’t live in the half-state where you’re unsure who owns the port.
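If you decide ssh.service should own the port, leave the half-state explicitly. A minimal sketch (verify with ss -lntp afterwards):
cr0x@server:~$ sudo systemctl disable --now ssh.socket
cr0x@server:~$ sudo systemctl enable --now ssh.service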
Firewall recipes: nftables, UFW, iptables
The “right” firewall is the one that your system actually uses consistently across reboots and automation. On Debian 13, nftables is the cleanest path.
If you use UFW, treat it as your only interface. If you’re stuck with iptables in a legacy estate, be explicit about whether it’s legacy or nft backend.
nftables: allow new SSH port (inet table)
First locate the input chain in /etc/nftables.conf (this example assumes a default drop input policy):
cr0x@server:~$ sudo grep -n 'chain input' /etc/nftables.conf | head
12: chain input {
Add an allow rule near other TCP accepts:
cr0x@server:~$ sudo nft add rule inet filter input tcp dport 2222 accept
Meaning: This modifies the running ruleset but not necessarily the config file. If you reboot without saving, it may vanish.
Decision: Immediately reflect the change in /etc/nftables.conf (or your managed include) and validate with nft -c.
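During the migration window, the persistent allow rule in /etc/nftables.conf might look like this (a sketch of the relevant line only; your table and chain names may differ):
tcp dport { 22, 2222 } accept
Then validate and apply atomically:
cr0x@server:~$ sudo nft -c -f /etc/nftables.conf && sudo nft -f /etc/nftables.conf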
UFW: allow new SSH port (and keep it survivable)
cr0x@server:~$ sudo ufw allow 2222/tcp
Rule added
Rule added (v6)
Meaning: UFW created rules for IPv4 and IPv6.
Decision: Keep ufw status verbose as your source of truth. Don’t mix with raw nft rules unless you enjoy debugging at 2 AM.
iptables (legacy estates): allow new port carefully
cr0x@server:~$ sudo iptables -S INPUT | sed -n '1,25p'
-P INPUT DROP
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
Add the new rule:
cr0x@server:~$ sudo iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
Meaning: Rule appended; if order matters (it often does), you might prefer inserting before a reject/log rule.
Decision: Ensure persistence via your chosen mechanism (e.g., a managed rules file or service). Otherwise reboot reverts you.
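On legacy estates, one common persistence path is the iptables-persistent/netfilter-persistent package; a sketch, assuming that's what your estate uses:
cr0x@server:~$ sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
cr0x@server:~$ sudo sh -c 'ip6tables-save > /etc/iptables/rules.v6'
cr0x@server:~$ sudo systemctl enable netfilter-persistent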
Don’t forget upstream firewalls
Host firewall can be perfect and still useless if:
- a cloud security group blocks 2222,
- a corporate perimeter ACL allows only 22 to your subnet,
- a load balancer health check is pinned to 22 and declares your node dead.
Your evidence is in the network behavior: “refused” usually means you reached the host; timeouts usually mean you didn’t.
Treat that difference like a compass.
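A quick probe from an outside vantage point makes that compass concrete: with a short timeout, a refusal returns almost instantly, while a drop sits silent until the deadline. A sketch (the workstation prompt stands in for any host outside your perimeter; 203.0.113.10 is the server from the tasks above):
cr0x@workstation:~$ nc -vz -w 5 203.0.113.10 2222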
Three corporate mini-stories from the trenches
1) The incident caused by a wrong assumption: “We opened the firewall”
A mid-size company migrated a fleet to Debian 13 and decided to “reduce attack surface” by moving SSH to 2222. The engineer on call opened 2222 in UFW
and confirmed ss -lntp showed sshd listening. Green check marks. They then removed port 22 everywhere. Confident, tidy, and wrong.
The next morning, half the fleet was unreachable. Not all—just the hosts in one region. The logs on the reachable hosts were clean, and on the unreachable ones
the console showed sshd running fine. The team burned hours chasing phantom sshd bugs and even rebuilt one VM from an image just to “reset” it.
The root cause was upstream: a cloud security group template was region-specific. One region allowed only 22, another allowed “SSH” as a named service
that mapped to 22, and only a third had the new 2222 rule. The engineer’s assumption—“host firewall equals access”—was the trap.
The fix was boring: update security groups, stage dual-port listening for 48 hours, and add an external connectivity test in the change procedure.
The team learned to treat “I can connect from inside the VPC” as a partial truth, not a victory.
2) The optimization that backfired: “Let’s simplify the firewall service”
A large enterprise platform team standardized on nftables and wrote a small oneshot systemd unit that flushed and reloaded the ruleset at boot.
Their goal was sane: one source of truth, no drift, fast boot. They shipped it broadly and moved on.
Weeks later, a different team changed SSH ports during an approved window. They updated /etc/nftables.conf, reloaded nftables, validated external access,
and then rebooted for a kernel update. After reboot, SSH was dead—but only intermittently for some hosts.
The culprit was that “optimization” unit. It ran after network-online but before some interface naming settled, and it also flushed rules as a first step.
For a short window, the host had a default drop policy with no allow rules loaded yet. Some connections were attempted during that window and failed.
Humans called it “SSH broken,” but it was really a boot-time race plus an aggressive flush.
The permanent fix: stop flushing indiscriminately, load an atomic ruleset, and order the firewall unit early enough that services depending on inbound access
aren’t exposed to a brief “deny all” gap. Also: don’t “optimize” firewall startup without measuring what it does to reachability during boot.
3) The boring but correct practice that saved the day: dual-port + timed rollback
A financial services team had a rule: any change that could lock you out must have a timed rollback. Not “we’ll be careful.” An actual scheduled revert.
People complained it was paranoid. Then one Friday it paid for itself, quietly.
An engineer changed SSH to 2222 and updated the firewall. It worked from their workstation. It did not work from the CI runners that lived in a separate
subnet with its own egress policy. Nobody noticed until deployment started failing. The team still had an open SSH session, but they were one disconnect away
from a real incident.
Because they’d scheduled a rollback job, they could experiment safely. They re-opened 22 temporarily, fixed the runners’ outbound policy, tested 2222 from both
networks, and only then removed 22 again. No rescue consoles, no reimages, no panicked “who has the password to the hypervisor?”
The takeaway is annoyingly consistent: the best reliability practice often looks like unnecessary ceremony right up until it prevents an outage.
Common mistakes: symptom → root cause → fix
1) Symptom: “Connection refused” on the new port
- Root cause: sshd isn’t listening on that port, or it’s bound only to localhost/another interface, or ssh.socket is still on 22.
- Fix: Check sshd -T, ss -lntp, and systemctl cat ssh.socket. Correct Port/ListenAddress or the socket unit, validate with sshd -t, then reload.
2) Symptom: Timeout / hang on connect
- Root cause: firewall drop (host or upstream), wrong IP, routing/NAT mismatch, or security group not updated.
- Fix: From a remote host, run ssh -vvv. On the server, check nftables/UFW. Then confirm upstream firewall rules. Timeouts are almost never “sshd is misconfigured.”
3) Symptom: Works from one network, fails from another
- Root cause: asymmetric allowlists; corp egress blocks high ports; only one path updated.
- Fix: Test from at least two vantage points (office + CI subnet, or home ISP + bastion). Update ACLs. Consider keeping port 22 open but restricted to a bastion IP range during migration.
4) Symptom: Works until reboot, then fails
- Root cause: firewall rules not persistent; a boot-time service flushes rules; socket activation reverted; config management overwrote your changes.
- Fix: Ensure the firewall service is enabled and loads a valid config. Confirm with nft -c -f /etc/nftables.conf. Audit systemctl list-dependencies for your firewall tooling and check CM policies.
5) Symptom: SSH connects but authentication fails after port change
- Root cause: You edited more than the port: AllowUsers, PasswordAuthentication, PubkeyAuthentication, or a Match block applies differently than expected.
- Fix: Use sshd -T to view effective settings; check journalctl -u ssh for auth messages; ensure key files and permissions are correct.
6) Symptom: IPv6 works, IPv4 fails (or vice versa)
- Root cause: sshd bound only to one family; firewall rules exist only for v4 or v6; clients prefer a family you didn’t test.
- Fix: Confirm ss shows both 0.0.0.0:2222 and [::]:2222 if you want dual-stack. Mirror firewall rules for both families.
7) Symptom: Port appears open locally, but remote sees “No route to host”
- Root cause: upstream routing/ACL issue; sometimes ICMP is blocked and path MTU issues complicate, but “No route” is usually network.
- Fix: Confirm correct public IP, check security group/NACL, and validate from a host in the same region/segment.
Checklists / step-by-step plan
Plan A: Safe migration (recommended for production)
- Keep an existing SSH session open (preferably two: one root/sudo shell, one read-only observer).
- Confirm out-of-band access exists (cloud console, IPMI, hypervisor console).
- Open the new port in the firewall first (host firewall and upstream firewall). Do not remove 22 yet.
- Configure sshd to listen on both ports: Port 22 and Port 2222.
- Validate config: sshd -t and sshd -T | grep '^port '.
- Reload sshd (not restart): systemctl reload ssh.
- Verify listening sockets: ss -lntp | grep ':2222'.
- Test from two remote vantage points: workstation + a host in another network.
- Schedule a rollback job before removing 22; cancel it only after success.
- Reboot once during the window if you can, to prove persistence and ordering.
- Remove port 22 from sshd config and firewall after the migration window.
- Update automation (Ansible inventory, bastion configs, monitoring, incident runbooks).
Plan B: Emergency recovery when you’re already locked out
- Use out-of-band console to access the host.
- Check sshd status and logs: systemctl status ssh, journalctl -u ssh.
- Check listening ports: ss -lntp.
- Temporarily allow port 22 (or your known-good port) in the host firewall and upstream firewall.
- Revert sshd_config to dual-port and validate with sshd -t.
- Reload sshd, then test from outside.
- Only then re-attempt a controlled migration using Plan A.
Plan C: If systemd socket activation is enabled
- Confirm ownership: check whether ssh.socket is enabled and active.
- Edit via drop-in rather than modifying vendor units directly.
- Add new ListenStream entries for the migration (e.g., 22 and 2222), then reload the daemon and restart the socket.
- Validate with ss -lntp and remote tests.
- Remove 22 later by deleting the old ListenStream.
FAQ
1) Is changing the SSH port worth it?
It’s not security magic. It’s noise reduction. If you already require keys, disable password auth, and restrict by source IP or bastion, the port change is optional.
In high-noise environments, it can reduce log spam and brute-force attempts. Treat it as an operational tweak, not a control you bet your audit on.
2) Should I restart or reload sshd?
Reload when remote. Reload applies config without dropping existing sessions in most cases. Restart is fine when you have console access or a maintenance window,
but it’s an unnecessary risk when you’re changing the very thing you’re connected through.
3) Can sshd listen on two ports at once?
Yes. Add multiple Port lines in /etc/ssh/sshd_config. This is the safest migration technique because it’s reversible and testable.
4) I changed Port, but sshd still listens on 22. Why?
Common causes: a later config file overrides it (includes), a Match block changes behavior, or systemd socket activation is enabled and ssh.socket is still on 22.
Use sshd -T and systemctl cat ssh.socket to stop guessing.
5) Why do I get timeouts instead of “refused”?
Timeouts usually mean packet drop somewhere: host firewall default drop, security group drop, upstream ACL, or routing. “Refused” usually means you reached the host
and nothing listened (or it actively rejected). This distinction is one of your best troubleshooting signals.
6) Do I need to open the port for IPv6 too?
If your host has IPv6 connectivity and clients can reach it, yes. Otherwise you’ll get inconsistent results: one client hits v6 and works, another hits v4 and fails.
Mirror rules for both families if you’re dual-stack.
7) What’s the safest rollback technique?
Keep an existing session open and schedule a timed rollback of firewall and/or sshd config with at or systemd-run.
Only cancel the rollback after you’ve successfully connected on the new port from an external host and validated persistence across reboot.
8) How do I avoid breaking automation and monitoring?
Inventory and tooling often assume port 22: config management, backup agents, monitoring checks, bastions, and CI runners.
Update those configs during the migration window while both ports work, then remove 22. If you flip first, you’ll spend your evening finding every hidden assumption.
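A small client-side example of the kind of hidden assumption to hunt down: pinning the new port in the ssh client config so humans and scripts that shell out to ssh pick it up (the host pattern is hypothetical):
# ~/.ssh/config
Host prod-*
    Port 2222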
9) Does changing the port break fail2ban or intrusion detection rules?
It can. Some rulesets and jails are keyed to “ssh” service definitions or port 22 assumptions. After migration, verify your security tooling is watching the new port
and not quietly doing nothing.
10) Should I restrict SSH by source IP instead of changing the port?
Prefer source restriction (bastion-only, VPN-only, allowlisted admin IPs) where feasible. Port change can be layered on top, but access control beats obscurity.
Do both if it fits your threat model and your operational tolerance.
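If you go the source-restriction route with nftables, the rule shape looks like this (a sketch; 198.51.100.0/24 stands in for your admin or bastion range):
cr0x@server:~$ sudo nft add rule inet filter input ip saddr 198.51.100.0/24 tcp dport 2222 accept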
Conclusion: practical next steps
If you take one thing from case #87, make it this: treat SSH port changes like a production migration, not a config tweak. Open the new path first, keep the old one
during validation, prove reachability from outside, and only then close the old door.
Next steps that pay off immediately:
- Run sshd -T and ss -lntp to confirm reality, not intent.
- Pick one firewall authority (nftables or UFW) and make it persistent across reboots.
- Adopt dual-port migration plus a timed rollback job as your standard operating procedure.
- Reboot during the change window at least once, if you can, to catch ordering and persistence problems while you still have humans awake.
Production systems don’t reward bravery. They reward repeatable habits, clean evidence, and the kind of paranoia that looks like professionalism.