If you’ve ever inherited a Windows fleet where “patch Tuesday” is treated like a seasonal suggestion, you already know the feeling: a creeping dread wrapped in a ticket queue.
WannaCry took that dread and turned it into a global outage with a countdown timer.
The uncomfortable part wasn’t that ransomware existed. It was that a broadly available patch existed before the fire, and plenty of organizations still managed to burn.
WannaCry wasn’t clever. It was efficient. That’s worse.
What actually happened (and why it worked)
WannaCry hit in May 2017 and did what every SRE dreads: it turned an internal hygiene problem into a customer-facing disaster at internet speed.
The core of the outbreak wasn’t a phishing email with immaculate social engineering. It was an SMB exploit chained to a worm.
If a Windows host was vulnerable and reachable over TCP/445, it could be hit, encrypted, and used as a launchpad to hit the next one.
This matters operationally because it changed the shape of incident response. With typical ransomware, you hunt initial access, stolen credentials, and lateral movement through admin tooling.
With WannaCry, you also had to treat your network like a petri dish: any vulnerable endpoint could be patient zero in its own subnet.
“Containment” wasn’t only disabling accounts and blocking C2. It was stopping a self-propagating exploit from turning your flat network into kindling.
Here’s the part people underplay: Microsoft shipped a patch for the SMB vulnerability (MS17-010) before the outbreak. The industry still got rolled.
That’s not “because patching is hard.” It’s because many enterprises had built processes that made patching optional, slow, or politically dangerous.
Systems that couldn’t reboot. Apps “too fragile” to test. Legacy Windows versions no one wanted to admit were still around. Networks that assumed “inside = trusted.”
WannaCry didn’t win because it was brilliant. It won because it found the exact seam between “we know we should” and “we’ll do it later.”
Fast facts and historical context (the parts people forget)
- MS17-010 (March 2017) addressed critical SMB vulnerabilities later exploited by WannaCry; the patch existed before the outbreak.
- EternalBlue was the exploit name widely associated with the SMB vulnerability; it was part of a toolkit leaked publicly in April 2017, roughly a month before the outbreak.
- WannaCry included worm-like propagation, spreading automatically via SMB to other vulnerable machines without user interaction.
- A kill switch domain embedded in the malware reduced spread when registered and reachable; many infections still occurred in segmented or blocked-DNS environments.
- SMBv1 was a major risk amplifier; disabling SMBv1 and hardening SMB reduced attack surface and blast radius.
- Legacy Windows versions (notably older, unsupported releases at the time) were heavily impacted; emergency patches were later made available for some.
- Healthcare and public services were notably disrupted; these environments often have long-lived devices and constrained change windows.
- The ransom payment wasn’t the main cost; downtime, recovery labor, lost productivity, and reputational damage dominated the bill.
- Flat internal networks turned localized infections into campus-wide outages because SMB was routable and widely permitted.
Mechanics: EternalBlue, SMB, and the worm behavior
Why SMB made this so explosive
SMB is how Windows shares files, printers, and various “enterprise convenience” services. It’s also how malware gets an all-access pass if you let TCP/445 roam freely.
In many environments, SMB is permitted everywhere because file shares exist everywhere and “it’s always been that way.”
That assumption is a gift to attackers: one compromised host can reach many others with a protocol the OS treats as a first-class citizen.
WannaCry’s spread leaned on an SMB implementation flaw. Exploit succeeds, code execution follows, then the ransomware payload runs and the worm component keeps scanning.
The result: speed. Not APT patience. Speed.
The kill switch: not a magic shield
The embedded domain check acted like a crude safety latch: if an HTTP request to the domain succeeded, that copy of the malware exited without encrypting or spreading.
Once the domain was registered, spread slowed dramatically for victims whose machines could actually reach it.
But “reachable” is doing a lot of work here.
Networks with restricted DNS, blocked outbound HTTP, sinkholed behavior, or partial connectivity could still see infections.
Also, variants and copycats emerged quickly. Counting on a kill switch is not a strategy; it’s a coincidence you might not get twice.
Operational takeaway: patching is necessary but not sufficient
Yes, MS17-010 patching matters. But you also needed: accurate inventory, the ability to isolate endpoints fast, controls on lateral movement, and backups that weren’t writable by the same machines getting encrypted.
WannaCry punished weak links in every one of those areas.
Paraphrased idea from Werner Vogels (Amazon CTO): “Everything fails, all the time—design for it.”
WannaCry is what it looks like when you design as if patching will happen eventually, and “eventually” never arrives.
The real failure modes: patching, inventory, and trust
Failure mode 1: “We patched… mostly”
In enterprise reality, “patched” often means: a subset of systems under central management, plus a handful of exclusions, plus a backlog of “special” servers, plus laptops that missed VPN.
Malware doesn’t care about your intent. It cares about the one unpatched box with TCP/445 open.
The fix is not “try harder.” The fix is measurable patch compliance with consequences: dashboards, deadlines, and enforcement that includes exceptions as first-class citizens.
Failure mode 2: “We don’t have SMB exposed to the internet”
Great. The internet isn’t the only threat model. Internal spread is what turned WannaCry into an outage generator.
A flat network with permissive east-west traffic makes every endpoint an internal attacker once compromised.
Failure mode 3: “Backups exist”
Plenty of victims had backups. Some were online and writable, which means they got encrypted too. Some were offline but untested, which means restores failed at the worst moment.
If you can’t restore a file server to a clean state on a stopwatch, you don’t have backups; you have a comforting story.
Failure mode 4: “Legacy systems are unavoidable”
Some are. That’s not permission to run them naked on the corporate LAN.
If you must keep an old OS or device, wrap it: segment it, restrict its traffic, control admin access, and monitor it like it’s radioactive. Because it is.
Joke 1: “Our patching policy was ‘if it isn’t on fire, don’t touch it.’ WannaCry brought the fire.”
Fast diagnosis playbook: what to check first/second/third
This is the “you have 30 minutes before the blast radius doubles” playbook. It’s biased toward containment and triage over perfect attribution.
First: stop the bleeding (containment)
- Identify the infected hosts by symptoms: ransom note files, sudden file extensions, high SMB scanning, endpoint alerts.
- Isolate suspected machines from the network (switch port shutdown, NAC quarantine VLAN, EDR isolation). Do not “just reboot and see.”
- Block lateral movement: temporarily restrict TCP/445 between subnets at core firewalls. If you must keep SMB, only allow from known file servers to known clients.
- Confirm patch status on the rest of the fleet, especially anything reachable on 445.
Second: determine exposure (where can it spread next?)
- Scan internally for TCP/445 listeners and map them to owners/roles.
- Check for SMBv1 presence and usage; plan to disable it broadly.
- Find unpatched systems and prioritize those with broad connectivity (jump boxes, shared service hosts, VDI brokers, file servers, domain-adjacent systems).
Third: recovery decisions (restore vs rebuild)
- Decide rebuild vs restore: endpoints usually rebuild; servers depend on role and backup maturity.
- Validate backups offline and ensure restore targets are clean and patched.
- Credential hygiene: assume local admin credentials and cached domain creds may be exposed on infected endpoints; rotate where appropriate.
Practical tasks: commands, outputs, decisions (12+)
These tasks are written like you’re on a Linux admin host doing network triage and querying Windows via standard tools where possible.
In real incidents, you’ll also use EDR/MDM, Windows event forwarding, and your CMDB. But commands don’t lie, and they’re fast.
1) Discover internal SMB exposure quickly
cr0x@server:~$ nmap -n -Pn -p 445 --open 10.20.0.0/16 -oG smb-445.gnmap
Starting Nmap 7.94 ( https://nmap.org ) at 2026-01-22 10:17 UTC
Nmap scan report for 10.20.12.34
Host is up (0.0031s latency).
PORT STATE SERVICE
445/tcp open microsoft-ds
Nmap scan report for 10.20.55.10
Host is up (0.0020s latency).
PORT STATE SERVICE
445/tcp open microsoft-ds
Nmap done: 65536 IP addresses (65536 hosts up) scanned in 98.40 seconds
What it means: These hosts accept SMB connections. They’re potential targets and potential spreaders.
Decision: Prioritize patch verification and segmentation rules for these IPs first. Don’t start with the machines that “feel risky”; start with the ones that are reachable.
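To hand that list to the firewall and patching teams as something they can act on, pull the open-445 hosts straight out of the grepable output. A minimal sketch, assuming the smb-445.gnmap file written by the scan above:
cr0x@server:~$ awk '/445\/open/ {print $2}' smb-445.gnmap | sort -u > smb-hosts.txt   # one IP per line, deduplicated
The resulting smb-hosts.txt becomes the working inventory for the next few tasks.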
2) Check if SMBv1 is supported by a target (quick signal)
cr0x@server:~$ smbclient -L //10.20.12.34 -N -m NT1 --option='client min protocol=NT1'
protocol negotiation failed: NT_STATUS_INVALID_NETWORK_RESPONSE
What it means: With the client forced down to SMBv1 (NT1), a failed negotiation suggests SMBv1 is disabled or blocked on that host; it could also be firewall interference, so treat it as a signal, not proof.
Decision: Don’t assume “good.” Validate by checking server configuration centrally. If you still allow SMBv1 elsewhere, keep hunting.
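A more direct check from the same Linux host is nmap's smb-protocols NSE script, which asks the server which SMB dialects it will negotiate (assuming the script is present in your nmap build; recent 7.x releases ship it). An “NT LM 0.12 (SMBv1)” dialect in the output is your red flag.
cr0x@server:~$ nmap -n -Pn -p 445 --script smb-protocols 10.20.12.34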
3) Identify Windows versions from SMB (when allowed)
cr0x@server:~$ crackmapexec smb 10.20.55.10
SMB 10.20.55.10 445 FS-LEGACY01 [*] Windows 7 Professional 7601 Service Pack 1 (name:FS-LEGACY01) (domain:WORKGROUP) (signing:False) (SMBv1:True)
What it means: SMB signing off and SMBv1 enabled are risk multipliers; OS is old enough to warrant extra scrutiny.
Decision: If this system is business-critical and can’t be upgraded fast, isolate it into a tight VLAN and restrict SMB access to only what’s necessary.
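You can also run this fingerprinting across the whole exposure list instead of one host at a time. A sketch, assuming your crackmapexec build accepts a file of targets (recent releases do) and the smb-hosts.txt list produced in task 1:
cr0x@server:~$ crackmapexec smb smb-hosts.txt | tee smb-fingerprint.txt
cr0x@server:~$ grep 'SMBv1:True' smb-fingerprint.txt   # the short list that needs attention today
Anything that shows up in that grep goes to the top of the isolation and patching queue.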
4) Detect active scanning for SMB (worm-like behavior)
cr0x@server:~$ sudo tcpdump -ni eth0 'tcp dst port 445 and (tcp[tcpflags] & (tcp-syn) != 0)' -c 10
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:21:11.102938 IP 10.20.88.44.51532 > 10.20.12.34.445: Flags [S], seq 10293812, win 64240, options [mss 1460,sackOK,TS val 89123 ecr 0,nop,wscale 7], length 0
10:21:11.103114 IP 10.20.88.44.51533 > 10.20.55.10.445: Flags [S], seq 7736123, win 64240, options [mss 1460,sackOK,TS val 89123 ecr 0,nop,wscale 7], length 0
10:21:11.103290 IP 10.20.88.44.51534 > 10.20.101.9.445: Flags [S], seq 5512312, win 64240, options [mss 1460,sackOK,TS val 89123 ecr 0,nop,wscale 7], length 0
10 packets captured
What it means: One host is spraying SYNs at many SMB targets. That’s classic worm propagation behavior.
Decision: Isolate 10.20.88.44 immediately. Then search for other scanners; worms rarely travel alone in a large network.
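Worms rarely stop at one scanner, so summarize the capture by source address. A rough sketch, assuming the same interface and that a few thousand packets are a large enough sample; any host initiating SYNs toward many distinct 445 targets deserves a look:
cr0x@server:~$ sudo tcpdump -ni eth0 -c 2000 'tcp dst port 445 and (tcp[tcpflags] & (tcp-syn) != 0)' 2>/dev/null | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head
The counts are crude, but during an outbreak crude and fast beats precise and late.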
5) Map the scanning host to a MAC and switch port (containment-ready)
cr0x@server:~$ arp -n | grep 10.20.88.44
10.20.88.44 ether 00:25:90:ab:12:ef C eth0
What it means: You have the MAC address for the suspect host.
Decision: Use your switch/controller to shut the port or move it into a quarantine VLAN. Don’t waste time emailing the desktop team.
6) Check if your Linux file servers are being hammered (SMB connection count)
cr0x@server:~$ sudo ss -tn sport = :445 | head
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 10.20.55.200:445 10.20.88.44:51533
ESTAB 0 0 10.20.55.200:445 10.20.88.44:51534
ESTAB 0 0 10.20.55.200:445 10.20.88.44:51535
What it means: A single client is opening many SMB connections to your server. Could be legitimate, could be scanning, could be backup software, could be malware.
Decision: Correlate with tcpdump and Windows logs. If the peer is suspicious, block it at the host firewall while network containment catches up.
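If the peer looks malicious and network containment is still minutes away, a host-level block on the file server buys time. A minimal sketch using iptables (adapt to nftables or firewalld if that is what the host runs); the address is the suspect client from the output above:
cr0x@server:~$ sudo iptables -I INPUT -p tcp --dport 445 -s 10.20.88.44 -j DROP   # refuse new SMB connections from the suspect
cr0x@server:~$ sudo ss -K -tn dst 10.20.88.44   # optionally tear down its existing sockets; needs a kernel built with socket-destroy support
This is a stopgap, not containment. The switch port shutdown or EDR isolation still has to happen.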
7) Confirm Windows patch MS17-010 via package presence (offline artifact check)
cr0x@server:~$ strings WS2012R2-C_drive/Windows/Logs/CBS/CBS.log | grep -E 'KB4012213|KB4012216|KB4012598' | tail -n 3
2017-04-02 08:14:09, Info CBS Package: Package_for_KB4012213~31bf3856ad364e35~amd64~~6.3.1.0, state: Installed
2017-04-02 08:14:10, Info CBS Mark store corruption flag because of package: Package_for_KB4012213~31bf3856ad364e35~amd64~~6.3.1.0
2017-04-02 08:14:15, Info CBS Session: 30712533_1972450825 finalized. Reboot required: no
What it means: The host appears to have a relevant KB installed (exact KB depends on OS).
Decision: Treat this as supporting evidence, not proof. Still validate SMBv1 state and vulnerability posture; partial patching and rollback happen.
8) Verify SMBv1 state on a Windows host via remote query (when you have WinRM)
cr0x@server:~$ evil-winrm -i 10.20.55.10 -u 'ops' -p 'REDACTED'
*Evil-WinRM* PS C:\Users\ops\Documents> Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol,EnableSMB2Protocol
EnableSMB1Protocol EnableSMB2Protocol
------------------ ------------------
True True
What it means: SMBv1 is enabled. That’s a problem you can fix today, not “in a program.”
Decision: Plan an emergency change to disable SMBv1, starting with endpoints and non-critical servers, then expanding. Validate app dependencies instead of guessing.
9) Check if TCP/445 is allowed east-west through a firewall (quick reality test)
cr0x@server:~$ sudo traceroute -T -p 445 10.20.12.34
traceroute to 10.20.12.34 (10.20.12.34), 30 hops max, 60 byte packets
1 10.20.88.1 (10.20.88.1) 0.421 ms 0.388 ms 0.361 ms
2 10.20.0.1 (10.20.0.1) 0.910 ms 0.899 ms 0.885 ms
3 10.20.12.34 (10.20.12.34) 1.914 ms 1.902 ms 1.890 ms
What it means: Nothing is blocking the path for TCP/445 between networks; traffic likely flows freely.
Decision: Implement temporary ACLs at the core to restrict 445. A short-term “break some workflows” change is cheaper than a long-term “rebuild everything” outage.
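If any of your choke points are Linux-based, or you can stage emergency rules on one, the temporary ACL can be as blunt as dropping forwarded 445 unless it is headed to the file server network. A sketch with placeholder ranges; 10.20.80.0/20 for users and 10.20.55.0/24 for servers are assumptions, not values from this environment:
cr0x@server:~$ sudo iptables -I FORWARD -p tcp --dport 445 -s 10.20.80.0/20 ! -d 10.20.55.0/24 -j DROP   # user subnets may reach SMB only on the server network
On dedicated firewall appliances the syntax differs, but the shape of the rule is the same: allow client-to-file-server 445, drop the rest.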
10) Look for ransomware note artifacts on a file share snapshot (triage without touching endpoints)
cr0x@server:~$ find /srv/samba/shares -maxdepth 4 -type f \( -iname '@Please_Read_Me@.txt' -o -iname '*WanaDecryptor*' \) | head
/srv/samba/shares/finance/@Please_Read_Me@.txt
/srv/samba/shares/hr/@Please_Read_Me@.txt
What it means: Encryption activity likely occurred in these directories. You can scope impact without logging into suspect Windows boxes.
Decision: Freeze the share (read-only) if possible and preserve evidence. Start restore planning from last known-good snapshot or backup.
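If the share is backed by a ZFS dataset (the tank/shares name here is borrowed from task 12 and is an assumption), freezing it is a one-liner; otherwise set the share read only in smb.conf and reload Samba.
cr0x@server:~$ sudo zfs set readonly=on tank/shares   # writes now fail at the dataset level, regardless of share permissions
Clients with open write handles will start seeing errors. That is the point; announce it rather than apologizing for it.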
11) Validate that backups are not mounted read-write to production (a classic self-own)
cr0x@server:~$ mount | grep -E 'backup|repo|vault'
backup01:/repo on /mnt/backup_repo type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.20.55.200,local_lock=none,addr=10.20.1.50)
What it means: Your backup repository is mounted read-write on a production server. That’s an invitation to encrypt your safety net.
Decision: Move to a pull-based model or mount read-only where feasible. Enforce immutability/WORM on backup targets. If you can’t, at least isolate credentials and network paths.
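While the longer-term fix is in flight, the quickest mitigation is to take the write path away. A sketch, assuming the mount point from the output above and no backup job mid-write:
cr0x@server:~$ sudo lsof /mnt/backup_repo   # who is holding files open on the repo right now
cr0x@server:~$ sudo mount -o remount,ro /mnt/backup_repo   # drop the mount to read-only in place
If the remount fails because something is writing, decide deliberately whether stopping that writer is safer than leaving the repository writable during an outbreak.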
12) Check snapshot availability for fast restore (storage engineer’s first question)
cr0x@server:~$ zfs list -t snapshot -o name,creation -s creation | tail -n 5
tank/shares@2026-01-22_0000 Wed Jan 22 00:00 2026
tank/shares@2026-01-22_0100 Wed Jan 22 01:00 2026
tank/shares@2026-01-22_0200 Wed Jan 22 02:00 2026
tank/shares@2026-01-22_0300 Wed Jan 22 03:00 2026
tank/shares@2026-01-22_0400 Wed Jan 22 04:00 2026
What it means: You have frequent snapshots. That’s your best friend during ransomware events because it’s fast and granular.
Decision: Identify the last clean snapshot (before encryption). Plan a clone/rollback strategy that avoids overwriting evidence and minimizes downtime.
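Before rolling anything back, it is cheap to stand the candidate snapshot up as a read-only clone and check it for ransom notes and .WNCRY extensions; zfs rollback -r is destructive and discards later snapshots, so verify first. A sketch using the dataset names from the listing above; the 03:00 snapshot is only the candidate, pick yours based on when encryption started:
cr0x@server:~$ sudo zfs clone -o readonly=on -o mountpoint=/mnt/restore_verify tank/shares@2026-01-22_0300 tank/restore_verify
cr0x@server:~$ find /mnt/restore_verify \( -iname '*WNCRY*' -o -iname '@Please_Read_Me@*' \) | head   # a clean snapshot returns nothing
When you are done, zfs destroy the clone; it costs almost nothing while it exists.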
13) Identify which client modified encrypted files (Samba audit logs)
cr0x@server:~$ sudo journalctl -u smbd --since "2026-01-22 03:00" | grep -E 'rename|unlink|pwrite' | head -n 6
Jan 22 03:07:12 fileserver smbd[1823]: [2026/01/22 03:07:12.221] rename finance/q1.xlsx -> finance/q1.xlsx.WNCRY
Jan 22 03:07:12 fileserver smbd[1823]: [2026/01/22 03:07:12.223] pwrite finance/q1.xlsx.WNCRY (client 10.20.88.44)
Jan 22 03:07:13 fileserver smbd[1823]: [2026/01/22 03:07:13.004] rename finance/payroll.csv -> finance/payroll.csv.WNCRY
Jan 22 03:07:13 fileserver smbd[1823]: [2026/01/22 03:07:13.009] pwrite finance/payroll.csv.WNCRY (client 10.20.88.44)
What it means: You’ve identified the client IP doing the destructive writes. (This assumes Samba is configured with an audit VFS module such as vfs_full_audit; default smbd logging won’t record per-file operations.)
Decision: That IP is an immediate containment target. Also consider temporarily revoking the user’s access token/session if you can map it to identity.
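One caveat: this attribution only works if Samba is recording per-file operations, which stock logging does not do. A quick way to confirm some audit VFS module (vfs_full_audit is the common example, not a given) is actually configured:
cr0x@server:~$ testparm -s 2>/dev/null | grep -i -A3 'vfs objects'
If nothing comes back, add audit logging during hardening. You cannot grep logs you never wrote.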
14) Verify that your DNS isn’t doing something “helpful” that breaks kill-switch behavior
cr0x@server:~$ dig +short -t a example-killswitch-domain.invalid @10.20.0.53
10.20.0.10
What it means: Your internal DNS is wildcarding NXDOMAIN to a local IP. That kind of “helpfulness” can change malware behavior unpredictably.
Decision: Disable wildcarding for unknown domains in enterprise DNS. Security controls should be explicit, not surprise-driven.
Joke 2: “Nothing builds cross-team collaboration like ransomware. Suddenly everyone knows where the firewall team sits.”
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
A mid-sized company ran a mix of modern Windows servers and a stubborn set of older desktops used for industrial control monitoring.
Everyone “knew” those desktops were isolated. They were on a dedicated VLAN, after all. The network diagram said so, and the diagram had a logo.
During a routine remote support session, a technician plugged one of those desktops into a convenient wall jack in a general office area.
The port was configured for the default user VLAN because “hoteling.” The desktop got a normal IP, normal routing, normal everything. It could see file shares, printers, and other desktops.
Nobody noticed, because nothing broke immediately.
When WannaCry-style SMB scanning hit the environment (not necessarily the original 2017 strain—variants and similar worms exist), that desktop was vulnerable and reachable.
It became a source of lateral infection attempts, and it also had cached credentials from previous admin work.
The outbreak wasn’t stopped by “the VLAN” because the VLAN was only as real as the switch port configuration at that moment.
The postmortem was blunt: the wrong assumption was “devices stay where we think they are.”
Fixes were equally blunt: NAC enforcement for device classes, port security policies that matched the diagram, and a rule that legacy boxes live behind a firewall, not behind good intentions.
The lesson: if your containment relies on humans plugging cables correctly forever, you don’t have containment. You have a hope-shaped plan.
Mini-story 2: The optimization that backfired
Another organization was proud of its lean IT posture. They had reduced “wasted” maintenance windows by stretching patch cycles.
Instead of frequent small batches, they did big quarterly patch events. Fewer reboots, fewer app tests, fewer angry emails.
The dashboard looked clean. The risk register looked theoretical.
They also optimized network operations: broad east-west connectivity, minimal firewalling internally, because internal segmentation was “complex” and “hard to troubleshoot.”
They told themselves endpoint AV would catch anything important. It was a neat story until it wasn’t.
When the SMB worm behavior appeared, the environment behaved exactly as designed: like a fast internal delivery network.
The patch gap was measured in weeks to months. The internal connectivity was measured in “everything can talk to everything.”
By the time people realized this wasn’t a normal malware alert, a chunk of the workstation fleet was already unrecoverable without rebuild.
The optimization wasn’t just the patch cadence; it was the institutional belief that “less change equals more stability.”
In security, less change often equals delayed pain with interest.
They moved to monthly patching, enforced exception expirations, and introduced SMB restrictions between user networks and server networks.
The lesson: optimizing for fewer changes is fine—until the world changes for you. Worms are very pro-change.
Mini-story 3: The boring but correct practice that saved the day
A third company had a reputation for being annoyingly disciplined. They ran weekly patch compliance reviews. Not exciting.
They had a standing change window. Also not exciting.
They used tiered admin: workstation admin creds didn’t touch servers, and server admin creds didn’t touch domain controllers unless required. Very not exciting.
They also treated file shares like production systems. Hourly snapshots for the most critical datasets, daily offsite copies with immutability controls, and a quarterly restore drill.
The restore drill was unpopular. It consumed weekends and produced the kind of tickets nobody gets promoted for.
When ransomware hit a user segment, it did encrypt some mapped drive content from a handful of clients.
But SMB to the server network was restricted, so it didn’t fan out across server-to-server pathways.
The infected endpoints were isolated by EDR within minutes, and the file shares were rolled back to a clean snapshot with minimal data loss.
The incident still cost time and stress—no fairy tales here—but it didn’t become a company-wide outage.
The “boring” controls turned an existential crisis into an unpleasant Wednesday.
The lesson: boring practices compound. They don’t make headlines. They make survival normal.
Common mistakes: symptom → root cause → fix
1) “We patched MS17-010, so we’re safe”
Symptom: You still see SMB scanning and infections on some hosts.
Root cause: Patch compliance was incomplete (missed laptops, isolated subnets, manual servers, image drift), or SMBv1 remained enabled and exposed.
Fix: Measure compliance by active verification (scan and config checks), not by “WSUS says approved.” Disable SMBv1. Restrict TCP/445 east-west.
2) “We blocked SMB at the perimeter, why is it spreading?”
Symptom: Outbreak is entirely internal and keeps expanding.
Root cause: Flat internal network with unrestricted TCP/445; internal threat model ignored.
Fix: Segmentation: only allow SMB between clients and designated file servers. Block workstation-to-workstation SMB. Use firewall rules that match business flows, not subnets-as-politics.
3) “EDR says contained, but file shares keep getting encrypted”
Symptom: Endpoint alerts stop, but files continue changing to encrypted extensions or ransom note files appear.
Root cause: An infected but unisolated host still has share access; or a second host is compromised; or a service account is being abused.
Fix: Use file server logs to identify client IPs modifying content. Temporarily set shares read-only. Revoke tokens by disabling accounts if necessary, then rotate credentials.
4) “Restores are failing”
Symptom: Backup jobs exist, but restore is slow, incomplete, or corrupted.
Root cause: Backups were online and encrypted; backups were never tested; the backup server used domain admin creds exposed to endpoints.
Fix: Implement immutable backups and separate credentials. Run routine restore tests. Keep backup repositories off the same trust boundary as endpoints.
5) “We can’t disable SMBv1 because something might break”
Symptom: Endless exceptions and delays; SMBv1 stays on by default.
Root cause: Unknown dependencies and lack of ownership for legacy apps/devices.
Fix: Inventory SMBv1 usage, name owners, and force deadlines. If a device requires SMBv1, isolate it and put a plan to replace it on paper with a date.
6) “We can’t reboot servers for patches”
Symptom: Critical systems accumulate months of missing updates.
Root cause: Fragile services with no HA, no maintenance window, or change fear driven by past outages.
Fix: Build HA or rebuild architecture so reboots are routine. If you can’t reboot a Windows server, it’s not “critical,” it’s “a single point of failure you refuse to admit.”
Checklists / step-by-step plan
1) Emergency response checklist (first 4 hours)
- Contain: Isolate infected endpoints; block TCP/445 between user segments immediately.
- Scope: Identify file shares touched; identify scanning hosts; list all SMB listeners internally.
- Stabilize: Protect backups: unmount writable backup repos from production; ensure backup credentials are not used on endpoints.
- Verify patches: Prioritize MS17-010 coverage on SMB-exposed hosts and anything with broad reach.
- Communicate: Tell the business what will be disrupted (SMB restrictions, forced reboots) and why. Clarity beats optimism.
- Preserve evidence: Snapshots/log exports of affected shares; collect endpoint artifacts if you have forensic capability.
2) Hardening checklist (next 7 days)
- Disable SMBv1 broadly; track and isolate exceptions.
- Enforce SMB signing where feasible, especially on servers (see the check sketch after this list).
- Segment networks: block workstation-to-workstation SMB; allow only required client-to-server SMB.
- Patch compliance: define SLA (e.g., critical within days) and enforce via reporting + automation.
- Least privilege admin: tiered admin accounts; no domain admin use on endpoints.
- Backup hardening: immutable copies, offline/air-gapped option, and restore tests.
- Logging: centralize Windows event logs and file server audit logs for rapid client attribution.
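For the signing item above, measure rather than assume. nmap's smb2-security-mode script reports whether a host enables and requires SMB signing (assuming the script is present in your nmap build); “enabled but not required” still leaves room for relay and tampering tricks.
cr0x@server:~$ nmap -n -Pn -p 445 --script smb2-security-mode 10.20.55.10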
3) Engineering checklist (next 30–90 days)
- Rebuild patching pipeline as a production system: phased rings, canaries, rollback, metrics.
- Asset inventory accuracy: reconcile DHCP, AD, EDR, and network scans; unknown devices become incidents.
- Design for reboots: HA where needed; maintenance windows that are real; eliminate “no reboot” folklore.
- Service accounts: rotate regularly; restrict logon rights; monitor abnormal SMB activity and mass file changes.
- Tabletop exercises: run a wormable-ransomware drill, not a slide deck. Validate isolation and restore time.
FAQ
1) Was WannaCry mostly a phishing attack?
The defining feature was worm-like propagation via SMB exploiting MS17-010-class vulnerabilities. User interaction was not required for spread between vulnerable hosts.
2) If we patched MS17-010 today, are we done?
No. Patch, then restrict SMB exposure, disable SMBv1, and harden backups. Wormable outbreaks thrive on reachability and weak internal segmentation, not just missing KBs.
3) Why is SMBv1 still showing up in enterprises?
Because old devices and old apps stick around, and people are afraid to break them. The correct move is to isolate SMBv1 dependencies tightly and replace them on a deadline.
4) Should we pay the ransom?
Operationally, paying doesn’t guarantee recovery and can create more problems (including repeat targeting). Your best investment is restore capability: snapshots, immutable backups, and practiced recovery.
5) What’s the single fastest containment move for a WannaCry-like event?
Block TCP/445 east-west (especially workstation-to-workstation) and isolate scanning hosts. You’re cutting the worm’s legs off.
6) How do we find “the one unpatched box” quickly?
Start from network truth: scan for TCP/445 listeners, then verify patch/config state for those hosts. CMDBs are useful, but packet flow is reality.
7) Can backups get encrypted too?
Yes, if they’re mounted read-write, reachable with compromised credentials, or stored on shares accessible from infected endpoints. Immutable backups and separation of trust boundaries are non-negotiable.
8) Does disabling SMB break Windows file sharing?
Disabling SMB entirely would, but you typically disable SMBv1 (old, risky) and keep SMBv2/v3. You also restrict where SMB is allowed, instead of letting it roam.
9) Why do “patch windows” fail in practice?
Because patching is treated as a project, not an operational loop. Successful orgs run it like deployments: rings, monitoring, fast rollback, and ownership for exceptions.
Next steps you can do this week
If WannaCry taught the industry anything, it’s that “we meant to patch” is not an incident control. It’s a confession.
The goal isn’t to be perfect. The goal is to stop being surprised by something you can measure.
- Run an internal TCP/445 exposure scan and hand the list to the people who can actually change firewall policy.
- Pick a date to disable SMBv1 enterprise-wide. Start with a pilot ring, then move fast. Exceptions go into isolation VLANs with owner sign-off.
- Measure patch compliance from the outside (network scans + config checks), not just from the patch tool’s perspective.
- Fix backups like you mean it: immutable copy, offsite copy, and a restore drill that produces a real restore time, not a PowerPoint.
- Implement a containment switch: an emergency firewall rule set to restrict SMB east-west that you can enable in minutes, not hours.
Do those, and the next wormable ransomware wave will still be annoying. It just won’t be existential. That’s the bar. Clear it.