It’s 9:07 a.m. Sales says “customers are getting invoices from us that we didn’t send.” Support says “we’re on a blocklist.” The CEO forwards a screenshot with the subject line: “URGENT: password reset.” And the From: header is your domain.
This is the moment you find out whether your email posture is real or just vibes. Below is a playbook written for people who run production systems: you need containment, evidence, and a fix that holds up in daylight, without making deliverability worse.
Fast diagnosis playbook
You’re looking for the bottleneck: is mail being sent by your infrastructure, or is someone just spoofing your headers and riding the public internet? Don’t “review everything.” Check the three things that separate those worlds.
First: did any message traverse your outbound systems?
- Check your MTA logs (Postfix/Exim/Exchange message tracking). If there’s no evidence of the spam leaving your systems, you’re mostly dealing with spoofing and policy, not a compromised sender.
- Check your provider’s message trace (Microsoft 365 / Google Workspace). If it shows successful outbound sends, you likely have credential theft or API token abuse.
Second: do your recipients’ headers show authentication passing?
- If SPF=pass for your domain and DKIM=pass with your selector, the sender probably used your real infrastructure or credentials.
- If SPF/DKIM fails but the message still lands, your DMARC policy is too weak (or nonexistent), and recipients are making their own decisions.
Third: is your domain policy enforcing anything?
- Check the DMARC record: if it’s p=none, you’re observing, not protecting.
- Check alignment: SPF/DKIM can “pass” but still fail DMARC if not aligned with the From: domain.
Speed rule: in the first 30 minutes, optimize for stopping additional sends and preserving evidence. Reputation repair can wait until the fire is out.
What “phishing from your domain” usually means
There are three common realities behind the same scary screenshot.
1) Pure spoofing (your domain is forged, not hacked)
Email was designed in an era when “be nice” was a reasonable security model. Anyone can write From: ceo@yourdomain.com in a message. If your domain lacks strong DMARC enforcement, many receiving systems will deliver it anyway—especially to humans who have been trained by corporate life to click urgent things.
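To see how cheap forgery is, you can demonstrate it against a mailbox you own (never anyone else’s). A minimal sketch with swaks; the server name and addresses are placeholders:
cr0x@server:~$ swaks --to you@yourdomain.com --server mx.yourdomain.com --from anything@elsewhere-example.net --header "From: CEO <ceo@yourdomain.com>"
Whether that message lands is decided entirely by the receiver’s authentication checks and your published policy, which is the point of everything below.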
2) Account compromise (someone is sending as you)
A mailbox got phished, an OAuth consent screen got approved, an API token leaked, or MFA got bypassed via session-cookie theft. The attacker sends from your real tenant, so SPF and DKIM often pass. This is the painful one because it’s “authentic” spam.
3) Infrastructure misuse (open relay, exposed SMTP submission, abused marketing tool)
Less common, but catastrophic when true: your MTA is relaying for the world, a forgotten SMTP credential is still valid, or a third-party sending service is allowed in SPF and is being abused. The difference here is you’ll see it in your logs.
Good incident response is mostly classification. Once you know which of the three you’re in, the fix becomes boring. Boring is good.
Interesting facts and historical context
- SMTP shipped without authentication in the early 1980s; identity was assumed, not verified. Spoofing wasn’t a “bug,” it was outside the original threat model.
- SPF started as a DNS-based sender authorization idea in the early 2000s, largely to reduce forged-envelope spam.
- DKIM merged Yahoo’s DomainKeys with Cisco’s Identified Internet Mail; it cryptographically signs email content so recipients can verify domain control.
- DMARC (2012-ish) glued SPF and DKIM together with alignment rules and a policy knob (none/quarantine/reject), plus reporting.
- DMARC reporting created a new telemetry industry: suddenly you could see who was “sending as you,” including long-forgotten vendors.
- “Alignment” is the trap door: SPF can pass for one domain while the visible From: is another; DMARC is what cares about that mismatch.
- ARC (Authenticated Received Chain) exists because forwarding and mailing lists break authentication; ARC lets intermediaries attest to what they saw.
- Major mailbox providers slowly turned the screws: over time, bulk senders were pushed toward SPF/DKIM/DMARC enforcement and one-click unsubscribe expectations.
- Security teams learned the hard way: “we have SPF” is not a defense. SPF is one door. Attackers walk around doors.
Decision tree: spoofing vs compromise vs relay
If outbound logs show the messages leaving your systems
- Likely: compromised mailbox, compromised SMTP credential, abused vendor integration.
- Do now: block sender, revoke tokens, reset credentials, force sign-out, investigate access logs, preserve samples.
If outbound logs show nothing, but recipients see your From:
- Likely: spoofing.
- Do now: tighten DMARC, ensure DKIM is enabled everywhere you legitimately send, and verify SPF includes only what you control.
If logs show your MTA relayed to many domains you don’t do business with
- Likely: open relay or exposed submission/auth.
- Do now: lock down relay rules, restrict submission ports, rotate creds, block offending IPs, and check for malware on the host.
A quote worth keeping on the wall, paraphrasing John Allspaw: reliability comes from how systems behave under stress, not from how nice your diagrams look.
Practical tasks with commands (and how to decide)
These are real tasks you can run on a Linux MTA host or a jump box with DNS tools. Each includes: the command, what output means, and the decision you make. Adjust domains and hostnames accordingly.
Task 1 — Pull SPF record and sanity-check it
cr0x@server:~$ dig +short TXT yourdomain.com
"v=spf1 ip4:203.0.113.10 include:_spf.google.com include:sendgrid.net ~all"
What it means: SPF authorizes 203.0.113.10 plus Google plus SendGrid. The ~all is “softfail,” which is basically a polite suggestion.
Decision: inventory every include. If you can’t explain why it’s there, it’s not “temporary,” it’s “unknown risk.” Move toward -all only when you’re confident all legit senders are covered.
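Every include: is its own DNS subtree. Expand one level to see what you’re actually authorizing (a sketch; the output is illustrative and changes over time):
cr0x@server:~$ dig +short TXT _spf.google.com
"v=spf1 include:_netblocks.google.com include:_netblocks2.google.com include:_netblocks3.google.com ~all"
Repeat per include until you can name every network that may send as you.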
Task 2 — Check DMARC record and policy level
cr0x@server:~$ dig +short TXT _dmarc.yourdomain.com
"v=DMARC1; p=none; rua=mailto:dmarc@yourdomain.com; ruf=mailto:dmarc-forensics@yourdomain.com; adkim=r; aspf=r; pct=100"
What it means: You’re collecting reports but not enforcing. Relaxed alignment is set.
Decision: if you’re actively being spoofed, p=none is not a response plan. Move to p=quarantine quickly, then p=reject after validating legitimate sources sign/auth correctly.
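The quarantine step is a single TXT change. A sketch of the intermediate record, keeping your existing report address:
_dmarc.yourdomain.com. 300 IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com; adkim=r; aspf=r"
Lower the TTL before you change it so you can roll back quickly if legitimate mail starts getting quarantined.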
Task 3 — Verify DKIM selector exists (common failure: missing record)
cr0x@server:~$ dig +short TXT s1._domainkey.yourdomain.com
"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A..."
What it means: DKIM public key exists for selector s1.
Decision: if missing, DKIM will fail for mail signed with that selector. Fix DNS, then validate signing on the sender side.
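If the record exists but signatures still fail, test the published key from the signing host. A sketch assuming the opendkim-tools package is installed; “key not secure” in its output only means the zone isn’t DNSSEC-signed, not that the key is broken:
cr0x@server:~$ sudo opendkim-testkey -d yourdomain.com -s s1 -vvv
opendkim-testkey: checking key 's1._domainkey.yourdomain.com'
opendkim-testkey: key OK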
Task 4 — Inspect a phishing sample’s Authentication-Results
cr0x@server:~$ grep -E "Authentication-Results|From:|Return-Path:|Received-SPF|DKIM-Signature" -n sample.eml | sed -n '1,120p'
12:From: "Accounts Payable" <ap@yourdomain.com>
18:Return-Path: <bounce@evil-example.net>
31:Authentication-Results: mx.example.net;
32: spf=fail (mx.example.net: domain of bounce@evil-example.net does not designate 198.51.100.77 as permitted sender) smtp.mailfrom=evil-example.net;
33: dkim=none (message not signed);
34: dmarc=fail (p=none) header.from=yourdomain.com
What it means: Not signed, SPF fails for the envelope domain, DMARC fails but policy is none so it can still land.
Decision: treat as spoofing. Your job is to make failing DMARC mean “don’t deliver.” Tighten DMARC and ensure legit senders align.
Task 5 — Confirm your outbound IP reputation signals (quick: check if you’re listed)
cr0x@server:~$ dig +short 10.113.0.203.zen.spamhaus.org A
127.0.0.2
What it means: 203.0.113.10 (queried with its octets reversed, per DNSBL convention) appears listed; 127.0.0.2 is an example response code, and some lists return different loopback addresses for different reasons.
Decision: if you’re listed and you actually sent the mail, containment and cleanup matter more than arguing with the list. If you didn’t send it, you still need DMARC enforcement and evidence for delisting requests.
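IP reputation isn’t the whole story: Spamhaus also runs a domain blocklist (DBL), which matters even when your IPs are clean. A sketch; note that Spamhaus may refuse queries arriving via large public resolvers, so ask from a resolver you control:
cr0x@server:~$ dig +short yourdomain.com.dbl.spamhaus.org A
No answer means not listed; a 127.0.1.x response means listed, with the last octet indicating the reason.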
Task 6 — Search Postfix logs for suspicious volume spikes
cr0x@server:~$ sudo zgrep -h "postfix/smtp" /var/log/mail.log* | grep -oE "status=[a-z]+" | sort | uniq -c | sort -nr | head
4821 status=sent
118 status=deferred
22 status=bounced
What it means: Lots of successful sends. That’s consistent with real outbound activity, not mere spoofing.
Decision: pivot to “who sent it” (authenticated user, originating IP, submission logs). If volume is zero, stop chasing MTA ghosts and fix DMARC.
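To answer “when did this start?”, bucket successful sends by hour. A sketch assuming classic syslog timestamps; adjust the awk fields if your distro logs ISO-8601:
cr0x@server:~$ sudo zgrep -h "status=sent" /var/log/mail.log* | awk '{print $1, $2, substr($3,1,2)":00"}' | sort | uniq -c | sort -nr | head
   3790 Jan 4 08:00
    644 Jan 4 07:00
A sharp cliff in the histogram usually marks the moment of compromise.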
Task 7 — Identify authenticated SMTP users (submission abuse)
cr0x@server:~$ sudo zgrep -h "sasl_username=" /var/log/mail.log* | awk -F'sasl_username=' '{print $2}' | awk '{print $1}' | sort | uniq -c | sort -nr | head
913 payroll@yourdomain.com
112 notifications@yourdomain.com
10 helpdesk@yourdomain.com
What it means: A specific mailbox is dominating sends via authenticated SMTP.
Decision: immediately disable that account, rotate password, revoke sessions, and review login history. Then check if it’s a real business sender (payroll systems often are) or an attacker.
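Before pulling the trigger, see where those authenticated sends originated. A sketch that extracts the client= field from the same smtpd log lines:
cr0x@server:~$ sudo zgrep -h "sasl_username=payroll@yourdomain.com" /var/log/mail.log* | grep -oE "client=[^,]+" | sort | uniq -c | sort -nr | head
    901 client=unknown[198.51.100.88]
     12 client=vpn-gw.yourdomain.com[203.0.113.50]
One unfamiliar host dominating the count is the classic smoking gun.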
Task 8 — Trace one message queue ID to see destinations and timing
cr0x@server:~$ sudo postqueue -p | head
-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
A1B2C3D4E5* 2048 Mon Jan 4 08:55:12 ap@yourdomain.com
user1@example.net
user2@example.org
cr0x@server:~$ sudo postcat -q A1B2C3D4E5 | sed -n '1,80p'
*** ENVELOPE RECORDS ***
message_size: 2048
sender: ap@yourdomain.com
*** MESSAGE CONTENTS ***
From: "Accounts Payable" <ap@yourdomain.com>
Subject: Updated remittance advice
Date: Mon, 04 Jan 2026 08:55:11 +0000
What it means: This is actually in your queue. That’s not spoofing; your system is sending it.
Decision: contain by pausing outbound or holding the queue if necessary, then isolate the source account or app.
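Holding the queue is often the right first lever: nothing leaves, and nothing is destroyed as evidence. A sketch; -h ALL holds everything, -H ALL releases, and -d removes a specific queue ID once you’ve preserved a copy:
cr0x@server:~$ sudo postsuper -h ALL
postsuper: Placed on hold: 37 messages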
Task 9 — Stop Postfix outbound temporarily (containment lever)
cr0x@server:~$ sudo postfix stop
postfix/postfix-script: stopping the Postfix mail system
What it means: You halted the MTA. This is blunt but effective if your server is the cannon.
Decision: use only if you’ve confirmed your host is sending abusive mail and you can tolerate an outage. Otherwise, restrict at the edge (firewall, rate limits, account disable) to avoid collateral damage.
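A gentler alternative, if you can’t afford to stop accepting inbound mail: keep queueing but defer outbound delivery using Postfix’s defer_transports parameter. A sketch; undo by clearing the parameter and running postfix flush:
cr0x@server:~$ sudo postconf -e "defer_transports = smtp"
cr0x@server:~$ sudo postfix reload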
Task 10 — Check for open relay behavior (the classic “nope”)
cr0x@server:~$ swaks --to victim@example.net --from spoof@yourdomain.com --server mail.yourdomain.com
=== Trying mail.yourdomain.com:25...
=== Connected to mail.yourdomain.com.
<= 220 mail.yourdomain.com ESMTP Postfix
=> EHLO server
<= 250-mail.yourdomain.com
=> MAIL FROM:<spoof@yourdomain.com>
<= 250 2.1.0 Ok
=> RCPT TO:<victim@example.net>
<= 554 5.7.1 Relay access denied
=> QUIT
<= 221 2.0.0 Bye
What it means: Relay is denied. Good. If you saw 250 2.1.5 Ok for RCPT to an external domain without auth, you’d have an open relay.
Decision: if it’s open, fix relay restrictions immediately. If it’s closed, stop blaming the MTA for forged mail you didn’t send.
Task 11 — Validate TLS and submission ports exposure
cr0x@server:~$ sudo ss -lntp | grep -E ":(25|465|587)\s"
LISTEN 0 100 0.0.0.0:25 0.0.0.0:* users:(("master",pid=1123,fd=13))
LISTEN 0 100 0.0.0.0:587 0.0.0.0:* users:(("master",pid=1123,fd=15))
What it means: SMTP and submission are listening on all interfaces. That might be correct, or it might be “why are we doing this.”
Decision: if submission (587/465) is internet-exposed, ensure strong auth, rate limits, and MFA on accounts. If you don’t need external submission, restrict by firewall/VPN.
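If external submission isn’t needed, a firewall rule settles the debate. A sketch with iptables, where 203.0.113.0/24 stands in for your VPN egress range; mirror it in whatever tooling actually persists your rules:
cr0x@server:~$ sudo iptables -A INPUT -p tcp --dport 587 -s 203.0.113.0/24 -j ACCEPT
cr0x@server:~$ sudo iptables -A INPUT -p tcp --dport 587 -j DROP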
Task 12 — Pull recent successful logins for a suspected mailbox (generic IMAP auth logs)
cr0x@server:~$ sudo zgrep -h "imap-login: Login:" /var/log/mail.log* | grep "payroll@yourdomain.com" | tail -5
Jan 4 08:41:02 mail dovecot: imap-login: Login: user=<payroll@yourdomain.com>, method=PLAIN, rip=198.51.100.88, lip=203.0.113.10, mpid=22101, TLS
Jan 4 08:43:19 mail dovecot: imap-login: Login: user=<payroll@yourdomain.com>, method=PLAIN, rip=198.51.100.88, lip=203.0.113.10, mpid=22144, TLS
What it means: Repeated logins from a single remote IP. Could be a user, could be an attacker.
Decision: correlate with known user locations/VPN egress. If suspicious, revoke sessions and reset credentials. Don’t wait for HR to “confirm if it was them” while spam continues.
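To turn “could be a user, could be an attacker” into numbers, count distinct source IPs for the mailbox. A sketch against the same Dovecot logs:
cr0x@server:~$ sudo zgrep -h "imap-login: Login:" /var/log/mail.log* | grep "payroll@yourdomain.com" | grep -oE "rip=[0-9a-f.:]+" | sort | uniq -c | sort -nr
    214 rip=198.51.100.88
      9 rip=203.0.113.50
Two IPs where you expect a laptop and a phone is normal; twenty countries is not.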
Task 13 — Confirm DMARC aggregate reports are arriving (mailbox health)
cr0x@server:~$ sudo mailq | head
Mail queue is empty
What it means: Your local queue isn’t stuck, but it doesn’t prove DMARC reports are processed. This is a “sanity check,” not a victory lap.
Decision: if you depend on reports for visibility, make sure the receiving mailbox is monitored and has enough quota. DMARC telemetry that silently bounces is like a smoke alarm with dead batteries.
Task 14 — Quick DNS propagation check from multiple resolvers
cr0x@server:~$ dig @1.1.1.1 +short TXT _dmarc.yourdomain.com
"v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com; adkim=s; aspf=s; pct=100"
cr0x@server:~$ dig @8.8.8.8 +short TXT _dmarc.yourdomain.com
"v=DMARC1; p=none; rua=mailto:dmarc@yourdomain.com; adkim=r; aspf=r; pct=100"
What it means: Different resolvers see different records—propagation or split-brain DNS. Until this converges, your enforcement is inconsistent.
Decision: don’t declare “fixed” while the internet still sees the old policy. Investigate DNS hosting, TTLs, and whether multiple TXT records exist.
Task 15 — Check for multiple DMARC TXT records (breaks evaluation)
cr0x@server:~$ dig TXT _dmarc.yourdomain.com +noall +answer
_dmarc.yourdomain.com. 300 IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@yourdomain.com"
_dmarc.yourdomain.com. 300 IN TXT "v=DMARC1; p=none; rua=mailto:old@yourdomain.com"
What it means: Two DMARC records. Many receivers treat this as a permerror and may ignore DMARC entirely.
Decision: remove the extra record. One DMARC record per domain. Always.
Joke #1: Email authentication is the only security system where “pass” can still mean “fail,” and everyone nods like it’s normal.
Three corporate mini-stories from the trenches
Mini-story 1: The incident caused by a wrong assumption
A mid-size financial services company started receiving complaints: vendors got “updated bank details” emails from the company’s domain. The IT team did the sensible first move: search the outbound logs. Nothing. They declared, confidently, “we weren’t hacked.”
They were half right and fully wrong. The attackers were spoofing the visible From: domain and using a disposable sending infrastructure. The company had SPF and DKIM “set up,” but DMARC was p=none. The receiving mail systems were left to guess whether to deliver, and many guessed “yes” because the message looked businessy and didn’t trip their content filters.
The wrong assumption was subtle: “If we didn’t send it, we can’t stop it.” You can’t stop people from forging your name in SMTP headers, but you can make it expensive to deliver those forgeries. DMARC exists exactly for that.
Once they moved to p=quarantine and then p=reject, complaints dropped sharply. A few legitimate systems broke—an old invoicing vendor wasn’t signing DKIM and was sending from an IP not in SPF. That clean-up was overdue anyway.
The postmortem was blunt: the incident wasn’t “email compromise,” it was “policy weakness.” The fix wasn’t a SOC miracle; it was a DNS TXT record and a week of vendor inventory.
Mini-story 2: The optimization that backfired
A SaaS company decided to “simplify” email by routing all outbound mail through one third-party platform. One SPF include, one DKIM selector, one place to manage templates. The migration worked, deliverability improved, and everyone moved on.
Six months later, a phishing wave hit their customers with perfect authentication. SPF passed. DKIM passed. DMARC passed. The messages were sent through that third-party platform, using valid API keys that had leaked from a CI log artifact. It wasn’t the vendor’s fault, exactly. The company had treated API keys like configuration, not like production credentials.
The optimization mistake was centralization without containment. They removed diversity (multiple senders), but also removed blast-radius controls. Every email function—password resets, invoices, marketing, support—shared one “god key.” When it leaked, attackers gained the credibility of their strongest sender.
Recovery took days: rotate keys, rebuild CI hygiene, separate transactional vs marketing, implement per-service keys with scoped permissions, and add outbound rate limits and anomaly detection. The irony: the “simplification” turned an annoying incident into a reputation event.
Mini-story 3: The boring but correct practice that saved the day
A healthcare org had a routine that nobody liked: monthly DMARC report reviews and a vendor authorization register. It was tedious. It produced spreadsheets. It made people sigh in meetings.
Then an attacker tried to spoof them during a regional outage, using a “helpful” email asking patients to confirm personal details. The org’s DMARC was already at p=reject with strict alignment for the organizational domain. The spoofed messages failed DMARC and were rejected by many major providers before they hit inboxes.
They still got some reports—screenshots from fringe mail systems and a few forwards—but the scale never reached “incident bridge.” What they did do was check their DMARC aggregate data and confirm the attack pattern: lots of fails from new IP ranges, zero corresponding outbound activity.
The boring practice paid twice: strong enforcement reduced delivery, and the baseline telemetry made it easy to classify the event as spoofing, not compromise. That meant they didn’t waste a day rotating every password in the company “just in case.”
Containment: stop the bleeding first
Containment differs depending on the classification, but the mindset is the same: reduce attacker throughput quickly, without destroying evidence you’ll need to prove what happened.
If it’s mailbox compromise (most urgent)
- Disable the account or block sign-in. Don’t start with “ask the user.” Users are asleep or defensive; attackers are neither.
- Revoke sessions and tokens. Password resets without token revocation are theater.
- Remove malicious inbox rules (classic: auto-forwarding, delete on receipt, “mark as read”).
- Stop outbound from that identity via transport rules/policy, or temporarily block external recipients.
- Preserve samples: full headers, raw MIME, timestamps, and message IDs.
If it’s third-party sender abuse
- Rotate API keys, then invalidate old ones. If you can’t invalidate old keys, assume you can’t contain.
- Scope permissions (separate keys for transactional vs marketing vs support; per-app restriction).
- Rate limit outbound per key and per IP, and alert on anomalies.
If it’s spoofing
- Move DMARC from none → quarantine quickly if you can tolerate some false positives. Use pct=25 as a ramp if you must, but time-box it (a record sketch follows this list).
- Confirm DKIM is enabled for every legitimate source, especially CRM/marketing systems.
- Clean SPF includes. Remove stale vendors. SPF is an allowlist; treat it like firewall rules.
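A sketch of the ramped record if you take the pct route. Mind the semantics: with p=quarantine and pct=25, the other 75% of failing mail is treated as p=none, which is why the ramp must be time-boxed:
_dmarc.yourdomain.com. 300 IN TXT "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc@yourdomain.com"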
Joke #2: If your SPF record includes five vendors you’ve never heard of, congratulations—you’ve invented “outsourced open relay.”
Eradication and hardening: make it stay fixed
Containment buys time. Hardening prevents the same class of incident from recurring next quarter with a different sender name.
Lock down identities and access paths
- MFA everywhere, but also: block legacy auth protocols and app passwords if you can. Attackers love the old doors.
- Conditional access for risky sign-ins: impossible travel, new device, suspicious IP ranges.
- OAuth app governance: restrict consent, review enterprise apps, and monitor new grants.
Make authentication policy enforceable
- DMARC at p=reject for your organizational domain when you’re ready. If you can’t get to reject, you don’t know all your senders.
- Use strict alignment where possible (adkim=s, aspf=s) to reduce lookalike abuse. Relaxed is a stepping stone, not a destination (a record sketch follows this list).
- Separate domains: use a subdomain for bulk marketing and keep the core domain locked down. Marketing platforms get compromised; that’s not a prediction, it’s physics.
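A sketch of the end-state record for the organizational domain. The sp= tag covers subdomains that don’t publish their own policy; your marketing subdomain would publish its own _dmarc record:
_dmarc.yourdomain.com. 300 IN TXT "v=DMARC1; p=reject; sp=reject; adkim=s; aspf=s; rua=mailto:dmarc@yourdomain.com"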
Fix forwarding and mailing list edge cases intentionally
Forwarders often break SPF (because the forwarder’s IP isn’t authorized) and mailing lists often break DKIM (because they modify the message). ARC helps, but don’t assume every receiver uses it consistently.
- For critical workflows (invoices, password resets), avoid sending to lists/forwarders as the only delivery path.
- If you operate list servers, configure them to preserve authentication where possible and use ARC if appropriate.
Build detection that doesn’t depend on luck
- Alert on outbound spikes per user/app/API key.
- Track new forwarding rules and new inbox rules that delete or redirect.
- Log retention: ensure you have enough history to answer “when did this start?” without guessing.
Reputation and deliverability recovery
Once you’ve stopped the ongoing abuse, you have a second problem: the internet now believes you’re messy. Getting out of the penalty box is mostly about proving you’re stable.
Stabilize your sending patterns
- Reduce volume temporarily if you can (especially marketing). The first 48 hours after an incident is when filters are least forgiving.
- Send clean, expected traffic: transactional messages to engaged recipients are your “healthy heartbeat.”
- Remove dead addresses and stop repeated bounces; they’re reputation poison.
Fix the source, then address blocklists
Delisting without fixing cause is like mopping while the pipe is still spraying. Some lists will relist you fast, and the second time is usually slower to resolve.
Communicate like an adult
- Tell internal teams what happened and what changed: DMARC policy, account lockouts, vendor restrictions.
- Provide customers a short guidance note: “We do not request password resets by email link; verify domain and authentication indicators.” Keep it factual.
- Don’t overpromise. If you say “we were not hacked” when you haven’t verified, you’re betting trust on an assumption.
Checklists / step-by-step plan
0–30 minutes: classify and contain
- Collect at least one full sample (raw .eml with headers). Screenshot is not evidence.
- Check outbound traces/logs: did your systems send it?
- If yes: disable suspected accounts and revoke tokens; pause outbound if needed.
- If no: confirm DMARC policy; prepare to move to quarantine/reject.
- Announce an incident channel with one owner and one decision-maker. Chaos is the real malware.
Same day: fix the enabling conditions
- Inventory senders: M365/GWS, marketing platform, ticketing system, monitoring alerts, invoicing, HR.
- Enable DKIM signing per sender where possible.
- Constrain SPF: remove stale includes; ensure you’re under DNS lookup limits (a quick check follows this list).
- Set DMARC to quarantine (optionally ramp pct). Start collecting reports if you weren’t.
- Audit OAuth apps and consent grants; remove unknown or risky integrations.
- Hunt for forwarding rules and suspicious mailbox rules.
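For the SPF lookup budget (RFC 7208 caps lookup-causing mechanisms at 10 across the full expansion), here is a rough first-pass count of the top level only; includes nest, so repeat per include or use a dedicated validator:
cr0x@server:~$ dig +short TXT yourdomain.com | tr -d '"' | tr ' ' '\n' | grep -cE '^(include:|a$|a:|mx$|mx:|ptr|exists:)'
2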
Within a week: make it durable
- Move DMARC to reject for the org domain once alignment is clean.
- Split domains: transactional on a tightly controlled domain; marketing on a subdomain with clear policy.
- Implement rate limits and anomaly alerts on outbound sends.
- Write the runbook you wish you had. Then run a tabletop exercise and watch it break (kindly).
Common mistakes: symptom → root cause → fix
“We have SPF, so why is spoofing still happening?”
Symptom: recipients see your domain in From:, but Authentication-Results show SPF fail and DKIM none/fail, yet messages deliver.
Root cause: SPF only checks the envelope domain, and without DMARC enforcement receivers may still deliver forged From: mail.
Fix: implement DMARC with quarantine/reject, ensure DKIM signing, and align SPF/DKIM with the visible From: domain.
“DMARC is set, but receivers say permerror”
Symptom: DMARC fails with “permerror” in reports; spoofing continues.
Root cause: multiple DMARC TXT records, malformed tags, or oversized records.
Fix: ensure exactly one DMARC record, validate syntax, and keep it within DNS limits.
“DKIM passes sometimes, fails other times”
Symptom: inconsistent DKIM validation depending on receiver or route.
Root cause: multiple sending systems with different selectors/keys, DNS propagation issues, or intermediaries modifying content (lists/footers).
Fix: standardize DKIM per sender, verify selector DNS, stop message body modifications, and consider ARC for known intermediaries.
“We set DMARC to reject and now legitimate mail is missing”
Symptom: invoices or helpdesk replies disappear; users complain.
Root cause: legitimate third-party sender not included in SPF and not DKIM-signing with aligned domain.
Fix: onboard the vendor properly: authorize IPs or include, enforce DKIM signing with your domain (or use a subdomain), then retest before ramping enforcement.
“Our domain is being used, but our mail server is clean”
Symptom: no outbound logs; still lots of complaints.
Root cause: pure spoofing combined with weak DMARC (p=none) and/or poor receiver-side heuristics.
Fix: move to DMARC quarantine/reject, publish clear policy, and monitor reports for remaining unauthorized sources.
“We rotated passwords, but spam keeps sending”
Symptom: outbound spam persists after password reset.
Root cause: attacker is using existing sessions, OAuth refresh tokens, or API keys, not the password.
Fix: revoke sessions/tokens, rotate API keys, and disable suspicious enterprise apps or consent grants.
“We blocked port 25 and the problem stopped… and so did our business email”
Symptom: containment action caused widespread outbound failures.
Root cause: blunt network blocks without understanding which services rely on that path (transactional mail, alerts, password resets).
Fix: prefer targeted containment: block specific accounts/keys, rate limit, restrict relay rules; keep critical send paths available.
FAQ
1) How can attackers send “from my domain” if we weren’t breached?
Because the From: header is just text unless receivers enforce authentication. Spoofing is trivial; stopping delivery requires DMARC enforcement plus proper DKIM/SPF alignment.
2) Should we immediately set DMARC to p=reject?
If you’re confident you know all legitimate senders and they align, yes. If not, go to p=quarantine first and use DMARC reports to find stragglers. Time-box the quarantine phase.
3) What’s the difference between SPF “pass” and DMARC “pass”?
SPF “pass” means the sending IP is authorized for the envelope domain. DMARC “pass” means SPF or DKIM passed and aligned with the visible From: domain. Alignment is the point.
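Here is the mismatch in miniature: both mechanisms “pass,” yet DMARC fails because neither passing identity matches the visible From:. An illustrative header (domains are placeholders):
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=bounce.vendor-example.com;
 dkim=pass header.d=vendor-example.com;
 dmarc=fail header.from=yourdomain.com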
4) Why do forwarded emails break SPF/DKIM?
SPF breaks because the forwarding server’s IP isn’t authorized to send for the original domain. DKIM can break if the forwarder modifies the message. This is why DMARC can be painful for mailing lists and why ARC exists.
5) We use multiple vendors. How do we avoid SPF turning into a monster?
Prefer DKIM signing per vendor and keep SPF tight. Every include: is a trust relationship. Also watch SPF DNS lookup limits; too many includes can cause SPF permerror, which is a self-inflicted outage.
6) How do we tell if it’s an OAuth token compromise?
Look for new or unusual app consents, sign-ins without corresponding password resets, and continued sending after password change. Token revocation and app removal are the key containment moves.
7) Can we stop spoofing without DMARC?
You can’t stop forging; you can only influence receiver behavior. Without DMARC enforcement, you’re hoping every mailbox provider’s heuristics save you. Hope is not a control.
8) What should we send to customers during the incident?
Short, factual, and actionable: what to ignore, how to verify legitimate messages, what you will never ask for, and a support channel. Don’t speculate about root cause until verified.
9) Why did the phishing still land in some inboxes after we set p=reject?
Propagation delays, cached DNS, receivers not honoring DMARC consistently, or the mail being forwarded in ways that alter evaluation. Also check for multiple DMARC records and alignment misconfigurations.
Conclusion: next steps you can actually do
If your domain is being used for phishing, don’t start with a company-wide password reset and a prayer. Start with classification: did it traverse your systems or not? Then contain the correct layer: identity, vendor key, MTA relay rules, or policy enforcement.
Practical next steps for today:
- Get one raw message sample with full headers and store it somewhere sane.
- Check outbound evidence (provider trace + MTA logs). Decide: spoofing vs compromise vs relay.
- If spoofing: move DMARC to quarantine now, and plan reject after sender inventory.
- If compromise: disable the account, revoke tokens/sessions, remove malicious rules, rotate keys.
- Within a week: clean SPF, ensure DKIM everywhere, and push DMARC to reject for the org domain.
- Within a month: split domains, implement outbound anomaly alerts, and rehearse this runbook once.
The goal isn’t perfection. The goal is that next time, your incident bridge lasts 30 minutes, not 30 hours—and your domain stops being a costume anyone can wear.