DMARC Reports: How to Read Them and Catch Spoofing Early
Email spoofing is a weird kind of outage: nothing “breaks” inside your stack, but your customers lose trust anyway. Support gets screenshots of invoices you didn’t send. Sales complains prospects “replied” to an email nobody wrote. Security asks for “the DMARC thing,” and everyone pretends they totally know what that is.

DMARC reports are your early-warning system. They’re also a pile of XML that feels like it was designed by someone who hates weekends. Read them right and you’ll catch spoofing while it’s still cheap. Ignore them and you’ll discover the problem when your CFO forwards a phishing email asking why “Accounts Payable” suddenly uses a new bank.

What DMARC reports actually are (and what they are not)

DMARC reports are feedback loops sent by mailbox providers (and some intermediaries) to the domain owner who publishes a DMARC record. They tell you, at scale, how mail claiming to be from your domain is performing against SPF and DKIM, and whether it would pass DMARC policy.

Two report types matter:

  • Aggregate reports (a.k.a. RUA): periodic summaries, typically daily, showing counts of messages by source IP, authentication results, and policy disposition. They’re what you use for monitoring and trend detection.
  • Forensic/failure reports (a.k.a. RUF): per-message (or per-sample) failure details. Many providers limit or don’t send them due to privacy and abuse concerns. If you enable these without a plan, you’ll end up with a mailbox full of personal data you didn’t want to own.

What DMARC reports are not:

  • A guarantee that spoofing is impossible. DMARC helps prevent direct spoofing at compliant receivers; it does not stop lookalike domains, display-name fraud, or compromised vendors.
  • A perfect ground truth of “who sent mail.” Reports show what receivers observed. Forwarding, gateways, and mailing lists will distort reality unless you understand alignment and authentication breakage.
  • A replacement for logs. DMARC reports are a receiver-side view. You still need sending-side telemetry to close the loop.

Think of DMARC reports as SLO burn-rate alerts for brand trust. They won’t fix the fire, but they’ll tell you where the smoke is and how fast it’s spreading.

Interesting facts and a little history

  • DMARC emerged in the early 2010s as big mailbox providers tried to reduce phishing at internet scale without breaking the existing email ecosystem overnight.
  • SPF predates DMARC and was designed around the SMTP envelope domain (Return-Path), not the human-visible From header. That mismatch is why DMARC needed “alignment” in the first place.
  • DKIM was born from DomainKeys and Identified Internet Mail; it standardized cryptographic signing of message headers/body, but it didn’t define how receivers should treat failures consistently. DMARC added policy.
  • DMARC’s “pct” tag exists because the world runs on staged rollouts. It’s the canary deployment knob for email authentication.
  • “Relaxed” vs “strict” alignment is basically “subdomains allowed or not,” which sounds simple until your org has 47 marketing platforms and a legacy ERP that thinks it’s 2003.
  • Aggregate reports are XML by tradition, not because anyone enjoys it. The format is widely supported and compresses well, which mattered when reports were first scaled out.
  • Major providers throttle or omit forensic reports due to privacy risk; some never send full message samples anymore.
  • ARC (Authenticated Received Chain) exists largely because legitimate mail flows like mailing lists break SPF/DKIM; ARC provides a way for intermediaries to preserve authentication results downstream.

One paraphrased idea worth keeping close, attributed to Werner Vogels: Everything fails eventually; design so you can recover quickly and learn from the failure.

The DMARC basics that matter in production

DMARC is policy + alignment + reporting

A DMARC record lives at _dmarc.example.com in DNS. It defines:

  • Policy: what receivers should do when DMARC fails (p=none, p=quarantine, p=reject).
  • Alignment: whether SPF/DKIM “pass” must align with the From domain (relaxed/strict).
  • Reporting endpoints: where to send aggregate (rua) and/or forensic (ruf) reports.

Alignment is where most orgs bleed

DMARC doesn’t require both SPF and DKIM to pass. It requires either SPF or DKIM to pass and align with the From domain.

Common production reality:

  • SPF passes but does not align because the vendor uses their own bounce domain (Return-Path), not yours.
  • DKIM passes but does not align because the vendor signs with d=vendor.example instead of d=example.com.
  • Both fail after forwarding, because SPF breaks on IP changes and DKIM breaks on header/body modifications.
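
A quick way to catch all three identities at once: dump the relevant headers from a saved copy of a vendor-sent message. The file path and values below are illustrative, not from a real capture:

cr0x@server:~$ grep -iE '^(From|Return-Path|DKIM-Signature):' /tmp/vendor-test.eml
Return-Path: <bounce-123@mail.vendor.example>
From: Billing <billing@example.com>
DKIM-Signature: v=1; a=rsa-sha256; d=vendor.example; s=s1; h=from:to:subject; b=...

SPF will be evaluated against mail.vendor.example and DKIM against vendor.example; neither aligns with example.com, so DMARC fails even though both mechanisms can individually "pass."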

Policy is not a badge of honor

p=reject is not a trophy. It’s a production change with user-facing consequences. You earn it by instrumenting and stabilizing your senders first.

Short dry truth: if you can’t name your legitimate senders, you’re not ready to reject unknown ones.

Joke #1: Email authentication is like flossing: everyone agrees it’s good, and nobody does it until things start bleeding.

Aggregate vs forensic: choosing what you can handle

Aggregate (RUA): your daily pulse

Aggregate reports arrive as compressed attachments (often ZIP or GZIP) containing XML. Each report covers a time window and includes multiple “records,” each summarizing mail that shared:

  • Source IP (or sometimes a netblock abstraction)
  • From domain (header From)
  • Authentication results and alignment status
  • Counts and receiver’s applied disposition
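
Before you build a parser, it helps to see the shape of one record. xmllint can pull a single node out of a real report; the values below are representative, and whitespace depends on how the provider formatted the XML:

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | xmllint --xpath '(//record)[1]' -
<record>
  <row>
    <source_ip>192.0.2.10</source_ip>
    <count>1450</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>example.com</domain>
      <result>pass</result>
    </dkim>
    <spf>
      <domain>bounce.vendor.example</domain>
      <result>pass</result>
    </spf>
  </auth_results>
</record>

Note the trap in this sample: auth_results shows SPF "pass," but policy_evaluated shows spf fail, because the passing domain (bounce.vendor.example) doesn't align with header_from.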

They’re great for detecting:

  • New sources sending as your domain
  • Sudden DMARC failure spikes from a legitimate platform change
  • Receivers applying quarantine/reject unexpectedly due to misalignment

Forensic (RUF): high signal, high liability

Failure reports can contain message headers and sometimes portions of the message. That can include personal data, internal content, or sensitive identifiers. Treat it like log data with PII and apply retention and access control accordingly.

In many environments, the correct decision is: don’t enable RUF unless you have a specific use case and a safe handling process.

How to read an aggregate report like an SRE

Start with the question: “What changed?”

DMARC reporting is less about single events and more about deltas. You’re hunting for new IPs, new senders, and new failure modes. Treat it like traffic analysis:

  • Baseline: known sources, expected pass rates, expected receiver mix.
  • Change detection: new sources, sharp drops in alignment, or provider-specific anomalies.
  • Attribution: map IPs to vendors/MTAs and decide whether to authorize or block.
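
Change detection can start embarrassingly simple. A minimal sketch, assuming you maintain a sorted known_ips.txt of authorized sender IPs (the filename is mine, not a standard):

cr0x@server:~$ comm -13 known_ips.txt <(gunzip -c /srv/dmarc/inbox/report-001.xml.gz | \
grep -oE '<source_ip>[^<]+</source_ip>' | sed -E 's/<[^>]+>//g' | sort -u)
198.51.100.23

Anything comm prints is a source you haven't yet authorized or blocked. That's your triage queue.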

Understand the three domains in play

You will get confused unless you keep these separate:

  • Header From domain: what users see; what DMARC uses for policy.
  • SPF domain: the envelope domain (Return-Path / MAIL FROM) checked by SPF.
  • DKIM signing domain: the d= value in the DKIM-Signature.

DMARC asks: does SPF pass and align with Header From? Or does DKIM pass and align with Header From? If neither aligns, DMARC fails even if something technically “passed.”
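
You can watch that logic play out in the Authentication-Results header of a test message delivered to a mailbox you control. Formatting varies by provider; this output is illustrative:

cr0x@server:~$ grep -i -A3 '^Authentication-Results:' /tmp/vendor-test.eml
Authentication-Results: mx.provider.example;
 spf=pass smtp.mailfrom=mail.vendor.example;
 dkim=pass header.d=vendor.example;
 dmarc=fail header.from=example.com

Two passes, zero alignment, one DMARC failure.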

Disposition is receiver behavior, not your intent

DMARC reports include what the receiver did: none/quarantine/reject. That’s influenced by your published policy, but also by local policy and spam heuristics. If you publish p=none, receivers may still junk obvious garbage. Conversely, even with p=reject, some receivers may accept mail if they trust a forwarding chain or ARC.

Focus on “fail but should have passed” first

Spoofing exists, sure. But your fastest win is fixing your legitimate mail that fails DMARC. That failure drives deliverability issues, and it forces you to keep policy weak because you’re afraid of self-owning your business mail.

The operational order I recommend:

  1. Fix legitimate sources failing DMARC (alignment, DKIM keys, SPF flattening issues).
  2. Move to p=quarantine with pct ramp-up.
  3. Then and only then: p=reject (also ramped), with monitoring that makes noise when failures rise.
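
Mid-ramp, the published record might look like this (a sketch; your rua address and tag values will differ):

cr0x@server:~$ dig +short TXT _dmarc.example.com
"v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-agg@example.com; adkim=r; aspf=r"

With pct=25, receivers apply quarantine to roughly a quarter of failing mail and treat the rest as p=none, which keeps the blast radius small while you watch the reports.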

Practical tasks: commands, outputs, and decisions (12+)

Below are the kinds of tasks you can actually run on a Linux box or in a CI job that pulls DMARC reports from a mailbox/S3-like store. Each task includes: a command, sample output, what it means, and the decision you make from it.

Task 1 — Fetch your DMARC record and sanity-check the policy

cr0x@server:~$ dig +short TXT _dmarc.example.com
"v=DMARC1; p=none; rua=mailto:dmarc-agg@example.com; ruf=mailto:dmarc-fail@example.com; adkim=r; aspf=r; pct=100"

What it means: DMARC is enabled, monitoring-only (p=none), relaxed alignment, full sampling.

Decision: If you’re already stable and still on p=none, schedule a move to p=quarantine with a pct ramp. If you’re not stable, keep p=none but prioritize alignment fixes.

Task 2 — Verify the reporting mailbox exists and is receiving mail

cr0x@server:~$ ls -lh /var/mail/dmarc-agg
-rw------- 1 dmarc dmarc 218M Jan  3 08:40 /var/mail/dmarc-agg

What it means: Reports are arriving and accumulating.

Decision: If the mailbox is huge, you’re not processing reports. Automate ingestion and set retention. If it’s empty, your rua address may be wrong or blocked.

Task 3 — Identify attachment types (zip/gz) before parsing

cr0x@server:~$ file /srv/dmarc/inbox/*
/srv/dmarc/inbox/report-001.xml.gz: gzip compressed data, was "report.xml", last modified: Tue Jan  2 00:00:00 2024, from Unix
/srv/dmarc/inbox/report-002.zip: Zip archive data, at least v2.0 to extract, compression method=deflate

What it means: You’ve got mixed formats. Your pipeline needs to handle both.

Decision: Normalize to raw XML as the first step (decompress), then parse.

Task 4 — Decompress and validate XML is present

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | head -n 8
<?xml version="1.0" encoding="UTF-8"?>
<feedback>
  <report_metadata>
    <org_name>ExampleMailboxProvider</org_name>
    <email>noreply-dmarc@provider.example</email>
    <report_id>abc123</report_id>

What it means: It’s a DMARC aggregate report (feedback XML).

Decision: Proceed to parsing. If you see HTML or a bounce, your ingestion is pointed at the wrong mailbox or you’re receiving DSNs.

Task 5 — Extract key fields quickly using xmllint and XPath

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | xmllint --xpath 'string(/feedback/report_metadata/org_name)' -
ExampleMailboxProvider

What it means: This report came from a specific receiver. Receiver-specific quirks matter.

Decision: Tag ingested data by org_name so you can spot “only fails at Microsoft” vs “fails everywhere.”

Task 6 — Pull the date range to correlate with incidents or deploys

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | xmllint --xpath 'concat(/feedback/report_metadata/date_range/begin, " ", /feedback/report_metadata/date_range/end)' -
1704153600 1704240000

What it means: Epoch timestamps for the report window (begin/end). This is a daily-ish slice.

Decision: Convert to human time in your tooling and overlay with deploy timelines. If you can’t correlate, you’ll blame the wrong change.
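
With GNU date, the conversion is a one-liner (BSD date uses -r instead of -d):

cr0x@server:~$ date -u -d @1704153600 +%FT%TZ; date -u -d @1704240000 +%FT%TZ
2024-01-02T00:00:00Z
2024-01-03T00:00:00Z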

Task 7 — Summarize sources (IP + count) to find new senders

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | \
grep -oE '<source_ip>[^<]+</source_ip>|<count>[^<]+</count>' | \
sed -E 's/<[^>]+>//g' | paste - - | head
192.0.2.10	1450
198.51.100.23	12
203.0.113.77	980

What it means: Three sending IPs were observed, with message counts.

Decision: Any new IP with non-trivial volume gets investigated. Low-volume unknown IPs can still be targeted spearphishing, so don’t ignore them—just triage them differently.

Task 8 — Look specifically for DMARC failures at scale

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | xmllint --format - | \
awk '/<source_ip>/ {ip=$1; gsub(/<[^>]*>/, "", ip)}
     /<dkim>fail<\/dkim>/ {d=1}  /<spf>fail<\/spf>/ {s=1}
     /<\/record>/ {if (d && s) print ip; d=0; s=0}' | \
sort | uniq -c | sort -nr | head
3 198.51.100.23

What it means: A source IP appears in multiple “both fail” records. That’s either spoofing or a legitimate sender that’s completely misconfigured.

Decision: If the IP isn’t yours or a known vendor, treat as spoofing. If it is a known vendor, escalate to fix DKIM/SPF alignment immediately.

Task 9 — Check SPF record size and lookup count risk

cr0x@server:~$ dig +short TXT example.com | grep '"v=spf1'
"v=spf1 include:_spf.mailvendor.example include:_spf.crmvendor.example include:_spf.payroll.example -all"

What it means: SPF uses includes (normal). Risk is the hidden DNS lookup count.

Decision: If DMARC reports show SPF permerror/temperror, you may be exceeding the 10-lookup SPF limit. Start inventorying includes and reduce complexity; don’t “just add one more include.”
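
A rough first pass at the lookup budget, as a sketch: count the include and redirect mechanisms at the top level. This deliberately undercounts (a, mx, exists, and ptr also cost lookups, and includes nest), so treat the result as a floor:

cr0x@server:~$ dig +short TXT example.com | grep '"v=spf1' | grep -oE 'include:|redirect=' | wc -l
3

Three lookups before recursion; each include can hide several more. If the true total approaches 10, fix it before receivers do it for you with permerror.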

Task 10 — Query DKIM selectors that vendors commonly use

cr0x@server:~$ dig +short TXT s1._domainkey.example.com
"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A..."

What it means: Selector s1 exists. If your vendor signs with s1 and it aligns, you’re good.

Decision: If DMARC reports show DKIM fail, verify the selector exists, the key is correct, and the vendor is actually signing with your domain (d=example.com).

Task 11 — Confirm alignment settings (strict vs relaxed) and evaluate risk

cr0x@server:~$ dig +short TXT _dmarc.example.com | sed 's/"//g' | tr ';' '\n' | sed 's/^ *//'
v=DMARC1
p=none
rua=mailto:dmarc-agg@example.com
ruf=mailto:dmarc-fail@example.com
adkim=r
aspf=r
pct=100

What it means: Relaxed alignment for both SPF and DKIM. Subdomains can align depending on organizational domain rules.

Decision: Stay relaxed during rollout unless you have a disciplined subdomain strategy. Strict alignment is a sharp tool; don’t swing it near marketing.

Task 12 — Identify top “disposition=quarantine/reject” and treat as deliverability incident

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | \
grep -oE '<disposition>[^<]+</disposition>' | sed -E 's/<[^>]+>//g' | \
sort | uniq -c | sort -nr
1200 none
230 quarantine
12 reject

What it means: Some portion is being quarantined/rejected. That can be expected (spoofing) or a self-inflicted wound.

Decision: If quarantine/reject is non-trivial and correlates with legitimate traffic sources, stop and fix alignment before increasing policy enforcement.

Task 13 — Extract “header_from” domains to detect subdomain abuse

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | \
grep -oE '<header_from>[^<]+</header_from>' | sed -E 's/<[^>]+>//g' | \
sort | uniq -c | sort -nr
2100 example.com
45 billing.example.com
7 security-update.example.com

What it means: Mail claims to come from subdomains too. Some are legitimate (departmental mail), some are attacker creativity.

Decision: If you don’t intentionally send from a subdomain, add subdomain DMARC records or organizational policies, and lock down DNS delegation for those names.
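
Two quick checks for a suspicious subdomain: does it publish its own DMARC record, and does your apex record set an explicit subdomain policy via the sp= tag? A sketch:

cr0x@server:~$ dig +short TXT _dmarc.security-update.example.com
cr0x@server:~$ dig +short TXT _dmarc.example.com | grep -oE 'sp=[a-z]+' || echo "no explicit sp= tag"
no explicit sp= tag

With no subdomain record and no sp= tag, subdomains inherit the apex p= policy. If subdomains should never send mail, publishing sp=reject at the apex closes that door explicitly.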

Task 14 — Reverse-lookup suspicious IPs (quick triage, not proof)

cr0x@server:~$ dig +short -x 198.51.100.23
mailout-23.somehost.example.

What it means: PTR suggests a hosting provider. Attackers and vendors both live there.

Decision: Use this as a lead, then verify with vendor inventory and sending logs. Don’t whitelist based on PTR; that’s how you end up trusting a stranger with a nice hostname.

Task 15 — Spot SPF permerror/temperror in DMARC auth results

cr0x@server:~$ gunzip -c /srv/dmarc/inbox/report-001.xml.gz | xmllint --format - | \
awk '/<spf>$/ {in_spf=1}  /<\/spf>/ {in_spf=0}
     in_spf && /<result>(permerror|temperror)<\/result>/ {gsub(/^ +/, ""); print}' | \
sort | uniq -c
18 <result>permerror</result>

What it means: Receivers couldn’t evaluate SPF correctly (often too many DNS lookups, malformed records, or transient DNS issues).

Decision: Treat permerror as a configuration bug. Fix SPF complexity and DNS reliability before tightening DMARC policy.

Task 16 — Confirm your inbound pipeline is not dropping reports (basic mail log check)

cr0x@server:~$ sudo grep -E "dmarc-agg@example.com|dmarc-fail@example.com" /var/log/mail.log | tail -n 5
Jan  3 08:40:12 mx1 postfix/lmtp[22091]: 9B2C312345: to=<dmarc-agg@example.com>, relay=local, delay=0.22, delays=0.05/0/0/0.17, dsn=2.0.0, status=sent (delivered to mailbox)
Jan  3 08:40:14 mx1 postfix/lmtp[22092]: 1A7D912346: to=<dmarc-agg@example.com>, relay=local, delay=0.18, delays=0.04/0/0/0.14, dsn=2.0.0, status=sent (delivered to mailbox)

What it means: Reports are being delivered locally; your pipeline issue (if any) is post-delivery.

Decision: If you see bounces or rejects here, fix mail routing/size limits first. A DMARC program with missing reports is basically monitoring with the batteries removed.

Fast diagnosis playbook

When someone pings you with “DMARC is failing” or “We’re being spoofed,” you need a short path to clarity. Here’s the order that minimizes time-to-truth.

First: confirm whether it’s spoofing or self-harm

  1. Check DMARC policy and alignment settings in DNS. If you’re p=none, you’re mostly observing, not blocking.
  2. From aggregate reports, list top failing source IPs by count and disposition. High volume + both SPF and DKIM fail usually means spoofing attempts.
  3. Map failing IPs to known senders. If a failing IP is your ESP/CRM, you likely have an alignment problem, not an attacker problem.

Second: isolate the failing mechanism (SPF vs DKIM vs alignment)

  1. If DKIM passes but DMARC fails, it’s usually DKIM alignment (wrong d= domain) or the From domain is different than you think (subdomain vs apex).
  2. If SPF passes but DMARC fails, it’s usually SPF alignment (Return-Path domain differs) or you’re authenticating a different domain than the visible From.
  3. If both fail, suspect spoofing, forwarding breakage, or a completely unconfigured sender.

Third: decide the mitigation path

  • Legitimate sender failing: fix DKIM signing domain/selector and/or configure a custom bounce domain for SPF alignment; then recheck reports over 24–48 hours.
  • Unknown sender (likely spoofing): keep collecting evidence (receiver mix, IP ranges, volumes) and move policy towards quarantine/reject once legitimate traffic is clean.
  • Forwarding/mailing list breakage: evaluate ARC support and consider using DKIM as your primary aligned mechanism rather than SPF.

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

The company had a clean-looking setup: SPF, DKIM, and DMARC. The security team was proud; the IT team was tired; marketing was blissfully unaware of both.

They assumed their CRM platform “handled DKIM” because a checkbox said so. It did sign mail—just not with the company’s domain. The DKIM d= was the vendor’s domain. SPF also “passed,” but with the vendor’s Return-Path. So the mail authenticated, but it didn’t align with the visible From domain. DMARC failed quietly because policy was p=none.

Then they ramped to p=quarantine in a single change window because “we’ve been monitoring for months.” Monitoring, yes. Reading the reports, no. A chunk of legitimate customer lifecycle emails started landing in spam. Sales noticed first, as usual, in the form of pipeline mysteriously slowing down.

The fix was simple but annoying: configure the CRM to sign DKIM with d=example.com using a selector they controlled, and set a custom bounce domain so SPF would align too. The real lesson: “authentication passing” is not the same as “DMARC passing.” That assumption costs money.

Mini-story 2: The optimization that backfired

A different org had an SPF record that looked like a Christmas tree: includes for every tool that ever sent email on someone’s behalf. They’d been told SPF has a 10-DNS-lookup limit, so an engineer decided to “optimize” by flattening SPF into a long list of IPs generated nightly.

At first, it worked. Lookups dropped. Mail passed. Everyone moved on.

Then the backfire: one vendor rotated their sending IPs. The nightly job didn’t pick up the change due to a transient DNS failure during the build. For the next day, a meaningful slice of mail failed SPF at receivers. DKIM should have saved them, but that vendor didn’t sign with aligned DKIM. DMARC failed. Quarantine went up. Support tickets followed.

They reverted the flattening and built a safer approach: limit includes by consolidating vendors, keep SPF simple, and push for aligned DKIM everywhere so SPF is not a single point of deliverability failure. Optimization is fine. Optimization without failure modeling is performance art.

Mini-story 3: The boring but correct practice that saved the day

A regulated business ran DMARC like an ops program, not a checkbox. They had a weekly review: top sources, new sources, failures by receiver, and a short “authorize or block” decision list. They also maintained a living inventory of all systems allowed to send mail as the domain, with owner, purpose, and authentication mechanism.

One Monday, the report showed a new IP range sending a small number of messages with the company’s From domain, all failing both SPF and DKIM. Volume wasn’t huge; it wouldn’t have triggered the usual “email outage” alarms. But the new source stood out because the baseline was clean and stable.

They contacted their security team with evidence: receiver org, source IP, header_from, and counts. Security correlated it with customer reports of a targeted payroll phishing campaign. Because the DMARC policy was already p=reject, compliant receivers were refusing the mail. The campaign’s blast radius was limited to fringe systems and screenshots forwarded by vigilant employees.

No heroics. No war room. Just boring monitoring and a domain that had already earned enforcement. This is what “operational maturity” looks like: fewer exciting stories.

Joke #2: DMARC reports are the only emails you’ll ever receive that complain about other emails sending emails.

Common mistakes: symptom → root cause → fix

1) Symptom: DMARC fails, but SPF shows “pass” in reports

Root cause: SPF passes for the envelope domain, but it does not align with the Header From domain (vendor bounce domain).

Fix: Configure a custom Return-Path / bounce domain under your domain with the vendor, or rely on aligned DKIM instead. Verify alignment in reports before tightening policy.
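
Verification is usually one DNS query away. Many vendors have you delegate a bounce subdomain via CNAME; the names here are hypothetical:

cr0x@server:~$ dig +short TXT bounce.example.com
custbounce.mailvendor.example.
"v=spf1 ip4:198.51.100.0/24 ~all"

The Return-Path now lives under example.com, so an SPF pass aligns with the From domain under relaxed alignment.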

2) Symptom: DKIM passes, but DMARC still fails

Root cause: DKIM signing domain (d=) does not align with the Header From domain (vendor signs with their own domain).

Fix: Configure DKIM signing with your domain and publish the required selector TXT records. Confirm by sending a test mail and checking headers at a mailbox you control.

3) Symptom: Sudden spike in SPF permerror

Root cause: SPF exceeded DNS lookup limits due to too many includes/redirects, or malformed SPF syntax.

Fix: Reduce includes, remove dead vendors, and push DKIM alignment so SPF isn’t your only aligned method. Validate SPF with a linter in CI.
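
One cheap CI guard, as a sketch: publishing more than one v=spf1 record is an automatic permerror, so fail the pipeline whenever the count isn't exactly one:

cr0x@server:~$ test "$(dig +short TXT example.com | grep -c '"v=spf1')" -eq 1 && echo OK || echo "FAIL: zero or multiple SPF records"
OK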

4) Symptom: Everything works except when recipients are on a specific provider

Root cause: Receiver-specific enforcement quirks, local policy, or differences in how forwarding/ARC is treated.

Fix: Segment reports by receiver org_name. For the problematic receiver, verify whether forwarding is common and whether your mail is DKIM-aligned (more resilient than SPF).

5) Symptom: DMARC failures only for mailing list traffic

Root cause: Mailing lists modify messages (breaking DKIM) and resend from their own infrastructure (breaking SPF alignment).

Fix: Prefer DKIM signing that survives expected modifications where possible, and evaluate ARC-aware receivers. Operationally, accept some failures for lists or use subdomain strategies.

6) Symptom: You never receive DMARC reports

Root cause: Bad rua address, mailbox rejects large attachments, missing MX, or provider requires verification for external reporting addresses.

Fix: Ensure the mailbox exists, accepts large mail, and is routable. Consider using a dedicated subdomain and mailbox infrastructure for DMARC reporting.

7) Symptom: Reports show your own outbound IPs failing SPF

Root cause: SPF record doesn’t include the actual sending IPs (new MTA, new relay, or traffic moved to a cloud provider).

Fix: Update SPF includes/IPs, but also deploy aligned DKIM so an SPF miss doesn’t become a deliverability incident.

8) Symptom: Duplicate or inconsistent report counts across receivers

Root cause: Different receivers sample differently, aggregate differently, or attribute sources differently (especially with large shared infrastructures).

Fix: Use reports for trends and new-source detection, not exact accounting. Correlate with your sending logs for precise counts.

Checklists / step-by-step plan

Step-by-step: build a DMARC reporting workflow that doesn’t rot

  1. Create dedicated report addresses (aggregate and optional failure) and put them behind a mailbox system you control. Don’t dump reports into someone’s personal inbox.
  2. Automate ingestion: fetch attachments, decompress, store raw XML, parse to structured data (tables or JSON), and retain originals for a limited time (a minimal normalization sketch follows this list).
  3. Tag each record with receiver org_name, report_id, date range, header_from, source_ip, and policy outcomes.
  4. Maintain a sender inventory: every legitimate sender (MTAs, SaaS platforms, ticketing systems, monitoring tools), owner, and whether it uses aligned DKIM or aligned SPF.
  5. Baseline and alert on:
    • New source IPs
    • New header_from domains/subdomains
    • DMARC fail rate by receiver
    • Any increase in quarantine/reject dispositions
  6. Fix alignment first, then tighten policy. If you try to enforce before you’re aligned, your deliverability team will invent new words for you.
  7. Roll out enforcement gradually: p=quarantine; pct=10 then 25/50/100, then consider p=reject with another ramp.
  8. Set retention and access controls for reports, especially if you enable forensic reporting. Treat as sensitive telemetry.
  9. Run a weekly review with security + messaging owner: top fails, new sources, vendor changes, and pending authorization decisions.
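
For step 2, the normalization layer can be a few lines of shell before anything fancier (paths and layout are assumptions, not a standard):

cr0x@server:~$ mkdir -p /srv/dmarc/xml && for f in /srv/dmarc/inbox/*; do \
case "$f" in \
*.gz) gunzip -c "$f" > "/srv/dmarc/xml/$(basename "${f%.gz}")" ;; \
*.zip) unzip -oq "$f" -d /srv/dmarc/xml/ ;; \
*.xml) cp "$f" /srv/dmarc/xml/ ;; \
esac; done

Raw originals stay in the inbox for retention; everything downstream parses only the normalized XML directory.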

Operational checklist: when you add a new email-sending vendor

  • Confirm whether the vendor can sign DKIM with d=yourdomain. If not, think hard before letting them send as you.
  • Publish DKIM selectors they provide, with change control and ownership.
  • Configure custom bounce domain (for SPF alignment) if supported.
  • Send test messages to multiple mailbox providers and verify headers show aligned pass.
  • Watch DMARC reports for 48 hours after enablement; check for alignment failures and unexpected IPs.
  • Document the sender in the inventory: owner, purpose, expected From domains, selectors, and support escalation path.

Change management checklist: moving from p=none to enforcement

  • List all known legitimate sources from the last 30 days of aggregate reports.
  • For each source, ensure at least one aligned mechanism (DKIM preferred) is consistently passing.
  • Confirm SPF is under lookup limits and doesn’t rely on fragile flattening jobs.
  • Start p=quarantine with a low pct and alert on quarantine rate changes.
  • Only move to p=reject after quarantine is stable and legitimate failures are near-zero.

FAQ

1) Do I need both SPF and DKIM to pass for DMARC to pass?

No. DMARC passes if either SPF or DKIM passes and aligns with the Header From domain. In practice, aim for aligned DKIM everywhere, and treat SPF as useful but more fragile.

2) Why do DMARC reports show a “pass” for DKIM but “fail” for DMARC?

Because DKIM can pass cryptographically while failing alignment. If the DKIM signature uses d=vendor.com and your From is example.com, DMARC fails.

3) Are DMARC aggregate reports real-time?

No. They’re periodic summaries, usually daily. Use them for trends and detection of new sources, not minute-by-minute incident response.

4) Should we enable forensic (RUF) reports?

Only if you can handle the privacy and retention implications and you have a specific investigative need. Many providers won’t send them, and the ones that do may include sensitive content.

5) If we publish p=reject, will spoofing stop everywhere?

It will reduce direct spoofing at receivers that honor DMARC. Attackers can still use lookalike domains, compromised accounts, or channels outside email. DMARC is necessary, not sufficient.

6) Why does forwarding break authentication?

SPF is checked against the sending IP; forwarding changes the sending IP. DKIM can break if intermediaries modify the message (headers or body). ARC can help some receivers understand the original authentication state.

7) What’s the fastest way to detect a new spoofing campaign in aggregate reports?

Look for new source IPs sending your Header From domain with both SPF and DKIM failing, especially if counts rise quickly or appear across multiple receiver orgs.

8) We have many subdomains. How should DMARC handle them?

Decide whether subdomains are allowed to send mail and publish explicit DMARC records for them where needed. If you ignore subdomains, you’ll eventually discover someone else is “using” them.

9) Why are DMARC report counts different from our outbound logs?

Receivers aggregate differently, sample, and may exclude some mail. Your logs are sender-side truth; reports are receiver-side observations. Use both; don’t expect perfect equality.

10) Can I use DMARC reports to find every third-party vendor sending mail?

You can find many of them, especially those that send at volume and reach major receivers. But you still need procurement/process discipline because some mail never reaches the receivers who report to you.

Conclusion: next steps that don’t waste a quarter

DMARC reports are not a compliance artifact. They’re an ops signal. If you treat them like logs—ingest, parse, baseline, alert—you’ll catch spoofing and misconfigurations early, when the fix is an email to a vendor instead of a brand incident.

Do this next:

  1. Verify your DMARC record and make sure aggregate reports are actually arriving.
  2. Build a sender inventory and map every significant source IP in reports to an owner.
  3. Fix alignment for legitimate senders (DKIM aligned preferred), then ramp policy from none → quarantine → reject with pct stages.
  4. Set alerts for new sources and rising failure rates, and review weekly with security.

If you want email to be boring—and you do—make DMARC reporting boring too. Boring means you’re in control.
