DNS Wildcard Records: The Convenience That Quietly Breaks (and Fixes) Things


Wildcard DNS records are the duct tape of naming: fast, cheap, and suspiciously effective until you ship it to production and discover
your “helpful” *.example.com just ate your ACME challenge, rerouted your staging traffic into prod, and made every typo look
like a working endpoint.

If you’ve ever stared at a browser error thinking “that host shouldn’t even exist,” you’ve met the wildcard’s dark side. This is a field
guide for operators: what wildcards actually do, how they interact with other record types, where they blow up, and how to use them without
turning DNS into performance art.

What wildcard DNS really does (and what it doesn’t)

A wildcard record is a DNS record whose owner name begins with *., like *.example.com. It’s not “match anything
anywhere.” It’s “answer for names below this point in the tree, when no closer name exists to answer instead.”

Operational definition

If a resolver asks for foo.example.com and the zone has no record for foo.example.com, but it does have a wildcard
*.example.com, then the wildcard can answer.

If the zone has foo.example.com explicitly (even as an empty non-terminal in some cases), the wildcard does not override it.
Wildcards are “last resort,” not “override.”
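
A minimal zone sketch (names and addresses are illustrative) makes the matching rules concrete:

; illustrative fragment of the example.com zone
*.example.com.    300 IN A 203.0.113.10   ; wildcard
api.example.com.  300 IN A 203.0.113.20   ; explicit record

; foo.example.com    -> 203.0.113.10 (no explicit name; wildcard synthesizes)
; api.example.com    -> 203.0.113.20 (explicit record wins)
; a.b.example.com    -> 203.0.113.10 (b.example.com doesn't exist, so the
;                       wildcard matches through the missing label)
; a.api.example.com  -> NXDOMAIN (api.example.com exists and blocks wildcard
;                       expansion beneath it)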

What it will not do

  • It will not match through names that exist. *.example.com can match a.b.example.com, but only when b.example.com
    does not exist in the zone at all (RFC 4592’s closest-encloser rule; see the sketch above). The moment anything exists at
    b.example.com, the wildcard stops matching beneath it, and you’re in the business of managing *.b.example.com
    and an entire taxonomy of regret.
  • It does not create a DNS delegation. If you delegate dev.example.com to another zone, the wildcard in example.com
    doesn’t apply inside the delegated child zone.
  • It will not “fix” missing services. It just makes nonexistent names resolve. Your TCP stack will still deliver the bad news.

The convenience is real

Wildcards are excellent for:

  • Ephemeral environments with unknown names ahead of time (preview apps, per-branch deployments).
  • Multi-tenant routing where the application reads the Host header and selects a tenant.
  • CDN front doors where the origin doesn’t care about specific hostnames.
  • Catch-all UX (“if you mistype, we still land you on something helpful”).

But the trade is subtle: wildcards make DNS stop being a source of truth about what exists. That’s fine if you have other guardrails.
It’s a long night if you don’t.

Interesting facts and historical context

A few short, concrete points that matter in practice:

  1. Wildcard matching rules were standardized early and tightened over time; RFC 1034 described wildcards conceptually, and RFC 4592 later clarified
    edge cases around closest enclosers, empty non-terminals, and negative answers.
  2. DNS wildcards predate the web. They were useful for generic naming and mail routing long before “subdomains for everything” became the norm.
  3. “NXDOMAIN” is a strong signal in DNS: it means the name does not exist. Wildcards intentionally reduce the number of NXDOMAINs you emit,
    which changes caching behavior and error detection.
  4. Negative caching (RFC 2308) means an NXDOMAIN can be cached. Wildcards reduce NXDOMAINs, so typos and scans can generate lots of “valid”
    answers instead—sometimes stressing your edge.
  5. DNSSEC adds a twist: proving non-existence uses NSEC/NSEC3 records. Wildcards interact with “closest encloser” proofs; misconfigurations
    can create validation failures that only appear for “random” names.
  6. Certificate automation changed the stakes. ACME DNS-01 challenges made wildcard certificates easy, which encouraged wildcard DNS and wildcard
    certs to be deployed together—sometimes too casually.
  7. Some providers implement “alias”/“flattening” at the zone apex as a non-standard extension. Mixing those with wildcards can produce surprises
    because they’re not classic DNS records.
  8. CDN “proxied” modes often terminate TLS and respond to HTTP for any hostname you point at them. Combine that with wildcards and you can
    accidentally “serve” hostnames you never intended to publish.

Why people use wildcards (and when they’re right)

In production, most wildcard decisions are made for three reasons: speed, scale, and laziness. Only two of those are defensible, and laziness
occasionally masquerades as speed.

Legitimate use cases

The solid cases look like this:

  • Preview environments: pr-1847.preview.example.com appears and disappears on demand. DNS shouldn’t need a ticket.
  • Tenant routing: tenant-a.app.example.com, tenant-b.app.example.com all hit the same ingress, and your
    app routes by hostname.
  • Service mesh / gateway abstraction: Everything goes through an API gateway that knows what to do. DNS is just the entry sign.
  • Developer experience: Humans type weird things. Wildcards reduce friction when the system is designed to accept it safely.

Bad motivations dressed as engineering

  • “We don’t want to manage DNS records.” Sure. But then you just moved the complexity into debugging, monitoring, and security review.
  • “It’s easier than tracking inventory.” DNS isn’t an inventory system. If you use it like one, it will lie to you in the most polite way possible.
  • “We can point everything at the same load balancer and sort it out later.” You will not sort it out later. Your incident will sort it out for you.

One dry truth: if you can’t explain what your wildcard is supposed to match, you’re not ready to run it.

Joke #1: A wildcard record is like a “misc” folder—useful until it becomes your whole file system.

Where wildcards quietly break things

Wildcards fail in predictable ways. They rarely break the happy path. They sabotage the edge cases—the ones you only hit during incidents,
migrations, certificate renewals, or security reviews. In other words: the times you least want surprises.

1) ACME challenges and certificate automation

If you’re issuing certificates via ACME (Let’s Encrypt or internal ACME), there are different challenge types:
HTTP-01, TLS-ALPN-01, DNS-01. Wildcards on DNS and wildcards on certificates get conflated, and that’s where people start making assumptions.

Common break: you add *.example.com to point at a new ingress, but _acme-challenge.example.com or
_acme-challenge.foo.example.com needs special TXT records. If you also have automation that creates TXT records dynamically,
your wildcard may not be the direct cause, but it changes resolution paths and can expose gaps in delegation or split-horizon DNS.
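
One pattern that keeps challenge records out of the wildcard’s blast radius (the delegate zone name here is hypothetical): CNAME the
challenge name into a dedicated zone your ACME automation owns. Many ACME clients and CAs follow CNAMEs during DNS-01 validation:

; in the example.com zone
_acme-challenge.foo.example.com.  300 IN CNAME foo.acme-delegate.example.net.
; validation TXT records are then created and deleted only inside
; acme-delegate.example.net, by automation with narrowly scoped credentials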

2) Email and “surprise mail domains”

Wildcards do not directly control MX records for the zone apex (mail for example.com), but they can create the illusion that
mail domains exist for every subdomain. Some mail systems treat subdomains as separate identities and will query for MX at
sub.example.com.

If you have *.example.com A 203.0.113.10 and no explicit MX for sub.example.com, SMTP’s implicit MX rule
(RFC 5321) applies: with no MX at the name, senders fall back to its A/AAAA record. Now your web IP is your mail target. That’s not “broken DNS.”
That’s DNS doing exactly what you asked.
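
If a subdomain should never receive mail, say so explicitly with a null MX (RFC 7505) instead of letting implicit MX find your wildcard A.
An illustrative fragment:

; explicit "no mail here" for a name the wildcard would otherwise cover
sub.example.com.  3600 IN MX 0 .
; side effect: sub.example.com now exists, so the wildcard no longer answers
; for it at all; re-add the A explicitly if the name should still serve web
sub.example.com.  300  IN A  203.0.113.10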

3) Traffic shadowing and environment bleed

A wildcard pointing to production can quietly capture staging names. If your staging zone is split-horizon (internal vs external) and someone
forgets to add a record internally, the resolver may fall back to public DNS. Then your staging app calls production services because DNS says
“sure, that exists.”

4) Typos resolve and monitoring lies

With wildcards, typos don’t NXDOMAIN. They resolve. That changes user experience (sometimes good) and monitoring (often bad). Health checks that
rely on “does DNS resolve?” become meaningless. Security scanners that look for dangling DNS also behave differently, because nothing dangles if
everything resolves to something.
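
If a check must prove a name truly exists, make it assert the status code, not just “got an answer back.” A minimal sketch (the hostname is a
deliberately wrong one, and the output is illustrative):

cr0x@server:~$ dig typo-of-real-host.example.com A +noall +comments | grep 'status:'
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4242

NOERROR here means the wildcard answered for a typo; without a wildcard you’d see NXDOMAIN, which is exactly the signal your monitoring lost.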

5) CDN and proxy behaviors

If your wildcard points to a CDN provider that serves a default page for unknown hostnames, you may accidentally “publish” hostnames you never
intended. That can confuse customers, leak internal naming patterns, and create messy certificate/SNI behaviors if the CDN doesn’t have a cert
for that hostname.

6) Delegations and partial zones

People assume *.example.com covers everything under example.com, then delegate dev.example.com to
a different set of nameservers. Now foo.dev.example.com is answered by the child zone, not the parent wildcard. If the child zone
is empty or misconfigured, you’ll get NXDOMAIN for “some” hosts, and the wildcard will still happily answer for others. Diagnosis becomes a
geography lesson.

Matching, precedence, and “why is it resolving?”

DNS resolution is a chain of authority, cache, and rules. Wildcards slot into that chain in a way that feels intuitive until you’re on-call.
Then it feels like the zone file is gaslighting you.

Precedence: explicit beats wildcard

If api.example.com exists explicitly, it wins. Even if it’s a different type. That sounds obvious—until you realize your automation
might be creating explicit records you forgot about.

“Closest encloser” and negative answers

When a name doesn’t exist, authoritative servers don’t just shrug. They prove non-existence (especially with DNSSEC) based on the closest existing
ancestor and the wildcard rules. This is where you get fun phenomena like:

  • Some random names resolve via wildcard, others return NXDOMAIN because a closer name exists but blocks wildcard expansion.
  • Resolvers cache NXDOMAIN for a period (negative caching TTL). If you later create explicit records, some clients will keep failing until the negative cache expires.
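
To see what a client’s recursive resolver is actually holding, ask it with recursion disabled; many resolvers then answer from cache only.
The resolver address here is hypothetical:

cr0x@server:~$ dig @10.0.0.53 newhost.example.com A +norecurse +noall +answer +authority +comments

A cached negative answer shows up as status NXDOMAIN with the zone’s SOA in the authority section; the SOA’s decremented TTL is roughly how long
those clients will keep failing.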

Wildcards don’t remove the need for inventory

A wildcard is not discovery. It’s a default answer. If you need to know what services exist, track them somewhere else:
service registry, IaC state, gateway config, CMDB (if you’re into that kind of pain).

A paraphrased reliability idea, often attributed to John Allspaw: failures are rarely caused by a single mistake; they emerge from normal work
in complex systems.

That’s exactly how wildcard incidents happen. Nobody “broke DNS.” Everybody made reasonable decisions that intersected.

Three corporate-world mini-stories

Mini-story #1: An incident caused by a wrong assumption

A mid-sized SaaS company ran *.app.example.com to route tenants through a shared ingress. It worked well. New tenants didn’t need
DNS changes, only application configuration. Someone suggested extending the same pattern for internal tools: *.tools.example.com.

The wrong assumption was subtle: “Wildcards are safe because explicit records override them.” True, but incomplete. Their internal DNS used split-horizon:
public zone for external, private zone for internal. The private zone had some explicit tool records but not all, and the public zone had the wildcard.

During a datacenter maintenance window, internal resolvers briefly failed over to public recursive resolvers (a misconfigured resolver forwarder).
Suddenly, internal hosts that were missing private records started resolving via the public wildcard to the external ingress IP.
Requests that should have stayed on internal networks went out to the internet, hit WAF rules, and returned 403s. The incident looked like a network outage,
then an application outage, then a “why is staging calling prod?” moment.

The fix wasn’t “remove the wildcard.” The fix was to make split-horizon robust: pin internal resolvers, add monitoring for resolver path changes, and
enforce “no wildcard answers for tools” via explicit NXDOMAIN-style blocks (more on that later) and explicit internal records.

The lasting lesson: wildcards aren’t just a DNS feature; they’re a policy. If you don’t know which resolvers your clients actually use during failover,
your policy is fantasy.

Mini-story #2: An optimization that backfired

Another company ran thousands of preview environments. DNS management was a bottleneck, so they introduced a wildcard:
*.preview.example.com pointing to a single Anycasted edge that routed requests based on hostname. Great simplification. Their IaC
stopped churning DNS providers with record updates.

Then they optimized caching: they bumped the wildcard’s TTL to an hour to reduce DNS query load. The effect was immediate and bad.
Preview environments were deprovisioned frequently, and the routing layer updated its map in seconds. But clients—especially corporate proxies and
mobile networks—kept the DNS answer for an hour. So “deleted” environments stayed reachable (sometimes hitting the wrong tenant if hostnames were reused).

Worse, their incident response depended on quickly blackholing certain preview hostnames during abuse. With a high TTL wildcard, they could block at the
edge, but DNS remained stable, and some upstream components kept hammering the same IP with bad hostnames. Logs were noisy, rate limits triggered, and the
whole “preview system” looked flaky.

They rolled TTL back down, but not to the original tiny value. They segmented: *.preview.example.com stayed low TTL, while stable
app hostnames got longer TTLs. And they added a host-based denylist at the edge that returned a fast 451/404 with a short cache header, so “dead hostnames”
died quickly even when DNS didn’t.

Mini-story #3: A boring but correct practice that saved the day

A financial services team had a strict DNS change process. It wasn’t glamorous. Every wildcard had an owner, a diagram, and a test plan. They also had
a policy: every wildcard zone must have a “canary” hostname with an explicit record that should never be answered by the wildcard.

One night, they migrated DNS hosting providers. The zone transferred cleanly, but the new provider interpreted an apex ALIAS plus wildcard CNAME in a way
that differed from the old one. Most names resolved fine, but a handful of “nonexistent” names started resolving where they should have been NXDOMAIN.
This was the kind of bug that usually takes days of user reports.

Their monitoring caught it within minutes: the canary hostname, which must return NXDOMAIN, began returning an A record. That triggered a pager with a
very specific message: “Wildcard scope changed.” The on-call immediately compared authoritative answers between old and new nameservers, found the wildcard
expansion difference, and fixed the zone before morning traffic.

No heroics. Just boring correctness: canaries, explicit expectations, and tests that validated the absence of records—not just the presence.

Fast diagnosis playbook

When wildcard-related DNS issues hit, the fastest path is to stop guessing and establish three facts:
(1) which nameserver is authoritative, (2) what it actually returns, and (3) what the client is caching.

First: confirm whether you’re seeing authoritative truth or cache

  • Query the authoritative nameservers directly. If you can’t, you’re debugging a rumor.
  • Query from a “clean” resolver, or use +norecurse against the suspect resolver to see what its cache already holds.

Second: determine matching path and delegation boundaries

  • Walk up the labels: does dev.example.com delegate elsewhere? Are you crossing zones?
  • Check if an explicit record exists that blocks the wildcard (including unexpected records created by automation).

Third: identify what’s broken (DNS vs routing vs TLS)

  • If DNS returns an IP but HTTP fails, you may have a wildcard DNS “working” but no corresponding virtual host, SNI cert, or ingress route.
  • If some clients work and others don’t, suspect TTL and negative caching.
  • If internal clients behave differently than external, suspect split-horizon and resolver failover.

Joke #2: DNS is the only system where “it resolved” can mean “it’s wrong faster.”

Practical tasks: commands, outputs, and decisions (12+)

These are runnable tasks you can do during design, change review, or incident response. Each includes what the output means and what decision to make.
Replace example.com and nameserver IPs as needed.

Task 1: See what a wildcard returns from your default resolver

cr0x@server:~$ dig +short totallymadeup-12345.example.com A
203.0.113.10

What it means: The name resolves to an A record, likely due to a wildcard (or a catch-all in a provider).

Decision: If you expect NXDOMAIN for unknown hosts, you must remove/limit the wildcard or block with explicit records.

Task 2: Prove whether the answer is coming from a wildcard (query authoritative)

cr0x@server:~$ dig @ns1.example.net totallymadeup-12345.example.com A +noall +answer +authority
totallymadeup-12345.example.com. 300 IN A 203.0.113.10
example.com. 300 IN NS ns1.example.net.
example.com. 300 IN NS ns2.example.net.

What it means: The authoritative server returns an A record for a name that likely doesn’t exist explicitly.

Decision: Inspect the zone for a wildcard record at the closest matching level (often *.example.com).

Task 3: Check for explicit records that should override wildcard

cr0x@server:~$ dig @ns1.example.net api.example.com ANY +noall +answer
api.example.com. 300 IN A 203.0.113.20

What it means: There is an explicit A record for api.example.com; the wildcard should not affect it. Note that many servers answer ANY queries minimally or refuse them (RFC 8482), so if ANY comes back empty, query specific types (A, AAAA, CNAME, TXT) instead.

Decision: If api.example.com resolves incorrectly for some clients, suspect cache or split-horizon, not wildcard precedence.

Task 4: Determine delegation boundaries with +trace

cr0x@server:~$ dig +trace foo.dev.example.com A
; <<>> DiG 9.18.24 <<>> +trace foo.dev.example.com A
.                       518400  IN      NS      a.root-servers.net.
...
example.com.            172800  IN      NS      ns1.example.net.
dev.example.com.        3600    IN      NS      ns-dev1.example.net.
foo.dev.example.com.    300     IN      A       198.51.100.44

What it means: dev.example.com is delegated to separate nameservers; parent wildcards won’t apply inside.

Decision: Fix records in the child zone if behavior differs under that subtree; don’t chase the parent wildcard.

Task 5: Check how the wildcard behaves across multiple labels

cr0x@server:~$ dig +short a.b.example.com A
203.0.113.10

What it means: If this returns the wildcard’s target, b.example.com probably doesn’t exist in the zone, so *.example.com matches straight through the missing label (the closest-encloser rule). If it returns NXDOMAIN instead, something exists at b.example.com (an explicit record, an empty non-terminal, or a delegation) that blocks expansion.

Decision: Map the subtree. Adding or removing any record at b.example.com silently changes what resolves beneath it; if you need deterministic behavior there, create *.b.example.com or explicit records.

Task 6: Inspect TXT for ACME DNS-01 challenges

cr0x@server:~$ dig _acme-challenge.example.com TXT +noall +answer
_acme-challenge.example.com. 60 IN TXT "mM9vXkZlKfZy0Q2nq3...redacted..."

What it means: A TXT record exists for DNS-01 validation.

Decision: If issuance fails, verify you’re querying the correct authoritative zone and that TTL aligns with your ACME client timing.

Task 7: Check CNAME chains that can confuse validation or routing

cr0x@server:~$ dig www.example.com CNAME +noall +answer
www.example.com. 300 IN CNAME example.com.

What it means: www is an alias; wildcard at *.example.com won’t help if you intended different behavior for www.

Decision: Decide whether you want explicit records per important hostname (recommended) instead of relying on wildcard defaults.

Task 8: Verify whether a name should be NXDOMAIN (and whether it is)

cr0x@server:~$ dig nosuchhost.example.com A +noall +answer +comments
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51142
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

nosuchhost.example.com. 300 IN A 203.0.113.10

What it means: Status is NOERROR with an A record; this is not NXDOMAIN. Wildcard (or explicit) is answering.

Decision: If unknown names should fail, redesign: remove wildcard, limit scope, or implement explicit deny patterns via zone structure.

Task 9: Check negative caching TTL (SOA) for NXDOMAIN behavior

cr0x@server:~$ dig example.com SOA +noall +answer
example.com. 300 IN SOA ns1.example.net. hostmaster.example.com. 2025123101 7200 3600 1209600 300

What it means: The last SOA field (here 300) is commonly used as the negative caching TTL (depending on server/config).

Decision: If you rely on fast propagation of new records after NXDOMAIN, keep negative TTL modest. If you rely on stability, raise it carefully.

Task 10: Identify split-horizon differences (internal vs external)

cr0x@server:~$ dig @10.0.0.53 api.example.com A +short
10.20.30.40
cr0x@server:~$ dig @1.1.1.1 api.example.com A +short
203.0.113.20

What it means: Internal DNS returns private IP, public resolver returns public IP. That’s intentional split-horizon.

Decision: If clients sometimes get the public answer internally, fix resolver configuration and egress DNS rules; don’t “paper over” with more wildcards.

Task 11: Validate SNI/certificate mismatch caused by wildcard DNS to shared edge

cr0x@server:~$ openssl s_client -connect 203.0.113.10:443 -servername typoedhost.example.com -brief
depth=0 CN = default.edge.example.net
verify error:num=62:Hostname mismatch
CONNECTION ESTABLISHED
Protocol version: TLSv1.3
Ciphersuite: TLS_AES_256_GCM_SHA384

What it means: DNS resolves, TCP connects, TLS negotiates, but the certificate doesn’t match the hostname. Classic wildcard DNS to shared edge without cert coverage.

Decision: Either stop resolving unknown hostnames, provision certs appropriately, or ensure the edge rejects unknown SNI early with a clear response.

Task 12: Detect unexpected wildcard expansion by comparing two authoritative servers

cr0x@server:~$ dig @ns1.example.net weirdname.example.com A +short
203.0.113.10
cr0x@server:~$ dig @ns2.example.net weirdname.example.com A +short
198.51.100.77

What it means: Authoritative servers disagree. That’s either propagation lag, split brain, or different zone contents. With wildcards, divergence can hide until a random name gets queried.

Decision: Freeze changes, confirm zone serials, and ensure both authorities serve identical records before continuing rollout.

Task 13: Verify zone serial and propagation (SOA comparison)

cr0x@server:~$ dig @ns1.example.net example.com SOA +noall +answer
example.com. 300 IN SOA ns1.example.net. hostmaster.example.com. 2025123101 7200 3600 1209600 300
cr0x@server:~$ dig @ns2.example.net example.com SOA +noall +answer
example.com. 300 IN SOA ns1.example.net. hostmaster.example.com. 2025123100 7200 3600 1209600 300

What it means: Serial mismatch suggests replication delay or failed zone transfer/publish.

Decision: Fix propagation before you debug higher layers. A wildcard change on one server is indistinguishable from chaos to clients.

Task 14: Confirm record type conflict (CNAME vs other records)

cr0x@server:~$ dig @ns1.example.net foo.example.com CNAME +noall +answer
foo.example.com. 300 IN CNAME edge.example.net.
cr0x@server:~$ dig @ns1.example.net foo.example.com A +noall +answer
foo.example.com. 300 IN A 203.0.113.55

What it means: If both CNAME and A appear for the same name in your provider UI, something is wrong; standards don’t allow it. Some providers mask this via “flattening” features, but it can break resolvers or tooling.

Decision: Normalize the record set: pick CNAME or A/AAAA. Avoid clever provider-specific combinations where wildcards are involved.

Task 15: Catch wildcard effects on service discovery by testing random labels

cr0x@server:~$ for i in 1 2 3 4 5; do host "rand-$RANDOM.example.com"; done
rand-21483.example.com has address 203.0.113.10
rand-1207.example.com has address 203.0.113.10
rand-30061.example.com has address 203.0.113.10
rand-9229.example.com has address 203.0.113.10
rand-18022.example.com has address 203.0.113.10

What it means: Random labels consistently resolve. You’ve effectively made “existence” meaningless at that level.

Decision: Decide whether your monitoring, inventory, and security controls assume NXDOMAIN. If they do, adjust them or restrict the wildcard scope.

Common mistakes: symptoms → root cause → fix

Mistake 1: “Every subdomain resolves, so everything is deployed”

Symptoms: New environment appears “up” in dashboards that only check DNS; users get 404/502/TLS errors.

Root cause: Wildcard DNS answers for names without corresponding ingress routes, virtual hosts, or certificates.

Fix: Stop using “DNS resolves” as readiness. Add HTTP/TLS checks to the specific hostname and require explicit routing configuration. Consider returning NXDOMAIN for unknown hosts.
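
A readiness check worth trusting exercises DNS, TLS, and routing in one shot; a minimal sketch (hostname and health path are hypothetical):

cr0x@server:~$ curl -sS -o /dev/null -w '%{http_code}\n' https://pr-1847.preview.example.com/healthz
200

Treat anything but the expected status as “not deployed,” even though the name resolves.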

Mistake 2: ACME issuance fails “randomly” for some names

Symptoms: Some cert renewals succeed, others time out or validate wrong TXT records.

Root cause: Wrong authoritative zone queried due to delegation, split-horizon, or stale TXT records cached; wildcard assumptions obscure where the record should live.

Fix: Trace delegation, query authoritative nameservers directly, and ensure TXT records are created in the correct zone with sane TTL. Separate _acme-challenge handling from generic wildcard automation.

Mistake 3: Email goes to the web server

Symptoms: Mail for tenant.example.com ends up at the web ingress IP; spam scanners light up; SMTP errors on the edge.

Root cause: No MX record for subdomain; sender falls back to A/AAAA; wildcard provides A/AAAA, making the web IP the de facto mail exchanger.

Fix: Explicitly publish MX for subdomains that should receive mail, or explicitly block mail by publishing MX 0 . (null MX) where appropriate. Don’t let a wildcard A become an accidental mail endpoint.

Mistake 4: Staging leaks into production during resolver failover

Symptoms: Internal apps suddenly hit public IPs; WAF blocks internal traffic; latency spikes due to hairpinning.

Root cause: Split-horizon DNS not enforced; internal resolvers fall back to public recursion; wildcard in public zone captures internal-only names.

Fix: Lock down resolver configuration, monitor resolver reachability, and ensure internal-only names exist explicitly in internal DNS (or are explicitly NXDOMAIN). Add egress rules preventing internal networks from using public resolvers.

Mistake 5: “We increased TTL to reduce DNS load” and things got weird

Symptoms: Deleted environments still reachable; traffic routes to wrong backend after redeploy; incidents persist after rollback.

Root cause: High TTL on a wildcard record amplifies stale routing. Wildcards are often used for dynamic names—high TTL fights that.

Fix: Keep wildcard TTL low where backends change frequently. Use longer TTL only for stable names. Add edge-layer controls to reject unknown hostnames quickly.
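
TTL segmentation is just per-record values in the zone; an illustrative fragment:

; dynamic names: low TTL so deletions take effect quickly
*.preview.example.com.  60   IN A 203.0.113.10
; stable names: longer TTL to cut resolver load
www.example.com.        3600 IN A 203.0.113.20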

Mistake 6: DNSSEC validation failures only for some hostnames

Symptoms: Some resolvers fail with SERVFAIL; others work; failures correlate with “random” names.

Root cause: DNSSEC proofs of non-existence (NSEC/NSEC3) and wildcard behavior misconfigured; authoritative server returns answers that don’t validate for certain queries.

Fix: Validate DNSSEC with authoritative tools, ensure correct signing, and test random names plus explicit names. Treat wildcard zones as higher risk for DNSSEC edge cases.
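
One fast way to confirm a SERVFAIL is validation rather than a server outage: repeat the query with checking disabled. A sketch against a
hypothetical validating resolver, with illustrative output:

cr0x@server:~$ dig @10.0.0.53 rand-31337.example.com A +noall +comments | grep 'status:'
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 23817
cr0x@server:~$ dig @10.0.0.53 rand-31337.example.com A +cd +noall +comments | grep 'status:'
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23818

SERVFAIL that turns into NOERROR under +cd (checking disabled) points at DNSSEC validation, commonly the wildcard/NSEC proof problems described above.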

Checklists / step-by-step plan

Checklist: should you use a wildcard here?

  1. Define intent: Where does the wildcard sit, and which names can it match (remember: it matches through missing intermediate labels)? What’s the expected behavior for typos?
  2. Decide on NXDOMAIN policy: Do you want “unknown name fails” or “unknown name routes to default”?
  3. Enumerate critical hostnames: api, www, mail, _acme-challenge, admin consoles. Make them explicit.
  4. Plan TLS: Will the edge have certificates that match every served hostname? If not, reject unknown SNI/Host early and loudly.
  5. Email policy: For subdomains, publish explicit MX (including null MX) so wildcard A/AAAA doesn’t become a mail exchanger.
  6. Split-horizon check: If you have internal DNS, confirm internal clients can’t accidentally use public recursion.
  7. Monitoring plan: Add “must NXDOMAIN” canaries and “must resolve” canaries.
  8. Rollback plan: Know what removing the wildcard will break (often preview apps and tenant routing). Document it.

Step-by-step: deploying a wildcard safely

  1. Create explicit records for critical names first. Don’t let your wildcard define api, www, mail, or validation prefixes.
  2. Pick a conservative TTL. If the backing service changes frequently, start at 60–300 seconds.
  3. Test authoritative answers. Query each authoritative nameserver directly for:
    known hostnames, known-absent hostnames, random names, and ACME TXT names.
  4. Test from multiple resolver vantage points. Internal resolver, public resolver, and a “clean” VM/network.
  5. Deploy edge behavior for unknown hostnames. Decide: 404, 410, 451, or redirect. Don’t serve a default “it works” page unless you mean it.
  6. Add canary tests. One hostname that must resolve to wildcard, one hostname that must not resolve (NXDOMAIN), and one that must resolve to an explicit target. A sketch follows this list.
  7. Watch logs for typos and scans. Wildcards can turn background noise into backend load. Rate limit accordingly.
  8. Document the scope. Humans forget. Incidents don’t.
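
A minimal canary sketch, assuming bash and dig; the three hostnames are placeholders, and the must-NXDOMAIN canary has to live under a name that
exists explicitly (otherwise the wildcard would answer it):

#!/usr/bin/env bash
# DNS canaries: wildcard must answer, a blocked name must NXDOMAIN, explicit must match.
set -u
status() { dig +noall +comments "$1" A | awk -F'status: ' '/status:/ {print $2}' | cut -d, -f1; }

[ "$(status wildcard-canary.example.com)" = "NOERROR" ]  || echo "ALERT: wildcard canary stopped resolving"
[ "$(status canary.api.example.com)" = "NXDOMAIN" ]      || echo "ALERT: wildcard scope widened"
[ "$(dig +short explicit-canary.example.com A)" = "203.0.113.20" ] || echo "ALERT: explicit record changed"

Run it from more than one resolver vantage point; a canary that only watches one cache watches very little.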

Step-by-step: removing or narrowing a wildcard without breaking everything

  1. Inventory callers. Analyze edge logs: which hostnames are actually used? (A sketch follows this list.)
  2. Create explicit records for the active set. Move from implicit wildcard to explicit where possible.
  3. Introduce a new narrower wildcard. For example, move from *.example.com to *.preview.example.com.
  4. Shorten TTL ahead of the change. Do it at least one TTL window before the cutover so caches drain.
  5. Implement an “unknown host” response at the edge. It’s a safety net during transition.
  6. Remove wildcard and monitor NXDOMAIN rates. A spike is expected; a sustained spike might indicate missing explicit records.
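
Step 1 is usually a single pass over edge logs; a sketch assuming the Host header is the second field (the path and column are hypothetical,
adjust to your log format):

cr0x@server:~$ awk '{print $2}' /var/log/edge/access.log | sort | uniq -c | sort -rn | head -20

The output is your real hostname inventory: explicit-record candidates at the top, scanner noise at the bottom.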

FAQ

1) Does *.example.com match example.com?

No. The apex example.com is not matched by *.example.com. You need explicit records at the apex.

2) Does *.example.com match a.b.example.com?

Sometimes, and it surprises people. If b.example.com does not exist in the zone at all, the wildcard does match a.b.example.com (RFC 4592). If anything exists at b.example.com, even an empty non-terminal, the wildcard is blocked beneath it. (Wildcard TLS certificates are the opposite: they cover exactly one label.)

3) If I have an explicit record for foo.example.com, can the wildcard override it?

No. Explicit beats wildcard. If you see otherwise, you’re likely dealing with split-horizon DNS, multiple zones, or propagation inconsistency.

4) Are wildcard DNS records the same as wildcard certificates?

Different layers. Wildcard DNS makes names resolve. Wildcard certificates make TLS valid for many hostnames. You can use one without the other, but using wildcard DNS without TLS coverage is a common source of hostname mismatch errors.

5) Why do wildcard records make debugging harder?

Because absence stops being visible. NXDOMAIN is useful: it tells you the name isn’t present. With a wildcard, everything “exists” in DNS, so you have to debug at HTTP routing, TLS SNI, and application logic instead.

6) Can wildcard DNS increase load or cost?

Yes. Typos, scans, and random subdomain probes turn into real traffic against your edge. If your edge forwards unknown hostnames upstream, you can amplify noise into backend load. Rate limit and reject unknown hosts early.

7) How do I prevent email from using wildcard A/AAAA records for subdomains?

Publish explicit MX for subdomains that should receive mail. For subdomains that should not, publish a null MX record (MX 0 .) to signal “no mail here.” That prevents fallback to A/AAAA in many implementations.

8) What TTL should I use for a wildcard record?

If the target changes or routing is dynamic, keep TTL low (60–300 seconds is common). If the target is stable and you’ve tested rollback behavior, you can go higher. Don’t raise TTL to “optimize” without understanding how fast you need changes to take effect.

9) Can I “block” a wildcard for a specific hostname?

You can override it by creating explicit records. To force “does not exist” behavior for a particular name, you may need zone design tricks or provider features; classic DNS doesn’t let you publish “NXDOMAIN as a record.”
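
The classic zone trick: publish a harmless explicit record at the name. The name then exists, so the wildcard no longer applies to it, and A
queries return NODATA instead of the wildcard address. An illustrative fragment:

; any explicit record at the name blocks wildcard synthesis for that name
blocked.example.com.  3600 IN TXT "reserved: must not resolve via wildcard"
; A query for blocked.example.com  -> NOERROR with no answer (NODATA)
; x.blocked.example.com            -> NXDOMAIN (blocked.example.com exists and
;                                    blocks expansion beneath it)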

10) Are wildcards safe with DNSSEC?

They can be, but test thoroughly. DNSSEC adds validation requirements for non-existence and wildcard expansion. Misconfigurations can cause resolver-specific failures that look intermittent.

Conclusion: next steps you can actually do

Wildcard DNS is not inherently evil. It’s inherently powerful. That power is what makes it dangerous: it changes the semantics of “existence,” masks typos,
and shifts failure modes from DNS into TLS, routing, and application behavior.

Practical next steps:

  1. Audit your zones for wildcards at the top levels. If you have *.example.com, assume it’s affecting everything you forgot existed.
  2. Create explicit records for critical names (API, login, mail, admin, validation prefixes) so the wildcard can’t “helpfully” answer them.
  3. Add two canaries: one hostname that must resolve via wildcard, and one that must be NXDOMAIN. Alert on changes.
  4. Verify split-horizon behavior with direct queries to internal and public resolvers. If internal clients can reach public DNS, fix that first.
  5. Make the edge reject unknown hostnames quickly and clearly. If you serve a default page, do it intentionally and monitor the fallout.

When you use wildcards with intent and guardrails, they’re a clean tool. Without them, they’re the kind of convenience that quietly breaks things—right up
until it fixes your deployment pipeline and you forget why you were afraid.
