Encrypted DNS is the kind of improvement that looks boring until you’re the one on call reading a packet capture at 02:00 and realizing half the fleet is leaking internal hostnames to whatever Wi‑Fi it wandered onto. You want DNS over TLS (DoT) or DNS over HTTPS (DoH) because privacy, integrity, and sometimes plain reliability.
Enterprise networks, however, have opinions: split-horizon zones, internal resolvers with policy, proxies that intercept, VPN clients that “help,” and compliance folks who want to know where your queries went. Debian 13 can do encrypted DNS cleanly, but you have to respect the environment or you’ll manufacture a slow-motion outage.
What you’re actually changing (and what you are not)
On Debian 13, the practical way to “turn on encrypted DNS” is to put systemd-resolved in charge of name resolution and tell it to use either:
a) your corporate resolvers over DoT if they support it, or
b) a controlled DoH/DoT forwarder you run (recommended), or
c) a public resolver (only if policy allows and you’re okay with the consequences).
The key distinction: enabling DoT/DoH on a client does not magically fix DNS. It changes transport security between the client and its configured resolver. If your resolver is still doing plain DNS upstream, your query leaves your network unencrypted at that point. That may be fine; it may also be the exact compliance problem you were trying to solve.
Also: encrypting DNS does not stop SNI, ECH, IP-based tracking, or the fact that someone can still see where you connect. It just reduces passive DNS query leakage and makes certain tampering harder.
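If you want to see that boundary for yourself, a packet capture on the client is the honest check: plaintext DNS shows up as UDP/53, DoT as TCP sessions on 853. A minimal sketch, assuming the interface is ens192 (substitute your own):
cr0x@server:~$ sudo tcpdump -ni ens192 'port 53 or port 853'   # UDP/53 = still plaintext somewhere; TCP/853 = DoT in use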
Opinionated guidance: in enterprises, don’t point endpoints directly at random public DoH. Run a forwarder (or upgrade your internal resolvers) so policy, split-horizon, logging, and incident response keep functioning.
Facts and context you can repeat in a meeting
- DNS originally assumed a friendly network. The base protocol is UDP/53 from 1983-era thinking: fast, lossy, and easy to observe or spoof.
- DNSSEC adds integrity, not privacy. It can validate answers but your questions still travel in the clear unless you use DoT/DoH.
- DoT standardized as RFC 7858 (2016). It’s DNS in TLS, typically TCP/853, clean and inspectable as “DNS-ish” traffic.
- DoH standardized as RFC 8484 (2018). It’s DNS over HTTPS, usually TCP/443, which can blend into web traffic and annoy firewall teams.
- Browsers sparked the enterprise backlash. “Automatic DoH” rollouts in browsers broke internal zones in many orgs, making network teams allergic to DoH by default.
- Split-horizon DNS is a feature, not a hack. Internal names like corp.example should resolve differently inside and outside; encrypted DNS doesn’t remove that requirement, it amplifies mistakes.
- Captive portals hate encrypted DNS. If you can’t resolve the portal hostname without first agreeing to terms, “smart” DNS can keep you offline.
- Many corporate resolvers already support encryption. Modern BIND, Unbound, and some vendor DNS appliances can do DoT/DoH—but certificates and policy must be handled deliberately.
- Central logging is often non-negotiable. Security teams rely on DNS logs for investigations; bypassing internal resolvers can be treated as policy violation, not innovation.
DoT vs DoH: pick the right tool for your network
DoT: predictable, firewall-friendly (in the enterprise sense)
DoT uses a distinct port (853). That’s a blessing: you can allow it where appropriate, block it where policy says no, and identify it in flow logs without pretending every TLS session is “just web traffic.”
If your enterprise has a security boundary where egress is tightly controlled, DoT is usually easier to negotiate. You can prove what it is. You can also route it to an internal resolver cluster and keep split-horizon intact.
DoH: convenient, sometimes politically radioactive
DoH rides on HTTPS. That’s great for roaming clients because 443 is almost always open. It’s also why some networks treat it as “DNS exfiltration disguised as web browsing.” If your proxy does TLS inspection, DoH can break in entertaining ways unless you control both ends.
Use DoH when you have a known, managed endpoint (your own DoH front-end, or a corporate-approved provider) and you understand how it behaves with proxies, mTLS, and certificate interception.
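If you do run a managed DoH endpoint, test it from a Debian host with a client that speaks RFC 8484 natively; kdig from knot-dnsutils does. A minimal sketch, where doh.corp.example is a hypothetical endpoint name:
cr0x@server:~$ sudo apt install knot-dnsutils
cr0x@server:~$ kdig +https @doh.corp.example debian.org A   # TLS or HTTP errors here usually point at proxy interception, not the resolver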
Dry truth: when you tell a firewall team you want “DNS over HTTPS,” they hear “I’d like to smuggle policy around in a trench coat.” They might not be wrong.
Enterprise failure modes: where encrypted DNS goes to die
Split-horizon and search domains
If clients bypass internal resolvers, internal zones fail. Your ticket queue fills with “VPN is broken” and “Git server down” while everything is technically “up.” The fix is not “disable DoH,” it’s “stop bypassing the resolver that knows the private zones.”
Proxy interception and TLS break-and-inspect
Corporate proxies that intercept TLS can cause DoH to fail in two opposite ways:
they either block it outright, or they MITM it and present a corporate CA that your DoH client doesn’t trust. If the client is strict (as it should be), you get timeouts and mysterious fallback behavior.
VPN DNS settings getting overwritten
VPN clients often push DNS servers and domains. If you hardcode DoT/DoH at the OS and ignore per-link DNS, you may route queries for internal zones outside the tunnel. Congratulations, you just created a data leak and a reliability incident in one move.
Middleboxes that “help” by rewriting
Some networks rewrite DNS answers (captive portals, parental controls, malware filters). If you bypass them, you bypass the feature. Sometimes that’s the whole point. Sometimes that feature is how guests get to the Wi‑Fi login page. Choose your pain.
Performance regressions from handshake overhead and broken caching
TLS handshakes aren’t free. With good connection reuse, the overhead is minor. With bad networks, short-lived connections, or mis-tuned resolvers, you can add noticeable latency. People notice DNS latency because it manifests as “the internet feels slow” and nobody files a precise bug report.
Fast diagnosis playbook
When encrypted DNS “doesn’t work,” don’t guess. Don’t toggle random knobs. Run the same three checks every time so you can find the bottleneck quickly.
- First: confirm who is doing DNS and what servers are in use. If the system is still using a legacy /etc/resolv.conf from DHCP, none of your systemd-resolved settings matter.
- Second: confirm transport reachability (853 for DoT, 443 for DoH endpoint). A blocked port looks like “DNS timeout” and wastes hours unless you check it immediately.
- Third: confirm certificate/SNI/name validation (DoT especially). If the resolver’s certificate doesn’t match what the client expects, systemd-resolved will silently degrade depending on configuration. Decide whether you want strict mode or opportunistic mode, then enforce it.
If those three pass, the remaining suspects are: split DNS routing, proxy interception, DNSSEC validation failures, or resolver-side performance/capacity issues.
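A compact version of the three checks, using the example resolver IP and TLS name from the tasks that follow (adjust both for your environment):
cr0x@server:~$ resolvectl status | head -n 25                       # 1) who is resolving, per link
cr0x@server:~$ nc -vz -w 3 10.20.30.40 853                          # 2) is the DoT port reachable
cr0x@server:~$ openssl s_client -connect 10.20.30.40:853 -servername dns.corp.example -brief </dev/null   # 3) does the certificate validate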
Practical tasks: commands, outputs, and decisions
These are the field tasks I actually run on Debian hosts when I’m enabling DoT/DoH or debugging why someone’s laptop can’t resolve git.corp.example while on VPN.
Each task includes: command, example output, what it means, and the decision you make.
Task 1 — Identify the active resolver stack
cr0x@server:~$ ls -l /etc/resolv.conf
lrwxrwxrwx 1 root root 39 Dec 30 09:12 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
Meaning: /etc/resolv.conf points to the systemd stub, so applications will query 127.0.0.53 and systemd-resolved will do the upstream work.
Decision: If you do not see a symlink to systemd’s stub/real resolv.conf, fix that first; otherwise you’ll be changing settings that nothing uses.
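A minimal fix for that case, assuming you want systemd-resolved to own /etc/resolv.conf (on Debian 13 it ships as a separate package, so install it first if the unit is missing) and nothing else, like resolvconf or a VPN agent, keeps rewriting the file:
cr0x@server:~$ sudo apt install systemd-resolved                    # only if the unit does not exist yet
cr0x@server:~$ sudo systemctl enable --now systemd-resolved
cr0x@server:~$ sudo ln -sf ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
cr0x@server:~$ ls -l /etc/resolv.conf                               # should now match the symlink from Task 1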
Task 2 — Confirm systemd-resolved is running and authoritative
cr0x@server:~$ systemctl status systemd-resolved --no-pager
● systemd-resolved.service - Network Name Resolution
Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; preset: enabled)
Active: active (running) since Tue 2025-12-30 09:10:21 UTC; 6min ago
Docs: man:systemd-resolved.service(8)
Main PID: 612 (systemd-resolve)
Status: "Processing requests..."
Meaning: Resolver service is running.
Decision: If it’s inactive/disabled, either enable it properly or choose a different resolver stack (Unbound, dnscrypt-proxy, etc.)—but don’t half-enable both.
Task 3 — Inspect current DNS configuration per link (split DNS reality check)
cr0x@server:~$ resolvectl status
Global
Protocols: +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 2 (ens192)
Current Scopes: DNS
Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.20.30.40
DNS Servers: 10.20.30.40 10.20.30.41
DNS Domain: corp.example
Meaning: DNS is coming from corporate servers on this interface; DNS-over-TLS is currently off.
Decision: Keep per-link DNS. Don’t override globally unless you’re sure you won’t break VPN-provided domains.
Task 4 — Verify that DoT/DoH capability exists in your systemd-resolved build
cr0x@server:~$ resolvectl --version
systemd 258 (258.2-1)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +BPF_FRAMEWORK
+IDN2 +XZ +ZSTD +BZIP2 +LZ4 +ACL +BLKID +CURL +ELFUTILS +FIDO2 +TPM2 +PWQUALITY +P11KIT
default-hierarchy=unified
Meaning: Modern systemd is present; DoT support is standard here. DoH support depends on build features and version, so verify in your Debian 13 environment.
Decision: If DoH is required and your build lacks it (or your policy requires a dedicated DoH client), plan for a local forwarder like Unbound doing DoT upstream, or a DoH proxy you manage.
Task 5 — Check whether DNS is currently leaking to public resolvers
cr0x@server:~$ grep -vE '^\s*#|^\s*$' /etc/resolv.conf
nameserver 127.0.0.53
options edns0 trust-ad
search corp.example
Meaning: Apps use local stub; upstream selection is hidden behind resolved.
Decision: If you see a public IP here, you’re already bypassing corporate DNS. Decide whether that’s allowed before adding encryption.
Task 6 — Prove basic resolution works before encrypting anything
cr0x@server:~$ resolvectl query debian.org
debian.org: 151.101.194.132 151.101.2.132 151.101.66.132 151.101.130.132
-- Information acquired via protocol DNS in 21.5ms.
-- Data is authenticated: no
Meaning: Baseline DNS works. Don’t change transport when baseline is already broken.
Decision: If this fails, fix routing/DHCP/VPN DNS first. Encrypted DNS will not save you from “wrong DNS server” problems.
Task 7 — Enable DoT in systemd-resolved (opportunistic vs strict)
Edit /etc/systemd/resolved.conf. For enterprise, start with opportunistic DoT if you’re unsure of certs, then move to strict.
cr0x@server:~$ sudoedit /etc/systemd/resolved.conf
cr0x@server:~$ grep -nE '^(DNS=|DNSOverTLS=|Domains=|FallbackDNS=)' /etc/systemd/resolved.conf
9:DNS=10.20.30.40#dns.corp.example 10.20.30.41#dns.corp.example
12:FallbackDNS=
15:Domains=~corp.example ~.
20:DNSOverTLS=opportunistic
Meaning: You pin internal resolvers and provide the expected TLS hostname (#dns.corp.example) so certificate validation can work when you go strict. The Domains= line controls split DNS routing: ~corp.example sends queries for that zone to these servers, and ~. makes them the default route for everything else.
Decision: Use opportunistic for first rollout. Move to yes (strict) once you verify certificates and middleboxes.
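The same settings are cleaner as a drop-in than as edits to the main file, since drop-ins survive package upgrades untouched. A sketch using the example resolvers above; restart resolved afterwards as in Task 8:
cr0x@server:~$ sudo mkdir -p /etc/systemd/resolved.conf.d
cr0x@server:~$ sudo tee /etc/systemd/resolved.conf.d/dot.conf >/dev/null <<'EOF'
[Resolve]
DNS=10.20.30.40#dns.corp.example 10.20.30.41#dns.corp.example
Domains=~corp.example ~.
DNSOverTLS=opportunistic
FallbackDNS=
EOF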
Task 8 — Restart resolved and validate that the config is active
cr0x@server:~$ sudo systemctl restart systemd-resolved
cr0x@server:~$ resolvectl status | sed -n '1,25p'
Global
Protocols: +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Meaning: +DNSOverTLS indicates DoT is enabled globally. This does not guarantee every upstream supports it; it means the client will try.
Decision: If it still shows -DNSOverTLS, your config isn’t being read or was overridden by a drop-in. Investigate before continuing.
Task 9 — Confirm queries are going to the intended resolver, not “whatever”
cr0x@server:~$ resolvectl query git.corp.example
git.corp.example: 10.55.8.19
-- Information acquired via protocol DNS in 6.3ms.
-- Data is authenticated: no
Meaning: Internal name resolves; split-horizon is intact.
Decision: If internal names fail after enabling DoT, you probably routed them to the wrong upstream (global public resolvers are the usual culprit).
Task 10 — Detect whether port 853 is blocked (the “it times out” classic)
cr0x@server:~$ nc -vz 10.20.30.40 853
nc: connect to 10.20.30.40 port 853 (tcp) timed out: Operation now in progress
Meaning: TCP/853 is not reachable (firewall, ACL, resolver not listening). Opportunistic DoT will fall back to plain DNS; strict DoT will break resolution.
Decision: If policy allows DoT, open 853 to the resolver VIP. If policy forbids it, don’t enable strict DoT on clients; run encryption inside the network instead (client→internal resolver plain, internal resolver→upstream encrypted).
Task 11 — Validate resolver certificate and SNI using OpenSSL
cr0x@server:~$ openssl s_client -connect 10.20.30.40:853 -servername dns.corp.example -brief
CONNECTION ESTABLISHED
Protocol version: TLSv1.3
Ciphersuite: TLS_AES_256_GCM_SHA384
Peer certificate: CN = dns.corp.example
Verification: OK
Meaning: The resolver presents a cert matching dns.corp.example, and the host trusts the issuing CA.
Decision: If verification fails, do not switch to strict DoT until you fix CA distribution or correct the server name in DNS=.
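If the resolver cert chains to a corporate CA that is not in the default trust store yet, add it, verify against it, and check expiry while you are there. The CA filename is a hypothetical example:
cr0x@server:~$ sudo cp corp-root-ca.crt /usr/local/share/ca-certificates/ && sudo update-ca-certificates   # distribute via config management in real rollouts
cr0x@server:~$ openssl s_client -connect 10.20.30.40:853 -servername dns.corp.example </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates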
Task 12 — Check logs for resolved’s decision-making (it will tell you, if you ask)
cr0x@server:~$ journalctl -u systemd-resolved --since "10 min ago" --no-pager | tail -n 12
Dec 30 09:15:33 server systemd-resolved[612]: Using DNS server 10.20.30.40 for interface ens192.
Dec 30 09:15:33 server systemd-resolved[612]: Switching to DNS server 10.20.30.41 for interface ens192.
Dec 30 09:16:02 server systemd-resolved[612]: Server returned error NXDOMAIN, ignoring.
Dec 30 09:16:44 server systemd-resolved[612]: DNS-over-TLS negotiation failed with 10.20.30.40:853: Connection timed out
Meaning: You get explicit evidence: negotiation failed, it switched servers, or it got NXDOMAIN. This is your breadcrumb trail.
Decision: Connection timeout → network/port issue. Verification failure → PKI issue. NXDOMAIN → split DNS routing or wrong resolver content.
Task 13 — Measure latency and caching behavior (because “encrypted” can become “slow”)
cr0x@server:~$ resolvectl statistics
Transactions: 224
Cache Hits: 141
Cache Misses: 83
DNSSEC Verdicts: 0
DNSSEC Supported: no
DNSSEC NTA: 0
Meaning: Cache hit ratio gives you a quick feel. If it’s near-zero on a busy system, something is off (apps bypassing the stub, or cache disabled elsewhere).
Decision: Low hit ratio + high latency complaints: ensure everyone uses 127.0.0.53, and ensure you’re not restarting resolved constantly (yes, that happens).
Task 14 — Prove which process is sending DNS if you suspect bypass
cr0x@server:~$ sudo ss -ntup | awk '$6 ~ /:53$|:853$|:443$/ {print}'
tcp ESTAB 0 0 10.9.1.12:45652 10.20.30.40:853 users:(("systemd-resolve",pid=612,fd=25))
Meaning: The DoT connection is from systemd-resolve, as intended.
Decision: If you see apps connecting directly to public IPs on 53/853/443, you have bypass. Address it with policy (and sometimes firewall egress rules).
Task 15 — Verify that DHCP/VPN aren’t clobbering DNS unexpectedly
cr0x@server:~$ nmcli dev show ens192 | sed -n '1,80p'
GENERAL.DEVICE: ens192
GENERAL.STATE: 100 (connected)
IP4.DNS[1]: 10.20.30.40
IP4.DNS[2]: 10.20.30.41
IP4.DOMAIN[1]: corp.example
Meaning: NetworkManager is supplying the corporate DNS servers and domain. Good. That’s the foundation for split DNS.
Decision: If you see unexpected DNS servers here (hotel Wi‑Fi, VPN, etc.), decide which should win and configure per-link routing rather than a global sledgehammer.
Task 16 — Check firewall policy locally (don’t assume the network is the only firewall)
cr0x@server:~$ sudo nft list ruleset | sed -n '1,80p'
table inet filter {
chain output {
type filter hook output priority 0; policy accept;
}
}
Meaning: Local firewall isn’t blocking outbound 853/443. If it were, you’d see explicit drops.
Decision: If local policy drops 853, fix locally or accept that DoT cannot function on this host.
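And if local policy did drop it, the fix is one explicit rule ahead of the drop. A minimal nftables sketch, assuming the table and chain names shown above and the example resolver addresses; it is runtime-only until you persist it in your nftables config:
cr0x@server:~$ sudo nft insert rule inet filter output ip daddr '{ 10.20.30.40, 10.20.30.41 }' tcp dport 853 accept   # insert puts it before any existing drop
cr0x@server:~$ sudo nft list chain inet filter output   # confirm the accept rule landed where you expect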
Checklists / step-by-step plan (safe rollout)
Step 0 — Decide what “enterprise-safe” means in your org
- Must internal zones resolve? If yes, clients must use internal resolvers for those zones.
- Must DNS logs be centralized? If yes, don’t bypass internal resolvers. Encrypt client→resolver if you can; otherwise encrypt resolver→upstream.
- Are proxies in play? If yes, treat DoH as “needs design,” not “flip a switch.”
- Is strict certificate validation required? Usually yes. But do a controlled rollout to avoid mass failure.
Step 1 — Inventory current DNS behavior
- Collect resolvectl status from representative hosts: on LAN, on VPN, on Wi‑Fi, in a DC segment.
- Identify where split DNS exists: domains, conditional forwarders, NRPT-like behavior.
- Confirm whether internal resolvers support DoT/DoH today (and whether they have proper certificates).
Step 2 — Prefer “encrypt to your resolver,” not “bypass your resolver”
If your internal resolvers can speak DoT with a valid certificate: do that. It’s the cleanest model for enterprises.
If they can’t: run a small, managed forwarding layer (Unbound/BIND) that accepts plain DNS from clients and does encrypted upstream inside your trust boundary.
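A minimal Unbound sketch of that forwarding layer, as a drop-in such as /etc/unbound/unbound.conf.d/forwarder.conf: plain DNS from clients inside the trust boundary, DoT upstream to the corporate resolvers. The listen address, client subnet, and resolver names are assumptions; split-horizon zones would need their own forward-zone or stub-zone entries.
server:
    interface: 10.9.1.53                                  # where clients send plain DNS
    access-control: 10.9.0.0/16 allow                     # only your client ranges
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt   # trust store for upstream TLS
forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 10.20.30.40@853#dns.corp.example
    forward-addr: 10.20.30.41@853#dns.corp.example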
Step 3 — Roll out opportunistic DoT first, with monitoring
- Enable DNSOverTLS=opportunistic on a pilot group.
- Monitor: DNS latency, error rate, resolver CPU, TCP/853 connection counts, and user reports about “slow internet.”
- Watch logs for negotiation failures. Fix reachability and certificates before turning on strict mode.
Step 4 — Move to strict DoT where possible
Once you can validate certificates reliably (openssl s_client task above), switch to strict:
it prevents silent downgrade to plaintext when a middlebox blocks 853.
cr0x@server:~$ sudo perl -0777 -i -pe 's/DNSOverTLS=opportunistic/DNSOverTLS=yes/g' /etc/systemd/resolved.conf
cr0x@server:~$ sudo systemctl restart systemd-resolved
cr0x@server:~$ resolvectl status | sed -n '1,12p'
Global
Protocols: +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported
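Keep the rollback as scripted as the rollout (mini-story 3 below is the argument for it). A minimal sketch that flips a host back to opportunistic:
cr0x@server:~$ sudo perl -0777 -i -pe 's/DNSOverTLS=yes/DNSOverTLS=opportunistic/g' /etc/systemd/resolved.conf
cr0x@server:~$ sudo systemctl restart systemd-resolved
cr0x@server:~$ resolvectl query git.corp.example   # confirm resolution recovers before closing the incident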
Step 5 — Keep split DNS explicit
When you need internal and external resolution, be explicit about routing. Let corporate zones go to corporate resolvers, and everything else follow the default route.
The worst plan is “one global public resolver for everything.” That’s not a plan; it’s how you learn which teams can yell the loudest.
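For testing, you can set that routing per link at runtime with resolvectl before baking it into NetworkManager or networkd profiles; the settings below do not persist across restarts. Interface and servers are the examples used earlier:
cr0x@server:~$ sudo resolvectl dns ens192 10.20.30.40 10.20.30.41
cr0x@server:~$ sudo resolvectl domain ens192 '~corp.example'
cr0x@server:~$ resolvectl status ens192   # verify the servers and routing domain took effect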
Step 6 — Document the exception cases (they will happen)
- Captive portal networks
- Restricted DC segments that block 853/443 egress
- Systems with vendor agents that hardcode DNS
- Legacy apps that ship their own resolver libraries
Three corporate mini-stories (why this is harder than it looks)
Mini-story 1 — The incident caused by a wrong assumption
A mid-sized company decided to “modernize DNS.” A well-meaning engineer assumed that encrypted DNS was purely a privacy upgrade, functionally equivalent to plain DNS. They enabled DoH on a subset of Debian laptops, pointing to a public provider because it was “reliable.”
Monday morning was a slow burn. Nothing exploded immediately. But developers on VPN started reporting that internal services intermittently failed to resolve. The helpdesk saw a pattern: users who recently rebooted had more issues. Users who stayed on all weekend were fine. That’s the kind of pattern that makes you suspect caching, not infrastructure.
The wrong assumption was simple: “DNS is DNS.” In reality, their internal resolvers were doing split-horizon for corp.example, returning internal VIPs only when the query came from inside the network or the VPN address pool. Public resolvers did what public resolvers do: they either returned nothing or returned external addresses intended for partners. Some services were accessible externally, but with different authentication paths. Chaos, but the quiet kind.
The fix wasn’t dramatic. They stopped bypassing corporate DNS. They rolled the laptops back to using internal resolvers, then upgraded the internal resolver layer to provide encrypted transport for roaming users via a managed endpoint. The lesson stuck: privacy upgrades are still network changes. Treat them like one.
Mini-story 2 — The optimization that backfired
Another org wanted to reduce DNS latency across global sites. Someone proposed: “Let’s force a single global DoH endpoint on 443. It’ll traverse every firewall, and HTTP/2 will multiplex queries. Faster, right?” It sounded modern. It was modern. It was also naive about geography and proxies.
The rollout started in a region where outbound HTTPS was forced through an explicit proxy with TLS inspection. The DoH client didn’t trust the proxy’s MITM certificate for that endpoint. DNS lookups began stalling because the system tried DoH first, waited, then fell back. The fallback was blocked too, because outbound UDP/53 was denied by policy (as it should be on that network).
Users experienced it as “every website takes ten seconds to start loading.” It wasn’t the sites. It was the DNS resolver timing out on every new hostname until caches warmed or apps retried differently. A few apps handled it; others thrashed.
The post-incident writeup was refreshingly blunt: the optimization assumed a clean internet path. Enterprises rarely have one. They rebuilt the design around internal resolvers per region and used DoT inside controlled egress points. DNS got faster, and security stopped side-eyeing the project.
Mini-story 3 — The boring but correct practice that saved the day
A heavily regulated shop had a rule: no client-side DNS policy changes without a canary group, and no canary without automated rollback. Everyone grumbled because it slowed down “simple” improvements. Then they tried enabling DoT strict mode for corporate resolvers across Debian endpoints.
The canary group lit up within minutes. Not because DoT was broken everywhere, but because one site still had an old firewall rule that allowed UDP/53 to the resolver VIP but blocked TCP/853. Opportunistic mode would have masked it; strict mode correctly failed. Users in that site couldn’t resolve anything.
The automation rolled them back to opportunistic immediately, and the incident stayed contained. The network team fixed the firewall policy, then the canary succeeded, then the rollout continued. Nobody got paged at midnight. The boring process didn’t just “reduce risk”; it translated a sharp edge into a small bruise.
Reliability is often just refusing to make the same mistake everywhere at once.
Common mistakes: symptoms → root cause → fix
1) “Internal domains don’t resolve after enabling DoH/DoT”
Symptoms: git.corp.example NXDOMAIN or resolves to external IPs; VPN users hit wrong endpoints.
Root cause: Bypassing internal resolvers that serve split-horizon zones; global DNS override ignores per-link routing.
Fix: Use internal resolvers for ~corp.example with split DNS. Don’t point clients directly at public resolvers. Validate with resolvectl status and resolvectl query.
2) “Everything is slow, but not down”
Symptoms: Pages and package installs pause before connecting; intermittent stalls; retries “fix it.”
Root cause: DoH/DoT timeouts followed by fallback, often due to blocked 853, proxy interception, or certificate mismatch.
Fix: Check reachability (nc -vz ... 853), certificate validation (openssl s_client), and resolved logs. Prefer strict mode only when you know the path is clean; otherwise you create timeout-based latency.
3) “DoT strict mode breaks on one site only”
Symptoms: Works in HQ, fails in a branch; same image, different network.
Root cause: Site-specific firewall/ACL allows UDP/53 but blocks TCP/853, or routes 853 differently.
Fix: Align network policy across sites. If you cannot, keep opportunistic mode for that site or terminate DoT at a local resolver inside the branch network.
4) “It works on Ethernet but fails on Wi‑Fi/captive portal”
Symptoms: On public Wi‑Fi you cannot reach anything until you disable encrypted DNS; portal never appears.
Root cause: Captive portals depend on intercepting DNS/HTTP; encrypted DNS prevents the interception flow.
Fix: Provide a policy-based toggle for roaming profiles, or allow fallback to plaintext on “untrusted” networks if your risk model permits. Better: use a managed VPN that brings DNS inside the tunnel after portal sign-in.
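One hedged way to implement that toggle with systemd-resolved, assuming your build exposes the per-link dnsovertls verb and the Wi‑Fi interface is wlp3s0 (hypothetical): relax DoT on the untrusted link only, never globally.
cr0x@server:~$ sudo resolvectl dnsovertls wlp3s0 opportunistic   # or "no" while the portal flow completes, if policy permits
cr0x@server:~$ sudo resolvectl dnsovertls wlp3s0 yes             # back to strict once signed in or inside the VPN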
5) “DNSSEC failures appear after turning on encryption”
Symptoms: Certain domains fail with SERVFAIL; logs mention DNSSEC or validation issues.
Root cause: Not caused by encryption directly; it exposes resolver behavior changes, EDNS0 handling, or broken middleboxes.
Fix: Confirm whether your resolver validates DNSSEC and whether middleboxes mangle responses. If your environment can’t handle it, don’t turn on DNSSEC validation at the edge casually.
6) “Some apps still use plaintext DNS even after you ‘enabled DoT’”
Symptoms: Packet capture shows UDP/53 to random servers; or app resolves even when resolved is stopped.
Root cause: Apps using their own resolvers (containers, language runtimes, VPN agents) or bypassing glibc resolver path.
Fix: Audit with ss, container runtime DNS settings, and egress controls. Enforce a single resolver path for managed endpoints, or explicitly document exceptions.
FAQ
- 1) Should I enable DoH or DoT on Debian 13 endpoints by default?
- In enterprises: default to DoT to your resolvers (or your managed forwarder). DoH is fine when you control the endpoint and proxy story. Avoid direct-to-public unless policy explicitly allows it.
- 2) What’s the safest first rollout mode?
- Opportunistic DoT. It attempts encryption but won’t take down DNS if a site blocks 853. Use that phase to discover policy gaps, then move to strict where feasible.
- 3) If we already have DNSSEC, do we still need DoT/DoH?
- Yes, if you care about query privacy and resistance to passive monitoring. DNSSEC protects integrity of answers (when validated), not confidentiality of questions.
- 4) Will encrypted DNS break split-horizon DNS?
- Not inherently. Bypassing the internal resolver breaks split-horizon. Encrypting transport to the internal resolver preserves it.
- 5) How do I keep compliance logging while using encrypted DNS?
- Keep using internal resolvers where logging and policy live. Encrypt from endpoint to those resolvers (DoT) or at least from resolvers to upstream. Don’t scatter endpoint DNS to random external providers.
- 6) What about proxies and TLS inspection?
- DoT usually avoids proxy interception by design (it’s not HTTPS), but may be blocked by egress rules. DoH often collides with interception unless the client trusts the intercepting CA or you exempt that traffic—both are policy decisions, not technical “fixes.”
- 7) How can I prove we’re actually using DoT?
- Look for +DNSOverTLS in resolvectl status, confirm TCP/853 sessions from systemd-resolve via ss, and validate the resolver cert with openssl s_client.
- 8) Should I set public fallback DNS servers?
- In enterprises: generally no. Fallback to public resolvers can silently bypass internal zones and logging. If you need resilience, run multiple internal resolvers and ensure routing to them works everywhere (LAN, VPN, branches).
- 9) Does enabling DoH/DoT stop DNS-based filtering?
- It can, if filtering relies on intercepting plaintext DNS in transit. That may be desired or may violate policy. The enterprise-friendly answer is to do filtering on the resolver you control and encrypt the transport to it.
Conclusion: next steps you can execute this week
Encrypted DNS on Debian 13 is not hard. Doing it without breaking corporate networks is mostly about respecting how enterprises actually use DNS: split-horizon, policy enforcement, and predictable routing.
Next steps that pay off quickly:
- Pick a model: endpoint→internal resolver encrypted (best), or internal resolver→upstream encrypted (second-best), and avoid endpoint→public resolver unless policy says yes.
- Run the Fast diagnosis playbook on a pilot group and fix the boring blockers: TCP/853 reachability, certificates, and per-link DNS behavior.
- Roll out opportunistic DoT with monitoring, then move to strict where you can prove the path and PKI are correct.
- Write down the exception handling for captive portals and odd VPN clients. You will need it.
One paraphrased idea from W. Edwards Deming applies nicely here: Without data, you’re just another person with an opinion.
Measure before you flip strict mode.
Joke #1: DNS is the only system where “it’s always the network” is both cliché and frequently a statement of fact.
Joke #2: Encrypted DNS won’t fix your org chart, but it will reveal who owns egress policy—usually by accident.