You open Proxmox VE, try to add a Proxmox Backup Server storage, pick the datastore you know exists,
and PVE replies with the sort of confidence only computers can muster: “datastore not found”.
This error is rarely about the datastore being gone. It’s usually PVE being unable to see the datastore
through the PBS API—because of an ID mismatch, missing permissions, a namespace twist, a fingerprint issue, or
plain old connectivity. The good news: you can prove which one in minutes, and fix it cleanly.
What “datastore not found” really means (and what it doesn’t)
In PVE, a PBS datastore is discovered and used via PBS’s HTTP API (port 8007 by default). When PVE says
“datastore not found”, one of these is happening:
- The datastore ID you configured doesn’t exist (typo, case mismatch, wrong PBS node).
- The datastore exists, but your credentials can’t list or access it (permissions, realm, token).
- PVE can’t talk to PBS reliably (DNS, routing, firewall, port, proxy, TLS fingerprint).
- You’re hitting a namespace mismatch (datastore exists, but access is scoped away).
- You’re talking to the wrong PBS instance (VIP, NAT, split DNS, stale IP, cluster confusion).
What it usually is not: a storage filesystem problem on PBS. If the PBS UI shows the datastore healthy,
PVE’s complaint is almost always at the API/auth/config layer, not ZFS suddenly eating your backups.
One operational hint: the error text often gets re-used for multiple API failures. “Datastore not found”
is sometimes a polite lie that means “you’re not allowed to see it” or “I can’t authenticate.” Computers do this
because they can’t feel shame.
Joke #1 (short, relevant): A datastore “not found” is the storage equivalent of “I never got your email.” It exists; someone’s just not admitting it.
Interesting facts and context (why this error keeps showing up)
A bit of context helps because “datastore not found” is a modern failure mode with old-school roots:
identity, naming, and access control. Here are concrete facts that influence how you troubleshoot:
- PBS datastores are addressed by ID, not by path. The ID is what the API exposes; the backing directory is secondary.
- PVE integrates with PBS via the PBS API on port 8007. If 8007 is blocked or intercepted, discovery fails even if the PBS web UI works locally.
- Proxmox authentication realms matter. root@pam and root@pbs are different identities; tokens inherit realm semantics.
- The PBS fingerprint is pinned by PVE. If the PBS TLS certificate changes (reinstall, hostname change), PVE may refuse to connect until the fingerprint is updated.
- PBS permissions are path-like and object-scoped. Having “Datastore.Audit” isn’t the same as “Datastore.Backup”. Missing list permissions can hide datastores.
- Namespaces introduced an extra dimension of access. A datastore can exist, and you can still be blocked from the namespace you’re trying to use.
- PBS stores its configuration in /etc/proxmox-backup. Datastore definitions are local to the PBS node; clustering PBS nodes doesn’t magically replicate configs unless you do it.
- PVE stores PBS storage definitions in /etc/pve/storage.cfg. On a PVE cluster, that file is replicated. One wrong edit becomes everyone’s problem. (A quick look at both config files follows this list.)
- “Not found” has been a classic security pattern. Many APIs return 404 for unauthorized resources to avoid information leaks, which is great for security and terrible for debugging.
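Both of those config files are plain text and worth a quick look when naming is in doubt. The snippets below are illustrative, reusing the vmbackups example from this article; the paths are the defaults.
cr0x@server:~$ cat /etc/proxmox-backup/datastore.cfg    # on the PBS node
datastore: vmbackups
	path /mnt/pbs/vm
	gc-schedule daily
cr0x@server:~$ grep -A3 "^pbs:" /etc/pve/storage.cfg    # on a PVE node
pbs: pbs-backups
	datastore vmbackups
	server pbs01
	username backup@pbs
What it means: PBS addresses the datastore by the ID in datastore.cfg; PVE must reference that exact ID in storage.cfg, not the backing path.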
One quote frames the mindset well (paraphrased, because precision matters):
“Hope is not a strategy; you need evidence.”
— attributed to reliability/operations culture (commonly cited in engineering management).
Fast diagnosis playbook (check 1, 2, 3)
You want the shortest path to certainty. Don’t start by reinstalling PBS or reformatting anything.
Start by proving where visibility breaks: network, identity, or naming.
1) Prove PVE can reach PBS API (port 8007) and TLS isn’t lying
- If TCP connect fails: it’s networking/firewall/routing/DNS.
- If TLS handshake complains about cert/fingerprint: it’s trust pinning.
- If you get an HTTP response: the pipe is fine; move to auth/permissions.
2) Prove your credentials can list datastores (a token-based curl sketch follows this playbook)
- If datastore listing is empty or errors: permissions, realm, token, or wrong PBS target.
- If listing shows the datastore but add still fails: datastore ID mismatch, namespace mismatch, or storage.cfg drift.
3) Prove the datastore ID matches exactly what PBS exposes
- Datastore IDs are case-sensitive. Treat them like API object names, because they are.
- Confirm you’re not using a filesystem path or a “friendly name.”
If you only have 10 minutes, do those three checks. They catch the majority of real incidents.
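Here is a minimal way to run check 2 from a PVE node with an API token. This is a sketch assuming the example token backup@pbs!pve-prod used later in this article; substitute your own token ID and secret. The -k flag skips certificate verification, which is fine for a quick diagnostic but not for permanent configuration.
cr0x@server:~$ curl -k -s -H 'Authorization: PBSAPIToken=backup@pbs!pve-prod:REDACTED' https://pbs01:8007/api2/json/admin/datastore | jq -r '.data[].store'
vmbackups
If the list is empty or the call errors out, you have an auth/permission problem, not a naming problem. If vmbackups shows up here but PVE still says “not found”, compare the ID character by character against /etc/pve/storage.cfg.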
Failure map: where the datastore can “disappear”
Think of this as a chain. The datastore is visible only if every link holds.
Break any link and PVE will claim it’s “not found”, because from PVE’s perspective, it really isn’t.
Link A: PVE resolves the PBS hostname to the right address
Split DNS, stale /etc/hosts entries, and “temporary” NAT rules are classics. PVE might connect to a different
PBS than you think—especially in lab-to-prod migrations or DR tests.
Link B: Port 8007 is reachable, and nothing is proxying it weirdly
A reverse proxy that does HTTP but not WebSockets, a firewall doing TLS inspection, or an L7 load balancer
with mismatched health checks can all produce odd API behavior. PBS is happiest when accessed directly.
Link C: TLS fingerprint pinned in PVE matches the current PBS certificate
PVE stores and checks the PBS fingerprint. If you reinstall PBS, regenerate its certificate, or change its
hostname, PVE may start refusing connections in a way that bubbles up as “datastore not found” during add/scan.
Link D: The identity you use has permission to list and access the datastore
PVE can authenticate as a user (like backup@pbs) or via an API token. But the token has to exist,
not be expired/disabled, and have the right roles. “Not found” can be PBS telling you “not allowed.”
Link E: The datastore ID is correct and consistent
The datastore ID is not the mountpoint. It’s not the ZFS dataset name unless you made it the same on purpose.
If you change the datastore ID in PBS, PVE won’t magically follow.
Link F: Namespaces and pruning/verification expectations match
Namespaces can make a datastore look empty or inaccessible if you don’t have permissions on the namespace.
Also, if you’re expecting a namespace that doesn’t exist or is restricted, PVE behavior can look like
the datastore “doesn’t exist.”
Practical tasks with commands (12+ checks that end arguments)
These are field checks. They’re designed to be runnable and to produce outputs that drive decisions.
Run them from a PVE node first, then from PBS, because you’re debugging a conversation.
Task 1: Confirm PVE resolves PBS to the expected IP
cr0x@server:~$ getent hosts pbs01
10.20.30.40 pbs01
What it means: This is the IP PVE will use. If it’s wrong, you’re debugging the wrong machine.
Decision: If the IP is unexpected, fix DNS or /etc/hosts (and remove “temporary” overrides).
Task 2: Check basic TCP reachability to PBS API port 8007
cr0x@server:~$ nc -vz pbs01 8007
Connection to pbs01 8007 port [tcp/*] succeeded!
What it means: L3/L4 network path is open.
Decision: If it fails or times out, stop. Fix routing/firewall first; auth won’t matter yet.
Task 3: Verify TLS handshake and capture the presented certificate subject
cr0x@server:~$ echo | openssl s_client -connect pbs01:8007 -servername pbs01 2>/dev/null | openssl x509 -noout -subject -issuer -fingerprint -sha256
subject=CN = pbs01
issuer=CN = Proxmox Backup Server
sha256 Fingerprint=AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99
What it means: You can see what certificate PBS is presenting and its SHA256 fingerprint.
Decision: If PVE has a different fingerprint recorded, update it in the PVE storage definition or re-add the PBS storage cleanly.
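To compare from the other side, PBS can print its own certificate fingerprint; the output below is illustrative and trimmed.
cr0x@server:~$ proxmox-backup-manager cert info
Certificate: /etc/proxmox-backup/proxy.pem
Subject: CN = pbs01
Fingerprint (sha256): AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99
What it means: This is the fingerprint PVE should have pinned. If the openssl output from PVE shows something else, there is a proxy or a stale endpoint between them.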
Task 4: On PVE, inspect the PBS storage entry in storage.cfg
cr0x@server:~$ grep -nA6 -B2 "pbs:" /etc/pve/storage.cfg
12:pbs: pbs-backups
13: datastore vmbackups
14: server pbs01
15: username backup@pbs
16: fingerprint AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99
17: content backup
What it means: This is the ground truth of what PVE will attempt.
Decision: If datastore is not exactly the PBS datastore ID, fix it. If server is wrong, fix it. If username realm is wrong, fix it.
Task 5: From PVE, query PBS API version to validate auth-free connectivity
cr0x@server:~$ curl -k -s https://pbs01:8007/api2/json/version | jq .
{
"data": {
"release": "3.2-1",
"repoid": "f433c2a1",
"version": "3.2.3"
}
}
What it means: Network path and HTTP stack work at least for unauthenticated endpoints.
Decision: If this fails, do not touch permissions yet. Fix DNS, TLS interception, or firewall issues.
Task 6: On PBS, list datastores and confirm the datastore ID
cr0x@server:~$ proxmox-backup-manager datastore list
┌───────────┬──────────────┬──────────┬─────────────┐
│ name │ path │ comment │ gc-schedule │
╞═══════════╪══════════════╪══════════╪═════════════╡
│ vmbackups │ /mnt/pbs/vm │ │ daily │
└───────────┴──────────────┴──────────┴─────────────┘
What it means: The datastore ID is vmbackups. That’s what PVE must use.
Decision: If PVE uses anything else (like /mnt/pbs/vm), correct PVE config.
Task 7: On PBS, confirm the datastore is actually available and not in a weird mount state
cr0x@server:~$ proxmox-backup-manager datastore status
┌───────────┬─────────┬───────────┬─────────────┬───────────┐
│ datastore │ status │ total │ used │ avail │
╞═══════════╪═════════╪═══════════╪═════════════╪═══════════╡
│ vmbackups │ online │ 10.00 TiB │ 2.10 TiB │ 7.90 TiB │
└───────────┴─────────┴───────────┴─────────────┴───────────┘
What it means: PBS itself thinks the datastore is online.
Decision: If it’s offline, fix the underlying storage (mount/ZFS/disk) before blaming PVE.
Task 8: On PBS, verify the backup API service is running
cr0x@server:~$ systemctl status proxmox-backup
● proxmox-backup.service - Proxmox Backup Server API and Web UI
Loaded: loaded (/lib/systemd/system/proxmox-backup.service; enabled)
Active: active (running) since Mon 2025-12-22 08:10:12 UTC; 3 days ago
Main PID: 1234 (proxmox-backup)
Tasks: 24 (limit: 38461)
Memory: 220.0M
CGroup: /system.slice/proxmox-backup.service
└─1234 /usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
What it means: The service is up.
Decision: If inactive/failed, restart it and inspect the logs. PVE can’t see datastores if the PBS API is down.
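Port 8007 itself is served by a separate proxy daemon, so check that unit and the listener too; the ss output below is illustrative.
cr0x@server:~$ systemctl is-active proxmox-backup-proxy
active
cr0x@server:~$ ss -tlnp | grep 8007
LISTEN 0  4096  *:8007  *:*  users:(("proxmox-backup-",pid=1240,fd=10))
What it means: The proxy that PVE actually talks to is up and bound to 8007.
Decision: If the proxy is inactive or nothing listens on 8007, restart proxmox-backup-proxy and read its journal before touching PVE.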
Task 9: On PBS, check whether a host firewall could be blocking 8007
cr0x@server:~$ nft list ruleset 2>/dev/null | grep -iE 'drop|reject' | head
cr0x@server:~$
What it means: A standalone PBS install does not enable a firewall by default, but hardened hosts often add nftables/iptables rules. No drop/reject rules here means the host itself isn’t filtering; if PBS is co-installed on a PVE host, also check pve-firewall status there.
Decision: If filtering rules exist (on the host or anywhere on the network path), confirm there’s an allow rule for TCP/8007 from your PVE nodes/subnets.
Task 10: From PVE, attempt to list datastores through the configured storage (PVE view)
cr0x@server:~$ pvesm status
Name Type Status Total Used Available %
pbs-backups pbs active 0 0 0 0.00
What it means: This output is sometimes unhelpful for PBS capacity, but it tells you whether PVE thinks the storage is “active”.
Decision: If it’s not active, go look at PVE logs for authentication and API errors.
Task 11: On PVE, read the task log for the failed add/scan operation
cr0x@server:~$ journalctl -u pvedaemon -u pveproxy --since "1 hour ago" | tail -n 40
Dec 26 09:11:02 pve01 pvedaemon[2211]: storage 'pbs-backups' error: datastore 'vmbackup' not found
Dec 26 09:11:02 pve01 pvedaemon[2211]: pbs-api: GET /api2/json/admin/datastore: permission denied
What it means: The first line blames the datastore ID; the second line is the real reason: permission denied.
Decision: Fix PBS permissions for the user/token. Don’t rename datastores to chase a permissions issue.
Task 12: On PBS, list users and confirm the account exists in the expected realm
cr0x@server:~$ proxmox-backup-manager user list
┌──────────────┬────────┬───────────────────────────┬──────────┐
│ userid │ enable │ comment │ expire │
╞══════════════╪════════╪═══════════════════════════╪══════════╡
│ root@pam │ true │ │ │
│ backup@pbs │ true │ used by PVE for backups │ │
└──────────────┴────────┴───────────────────────────┴──────────┘
What it means: The user exists. Note the realm: backup@pbs.
Decision: If PVE uses backup@pam but PBS has only backup@pbs, fix the username in PVE.
Task 13: On PBS, check permissions for the user on the datastore path
cr0x@server:~$ proxmox-backup-manager acl list | sed -n '1,120p'
┌───────────────────────┬────────────┬──────┬─────────────────┐
│ path                  │ ugid       │ type │ role            │
╞═══════════════════════╪════════════╪══════╪═════════════════╡
│ /datastore            │ backup@pbs │ user │ DatastoreAdmin  │
│ /datastore/vmbackups  │ backup@pbs │ user │ DatastoreBackup │
└───────────────────────┴────────────┴──────┴─────────────────┘
What it means: This user has roles granting datastore rights. The exact roles vary by org policy.
Decision: If you don’t see an ACL entry for /datastore/vmbackups (or relevant namespace paths), add one. At minimum, grant list/audit + backup permissions for PVE’s backup workflow.
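If the entry is missing, the role can be granted from the PBS CLI. A sketch, assuming the example token backup@pbs!pve-prod from the next task and the vmbackups datastore; pick the role your policy requires.
cr0x@server:~$ proxmox-backup-manager acl update /datastore/vmbackups DatastoreBackup --auth-id 'backup@pbs!pve-prod'
cr0x@server:~$ proxmox-backup-manager acl list | grep vmbackups
│ /datastore/vmbackups │ backup@pbs!pve-prod │ token │ DatastoreBackup │
What it means: The token now has backup rights on that datastore path.
Decision: Re-run the PVE storage add/scan afterwards; ACL changes take effect for new API calls without a service restart.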
Task 14: On PBS, test login with an API token strategy (recommended)
cr0x@server:~$ proxmox-backup-manager user list-tokens backup@pbs
┌───────────┬────────┬────────────┬────────┐
│ tokenid │ enable │ expire │ comment│
╞═══════════╪════════╪════════════╪════════╡
│ pve-prod │ true │ │ │
└───────────┴────────┴────────────┴────────┘
What it means: A token exists and is enabled.
Decision: Prefer tokens over passwords for PVE-to-PBS integration. If no token exists, create one and set minimal roles.
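Creating a token is a single command on PBS. The output below is illustrative; the secret is shown only once, so capture it for the PVE side. The token name pve-prod is just the example used throughout this article.
cr0x@server:~$ proxmox-backup-manager user generate-token backup@pbs pve-prod
{
  "tokenid": "backup@pbs!pve-prod",
  "value": "01234567-89ab-cdef-0123-456789abcdef"
}
What it means: The full auth ID is backup@pbs!pve-prod; the value is the secret PVE will store.
Decision: Record the secret now, and make sure the ACLs you grant cover the auth ID PVE will actually authenticate as.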
Task 15: On PVE, verify the stored credentials reference the right token/user
cr0x@server:~$ grep -nA8 "pbs-backups" /etc/pve/storage.cfg
12:pbs: pbs-backups
13: datastore vmbackups
14: server pbs01
15: username backup@pbs!pve-prod
16: fingerprint AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99
17: content backup
What it means: PVE authenticates with the token, not an interactive password. The token ID rides in the username field as user@realm!tokenname; the secret itself is kept by PVE separately (under /etc/pve/priv/), not in storage.cfg.
Decision: If you intended token auth and the username has no !tokenname suffix, fix it. If the token name is wrong, correct it and retry the storage scan.
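If the entry has drifted badly, it is sometimes cleaner to remove and re-add the storage from the CLI than to hand-edit the file. A sketch, reusing the example names from above; the password is the token secret from PBS, the fingerprint is a placeholder, and depending on your PVE version pvesm may prompt for the secret instead of accepting it inline.
cr0x@server:~$ pvesm remove pbs-backups
cr0x@server:~$ pvesm add pbs pbs-backups --server pbs01 --datastore vmbackups --username 'backup@pbs!pve-prod' --password 'REDACTED' --fingerprint '<SHA256-FINGERPRINT>' --content backup
What it means: PVE rewrites storage.cfg and stores the secret in the right place for you, cluster-wide.
Decision: Re-adding is also the quickest way to refresh a stale fingerprint after a PBS certificate change.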
Task 16: Confirm you are not accidentally pointing to a PBS “sync target” or a different node
cr0x@server:~$ curl -k -s https://pbs01:8007/api2/json/nodes | jq -r '.data[].node'
pbs01
What it means: You’re speaking to the PBS node you think you are.
Decision: If you see a different node name than expected, you have a DNS/LB/NAT confusion. Fix addressing before anything else.
Three corporate mini-stories from the trenches
1) The incident caused by a wrong assumption: “It’s a typo, obviously”
A mid-sized SaaS company rolled out PBS to replace a patchwork of NFS exports and scripts that had names like
backup_final_v7_really_final.sh. They set up a new datastore in PBS called vmbackups.
The storage team validated it in the PBS web UI. Green lights all around.
The virtualization team added PBS to PVE and got “datastore not found.” They did what many people do under pressure:
they assumed the datastore name was wrong, changed it in PVE to vmbackup, then to VMBackups, then to the mount path.
Same error. The incident channel filled with confident guesses, none of them helpful.
The real issue was simpler and more annoying: the PVE nodes resolved pbs01 to an old IP via a leftover
/etc/hosts entry from a staging test. That IP belonged to a decommissioned PBS VM that had been repurposed for a different test.
The “wrong PBS” didn’t have the datastore. So yes, “datastore not found” was technically correct—just not in the way anyone expected.
Fix was immediate: remove the stale hosts entry, rely on DNS, and re-add the storage to refresh fingerprint and endpoint.
The team wrote a tiny policy afterward: no static host entries for production infrastructure endpoints unless there’s a documented DR plan.
The postmortem takeaway wasn’t “be careful with typos.” It was “verify you are talking to the system you think you are.”
That’s an SRE lesson as old as pager duty.
2) The optimization that backfired: “Let’s put a proxy in front of it”
Another shop standardized on an internal TLS gateway. Everything had to be behind it: metrics, internal apps, admin panels.
Someone decided PBS should also be fronted by the gateway for “centralized certificate management.”
They terminated TLS at the gateway and re-encrypted to PBS with a different certificate.
PVE started intermittently failing to add PBS storages with “datastore not found.”
Not every time. Just enough to be maddening. Some nodes worked; some didn’t. Retries sometimes succeeded.
That’s the kind of failure that makes teams blame cosmic rays.
The root cause was fingerprint pinning and inconsistent backend selection. The gateway occasionally routed a node to a different backend
during maintenance windows, and the presented certificate/fingerprint didn’t match what PVE had stored. Some PVE nodes were pinned to the old fingerprint, others to the new.
The error surfaced as a datastore discovery failure because the API call never completed reliably.
They fixed it by removing the gateway from the PBS path (direct access from PVE to PBS), and instead managing PBS certificates on the PBS host
in a controlled way. “Centralization” wasn’t worth turning backups into a probabilistic event.
This is the kind of optimization that looks tidy on a network diagram and ugly in production. Backups don’t need elegance. They need boring.
3) The boring but correct practice that saved the day: “Least privilege, documented”
A regulated enterprise had a habit that feels old-fashioned: every service integration got a dedicated account, an API token,
a narrowly scoped role, and a one-page runbook with a copy/paste verification command. No shared root credentials. No snowflake setups.
During a PBS upgrade cycle, one datastore was moved to new storage and re-created with the same backing path but a different datastore ID.
A busy admin assumed PVE referenced the path, not the ID. PVE then started throwing “datastore not found” after the maintenance.
It looked like a permissions issue at first, because the account could still authenticate.
The runbook saved time. It had a simple check: list datastores via PBS manager, confirm the ID, and compare it to /etc/pve/storage.cfg.
No debate. No “maybe it’s the firewall.” It was an ID mismatch.
They updated the datastore ID in storage.cfg across the cluster (one change, replicated), ran a test backup, and closed the incident.
The dedicated token and least-privilege ACLs meant they didn’t have to “temporarily” grant admin rights to debug.
Boring practice: verified. Pager: quiet. Compliance: happy. That’s the trifecta.
Common mistakes: symptom → root cause → fix
1) Symptom: PBS UI shows datastore, PVE says “datastore not found”
- Root cause: Wrong datastore ID in PVE (typo, wrong case, used path instead of ID).
- Fix: On PBS run proxmox-backup-manager datastore list. Copy the exact name into the PVE storage config.
2) Symptom: Works from one PVE node, fails from another
- Root cause: Network ACLs, firewall, or split DNS (nodes reach different PBS endpoints).
- Fix: Compare getent hosts pbs01 and nc -vz pbs01 8007 across nodes. Normalize DNS and firewall rules.
3) Symptom: After PBS reinstall or hostname change, everything breaks
- Root cause: TLS fingerprint mismatch pinned in PVE.
- Fix: Fetch the new fingerprint with openssl, then update the PVE PBS storage fingerprint or re-add the storage.
4) Symptom: Logs show “permission denied” but UI says “not found”
- Root cause: PBS returns 404/“not found” style errors for unauthorized access, or PVE masks the underlying permission error.
- Fix: Inspect PVE logs (journalctl -u pvedaemon -u pveproxy) and PBS ACLs (proxmox-backup-manager acl list). Grant the required roles on /datastore/<id>.
5) Symptom: PVE can curl /version, but datastore scan fails
- Root cause: Connectivity is fine; authentication/authorization is not (wrong realm, disabled token, expired user, missing ACL).
- Fix: Verify user realm and token existence on PBS. Use a dedicated token and confirm ACL entries for datastore access.
6) Symptom: PVE storage add works, but backups fail with namespace-related errors
- Root cause: Namespace permissions missing or misconfigured expectations about where backups land.
- Fix: Align namespace usage and ACLs. If you use namespaces, document them and assign roles at the correct namespace path scope.
Joke #2 (short, relevant): “Datastore not found” is sometimes just PBS being shy about saying “you’re not invited.”
Checklists / step-by-step plan (do this, in this order)
Step-by-step: fix PVE ↔ PBS datastore visibility
1) Confirm endpoint identity. From each PVE node, run getent hosts pbs01 and make sure it resolves to the correct IP. If not, fix DNS or remove stale /etc/hosts entries.
2) Confirm network reachability. From each PVE node: nc -vz pbs01 8007. If blocked, fix firewall rules (PBS side and network side) and routing.
3) Confirm the TLS certificate/fingerprint. From PVE, capture the fingerprint via openssl s_client and compare it with the fingerprint in /etc/pve/storage.cfg. If different, update it or re-add the storage.
4) Confirm the datastore ID on PBS. On PBS: proxmox-backup-manager datastore list. Copy the datastore name (the ID), not the path.
5) Confirm PBS is healthy. On PBS: systemctl status proxmox-backup and proxmox-backup-manager datastore status. If the service is down or the datastore is offline, fix PBS first.
6) Confirm identity (realm) and token usage. On PBS: proxmox-backup-manager user list and proxmox-backup-manager user list-tokens backup@pbs. Prefer token-based auth from PVE.
7) Confirm ACL permissions. On PBS: proxmox-backup-manager acl list. Ensure the user/token has roles on /datastore/<id>. If you use namespaces, scope permissions appropriately.
8) Validate PVE configuration. On PVE, inspect /etc/pve/storage.cfg for server, datastore, username (including the token ID, if you use one), and fingerprint.
9) Retry from PVE and read logs immediately. If it still fails, go straight to journalctl -u pvedaemon -u pveproxy and look for “permission denied”, fingerprint, or connection errors. A minimal test backup command follows this list.
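Once the checklist passes, prove the path end to end with a real job rather than trusting a green status icon. A minimal sketch; 101 is a placeholder VMID, snapshot mode assumes the guest and storage support it, and the output is trimmed.
cr0x@server:~$ vzdump 101 --storage pbs-backups --mode snapshot
INFO: starting new backup job: vzdump 101 --storage pbs-backups --mode snapshot
INFO: Starting Backup of VM 101 (qemu)
...
INFO: Finished Backup of VM 101 (00:01:42)
INFO: Backup job finished successfully
What it means: The full chain works: DNS, TCP/8007, TLS, auth, ACLs, datastore ID.
Decision: If this succeeds but scheduled jobs fail later, compare the job’s storage and namespace settings against what you just tested.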
What to avoid (because it wastes hours)
- Don’t rename datastores to “see if it helps.” You’ll just add drift and break references.
- Don’t put PBS behind a fancy proxy unless you can guarantee stable backend identity and certificate behavior.
- Don’t grant full admin rights to “test.” Create a dedicated token with scoped roles; keep the blast radius small.
- Don’t debug from your laptop first. Debug from a PVE node. That’s where the failure occurs.
FAQ
1) Does “datastore not found” always mean the datastore name is wrong?
No. It can mean wrong ID, wrong PBS endpoint, permission denied masked as not-found, namespace scoping, or fingerprint/trust failures.
Confirm ID on PBS and check logs for permission errors.
2) What exactly is the “datastore ID” in PBS?
It’s the datastore name shown by proxmox-backup-manager datastore list.
It’s an API identifier, not a filesystem path.
3) Why does it work in the PBS web UI but not from PVE?
The PBS UI is typically logged in as a privileged local user (often root@pam) and accessed locally or via a different network path.
PVE uses its own credentials and hits the API remotely; different identity, different network, different rules.
4) Do I have to use API tokens, or can I use a password?
You can use a password, but tokens are the better operational choice: they’re revocable, easier to scope, and simpler to rotate.
In production, use a dedicated user plus a dedicated token for PVE.
5) What permissions does PVE need on PBS for backups?
At minimum, rights to perform backups into the target datastore (and often audit/list capability so it can see what exists).
The exact role names vary by your policy; the principle is: grant only what PVE needs on /datastore/<id>.
6) Can a firewall cause “datastore not found” even if ping works?
Yes. Ping proves almost nothing. PBS needs TCP/8007 end-to-end. Use nc -vz pbs01 8007 or equivalent to prove it.
7) I changed the PBS certificate. Why didn’t PVE recover automatically?
Because PVE pins the PBS fingerprint for safety. That prevents silent man-in-the-middle attacks, but it means you must update the fingerprint
when PBS cert changes.
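Updating the pinned fingerprint can be done from the PVE CLI instead of re-adding the storage. A sketch, with the fingerprint shortened to a placeholder; use the value you captured with openssl or proxmox-backup-manager cert info.
cr0x@server:~$ pvesm set pbs-backups --fingerprint '<NEW-SHA256-FINGERPRINT>'
What it means: PVE now trusts the new PBS certificate for this storage entry, cluster-wide.
Decision: Verify with a storage scan or a small test backup right after; if it still fails, make sure you captured the fingerprint from the PBS host itself and not from an intercepting proxy.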
8) How do namespaces affect datastore visibility?
Namespaces can restrict what a user/token can see or write. A user might access the datastore generally but be blocked from a specific namespace,
making things appear missing. Align namespace ACLs with your backup jobs.
9) Is it safe to edit /etc/pve/storage.cfg by hand?
Yes, if you know what you’re doing and you understand it replicates across the PVE cluster. Prefer the GUI for routine changes,
but for incident response, a careful manual edit (with change control) is sometimes the fastest fix.
10) Why does the error show up only during “Add storage” but backups worked before?
Adding storage triggers discovery/list operations that might require additional permissions compared to a previously cached configuration.
Also, certificates and DNS can change over time; the stored fingerprint or resolved IP may now differ.
Conclusion: practical next steps
“Datastore not found” is a visibility problem, not a mystical storage curse. Treat it like any other production integration:
verify the endpoint, verify the transport, verify the identity, verify the object name.
- From a PVE node: prove pbs01:8007 is reachable and the certificate is what you expect.
- From PBS: confirm the datastore ID via proxmox-backup-manager datastore list and that it’s online.
- From PBS: verify the user/token exists and ACLs include the datastore (and namespace, if used).
- From PVE: align /etc/pve/storage.cfg with the correct server, datastore, username/token, and fingerprint.
- After the fix: run a test backup and immediately read logs if anything looks off. Quiet systems are earned, not assumed.
The best outcome is not “it works now.” It’s “the next time it breaks, the on-call can prove why in five minutes.”
Write down the datastore IDs, the expected fingerprints, the firewall rule, and the token owner. Make your future self slightly less angry.