Ubuntu 24.04: Cron Runs Manually but Not on Schedule — PATH/Env Gotchas and Fixes

You run the script by hand. It behaves. You schedule it in cron. It vanishes into the night like a coworker when the on-call phone rings.

On Ubuntu 24.04 this usually isn’t “cron is broken.” It’s that cron is ruthlessly honest: it runs your command with a minimal environment, a non-interactive shell, different working directory assumptions, and no patience for hand-wavy “it works on my terminal.” Let’s make cron boring again.

What changes when cron runs your command

When you run a command manually you bring along a lot of invisible baggage: your interactive shell, your dotfiles, your PATH tweaks, your current directory, your loaded SSH agent, maybe your Kerberos ticket, and that one exported variable from last week you forgot existed.

Cron brings none of that. Cron’s worldview is closer to: “Here is a time. Here is a command string. I will run it with a small set of environment variables and I will not guess what you meant.” That’s not cruelty. That’s reliability.

On Ubuntu, cron typically runs using /bin/sh (dash), not bash, unless you specify otherwise. It runs non-interactively. It does not source your ~/.bashrc. The default PATH is small (often /usr/bin:/bin plus a few sbin paths depending on context). And if you relied on relative paths, cron will happily run from your home directory or / depending on how the job is invoked, and your relative paths will point into the void.

Operational rule: treat cron like a tiny container with no personality. If your job needs something, declare it explicitly.
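
A minimal sketch of what “declare it explicitly” looks like in a user crontab; the schedule, script path, and log location are placeholders:

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=""
# m h  dom mon dow   command
*/5 * * * * /home/cr0x/bin/report.sh >>/home/cr0x/cron-logs/report.log 2>&1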

Fast diagnosis playbook

When you’re on-call, you don’t have time for a philosophical debate with cron. You need a shortest-path to truth. Here’s the order that tends to find the bottleneck fastest.

1) Prove cron tried to run it

  • Check cron service health (systemd status).
  • Check logs for the exact minute the job should have fired.
  • Confirm you edited the right crontab (user vs root vs /etc/cron.*).

2) Capture the environment and stderr/stdout

  • Redirect output to a log file with timestamps.
  • Dump env to a file when the job runs.
  • Use absolute paths for everything: interpreter, script, binaries, files.

3) Eliminate PATH/shell differences

  • Set a known PATH in the crontab (or at top of script).
  • Force the shell to bash if you wrote bash-isms.
  • Use /usr/bin/env carefully (cron’s PATH may not include what you expect).

4) Check permissions and identity

  • Confirm the job runs as the user you think.
  • Check group membership and file permissions.
  • Watch for silent sudo failures (cron has no TTY).

5) Check time and scheduling specifics

  • Time zone mismatch (system vs user expectations).
  • DST weirdness (jobs around 02:00 are cursed twice a year).
  • Cron syntax mistake: day-of-month vs day-of-week logic.

Interesting facts and historical context

  1. Cron’s “minimal environment” is a feature, not a bug. Early Unix automation assumed scripts should declare dependencies explicitly, because login shells were inconsistent across users.
  2. Ubuntu’s /bin/sh is dash, not bash, for speed. That decision has been around for years and continues to surprise scripts that “worked fine” interactively.
  3. Traditional cron predates systemd by decades. Cron’s mental model is “run a command at a time,” while systemd timers add “run with dependency ordering and journald logging.”
  4. Vixie Cron shaped the modern cron ecosystem. Many Linux distros inherited its behavior, including the “MAILTO” output delivery pattern.
  5. Cron’s schedule format is intentionally compact. It optimizes for human typing, not human debugging; that’s why mistakes like swapped fields are so common.
  6. Historically, cron logged to syslog. On modern Ubuntu, syslog may still exist, but journald is often the first place you’ll actually find the record.
  7. There are multiple cron entry points. User crontabs, /etc/crontab, and the /etc/cron.* directories behave differently (notably: user field in system crontab).
  8. Anacron exists because laptops sleep. “Run daily tasks even if the machine was off” solved a real 1990s desktop problem and still matters for intermittent servers and VMs.
  9. Some cron implementations support seconds; classic cron doesn’t. If you need sub-minute scheduling, you’re already outside cron’s comfort zone.

Practical tasks (commands, outputs, decisions)

Below are field-tested checks. Each one includes a realistic sample output and what decision you make from it. Run them in order when you can; cherry-pick when you can’t.

Task 1: Confirm the cron service is running

cr0x@server:~$ systemctl status cron
● cron.service - Regular background program processing daemon
     Loaded: loaded (/usr/lib/systemd/system/cron.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-12-29 08:11:02 UTC; 3h 14min ago
       Docs: man:cron(8)
   Main PID: 742 (cron)
      Tasks: 1 (limit: 18639)
     Memory: 2.2M
        CPU: 1.023s
     CGroup: /system.slice/cron.service
             └─742 /usr/sbin/cron -f

What it means: If it’s not active (running), your job never had a chance. If it’s running, move on.

Decision: If inactive, start/enable it. If active, the failure is inside scheduling, environment, permissions, or your script.

Task 2: Verify you edited the right crontab

cr0x@server:~$ crontab -l
# m h  dom mon dow   command
*/5 * * * * /home/cr0x/bin/report.sh

What it means: This is the current user’s crontab. Root’s crontab is separate.

Decision: If you intended root, run sudo crontab -l. If you intended system-wide, check /etc/crontab and /etc/cron.d.

Task 3: Check the job’s minute in logs (journald)

cr0x@server:~$ sudo journalctl -u cron --since "2025-12-29 10:00" --until "2025-12-29 10:10"
Dec 29 10:05:01 server CRON[18342]: (cr0x) CMD (/home/cr0x/bin/report.sh)
Dec 29 10:05:01 server CRON[18341]: (CRON) info (No MTA installed, discarding output)

What it means: Cron did run the command at 10:05. It also discarded output because there’s no mail transport agent (MTA).

Decision: If there’s no “CMD” line at the expected time, you have a scheduling/installation problem. If it ran, you now need to capture stdout/stderr yourself.

Task 4: Check syslog-style cron entries (if rsyslog is used)

cr0x@server:~$ grep -i cron /var/log/syslog | tail -n 5
Dec 29 10:05:01 server CRON[18342]: (cr0x) CMD (/home/cr0x/bin/report.sh)
Dec 29 10:00:01 server CRON[18112]: (cr0x) CMD (/home/cr0x/bin/report.sh)

What it means: Same truth as journald, just from syslog. Some environments keep both; some don’t.

Decision: If /var/log/syslog has nothing cron-related, don’t panic; rely on journald output.

Task 5: Add explicit logging to the crontab entry

cr0x@server:~$ crontab -l | tail -n 3
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * /home/cr0x/bin/report.sh >>/var/log/report.log 2>&1

What it means: You force the shell and PATH, and you retain output.

Decision: If the job “does nothing,” this creates an artifact. If the log stays empty, it’s not executing (or can’t open the log path).

Task 6: Confirm the log file is writable by that user

cr0x@server:~$ ls -l /var/log/report.log
-rw-r--r-- 1 root root 0 Dec 29 10:04 /var/log/report.log

What it means: Your user can’t append to a root-owned file in /var/log by default.

Decision: Log to a user-owned path (like ~/cron-logs) or set up correct permissions/logrotate. Don’t “fix” this with chmod 777 unless you enjoy incident reviews.

Task 7: Capture the cron runtime environment

cr0x@server:~$ mkdir -p /home/cr0x/cron-debug
cr0x@server:~$ crontab -l | tail -n 2
*/5 * * * * env | sort > /home/cr0x/cron-debug/env.txt
*/5 * * * * /usr/bin/date -Is >> /home/cr0x/cron-debug/ticks.log 2>&1
cr0x@server:~$ cat /home/cr0x/cron-debug/env.txt
HOME=/home/cr0x
LANG=C.UTF-8
LOGNAME=cr0x
PATH=/usr/bin:/bin
PWD=/home/cr0x
SHELL=/bin/sh
USER=cr0x

What it means: With no SHELL or PATH lines in this crontab, cron’s PATH stays minimal and the shell is /bin/sh.

Decision: If your script relies on /usr/local/bin, ~/.local/bin, pyenv, rbenv, nvm, conda, etc., you now know why it fails on schedule.

Task 8: Prove “command not found” is the failure

cr0x@server:~$ tail -n 5 /home/cr0x/cron-logs/report.log
/home/cr0x/bin/report.sh: line 7: aws: command not found

What it means: Your interactive shell can find aws (it lives under ~/.local/bin), but cron’s minimal PATH can’t.

Decision: Call it by absolute path (for example /home/cr0x/.local/bin/aws) or add that directory to PATH in the crontab or the script. Also verify the tool is actually installed where you think it is.

Task 9: Locate binaries as cron would

cr0x@server:~$ command -v jq
/usr/bin/jq
cr0x@server:~$ /usr/bin/env -i PATH=/usr/bin:/bin command -v jq
/usr/bin/env: ‘command’: No such file or directory
cr0x@server:~$ /usr/bin/env -i PATH=/usr/bin:/bin sh -c 'command -v jq; command -v aws'
/usr/bin/jq

What it means: Under a minimal environment, even command isn’t a standalone binary you can exec; it’s a shell builtin, so wrap the lookup in sh -c. With cron’s default PATH, jq (in /usr/bin) is found and aws (installed under ~/.local/bin) is not.

Decision: Test your lookups under a scrubbed environment and assume nothing. If you need aws from ~/.local/bin, extend PATH or call it by absolute path.

Task 10: Run the script as cron would (non-interactive, minimal env)

cr0x@server:~$ /usr/bin/env -i HOME=/home/cr0x USER=cr0x LOGNAME=cr0x PATH=/usr/bin:/bin SHELL=/bin/sh /bin/sh -c '/home/cr0x/bin/report.sh' ; echo $?
127

What it means: Exit code 127 is the classic “command not found.” This is the clean reproduction you want.

Decision: Fix dependencies (install missing packages, use absolute paths, or set PATH). Don’t debug this inside your interactive shell anymore; it’s lying to you.

Task 11: Confirm the script has a valid shebang and is executable

cr0x@server:~$ head -n 1 /home/cr0x/bin/report.sh
#!/usr/bin/env bash
cr0x@server:~$ ls -l /home/cr0x/bin/report.sh
-rwxr-xr-x 1 cr0x cr0x 1842 Dec 29 09:55 /home/cr0x/bin/report.sh

What it means: It’s executable and has a shebang. Note that /usr/bin/env searches PATH for bash; cron’s default PATH on Ubuntu (/usr/bin:/bin) does include bash, but an env-based shebang is one more moving part, so be explicit if you’re hardening.

Decision: For production cron jobs, prefer #!/bin/bash if you rely on bash features. It’s boring and stable.

Task 12: Verify the schedule is what you think it is

cr0x@server:~$ sudo grep -n . /var/spool/cron/crontabs/cr0x | sed -n '1,5p'
1:# DO NOT EDIT THIS FILE - edit the master and reinstall.
2:# m h  dom mon dow   command
3:5 * * * * /home/cr0x/bin/report.sh

What it means: This runs at minute 5 of every hour, not every 5 minutes. People misread this constantly.

Decision: Fix the schedule. If you meant every 5 minutes, use */5 * * * *.

Task 13: Check timezone assumptions

cr0x@server:~$ timedatectl
               Local time: Mon 2025-12-29 11:25:41 UTC
           Universal time: Mon 2025-12-29 11:25:41 UTC
                 RTC time: Mon 2025-12-29 11:25:41
                Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: active

What it means: Your server is on UTC. If humans are expecting local time, the job will “miss” by hours and everyone will blame cron.

Decision: Decide whether you want system time in UTC (usually yes) and adjust expectations, or move the schedule to match UTC.

Task 14: Detect “sudo needs a tty” failures

cr0x@server:~$ tail -n 10 /home/cr0x/cron-logs/report.log
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required

What it means: Cron has no interactive TTY, so password prompts fail.

Decision: Stop using interactive sudo in cron. Use a root-owned cron, a restricted sudoers rule for a specific command, or redesign the job so it doesn’t need privilege escalation.

Task 15: Confirm file permissions and group membership at runtime

cr0x@server:~$ id cr0x
uid=1000(cr0x) gid=1000(cr0x) groups=1000(cr0x),27(sudo),113(lxd)
cr0x@server:~$ ls -l /data/reports
drwxr-x--- 2 root analytics 4096 Dec 29 08:00 /data/reports

What it means: cr0x is not in the analytics group, so writing to /data/reports will fail.

Decision: Fix group membership and permissions properly, or write output somewhere the job user can access.

Task 16: Verify cron allow/deny rules aren’t blocking the user

cr0x@server:~$ sudo ls -l /etc/cron.allow /etc/cron.deny
ls: cannot access '/etc/cron.allow': No such file or directory
-rw-r--r-- 1 root root 0 Apr  5  2024 /etc/cron.deny

What it means: If /etc/cron.deny exists and contains the username, cron will refuse that user’s crontab.

Decision: Ensure the user is allowed. In many environments this isn’t used, but when it is, it’s a quiet footgun.

PATH and environment gotchas (and fixes)

If your job runs manually but not on schedule, start by assuming PATH/env. It’s the highest hit rate cause, and it’s usually easy to prove.

Cron doesn’t read your dotfiles

Your interactive shell might set PATH in ~/.profile, ~/.bashrc, or a corporate “shell init” that pulls in language runtimes. Cron does not source those files. It’s not being rude; it’s being deterministic.

Fix: Put required environment variables either:

  • at the top of the crontab (applies to every job below it, so it suits shared settings; per-job overrides can go inline on the command line), or
  • inside the script (best when you want the script to be portable outside cron).

Set PATH explicitly (and keep it boring)

Most production cron jobs should set a known PATH and then call binaries with absolute paths for the critical stuff. Yes, both. This is not redundant; it’s defense-in-depth against future package moves, admin tweaks, and “helpful” profile scripts.

Recommended PATH:

  • /usr/local/sbin:/usr/local/bin for locally-installed admin tools
  • /usr/sbin:/usr/bin:/sbin:/bin for system packages

If you rely on ~/.local/bin, add it explicitly. But ask yourself why a production cron job depends on per-user pip installs. That’s not illegal; it’s just fragile.
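
If you go the in-script route, a defensive header can look like this sketch; the jq path is just an example of pinning a critical binary:

#!/bin/bash
set -euo pipefail
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export LANG=C.UTF-8 LC_ALL=C.UTF-8
JQ=/usr/bin/jq                     # absolute path for the critical binary
"$JQ" --version >/dev/null         # fail early and loudly if it is missing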

Locale and encoding surprises

In interactive sessions you might have a rich locale; in cron you might get LANG=C or C.UTF-8. If your script parses dates, sorts strings, or processes non-ASCII text, this matters.

Fix: Set LANG and LC_ALL explicitly in the script. If you process UTF-8, decide that up front.

Environment variables that are present manually but absent in cron

Classic examples:

  • SSH_AUTH_SOCK (your SSH agent): available in your terminal; absent in cron.
  • AWS_PROFILE / AWS_REGION: in your shell; missing in cron.
  • PYTHONPATH / virtualenv activation: present manually; not in cron.
  • HTTP_PROXY / NO_PROXY: present on desktops; absent on servers.

Fix: Inject required vars in the crontab line or inside the script. Better: don’t rely on agent state or interactive auth for scheduled jobs; use dedicated credentials with least privilege.
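
If a job genuinely needs such variables, inject them explicitly. The sketch below sets them inline on the crontab line; the profile name and proxy host are hypothetical. This works because cron hands the whole command string to the shell, so assignment prefixes behave the same as on a command line:

15 2 * * * AWS_PROFILE=report-export HTTPS_PROXY=http://proxy.internal:3128 /home/cr0x/bin/report.sh >>/home/cr0x/cron-logs/report.log 2>&1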

Joke #1: Cron doesn’t “forget” your PATH. It never learned it in the first place.

Working directory, relative paths, and umask

Relative paths are a fine convenience until they become a production strategy. Cron makes them hurt because it doesn’t promise a working directory that matches your terminal expectations.

Always use absolute paths in cron jobs

If your script does ./bin/tool, ../data, or writes to logs/output.log, it’s effectively a lottery ticket: sometimes it works (when you test from the script’s directory), sometimes it fails (when cron runs it elsewhere).

Fix options:

  • Use absolute paths everywhere.
  • Or cd to a known directory at the start of the script (cd /opt/app || exit 1).

Umask differences change file permissions

Your interactive shell might set a umask that produces group-writable files; cron might use something else. The symptoms are subtle: files get created but other processes can’t read them, or a downstream job fails.

Fix: Set umask explicitly in the script. Decide what permissions you actually need instead of inheriting vibes.
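
A script prologue that pins down both the working directory and the umask might look like this sketch; the /opt/app layout and the tool name are assumptions for illustration:

#!/bin/bash
set -euo pipefail
cd /opt/app || exit 1    # fail fast if the deploy path moved
umask 027                # new files: owner rw, group read, others nothing
/opt/app/bin/tool --input /opt/app/data/in.csv --output /opt/app/data/out.csv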

HOME and tilde expansion

Cron usually sets HOME, but don’t bet the quarter on it. Also, using ~ inside crontab entries can be inconsistent depending on the shell and quoting. Your script should not require tilde expansion to find critical files.

Fix: Use /home/username (or better: a dedicated app directory) in cron. Boring wins.

Permissions, users, groups, and sudo in cron

The fastest way to get a cron job to “work” is to run it as root. The fastest way to get an incident later is to keep it that way.

Know which user runs the job

There are several places to define cron jobs:

  • User crontab via crontab -e: runs as that user.
  • Root crontab via sudo crontab -e: runs as root.
  • /etc/crontab and /etc/cron.d/*: include an explicit user field.
  • /etc/cron.hourly, daily, weekly, monthly: scripts run as root via run-parts (driven by /etc/crontab, or by anacron where it’s installed), and the execution environment differs again.
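
For comparison, a system-style entry in /etc/cron.d carries the user as a sixth field. This is a sketch with a made-up filename and schedule; note that cron skips files in /etc/cron.d whose names contain dots:

# /etc/cron.d/report   (owned by root, mode 0644)
SHELL=/bin/bash
PATH=/usr/sbin:/usr/bin:/sbin:/bin
17 3 * * * cr0x /home/cr0x/bin/report.sh >>/home/cr0x/cron-logs/report.log 2>&1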

Don’t use sudo inside cron unless you’ve designed for it

sudo in cron fails for predictable reasons: no TTY, password prompts, environment resets, and policy restrictions. If you truly need privilege, prefer:

  • a root-owned cron entry calling a root-owned script with tight permissions, or
  • a minimal sudoers rule allowing a specific command without a password, with explicit paths and no wildcards.

The “sudoers wildcard” trick will eventually be used as a ladder. You might not like who climbs it.
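
If you do take the sudoers route, keep the rule as narrow as this sketch; the script path is hypothetical, and the drop-in should be edited with visudo -f so a syntax error can’t lock you out:

# /etc/sudoers.d/report-rotate
cr0x ALL=(root) NOPASSWD: /usr/local/sbin/rotate-reports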

Group membership and supplementary groups

When cron runs as a user, it initializes that user’s groups fresh for each job. So if you added the user to a group recently, cron usually sees the new membership before your still-open login shell does; your manual testing needs a new session to match reality. That mismatch cuts both ways and creates confusing “works for me” behavior.

Fix: Validate access with sudo -u user and check permissions explicitly. Don’t infer.
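
One way to check as a fresh process, which is closer to what cron sees than a possibly stale login shell; the analytics membership shown here assumes the user was just added to that group:

cr0x@server:~$ sudo -u cr0x id -nG
cr0x sudo lxd analytics
cr0x@server:~$ sudo -u cr0x test -w /data/reports && echo writable || echo not-writable
writable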

Shell differences: /bin/sh vs bash, and why it matters

Many “cron mysteriously fails” cases are actually “script uses bash syntax, but cron runs it under sh/dash.” The failures can be silent if you never capture stderr.

Common bash-isms that break under sh

  • [[ ... ]] tests instead of [ ... ]
  • Arrays
  • source instead of .
  • Process substitution: <( ... )
  • set -o pipefail behavior differences

Fix: Either write POSIX shell scripts and run them under /bin/sh, or declare bash and write bash. Half-and-half is how you get 3 a.m. pages.
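
A ten-second demonstration of the difference; the file being tested is arbitrary:

cr0x@server:~$ sh -c '[[ -f /etc/os-release ]] && echo ok'
sh: 1: [[: not found
cr0x@server:~$ bash -c '[[ -f /etc/os-release ]] && echo ok'
ok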

Force the shell in crontab (when appropriate)

Add at top of crontab:

  • SHELL=/bin/bash

But don’t stop there. Still use a correct shebang in the script. The script should be runnable outside cron, and the interpreter should be explicit.

Logging: where cron output goes on Ubuntu 24.04

Cron has two ways to tell you what happened: logs about execution, and output from your job. People mix these up, then accuse cron of being “silent.” Cron is not silent; you just didn’t hook up the microphone.

Execution logs: journald and/or syslog

On Ubuntu 24.04, journalctl -u cron is often the quickest truth source: it shows that cron started a command and which user it ran as. It won’t show your script output unless you redirect it.

Job output: mail (if you have an MTA) or your redirections

Traditionally, cron mails stdout/stderr to the user (or to MAILTO). But many servers don’t have an MTA installed. In that case you’ll see the “No MTA installed, discarding output” message. That’s your cue: redirect output yourself.

My strong opinion: production cron jobs should always redirect output to a log file or to syslog/journald explicitly. Mail is fine as a second channel, not the primary one.

Prefer structured logs when you can

If your job is significant (data pipelines, backups, billing exports), do better than dumping text. Emit timestamps, exit codes, and a single-line summary. If you can, ship logs to your central logging system. But even without that, a local log with rotation is a massive upgrade over “I think it ran.”
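
One lightweight pattern is a wrapper that appends a single summary line per run and mirrors it to the journal with logger; this is a sketch, and the paths and tag are placeholders:

#!/bin/bash
set -u
start=$(date -Is)
/home/cr0x/bin/report.sh >>/home/cr0x/cron-logs/report.log 2>&1
rc=$?
echo "$(date -Is) job=report start=$start rc=$rc" >>/home/cr0x/cron-logs/summary.log
logger -t report-cron "finished rc=$rc"
exit "$rc"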

Cron vs systemd timers: when to switch

Cron is still a workhorse. But on Ubuntu 24.04, systemd timers are often a better operational fit when you care about dependency ordering, sandboxing, and unified logging.

Stick with cron when

  • The job is simple and local.
  • You need compatibility with older patterns and existing fleet conventions.
  • You already have good logging and failure detection.

Prefer systemd timers when

  • You want logs in journald automatically.
  • You need retries, dependencies (network-online), or resource controls.
  • You want a clear unit you can systemctl status and monitor.

One useful mental model: cron is “fire and forget.” systemd timers are “declare intent and observe state.” In a production environment, observability tends to win.

A minimal timer pattern (conceptual)

Even if you stay on cron today, learn the timer approach. It’s an escape hatch when cron’s environment issues turn into recurring toil. And yes, you can still run the same script; you just get better tooling around it.
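
As a sketch only (unit names, paths, and the schedule are placeholders), the same script wrapped in a service plus timer could look like this:

# /etc/systemd/system/report.service
[Unit]
Description=Nightly report export
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
User=cr0x
ExecStart=/home/cr0x/bin/report.sh

# /etc/systemd/system/report.timer
[Unit]
Description=Run report.service daily

[Timer]
OnCalendar=*-*-* 02:30:00
Persistent=true
RandomizedDelaySec=5m

[Install]
WantedBy=timers.target

Enable it with sudo systemctl daemon-reload and sudo systemctl enable --now report.timer, then inspect it with systemctl list-timers and journalctl -u report.service. Persistent=true is what gives you the “run missed jobs after downtime” behavior that plain cron lacks.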

Three corporate mini-stories from production

Mini-story 1: The incident caused by a wrong assumption

A finance-adjacent team had a nightly export job: collect numbers, format a CSV, drop it into an SFTP folder. The job worked flawlessly when the developer ran it by hand. It also worked in staging. In production, it “randomly” failed twice a week.

The wrong assumption was small: the script used ~/exports and expected the working directory to be the repo root. When someone tested manually, they always ran it from the repo directory and in an interactive shell that set up PATH and a couple of variables.

In cron, the job ran with PWD=/home/user and SHELL=/bin/sh. The tilde expansion behaved differently depending on quoting, and a relative path pointed to a directory that didn’t exist. The script failed early, wrote an error to stderr, and cron tried to mail it. There was no MTA. So the error got discarded. The only “signal” was missing files, discovered by humans the next morning.

The fix was boring and decisive: absolute paths, explicit PATH and locale, and output redirection to a log file. Then they added a simple “last successful run” marker file and an alert if it wasn’t updated by 02:00. After that, the job either succeeded or failed loudly, which is the whole point.

Mini-story 2: The optimization that backfired

An infrastructure group tried to speed up a maintenance cron job that pruned old artifacts. They replaced a careful script with a faster one-liner using find piped into xargs -P for parallel deletes. It looked great in a quick test. It also reduced the runtime from minutes to seconds in a small directory.

In production, the preventable failure mode showed up: spaces and newlines in filenames. The parallel delete pipeline occasionally targeted the wrong path. Most of the time it was harmless. Once, it deleted a set of “current” artifacts because the selection logic got mangled.

The other backfire was resource contention. The parallel deletes created bursts of metadata operations that spiked I/O latency for unrelated workloads. It didn’t crash anything, but it triggered timeouts and retries in a dependent service at the exact minute the prune job ran. Classic “we optimized one thing and paid elsewhere.”

They rolled back to a slower but safe approach: find -print0 with null delimiters, no dangerous parsing, and a deliberate rate limit. It wasn’t glamorous. It stopped breaking production. The postmortem included the real lesson: if it’s a cron job that deletes things, treat “fast” as a requirement you earn, not a default.

Mini-story 3: The boring but correct practice that saved the day

A data platform team ran a daily ETL import via cron. It had one unsexy pattern: every run wrote a timestamped log and a final single-line summary that included runtime, record count, and exit code. It also wrote a “.ok” marker only after all steps succeeded. And it ran with set -euo pipefail in bash, with carefully handled exceptions.

One morning, downstream dashboards were missing fresh data. People immediately suspected the warehouse cluster. But the cron job logs told the story in under a minute: the job started, resolved DNS, then failed at a specific API call with a clear error about TLS validation. The exit code was non-zero, and the marker file wasn’t updated. No mystery.

The cause turned out to be a certificate change on an upstream endpoint and an outdated CA bundle in a container image used by the script. The team updated the bundle, reran the job, and the marker flipped to green. The incident was contained because the job was observable and deterministic.

There was no hero moment. Just a team that treated cron like production, not like a magical calendar with vibes.

Common mistakes: symptoms → root cause → fix

This is the section you’ll want open during a troubleshooting session. The patterns repeat across companies, teams, and levels of seniority.

1) Symptom: “It runs in terminal, but cron doesn’t create output files.”

Root cause: Relative paths or unexpected working directory. The script writes to ./output but cron runs from HOME or /.

Fix: Use absolute paths or cd at the top of the script. Add logging showing pwd and ls of expected directories.

2) Symptom: “Cron log shows CMD ran, but nothing happens.”

Root cause: Output was discarded because there’s no MTA, or you never redirected output.

Fix: Redirect stdout/stderr to a log file. If you want mail, install/configure an MTA, but don’t rely on it as the only channel.

3) Symptom: “/bin/sh: 1: source: not found” or “[[: not found”

Root cause: Script uses bash features but runs under /bin/sh (dash).

Fix: Add bash shebang and/or set SHELL=/bin/bash in crontab. Or rewrite script to POSIX sh.

4) Symptom: “command not found” for tools you definitely have installed

Root cause: PATH in cron is minimal; your interactive PATH is larger.

Fix: Set PATH explicitly. Prefer absolute binary paths for critical commands.

5) Symptom: job runs but behaves differently (sorting, parsing, regex)

Root cause: Locale differences (LANG/LC_ALL) between interactive sessions and cron.

Fix: Set locale explicitly in script. Don’t parse human-formatted output; use machine-readable formats where possible.

6) Symptom: “Permission denied” only under cron

Root cause: Different user, missing group membership, umask differences, or writing to root-owned directories like /var/log.

Fix: Ensure correct ownership/permissions and group membership. Write logs to user-owned paths or set up a proper log directory with controlled permissions.

7) Symptom: sudo fails in cron

Root cause: sudo prompts for password; cron has no TTY. Or sudo policy forbids it.

Fix: Don’t use interactive sudo. Run as the right user (root if needed) or create minimal sudoers entries for specific commands.

8) Symptom: job “doesn’t run” around DST changes

Root cause: DST can skip or duplicate local times; schedules near 02:00 can be missed or repeated.

Fix: Keep servers in UTC. If you must use local time, avoid fragile times and add idempotency to the job.

Joke #2: Daylight saving time: because sometimes your cron job deserves to fail twice for the same mistake.

Checklists / step-by-step plan

Checklist A: Make a cron job production-grade in 15 minutes

  1. Use absolute paths for the script and any files it reads/writes.
  2. Set the interpreter explicitly in the script: #!/bin/bash (or #!/usr/bin/python3, etc.).
  3. Set PATH either in crontab or at the top of the script to a known-good value.
  4. Set locale if you parse/sort text: LC_ALL=C.UTF-8 or a deliberate alternative.
  5. Log stdout/stderr with rotation in mind. Start with ~/cron-logs if you don’t have a logging strategy yet.
  6. Exit non-zero on failure and don’t hide errors behind || true unless you’re handling them intentionally.
  7. Add a “success marker” (file or metric) so you can alert on stale runs.
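
A sketch of item 7; the marker path, threshold, and watchdog schedule are arbitrary. The job touches a marker only after full success, and a separate cron entry complains to the journal if the marker goes stale:

# At the very end of the job, after every step has succeeded:
touch /home/cr0x/cron-logs/report.ok

# Watchdog entry in the crontab: alert if the marker is older than ~26 hours.
30 4 * * * [ -z "$(find /home/cr0x/cron-logs/report.ok -mmin -1560 2>/dev/null)" ] && logger -p user.err -t cron-watchdog "report.sh has no successful run in 26h"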

Checklist B: Debug a failing job without guessing

  1. Confirm cron fired: journalctl -u cron.
  2. Redirect output and reproduce the failure in a log file.
  3. Dump environment from cron (env | sort) and compare with your interactive environment (env | sort in a shell).
  4. Run the job with a scrubbed environment using /usr/bin/env -i.
  5. Fix PATH, shell, permissions, working directory in that order.
  6. Only then debug application logic.

Checklist C: Decide whether to move to a systemd timer

  1. If you need dependency ordering (network, mounts), prefer systemd.
  2. If you need consistent logging and easy status checks, prefer systemd.
  3. If you need “run missed jobs after downtime,” consider anacron or systemd with persistent timers.
  4. If it’s simple and stable, cron is fine—just harden the environment.

FAQ

1) Why does my cron job run fine when I paste it into a terminal?

Because your terminal session has a richer environment: PATH, locale, auth agents, and dotfile configuration. Cron runs with a minimal environment and a non-interactive shell. Make dependencies explicit.

2) Where do I see cron logs on Ubuntu 24.04?

Start with journalctl -u cron. Depending on logging configuration, cron may also write to /var/log/syslog. Your script output won’t appear there unless you redirect it.

3) Why am I seeing “No MTA installed, discarding output”?

Cron tried to mail your job’s stdout/stderr but there’s no mail transport agent installed/configured. Redirect output to a file (or install an MTA if mail delivery is part of your ops model).

4) Should I set PATH in crontab or in the script?

If multiple cron jobs share the same environment, setting PATH at the top of the crontab is convenient. If the script might run outside cron (CI, systemd, manual), set PATH inside the script too. In production, I often do both and still use absolute paths for critical binaries.

5) My script uses bash arrays. How do I make cron use bash?

Use a bash shebang (#!/bin/bash) and/or set SHELL=/bin/bash in the crontab. Then capture stderr so you’ll actually see syntax errors if anything still goes wrong.

6) Why does cron fail to access network resources that work manually?

Common causes: missing proxy variables, missing credentials, DNS not ready at boot, or the job runs before a network mount is available. Cron doesn’t do dependency ordering; systemd timers do.

7) How do I test exactly what cron will do without waiting?

Use a scrubbed environment reproduction: /usr/bin/env -i with a minimal set of variables and run the same interpreter cron would. This catches PATH and locale issues fast.

8) Is it safe to write logs to /var/log from a user cron job?

Not by default. /var/log is typically root-owned. Either log to a user-owned directory, or create a dedicated log directory with controlled permissions and logrotate rules. Avoid world-writable logs.

9) My job runs twice sometimes. Is cron duplicating it?

Cron itself won’t intentionally duplicate a single entry, but you might have defined the job in two places (user crontab plus /etc/cron.d, for example). DST can also duplicate local times. Search your cron configuration locations and keep servers on UTC.

10) When should I stop fighting cron and use a systemd timer?

When you need dependency ordering, consistent logging, easy status inspection, resource controls, or “run after downtime” semantics. Cron is fine for simple periodic tasks; systemd is better for production-grade scheduling with observability.

Conclusion: practical next steps

If your Ubuntu 24.04 cron job runs manually but not on schedule, don’t start by rewriting the script. Start by proving what cron did, capturing the environment, and making execution deterministic: absolute paths, explicit PATH, explicit shell, explicit logging.

Then make it observable: keep a log, record exit codes, and add a success marker or alert. Reliability is mostly about removing mystery, not adding cleverness.

One guiding idea worth keeping on a sticky note: “Hope is not a strategy.” — Gene Kranz
