File Logging¶
Log entries can be written to a plain text file in addition to the dashboard. This makes it easy to use standard tools like tail -f, grep, and log shippers (Loki, Datadog, Fluentd, CloudWatch) alongside the dashboard.
Setup with FileLogger¶
Pass a FileLogger instance to loggers= on TaskManager:
from fastapi_taskflow import FileLogger, TaskManager
task_manager = TaskManager(
snapshot_db="tasks.db",
loggers=[FileLogger("tasks.log", log_lifecycle=True)],
)
Setup with the log_file shorthand¶
For a single file logger, use the log_file parameter directly on TaskManager:
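A minimal sketch, assuming log_file accepts the same path string that FileLogger takes:

```python
from fastapi_taskflow import TaskManager

task_manager = TaskManager(
    snapshot_db="tasks.db",
    log_file="tasks.log",
)
```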
Both approaches produce the same output. Use loggers= directly when you need multiple observers or want to combine FileLogger with StdoutLogger. See Observability for the full observer system.
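For instance, writing to both a file and stdout by combining the two observers named on this page:

```python
from fastapi_taskflow import FileLogger, StdoutLogger, TaskManager

task_manager = TaskManager(
    snapshot_db="tasks.db",
    loggers=[FileLogger("tasks.log"), StdoutLogger()],
)
```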
Log format¶
Every task_log() call and retry separator is written to the file. Each line has the format:

[task_id] [task_name] <ISO 8601 timestamp> <message>

For example:
[abc12345] [send_email] 2026-01-01T12:00:00 Connecting to SMTP server
[abc12345] [send_email] 2026-01-01T12:00:00 Sending to user@example.com
[abc12345] [send_email] --- Retry 1 ---
[abc12345] [send_email] 2026-01-01T12:00:02 Connecting to SMTP server
Lifecycle events¶
Set log_lifecycle=True to also write a line for each task status transition (RUNNING, SUCCESS, FAILED, INTERRUPTED):
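For example, reusing the setup from above:

```python
from fastapi_taskflow import FileLogger, TaskManager

task_manager = TaskManager(
    snapshot_db="tasks.db",
    loggers=[FileLogger("tasks.log", log_lifecycle=True)],
)
```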
Output:
[abc12345] [send_email] 2026-01-01T12:00:00 -- RUNNING
[abc12345] [send_email] 2026-01-01T12:00:00 Connecting to SMTP server
[abc12345] [send_email] 2026-01-01T12:00:01 -- SUCCESS
Log level filtering¶
Set min_level= to suppress entries below a certain severity:
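A minimal sketch that drops debug and info entries:

```python
from fastapi_taskflow import FileLogger, TaskManager

# Only "warning" and "error" entries reach the file
task_manager = TaskManager(
    snapshot_db="tasks.db",
    loggers=[FileLogger("tasks.log", min_level="warning")],
)
```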
Valid levels from lowest to highest: "debug", "info", "warning", "error".
FileLogger parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| path | str | required | File path to write to. Created if it does not exist. |
| max_bytes | int | 10485760 | Maximum file size (10 MB) before rotating. Ignored in "watched" mode. |
| backup_count | int | 5 | Number of rotated backup files to keep. Ignored in "watched" mode. |
| mode | str | "rotate" | "rotate" for automatic rotation; "watched" for external rotation. |
| log_lifecycle | bool | False | Write a line on each task status transition. |
| min_level | str | "info" | Minimum log level to write. |
Multi-instance deployments¶
Why "rotate" mode is not safe with multiple processes¶
"rotate" mode uses Python's RotatingFileHandler. When the file reaches max_bytes, the handler does this sequence:
- Close
tasks.log - Rename
tasks.logtotasks.log.1 - Open a new
tasks.log - Write the line
These are three separate OS calls with no lock across processes. If two processes hit the size limit at the same time, they both try to rename tasks.log to tasks.log.1. One rename wins silently and the other process's backup is overwritten.
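The rename-based rotation is easy to observe in a single process with the stdlib handler directly; a small maxBytes forces it to trigger quickly:

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "tasks.log")

# Tiny maxBytes so rotation triggers after a few lines
handler = RotatingFileHandler(path, maxBytes=200, backupCount=2)
logger = logging.getLogger("rotate-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(20):
    logger.info("line %d of demo output", i)
handler.close()

# tasks.log.1 was created by renaming an earlier tasks.log
files = sorted(os.listdir(tmp))
```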
Safe strategies¶
Separate file per instance (recommended)
Give each instance its own path. Each file rotates independently with no coordination needed:
# Instance 1
task_manager = TaskManager(
snapshot_db="tasks.db",
loggers=[FileLogger("tasks-1.log", log_lifecycle=True)],
)
# Instance 2
task_manager = TaskManager(
snapshot_db="tasks.db",
loggers=[FileLogger("tasks-2.log", log_lifecycle=True)],
)
External rotation with "watched" mode
Set mode="watched" and let an external tool such as logrotate manage rotation. WatchedFileHandler does not rotate at all. On every write it checks whether the file it has open still matches the inode and device of the path on disk. If logrotate has replaced the file, the handler detects the mismatch and reopens the path before writing.
task_manager = TaskManager(
snapshot_db="tasks.db",
loggers=[
FileLogger(
"/var/log/myapp/tasks.log",
mode="watched",
log_lifecycle=True,
)
],
)
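The reopen-on-rotation behavior comes from the stdlib's WatchedFileHandler and can be demonstrated directly (Unix-only, since it relies on comparing device and inode numbers):

```python
import logging
import os
import tempfile
from logging.handlers import WatchedFileHandler

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "tasks.log")

handler = WatchedFileHandler(path)
logger = logging.getLogger("watched-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("before rotation")
os.rename(path, path + ".1")   # simulate logrotate moving the file aside
logger.info("after rotation")  # handler sees a new inode and reopens path
handler.close()

with open(path) as f:
    fresh = f.read()
```

The line written after the rename lands in a freshly created tasks.log, while the earlier line stays in the renamed file.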
Setting up logrotate¶
Create /etc/logrotate.d/myapp:
/var/log/myapp/tasks.log {
daily
rotate 7
compress
missingok
notifempty
create 0644 www-data www-data
postrotate
kill -HUP $(cat /var/run/myapp.pid)
endscript
}
Key directives:
| Directive | What it does |
|---|---|
daily |
Rotate once a day (also: weekly, monthly, size 100M) |
rotate 7 |
Keep 7 backup files |
compress |
Gzip old files |
create 0644 www-data www-data |
Create a new empty file after rotation with the right owner |
postrotate |
Signal your app after rotation (optional with WatchedFileHandler) |
The postrotate block is optional. WatchedFileHandler detects the replaced file on the next write regardless.
If you do not have a PID file, use pkill:
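For example, matching the process by name in the postrotate block (the pattern "myapp" is illustrative; adjust it to whatever your process command line contains):

```
postrotate
    pkill -HUP -f myapp
endscript
```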
Testing the config:
# Dry run
logrotate --debug /etc/logrotate.d/myapp
# Force a rotation now
sudo logrotate --force /etc/logrotate.d/myapp
Multiple hosts (Redis)¶
Each host writes its own file. "rotate" mode is safe here because no two processes share a file path. Use a log shipper to aggregate them:
from fastapi_taskflow import FileLogger, RedisBackend, TaskManager  # RedisBackend import path assumed

task_manager = TaskManager(
snapshot_backend=RedisBackend("redis://localhost:6379/0"),
loggers=[FileLogger("tasks.log", log_lifecycle=True)],
)
Docker / no logrotate¶
Use separate files per instance and let your log driver handle rotation. Set a unique path per worker using an environment variable:
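A minimal sketch, assuming a WORKER_ID environment variable set per container (the variable name is illustrative):

```python
import os

from fastapi_taskflow import FileLogger, TaskManager

worker_id = os.environ.get("WORKER_ID", "0")
task_manager = TaskManager(
    snapshot_db="tasks.db",
    loggers=[FileLogger(f"tasks-{worker_id}.log", log_lifecycle=True)],
)
```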