
Capture logs for individual jobs

See original GitHub issue

It should be possible to easily inspect logs of a single job that has run.

Option 1

Update the logger formatting to include the task name & id so that backend logs can be easily filtered.
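A minimal sketch of what Option 1 could look like with the standard library, assuming the worker sets a couple of context variables around each job run (the variable names, the filter class, and the integration point are illustrative, not Procrastinate API):

import contextvars
import logging

# Hypothetical context, set by the worker around each job execution
current_task_name = contextvars.ContextVar("current_task_name", default="-")
current_job_id = contextvars.ContextVar("current_job_id", default="-")

class JobContextFilter(logging.Filter):
    """Copy the current job context onto every record passing through the handler."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.task_name = current_task_name.get()
        record.job_id = current_job_id.get()
        return True

handler = logging.StreamHandler()
handler.addFilter(JobContextFilter())
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s [task=%(task_name)s job=%(job_id)s] %(message)s"
))
logging.getLogger().addHandler(handler)

With something like this in place, grepping the backend logs for job=<id> would surface everything a given job emitted.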

Option 2

  • Dynamically create new logging handlers & filters when a job is run, so that logs are collected only for that job (see the sketch after the proposed schema below).
  • Save these logs in a new table associated with the job.
  • Provide a way to access logs from job instances, e.g.:

job = app.job_manager.list_job(id=1)
for attempt in job.attempts:
    print(attempt.logs)

Proposed table:

CREATE TABLE procrastinate_logs (
    id BIGSERIAL PRIMARY KEY,
    job_id integer NOT NULL REFERENCES procrastinate_jobs ON DELETE CASCADE,
    attempt_id integer NOT NULL,
    logs TEXT
);
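A rough sketch of the per-job capture described in the first bullet, assuming the worker wraps each attempt in a context manager (the integration point and the save_logs helper are placeholders, not Procrastinate API):

import contextlib
import io
import logging

@contextlib.contextmanager
def capture_job_logs(logger=None, level=logging.INFO):
    """Attach a temporary handler for one attempt and yield the captured text."""
    logger = logger or logging.getLogger()
    buffer = io.StringIO()
    handler = logging.StreamHandler(buffer)
    handler.setLevel(level)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    try:
        yield buffer
    finally:
        logger.removeHandler(handler)

# Hypothetical usage around one attempt:
# with capture_job_logs() as buf:
#     run_the_task()
# save_logs(job_id=job.id, attempt_id=attempt_number, logs=buf.getvalue())
#   -> INSERT INTO procrastinate_logs (job_id, attempt_id, logs) VALUES (...)

Note that with several jobs running concurrently in one process, a handler on the root logger captures everything, which is part of why the comments below lean towards a single QueueHandler that routes records by job id.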

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 12 (12 by maintainers)

Top GitHub Comments

1 reaction
ewjoachim commented, Aug 3, 2021

Do you have a roadmap?

Not really at this point. I’ve never found myself in a position where I’d need a roadmap for an open-source project. The roadmap is “eventually, let’s tackle all the issues” 😄

1 reaction
ewjoachim commented, Aug 2, 2021

Nice code!

I believe you don’t have to attach a new handler for every task and remove it afterwards. Your approach of using a QueueHandler is spot on for separation of concerns: you could have a single queue handler. On the queue consumer side, you would store logs based on the job id, and when you receive a finish_task log, you know it’s over. (Additionally, if you’re afraid of missing a task end, you could always track the worker id and check whenever the same worker_id becomes associated with a different job id.)

The only thing is where you would store the logs, and I believe it’s perfectly appropriate to store them in the same database as Procrastinate (even re-use the connection if you want), but I’m still unsure about Procrastinate doing it itself; this really seems like a specific use case.

Let’s say that as long as we don’t store task results, I’m not comfortable storing logs 😃
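For reference, a minimal sketch of the single-QueueHandler idea described in this comment, assuming the worker tags records with a job_id and emits an end-of-job record (the job_id and action attributes, and the store callback, are assumptions about how that tagging would look, not Procrastinate API):

import logging
import logging.handlers
import queue
from collections import defaultdict

log_queue = queue.Queue()
logging.getLogger().addHandler(logging.handlers.QueueHandler(log_queue))

def consume(store):
    """store(job_id, lines) persists a finished job's logs, e.g. into procrastinate_logs."""
    buffers = defaultdict(list)
    while True:
        record = log_queue.get()
        job_id = getattr(record, "job_id", None)
        if job_id is None:
            continue  # record was not emitted from within a job
        buffers[job_id].append(record.getMessage())
        if getattr(record, "action", "") == "finish_task":  # assumed end-of-job marker
            store(job_id, buffers.pop(job_id))

The consumer would typically run in its own thread; how and where store() writes is exactly the open question raised in the comment above.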

Read more comments on GitHub >

Top Results From Across the Web

Monitoring Jobs
Monitoring Jobs. There are several ways to monitor Scheduler jobs: Viewing the job log. The job log includes the data dictionary views *_SCHEDULER_JOB_LOG ......
Capturing job log information - IBM
On the source system, run a small number of jobs to capture the job log information. Note: Run both server and parallel jobs,...
How to capture logs from workers from a Dask-Yarn job?
To write to this logger, get it via import logging logger = logging.getLogger("distributed.worker") logger.info("Writing with the worker ...
How to display the Job execution logs - 7.3 - Talend Help Center
To see the log corresponding to the execution of your Job from the Job Conductor page, select the task in the list and...
KB1832: How to Collect Logs for Veeam Backup & Replication
For Backup, Replication, and other non-restore jobs, select Export logs for this job. If multiple jobs are affected, use Ctrl+Click to select ...
