Missing "Celery worker not running" warning when trying to sync Git repositories without a Celery worker running
Environment
- Python version: 3.10.4
- Nautobot version: 1.3.5
Steps to Reproduce
- Configure a secret / secrets group `gitlab` to authenticate to the GitLab server
- Navigate to “Extensibility” > “Git Repositories” > “+”
- Fill in URL, Git branch, and secrets group; click “Create and Sync”
Expected Behavior
The Git repo is cloned, and the Synchronization Status tab of the Git repository view shows job log entries.
Observed Behavior
The Synchronization Status tab of the Git repository view “hangs” with a status of Pending indefinitely (for several hours at least), and no logs appear in the logs section of this view. Clicking the blue Sync button causes the “Started at” field to update (which seems to imply the sync is restarting), but still no logs appear and the status remains Pending.
I’m running Nautobot in DEBUG mode, and I’m not seeing anything useful in the logs.
Here’s a sample of my logs around the time I click “Sync”:
14:45:28.805 DEBUG nautobot.core.celery __init__.py default() :
Performing nautobot serialization on <nautobot.utilities.utils.NautobotFakeRequest object at 0x7f29596aee30> for type nautobot.utilities.utils.NautobotFakeRequest
14:45:29.074 INFO django.server :
"POST /extras/git-repositories/github-utsc-utoronto-ca/sync/ HTTP/1.1" 302 0
14:45:30.854 INFO django.server :
"GET /extras/git-repositories/github-utsc-utoronto-ca/result/ HTTP/1.1" 200 225896
14:45:31.547 INFO django.server :
"GET /extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/log-table/ HTTP/1.1" 200 1758
14:45:31.549 INFO django.server :
"GET /api/extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/ HTTP/1.1" 200 580
14:45:31.752 INFO django.server :
"GET /extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/log-table/ HTTP/1.1" 200 1758
14:45:32.179 INFO django.server :
"GET /__debug__/history_sidebar/?store_id=7077e682abaf4111b53de1426f162cb0 HTTP/1.1" 200 10346
14:45:32.802 INFO django.server :
"GET /api/extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/ HTTP/1.1" 200 580
14:45:33.001 INFO django.server :
"GET /extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/log-table/ HTTP/1.1" 200 1758
14:45:33.345 INFO django.server :
"GET /__debug__/history_sidebar/?store_id=cce901b8773848d284770c7421b137b0 HTTP/1.1" 200 10346
14:45:35.041 INFO django.server :
"GET /api/extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/ HTTP/1.1" 200 580
14:45:35.236 INFO django.server :
"GET /extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/log-table/ HTTP/1.1" 200 1758
14:45:35.377 INFO django.server :
"GET /__debug__/history_sidebar/?store_id=972f8a8cf1ab4142b95cfcf5a32f5334 HTTP/1.1" 200 10294
14:45:35.648 INFO django.server :
"GET /__debug__/history_sidebar/?store_id=b43a2f3c60544ed8808baac7f8eb133f HTTP/1.1" 200 10346
14:45:38.286 INFO django.server :
"GET /api/extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/ HTTP/1.1" 200 580
14:45:38.496 INFO django.server :
"GET /extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/log-table/ HTTP/1.1" 200 1758
14:45:38.629 INFO django.server :
"GET /__debug__/history_sidebar/?store_id=f816e59f43dc48e2b20fa10bb945b265 HTTP/1.1" 200 10294
14:45:38.834 INFO django.server :
"GET /__debug__/history_sidebar/?store_id=313e38ab30934be28c0abece119fdc2f HTTP/1.1" 200 10346
14:45:42.534 INFO django.server :
"GET /api/extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/ HTTP/1.1" 200 580
14:45:42.731 INFO django.server :
"GET /extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/log-table/ HTTP/1.1" 200 1758
14:45:43.069 INFO django.server :
"GET /__debug__/history_sidebar/?store_id=f7a2a0399fb74505b715c6874ac13bb5 HTTP/1.1" 200 10346
14:45:47.783 INFO django.server :
"GET /api/extras/job-results/5dc324b8-8561-4cb5-8a9b-cf40af8aaea8/ HTTP/1.1" 200 580
I have no idea what the problem is, and no clear path forward to troubleshoot… Any help would be greatly appreciated!
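One quick check when a job sits in Pending forever is whether any Celery worker is actually reachable. As a hedged sketch (the helper name `any_worker_alive` and the access path in the comment are hypothetical, not Nautobot API): in a live deployment you would obtain the ping reply from Celery's remote-control inspect interface, which returns `None` when no worker answers, or a dict keyed by worker hostname. The function below only interprets that reply, so it runs without a broker:

```python
# Sketch: decide whether any Celery worker responded to a ping.
# In a live deployment the reply would come from something like
#   app.control.inspect(timeout=2.0).ping()
# (access path is an assumption; adjust to your setup). A ping reply
# looks like {"celery@host": {"ok": "pong"}}, or None with no workers.

def any_worker_alive(ping_reply):
    """Return True if at least one worker replied 'pong' to the ping."""
    if not ping_reply:  # None or {} -> no workers answered
        return False
    return any(reply.get("ok") == "pong" for reply in ping_reply.values())


if __name__ == "__main__":
    print(any_worker_alive(None))                             # False
    print(any_worker_alive({"celery@web1": {"ok": "pong"}}))  # True
```

If this returns False, the sync task was queued but nothing is consuming it, which matches the symptoms above.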
Issue Analytics
- Created a year ago
- Comments: 5 (5 by maintainers)
Top GitHub Comments
There’s code that’s supposed to add a banner to the view in that case (https://github.com/nautobot/nautobot/blob/develop/nautobot/extras/views.py#L791). If that’s not showing up, then we may have a bug here.
I’m seeing the warning banner on v1.4.3. @alextremblay, please confirm whether you’re still encountering this problem.
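For context, the pattern the maintainer is describing can be sketched as follows. This is a minimal illustration, not the actual Nautobot implementation; the function name, context keys, and banner wording here are all hypothetical. The idea is that the view counts reachable workers before rendering and attaches a warning to the template context when there are none:

```python
# Minimal sketch (hypothetical names, not Nautobot's code) of the
# "warn when no worker is running" pattern from the linked view:
# count reachable workers, and add a warning banner to the template
# context when the count is zero.

WORKER_WARNING = "Celery worker process not running. Background tasks will not run."


def sync_view_context(worker_count):
    """Build a simplified template context for the sync-status view."""
    context = {"worker_count": worker_count}
    if worker_count < 1:
        context["warning_banner"] = WORKER_WARNING
    return context


if __name__ == "__main__":
    print(sync_view_context(0))  # includes "warning_banner"
    print(sync_view_context(2))  # no banner key
```

If the banner never appears even with zero workers, the bug would be in the worker-count probe or in the template rendering, not in this conditional itself.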