Using APScheduler to dynamically add jobs: am I using it the wrong way?
Step 1. This is my add.py, which adds the jobs:
from datetime import datetime
import os

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.redis import RedisJobStore

jobstores = {
    'default': RedisJobStore(host='localhost', port=6379)
}
scheduler = BackgroundScheduler(jobstores=jobstores)

def alarm():
    now = datetime.now()
    os.system('echo "Alarm! This alarm was scheduled at %s" >> /var/log/test.log' % now)

def add(task_func, task_id):
    scheduler.add_job(task_func, 'interval', id=task_id, name=task_id, seconds=10)

if __name__ == '__main__':
    add(alarm, 'my_job1')
    add(alarm, 'my_job2')
    scheduler.start()
    scheduler.shutdown()
I added the two jobs first, then called start and shutdown:
- add_job alone did not actually persist the jobs; they only showed up in the store once the scheduler was started (see the sketch below).
- So I called start and then shutdown immediately, but I am not sure whether the shutdown is needed.
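A minimal sketch of that idea, assuming APScheduler 3.2 or later (which added the paused flag to start()): starting the scheduler paused opens the job store so that add_job() writes straight to Redis without any job actually firing, and no explicit start/shutdown dance is needed just to persist jobs.

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.redis import RedisJobStore

def alarm():
    print('Alarm!')  # stand-in for the real task

jobstores = {'default': RedisJobStore(host='localhost', port=6379)}
scheduler = BackgroundScheduler(jobstores=jobstores)

# paused=True opens the job stores but never fires a job, so this
# process only writes the job definitions to Redis.
scheduler.start(paused=True)
scheduler.add_job(alarm, 'interval', id='my_job1', seconds=10, replace_existing=True)
scheduler.add_job(alarm, 'interval', id='my_job2', seconds=10, replace_existing=True)
scheduler.shutdown()

replace_existing=True avoids a ConflictingIdError when the script is run more than once with the same job ids.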
Step 2. Then start test.py: python test.py
import time

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.redis import RedisJobStore

jobstores = {
    'default': RedisJobStore(host='localhost', port=6379)
}
scheduler = BackgroundScheduler(jobstores=jobstores)

if __name__ == '__main__':
    # scheduler.add_job(alarm, 'interval', id='my_job0', seconds=10)
    try:
        scheduler.start()
        # Simulate application activity (which keeps the main thread alive).
        while True:
            time.sleep(2)
    except (KeyboardInterrupt, SystemExit):
        # Not strictly necessary if daemonic mode is enabled, but should be done if possible.
        print("shutdown scheduler")
        scheduler.shutdown()
It works, but when I then added a new job with id='my_job3' from a separate process, the running scheduler did nothing with it.
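For contrast, a minimal sketch (not from the issue) of adding a job to a scheduler running in the same process, where it does take effect immediately:

import time
from apscheduler.schedulers.background import BackgroundScheduler

def tick():
    print('tick')

scheduler = BackgroundScheduler()
scheduler.start()

# add_job() on a live scheduler wakes it up, so 'my_job3' starts
# running on its interval with no restart required.
scheduler.add_job(tick, 'interval', id='my_job3', seconds=2)

time.sleep(10)
scheduler.shutdown()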
Expected Behavior
I want to dynamically add, get, and remove jobs while APScheduler is running.

Get jobs -> it works, but I am not sure I am using it the right way. Can you show me the best way, or more use cases?
from apscheduler.jobstores.redis import RedisJobStore
RedisJobStore().get_all_jobs()
Remove jobs -> it works, but I am not sure I am using it the right way. Can you show me the best way, or more use cases? (A combined sketch of both follows the snippet below.)
from apscheduler.jobstores.redis import RedisJobStore
RedisJobStore().remove_all_jobs()
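Both of these are usually done through a scheduler rather than a bare job store; a minimal sketch, assuming the same Redis store as above:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.redis import RedisJobStore

jobstores = {'default': RedisJobStore(host='localhost', port=6379)}
scheduler = BackgroundScheduler(jobstores=jobstores)
scheduler.start(paused=True)  # open the store without firing any jobs

# get_jobs() lists the jobs from every configured job store.
for job in scheduler.get_jobs():
    print(job.id, job.trigger, job.next_run_time)

# remove_job() drops one job by id (raises JobLookupError if absent);
# remove_all_jobs() clears the store entirely.
scheduler.remove_job('my_job1')
scheduler.remove_all_jobs()

scheduler.shutdown()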
Current Behavior
Steps to Reproduce
What I am trying is:
- Step 1. Add two jobs.
- Step 2. Start APScheduler -> successful.
- Step 3. Add a new job -> fails.
Context (Environment)
- APScheduler==3.6.3
- redis==3.5.3
- Ubuntu 16.04
Detailed Description
I tried another approach: step 1 -> step 2 -> kill python test.py -> step 3 -> restart test.py. That works, but I am not sure it is a good way to use the library. When adding a new job, must I kill the main thread? Is there any option to restart? Must it be killed and then restarted?
Top GitHub Comments
You never said that you’re adding jobs from a different process than where the scheduler is running. This is not supported in APScheduler 3. The FAQ explains why this won’t work and provides a workaround.
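For reference, the workaround the FAQ points at is to run the scheduler in one dedicated process and let other processes talk to it over RPC. A minimal sketch along the lines of APScheduler's RPyC example, assuming the rpyc package is installed and this file is saved as server.py (the names alarm and SchedulerService are illustrative):

import rpyc
from rpyc.utils.server import ThreadedServer
from apscheduler.schedulers.background import BackgroundScheduler

def alarm():
    print('Alarm!')

class SchedulerService(rpyc.Service):
    # Methods prefixed with exposed_ are callable by RPyC clients.
    def exposed_add_job(self, func, *args, **kwargs):
        return scheduler.add_job(func, *args, **kwargs)

    def exposed_remove_job(self, job_id):
        scheduler.remove_job(job_id)

if __name__ == '__main__':
    scheduler = BackgroundScheduler()
    scheduler.start()
    server = ThreadedServer(SchedulerService, port=12345,
                            protocol_config={'allow_public_attrs': True})
    try:
        server.start()
    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown()

A client in another process can then add my_job3 without restarting anything:

import rpyc

conn = rpyc.connect('localhost', 12345)
# 'server:alarm' is a textual reference (module:callable) that the
# scheduler process resolves on its own side.
conn.root.add_job('server:alarm', 'interval', id='my_job3', seconds=10)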
OK, thank you very much