KeyError on listjobs request after scrapyd restart
It seems self.root.poller.queues is not updated when the scrapyd daemon starts. This leads to errors on listjobs and schedule requests for all previously deployed projects until a new scrapyd-deploy is run.
To fix the listjobs request, I added a call to self.root.poller.update_projects()
before line 124 in webservice.py: https://github.com/scrapy/scrapyd/blob/master/scrapyd/webservice.py#L124.
I'm not sure this is the best solution, but it works.
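The idea behind the workaround can be sketched with stand-in classes (Poller and render_listjobs here are simplified, hypothetical stand-ins for scrapyd's QueuePoller and the listjobs resource, not the real implementation): refreshing the queues mapping before the lookup means a project that existed on disk before the restart is present in the dict, so the lookup no longer raises KeyError.

```python
class Poller:
    """Simplified stand-in for scrapyd's queue poller (assumption, not real API)."""

    def __init__(self, project_lister):
        self._list_projects = project_lister  # callable returning current project names
        self.queues = {}                      # project name -> job queue

    def update_projects(self):
        # Rebuild the queues dict from the current project list,
        # preserving any queue objects that already exist.
        current = self._list_projects()
        self.queues = {p: self.queues.get(p, []) for p in current}


def render_listjobs(poller, project):
    # Workaround from the issue: refresh the queues before the lookup,
    # so a project deployed before the daemon restart is found.
    poller.update_projects()
    return poller.queues[project]  # would raise KeyError without the refresh


# Usage: 'medse' exists on disk but is absent from queues after a restart.
poller = Poller(lambda: ['medse'])
jobs = render_listjobs(poller, 'medse')  # no KeyError
```

The trade-off of this approach is that every listjobs request pays the cost of re-scanning the project list, which is why the reporter hedges that it may not be the best solution.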
This is the error that I got:
2017-05-25T16:21:15+0300 [_GenericHTTPChannelProtocol,0,127.0.0.1] Unhandled Error
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/twisted/web/http.py", line 1906, in allContentReceived
req.requestReceived(command, path, version)
File "/usr/lib64/python3.4/site-packages/twisted/web/http.py", line 771, in requestReceived
self.process()
File "/usr/lib64/python3.4/site-packages/twisted/web/server.py", line 190, in process
self.render(resrc)
File "/usr/lib64/python3.4/site-packages/twisted/web/server.py", line 241, in render
body = resrc.render(self)
--- <exception caught here> ---
File "/usr/lib/python3.4/site-packages/scrapyd/webservice.py", line 21, in render
return JsonResource.render(self, txrequest).encode('utf-8')
File "/usr/lib/python3.4/site-packages/scrapyd/utils.py", line 20, in render
r = resource.Resource.render(self, txrequest)
File "/usr/lib64/python3.4/site-packages/twisted/web/resource.py", line 250, in render
return m(request)
File "/usr/lib/python3.4/site-packages/scrapyd/webservice.py", line 124, in render_GET
queue = self.root.poller.queues[project]
builtins.KeyError: 'medse'
Issue Analytics
- Created 6 years ago
- Comments: 7 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Could it be #70? What init/systemd script do you use? In which directory do you run scrapyd?
I also have the same problem; can you help me fix it? How do I use --rundir?