
scrapy LOG_LEVEL setting in Spider.custom_settings does not work

See original GitHub issue

I set the LOG_LEVEL setting to INFO in the Spider class via the custom_settings attribute, but I still see DEBUG messages in the console.

When I set it in the settings.py file or via the command-line option --loglevel, it works.

I thought any setting could be set via the custom_settings attribute. Is this a bug? (Scrapy 1.0.3 and Python 2.7.10)

class TestSpider(scrapy.Spider):
    name = "Test"
    ...
    custom_settings = {
        'LOG_LEVEL': 'INFO',
    }
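Until this is resolved, one common workaround (an assumption on my part, not an official Scrapy fix) is to raise the level of the "scrapy" logger directly, e.g. from the spider's __init__, since Python's logging hierarchy then filters DEBUG records from all scrapy.* child loggers. A minimal stdlib-only sketch of that mechanism:

```python
import logging

# Simulate Scrapy's default setup: a root handler at DEBUG level
# (what the project-wide LOG_LEVEL normally controls).
logging.basicConfig(level=logging.DEBUG)

# Workaround: raise the "scrapy" logger's level directly. Child
# loggers such as "scrapy.core.engine" set no level of their own,
# so they inherit this one and their DEBUG records are dropped.
logging.getLogger("scrapy").setLevel(logging.INFO)

child = logging.getLogger("scrapy.core.engine")
print(child.isEnabledFor(logging.DEBUG))  # False
print(child.isEnabledFor(logging.INFO))   # True
```

In a real spider the setLevel call would go in __init__ (or at module import time), which runs late enough to override the level Scrapy installed from the project settings.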

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Reactions: 2
  • Comments: 13 (7 by maintainers)

Top GitHub Comments

3 reactions
kmike commented, Sep 19, 2016

+1 to fix it; it looks like a bug to me.

0 reactions
ianhhhhhhhhe commented, Feb 19, 2017

About settings: I think you may have set them up but not imported or activated them, and that's why your logging doesn't work.

So here are some solutions that might help.

Make sure the settings path in root/scrapy.cfg is correct. Then you can configure logging in settings.py; I suggest setting at least LOG_FILE and LOG_LEVEL.

Then you can run scrapy crawl crawler_name.

If you want to run the spider from a script, you will need from scrapy.utils.project import get_project_settings and from scrapy.utils.log import configure_logging. Then call configure_logging(get_project_settings()) wherever you need to activate logging. (Note that configure_logging takes a Settings object, so pass the result of get_project_settings(), not the function itself.)
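The script-run advice above can be sketched as follows. This is a hedged sketch, not an official recipe: the spider name "Test" comes from the issue's code, and it assumes a standard Scrapy project (a scrapy.cfg pointing at your settings module) so get_project_settings() can find everything:

```python
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
# Override the log level here rather than in custom_settings, since
# logging is configured before custom_settings are applied.
settings.set("LOG_LEVEL", "INFO")

# CrawlerProcess calls configure_logging() with these settings itself,
# so INFO takes effect before any spider code runs. (If you use
# CrawlerRunner instead, call configure_logging(settings) yourself.)
process = CrawlerProcess(settings)
process.crawl("Test")  # spider name from the issue
process.start()        # blocks until the crawl finishes
```

Setting LOG_LEVEL on the Settings object before constructing CrawlerProcess sidesteps the custom_settings timing problem entirely, which is why it works where the attribute does not.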

Read more comments on GitHub >

Top Results From Across the Web

  • Scrapy CrawlProcess() is not changing LOG_LEVEL when ... — "I am still getting default DEBUG logs why is this happening? When I run the same spider with scrapy crawl firstSpider -a COUNT="four"..."
  • Logging — Scrapy 2.7.1 documentation — "Logging works out of the box, and can be configured to some extent with the Scrapy settings listed in Logging settings. Scrapy calls..."
  • Scrapy - Settings - Tutorialspoint — "It is a module where a new spider is created using genspider command. Default value: ''. 50. RANDOMIZE_DOWNLOAD_DELAY. It defines a random amount..."
  • Scrapy Documentation - Read the Docs — "At this point Python 2.7 and pip package manager must be working, ... This is because Scrapy core requirement Twisted does not support..."
  • Using Scrapy in Jupyter notebook | JJ's World — "The pipeline is set in the custom_settings parameter ... file needs to be defined in FEED_URI inside the custom settings of the spider..."
