
TypeError: __init__() got an unexpected keyword argument 'server'

See original GitHub issue

What does this mean?

C:\Users\dell\AppData\Local\Programs\Python\Python35-32\python.exe D:/scrapyspider/tutorial/main.py
2016-07-17 01:04:49 [scrapy] INFO: Scrapy 1.2.0dev2 started (bot: tutorial)
2016-07-17 01:04:49 [scrapy] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2774.3 Safari/537.36', 'SPIDER_MODULES': ['tutorial.spiders'], 'NEWSPIDER_MODULE': 'tutorial.spiders', 'DOWNLOAD_DELAY': 1, 'BOT_NAME': 'tutorial', 'SCHEDULER': 'scrapy_redis.scheduler.Scheduler'}
2016-07-17 01:04:49 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.logstats.LogStats']
2016-07-17 01:04:49 [dmoz] INFO: Reading start URLs from redis key 'dmoz:start_urls' (batch size: 16)
2016-07-17 01:04:49 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-07-17 01:04:49 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-07-17 01:04:50 [scrapy] INFO: Enabled item pipelines:
['tutorial.pipelines.DmozPipeline']
2016-07-17 01:04:50 [scrapy] INFO: Spider opened
2016-07-17 01:04:50 [scrapy] INFO: Closing spider (shutdown)
Unhandled error in Deferred:
2016-07-17 01:04:50 [twisted] CRITICAL: Unhandled error in Deferred:


Traceback (most recent call last):
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\twisted\internet\defer.py", line 1273, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\twisted\internet\defer.py", line 1125, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\twisted\python\failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\crawler.py", line 87, in crawl
    yield self.engine.close()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 100, in close
    return self._close_all_spiders()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 340, in _close_all_spiders
    dfds = [self.close_spider(s, reason='shutdown') for s in self.open_spiders]
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 340, in <listcomp>
    dfds = [self.close_spider(s, reason='shutdown') for s in self.open_spiders]
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 298, in close_spider
    dfd = slot.close()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 44, in close
    self._maybe_fire_closing()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 51, in _maybe_fire_closing
    self.heartbeat.stop()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\twisted\internet\task.py", line 202, in stop
    assert self.running, ("Tried to stop a LoopingCall that was "
builtins.AssertionError: Tried to stop a LoopingCall that was not running.
2016-07-17 01:04:50 [twisted] CRITICAL: 
Traceback (most recent call last):
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy_redis\scheduler.py", line 120, in open
    debug=spider.settings.getbool('DUPEFILTER_DEBUG'),
TypeError: __init__() got an unexpected keyword argument 'server'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\crawler.py", line 74, in crawl
    yield self.engine.open_spider(self.spider, start_requests)
ValueError: ("Failed to instantiate dupefilter class '%s': %s", 'scrapy.dupefilters.RFPDupeFilter', TypeError("__init__() got an unexpected keyword argument 'server'",))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\twisted\internet\defer.py", line 1125, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\twisted\python\failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\crawler.py", line 87, in crawl
    yield self.engine.close()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 100, in close
    return self._close_all_spiders()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 340, in _close_all_spiders
    dfds = [self.close_spider(s, reason='shutdown') for s in self.open_spiders]
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 340, in <listcomp>
    dfds = [self.close_spider(s, reason='shutdown') for s in self.open_spiders]
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 298, in close_spider
    dfd = slot.close()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 44, in close
    self._maybe_fire_closing()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\scrapy\core\engine.py", line 51, in _maybe_fire_closing
    self.heartbeat.stop()
  File "C:\Users\dell\AppData\Local\Programs\Python\Python35-32\lib\site-packages\twisted\internet\task.py", line 202, in stop
    assert self.running, ("Tried to stop a LoopingCall that was "
AssertionError: Tried to stop a LoopingCall that was not running.

Process finished with exit code 0

my settings.py

SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_ORDER = 'BFO'
SCHEDULER_PERSIST = True
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'
REDIS_URL = None
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379
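
The traceback above points at the likely cause: scrapy_redis.scheduler.Scheduler opens its dupefilter with a server= keyword (the Redis connection) plus debug=, but these settings leave DUPEFILTER_CLASS at Scrapy's default scrapy.dupefilters.RFPDupeFilter, whose __init__() does not accept server. A minimal sketch of the settings with the Redis-aware dupefilter added (a sketch only, assuming the rest of the project stays unchanged):

# settings.py -- sketch, not the poster's actual file
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'
# scrapy_redis passes server= (and debug=) into the dupefilter, so it must be the scrapy_redis one
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379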

Issue Analytics

  • State: open
  • Created 7 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

3 reactions
rmax commented, Jul 16, 2016

By the way, using the RedisSpider class is optional. If you set the scheduler settings and don't use RedisSpider, the requests will still be sent through redis, and you can start additional spiders to consume those requests.
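
For reference, the behaviour described above corresponds to scrapy_redis's RedisSpider, which reads its start URLs from a Redis key (the log shows the dmoz spider reading from 'dmoz:start_urls'). A minimal sketch, with illustrative parsing logic that is not part of the original issue:

# spiders/dmoz.py -- sketch only
from scrapy_redis.spiders import RedisSpider

class DmozSpider(RedisSpider):
    name = 'dmoz'
    redis_key = 'dmoz:start_urls'  # the key seen in the log above

    def parse(self, response):
        # hypothetical extraction; the issue does not show the real callback
        yield {'url': response.url}

Start URLs are then pushed into Redis out of band, for example with redis-cli: lpush dmoz:start_urls <start-url>. As rmax notes, a plain scrapy.Spider also works once the scheduler settings point at scrapy_redis; the requests are then shared through Redis so additional spider processes can consume them.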

0 reactions
sept44 commented, Apr 27, 2019

my settings.py

DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_ORDER = 'BFO'
SCHEDULER_PERSIST = True
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'
REDIS_URL = None
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379

run: D:\swipers\snowball_spider.env\Scripts\python.exe D:/swipers/snowball_spider/baike/start.py
2019-04-27 08:11:21 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: baike)
2019-04-27 08:11:21 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 08:06:12) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryptography 2.6.1, Platform Windows-7-6.1.7601-SP1
2019-04-27 08:11:21 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'baike', 'DOWNLOAD_DELAY': 4, 'DUPEFILTER_CLASS': 'scrapy_redis.dupefilter.RFPDupeFilter', 'FEED_EXPORT_ENCODING': 'utf-8', 'NEWSPIDER_MODULE': 'baike.spiders', 'SCHEDULER': 'scrapy_redis.scheduler.Scheduler', 'SPIDER_MODULES': ['baike.spiders']}
2019-04-27 08:11:22 [scrapy.extensions.telnet] INFO: Telnet Password: 522bd44e06c36134
2019-04-27 08:11:22 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-04-27 08:11:22 [mybaike] INFO: Reading start URLs from redis key 'mybaike:start_url' (batch size: 16, encoding: utf-8
2019-04-27 08:11:22 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-04-27 08:11:22 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
Unhandled error in Deferred:
2019-04-27 08:11:22 [twisted] CRITICAL: Unhandled error in Deferred:

Traceback (most recent call last):
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\crawler.py", line 172, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\crawler.py", line 176, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "D:\swipers\snowball_spider.env\lib\site-packages\twisted\internet\defer.py", line 1613, in unwindGenerator
    return _cancellableInlineCallbacks(gen)
  File "D:\swipers\snowball_spider.env\lib\site-packages\twisted\internet\defer.py", line 1529, in _cancellableInlineCallbacks
    _inlineCallbacks(None, g, status)
--- <exception caught here> ---
  File "D:\swipers\snowball_spider.env\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\crawler.py", line 80, in crawl
    self.engine = self._create_engine()
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\crawler.py", line 105, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\core\engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\core\scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\middleware.py", line 35, in from_settings
    mw = create_instance(mwcls, settings, crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\utils\misc.py", line 140, in create_instance
    return objcls.from_crawler(crawler, *args, **kwargs)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy_redis\pipelines.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy_redis\pipelines.py", line 54, in from_settings
    return cls(**params)
builtins.TypeError: __init__() got an unexpected keyword argument 'server'

2019-04-27 08:11:22 [twisted] CRITICAL:
Traceback (most recent call last):
  File "D:\swipers\snowball_spider.env\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\crawler.py", line 80, in crawl
    self.engine = self._create_engine()
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\crawler.py", line 105, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\core\engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\core\scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\middleware.py", line 35, in from_settings
    mw = create_instance(mwcls, settings, crawler)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy\utils\misc.py", line 140, in create_instance
    return objcls.from_crawler(crawler, *args, **kwargs)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy_redis\pipelines.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings)
  File "D:\swipers\snowball_spider.env\lib\site-packages\scrapy_redis\pipelines.py", line 54, in from_settings
    return cls(**params)
TypeError: __init__() got an unexpected keyword argument 'server'

@rmax I read your suggestion and modified my settings accordingly, but it still doesn't work. Do I need to change anything else? Please advise!
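
The 2019 traceback fails in a different place than the original report: scrapy_redis.pipelines builds its constructor arguments in from_settings() and calls cls(**params) with server= (the Redis connection), so whichever item pipeline class is registered must accept that argument. One way to hit this, offered here only as an assumption since the thread does not show the pipeline code, is a custom pipeline that subclasses RedisPipeline but overrides __init__() without the server parameter. A sketch of a subclass that keeps the signature intact (the class name is hypothetical):

# pipelines.py -- illustrative sketch, not code from the issue
from scrapy_redis.pipelines import RedisPipeline

class BaikePipeline(RedisPipeline):
    def __init__(self, server, *args, **kwargs):
        # keep accepting server=, which from_settings() always passes in
        super().__init__(server, *args, **kwargs)

    def process_item(self, item, spider):
        # custom handling could go here; fall back to the default behaviour
        return super().process_item(item, spider)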

Read more comments on GitHub >

Top Results From Across the Web

TypeError: __init__() got an unexpected keyword argument ...
While trying to return a query-set using generic views, I get an error. TypeError: __init__() got an unexpected keyword argument 'many'.
Read more >
TypeError: __init__() got an unexpected keyword argument ...
After completing all the procedure, the following error was thrown while running run.py: TypeError: __init__() got an unexpected keyword argument ...
Read more >
TypeError: __init__() got an unexpected keyword ... - GitHub
Request generates error TypeError: __init__() got an unexpected keyword argument 'strict'. Problem. This is the line of code that is erroring out:
Read more >
__init__() got an unexpected keyword argument 'max_iter'?
TypeError: __init__() got an unexpected keyword argument 'max_iter'. I'm running the linear regression code in Community edition. Google says reinstall --....
Read more >
Python console does not load: __init__() got an ...
Error starting server with host: 127.0.0.1, port: 60145, client_port: 60146 ... TypeError: __init__() got an unexpected keyword argument 'allow_none'.
Read more >
