TypeError: can't pickle _thread.lock objects
See original GitHub issue. Environment: Django 2.2.11, Python 3.7.0, django-q 1.2.1, Windows 10.
Hello, when I run manage.py qcluster I get this error. Does anybody know what its source could be and how to resolve it?
Traceback (most recent call last):
  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django_q\cluster.py", line 65, in start
    self.sentinel.start()
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects

The spawned child process also fails, because the parent errored out before it finished sending the pickled state:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Issue Analytics
- Created: 4 years ago
- Reactions: 2
- Comments: 48 (7 by maintainers)
I didn’t see issue #389 before, so I’m also commenting in this thread as it may help others. That issue provided a workaround for me: I ended up adding the following snippet to my manage.py and it works 😃 My setup is macOS 10.15.6, django-q 1.3.3 with redis, Python 3.8.5.
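The snippet itself did not survive the copy here. Based on the discussion in issue #389, the workaround is commonly sketched as forcing the "fork" start method at the top of manage.py; this is a hypothetical reconstruction under that assumption, not necessarily the exact snippet the commenter used:

```python
# Hypothetical reconstruction of the issue #389 workaround (the exact
# snippet isn't shown above): force the "fork" start method so django-q's
# child processes inherit the parent's state instead of having it pickled.
# "fork" is unavailable on Windows, consistent with the report below that
# this doesn't help there.
import multiprocessing

try:
    multiprocessing.set_start_method("fork")
except (RuntimeError, ValueError):
    # RuntimeError: a start method was already set elsewhere;
    # ValueError: "fork" is not supported on this platform (e.g. Windows).
    pass
```

Since Python 3.8 the default start method on macOS changed from "fork" to "spawn", which is why the error began appearing there as well as on Windows.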
I tested this out on Windows with Python 3.8, and it doesn’t work. The issue is that the broker connection object is unpicklable, possibly because the network connection is process-local. With spawn-context multiprocessing (macOS/Windows), new processes are not forked child processes as on Linux; they start from scratch and receive their state by pickling. You should be able to reproduce this environment on Linux with multiprocessing.set_start_method('spawn').