Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Problems with parallelization within a batch using `const` token

See original GitHub issue

Hello Again,

So I’ve been trying to get the parallelization working for this, and when I set n_cores_batch = 2 in the config.json file, it keeps giving me the error below. I’m not sure what is causing this issue, and it persists with any value of n_cores_batch other than 1. Do you have any insight into why this might be?

Traceback (most recent call last):
  File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/", line 44, in mapstar
    return list(map(*args))
  File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/", line 28, in work
    optimized_constants = p.optimize()
  File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/", line 393, in optimize
    optimized_constants = Program.const_optimizer(f, x0)
TypeError: 'NoneType' object is not callable

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "", line 9, in <module>
  File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/", line 90, in train
  File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/", line 278, in learn
    results =, programs_to_optimize)
  File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/", line 657, in get
    raise self._value
TypeError: 'NoneType' object is not callable
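The final TypeError says that the thing being called inside the worker, Program.const_optimizer, is None. A minimal sketch of that failure mode (names borrowed from the traceback for illustration; this is not the actual dso code):

```python
# Minimal sketch: a class-level callable that was never initialized is
# still None, and calling it raises TypeError. In dso, const_optimizer
# would normally be set to a constant-optimizer callable during setup;
# if that setup never runs in a worker process, the attribute stays None.
class Program:
    const_optimizer = None  # never initialized in this (hypothetical) worker

def optimize(f, x0):
    # Mirrors the call on the last frame of the worker traceback.
    return Program.const_optimizer(f, x0)

try:
    optimize(lambda c: 0.0, [1.0])
except TypeError as e:
    print(e)  # 'NoneType' object is not callable
```

This suggests the worker processes never run the initialization that sets the optimizer, which is consistent with the error only appearing when n_cores_batch > 1.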

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

brendenpetersen commented, Aug 20, 2021

Hi @Sean-Reilly, as a temporary hack, you can add pool = None at the beginning of the learn() function. That will break some other use cases (namely, the control task when using PyBullet envs), but it should be just fine for regression.

A real fix will be incoming.
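The workaround amounts to forcing the serial code path. A hedged sketch of the idea, assuming learn() falls back to a plain map() when no pool exists (the real dso learn() signature and internals may differ):

```python
# Sketch of the "pool = None" hack: with no pool, the batch of programs is
# optimized serially via map() instead of pool.map(), which avoids calling
# the uninitialized constant optimizer inside worker processes.
def work(program):
    # Stand-in for the per-program constant optimization done in dso's work().
    return program * 2

def learn(programs_to_optimize, pool):
    pool = None  # temporary hack: disable multiprocessing for this step
    if pool is not None:
        return pool.map(work, programs_to_optimize)
    return list(map(work, programs_to_optimize))

print(learn([1, 2, 3], pool=None))  # [2, 4, 6]
```

Serial execution runs everything in the parent process, where the optimizer was initialized, so the None attribute is never reached.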

brendenpetersen commented, Aug 20, 2021

Hi @Sean-Reilly, thanks for the config! I am able to reproduce the bug. It looks like an issue when using n_cores_batch > 1 together with the const token. I can reproduce it with a simplified config:

{
    "task" : {
        "function_set" : ["add", "mul", "div", "sub", "const"]
    },
    "training" : {
        "n_samples" : 100,
        "batch_size" : 10,
        "n_cores_batch" : 2
    }
}

I will look into this and report back! Thanks.

Read more comments on GitHub >

Top Results From Across the Web

Parallelize a Bash FOR Loop - Unix & Linux Stack Exchange
We use file descriptor 3 as a semaphore by pushing (= printf) and popping (= read) tokens ('000'). By...
Read more >
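The token-semaphore idea above can be sketched in Python as well (a loose analogue, not the shell snippet itself): a semaphore holds a fixed number of tokens, and each task pops one before running.

```python
# Loose Python analogue of the fd-3 semaphore trick: a BoundedSemaphore
# holds three "tokens"; each task pops one before running and pushes it
# back when done, capping concurrency at three.
import threading
import time

tokens = threading.BoundedSemaphore(3)
lock = threading.Lock()
active = 0
active_peak = 0

def task(i):
    global active, active_peak
    with tokens:  # pop a token; it is pushed back when the block exits
        with lock:
            active += 1
            active_peak = max(active_peak, active)
        time.sleep(0.005)  # simulate work
        with lock:
            active -= 1

threads = [threading.Thread(target=task, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent tasks:", active_peak)
```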
.net - How to limit the Maximum number of parallel tasks in c#
In my programs, it seems that when using Parallel.ForEach, the list is cut into n sets (MaxDegreeOfParallelism). All sets are processed in ...
Read more >
Running the tokeniser in parallel does not gain a lot from more ...
The setup is that I read in a newline-delimited JSON file and tokenized all the texts. It includes (I have recalculated the number of tokens a...
Read more >
Analyzing 3 Ways to Run Batch Tasks in Parallel Using ...
Let's demonstrate this solution with a simple problem — given 10 tasks where the essence of each task consumes a constant amount of...
Read more >
Massive Parallel Processing with Lambda functions and when ...
AWS allows you to trigger lambda for each message or batch of messages and lambda just scales like anything until no messages are...
Read more >
