Default -j and -p values are too high on systems with many cores
From the documentation:
By default catkin build will build up to N packages in parallel and pass -jN -lN to make where N is the number of cores in your computer.
That means that catkin build will spawn up to 8 × 8 = 64 parallel gcc instances on an 8-core machine (up to 8 packages in parallel, each built with -j8), which is enough to fill 16 GB of RAM and cause the OOM killer to step in.
Is there a chance to enforce a global limit on the number of parallel make jobs? It seems GNU make has an interface for an external job server, maybe we could use that.
Otherwise, please consider lowering the default values for -p and -j. On a 4-core machine 16 processes might be okay, but 64 processes on an 8-core machine is too much.
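To make the arithmetic behind those numbers concrete, here is a minimal Python sketch of how the two defaults multiply; the function names are mine for illustration, not part of catkin:

```python
import os

def default_parallelism(cores=None):
    """Mimic the documented default: both -p and -j default to the core count."""
    n = cores if cores is not None else os.cpu_count()
    return n, n  # (packages built in parallel, make jobs per package)

def worst_case_jobs(cores):
    """Worst case: every one of the N parallel packages runs N compiler jobs."""
    p, j = default_parallelism(cores)
    return p * j

print(worst_case_jobs(4))  # 4 packages x -j4 = 16 concurrent gcc processes
print(worst_case_jobs(8))  # 8 packages x -j8 = 64 concurrent gcc processes
```

The quadratic growth is why the defaults feel fine on a laptop but blow up on many-core machines.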
Issue Analytics
- Created: 9 years ago
- Comments: 28 (27 by maintainers)
I researched a bit and I believe I have found a solution which uses GNU make’s jobserver.
A bit of background to understand the following: GNU make creates a UNIX pipe for job management. To initialize the pipe, `N` tokens are written into it. All processes share the pipe, and when they want to execute something, they read a token from the pipe. After execution, they write it back into the pipe. In this way, it's guaranteed that there are no more than `N` jobs running at a time.

Of course, the pipe has to be shared with the make subprocesses. To do this, `make` passes a `--jobserver-fds=X,Y` argument to its subprocesses. `X` and `Y` are the fd numbers of the pipe ends, which are not closed before spawning the subprocess and thus still available in the child.

To exploit this, while keeping `catkin build` in control of the actual `make` invocation in the packages (so we don't lose the nice logging and progress percentage features), we could introduce an intermediate `make` before starting the individual package builds. This is what the process tree could look like:

[process tree diagram not preserved]

with `catkin_makefile`:

[process tree diagram not preserved]

The root `catkin build` process just execs the root `make` process here. Everything else is shifted into the `build-helper` process. I think that is actually doable and portable. No black magic involved 😉 I'll try to draw up a PR soon.
This can be closed now that #155 has been merged.