Don't specify number of jobs when building with Ninja
Or, at least default to not specifying.
I have a Ryzen 2700X (8 cores, 2 threads/core, so 16 logical cores). I use Linux and am frequently close to running out of memory. I can use Ninja on its own just fine, but probably at least once a day, if I build within VS Code via this extension, X.org will stop responding. I can sometimes get in via a virtual console or ssh, which usually reveals that something ran out of memory (and swap is somehow uselessly slow). I suspect it has to do with the fact that this extension calls `cmake --build` passing `-j 18` on my 16-thread machine.
When building with Ninja, even via `cmake --build`, it automatically defaults to “maximum parallelism”, so passing `-j 18` is unnecessary to get parallel builds. Additionally, it’s somehow more efficient than make, so specifying the “a little more than the core count” value that works with make ends up overwhelming systems (I’ve done this on Windows too, even with more RAM). The line I pointed out above could be modified slightly to just not pass the jobs argument for Ninja generators. There’s already a setting for build args and build tool args, so users could override job counts (presumably per-workspace) if they wanted something less than maximum parallelism.

(Ninja appears to honor the last `-j` argument on the command line, so I can work around this temporarily by setting `buildToolArgs` to something like `["-j", "14"]` in my user preferences, as sketched below, but that would break as soon as I set build tool args in a workspace for any reason; fortunately that’s unlikely.)
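A minimal sketch of that temporary workaround, assuming the full setting key is `cmake.buildToolArgs` (the value is just the one mentioned above; pick whatever your machine can handle):

```jsonc
// User-level settings.json: Ninja honors the last -j on its command line,
// so this overrides the -j value the extension passes by default.
{
  "cmake.buildToolArgs": ["-j", "14"]
}
```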
I’m quite certain by now that this is the cause of the issue, as I’ve been digging into it for months and it always happens when building a CMake project via VSCode and this extension.
Hope this makes sense, let me know if I can help somehow.
Top GitHub Comments
@MaxCreator:

> `cmake.parallelJobs` should configure the `-j` setting for you.

Using the flag was easy enough once I figured out what the issue was and once I knew about the flag. The problem is that it’s not obvious that it’s cmake-tools that’s invoking non-default Ninja behavior, and that that’s what was causing all of my memory to run out.
This problem had been plaguing me for a couple of weeks. At first it wasn’t obvious that I was running out of memory. Whenever I would kick off a build, the IT-mandated anti-virus software would go “nuts” (briefly spin two processes up to something like 400% CPU utilization). I thought that was the issue (and I’m sure it plays a role). It wasn’t until later that I realized the crashing was caused by running out of memory. Once I figured that out, and that memory was being exhausted because too many build jobs were being kicked off, I thought it was CMake itself passing the `-j14` flag to Ninja. So I spent time messing with the `JOB_POOLS` variable (which CMake documents terribly) to no avail. I had about given up when a coworker mentioned he thought there was a setting in cmake-tools that controlled the number of parallel jobs. Googling that led me to this issue, which was the first I had heard of the `cmake.parallelJobs` setting and that it was set to a non-standard value.

So once I found out about the setting, using it was easy enough; it was all the time wasted in troubleshooting and figuring out what was going on that was a little frustrating.
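For anyone else who lands here, a sketch of what that looks like in a workspace `.vscode/settings.json` (the value is the one from this comment; size it for your own machine):

```jsonc
// Workspace .vscode/settings.json: caps the -j value CMake Tools passes to cmake --build.
{
  "cmake.parallelJobs": 12
}
```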
Maybe one other thing that’s not really good about it: for at least one project I’m on, the `.vscode/settings.json` is under version control so that settings can be shared among our team; however, this setting is specific to each developer’s machine, depending on what processor they have. The `-j12` I set it to isn’t appropriate for a co-worker who only has a quad-core machine.

I’m not trying to be critical, but this sort of has the same flavor as developers who try to outsmart their compiler by forcing it to optimize their code in specific ways. Ninja was built with parallelism in mind. It seems reasonable to let it do the job it was designed to do.
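One way to keep a per-machine value like that out of the shared workspace file, assuming the shared `.vscode/settings.json` simply omits the key (workspace settings would otherwise take precedence), is for each developer to set it in their user-level settings instead:

```jsonc
// User-level settings.json (not under version control): each developer
// picks a value that fits their own core count and RAM.
{
  "cmake.parallelJobs": 4
}
```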