Improve control over number of threads used in an mne call
Description
I propose using threadpoolctl to improve control over the number of threads used throughout mne and in calls to external libraries like numpy. This appears to be the direction numpy itself has moved in, as discussed here: https://github.com/numpy/numpy/issues/11826
Reasoning
I have had trouble completely controlling the number of threads used by various mne functions. Many mne functions have an n_jobs argument that controls the number of threads used within that function, but there are cases where code inside the function can escape this limit because the thread count is determined by externally defined settings. There are also functions like mne.chpi.filter_chpi that do not have an n_jobs argument but can still parallelize. It is possible to control this with environment variables as discussed here, but that only works if the variables are set before the respective library (e.g. numpy) is imported. The easiest way to control thread limits after an import has happened appears to be threadpoolctl.
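For context, the environment-variable approach looks roughly like the following sketch (which variable matters depends on the BLAS/OpenMP backend numpy is linked against):

import os

# These must be set before numpy is imported, because the BLAS backend
# reads them only once, at initialization time.
os.environ["OMP_NUM_THREADS"] = "1"       # OpenMP-based backends
os.environ["OPENBLAS_NUM_THREADS"] = "1"  # OpenBLAS
os.environ["MKL_NUM_THREADS"] = "1"       # Intel MKL

import numpy as np  # thread pools are now initialized with the limits above

Once numpy (or mne, which imports numpy) has already been imported, changing these variables has no effect, which is exactly the limitation threadpoolctl works around.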
Proposed Implementation
I have successfully used the syntax
from threadpoolctl import threadpool_limits

# Limit BLAS threads (e.g. OpenBLAS or MKL) to n_jobs for the duration of the
# block; the previous limits are restored automatically on exit.
with threadpool_limits(limits=n_jobs, user_api="blas"):
    mne.do_something()
to control the threads used in an mne call. The same approach could be used internally to make better use of the existing n_jobs argument without forcing the user to do it themselves. If this proves successful, it might make sense to add the n_jobs argument in even more places.
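For illustration, the internal use might look something like the sketch below; the helper name and signature are hypothetical and not part of MNE's API:

from threadpoolctl import threadpool_limits

def _call_with_thread_limit(func, n_jobs, *args, **kwargs):
    # Hypothetical internal helper: cap BLAS threads at n_jobs while func runs,
    # restoring the previous limits when the context manager exits.
    with threadpool_limits(limits=n_jobs, user_api="blas"):
        return func(*args, **kwargs)

An n_jobs-aware mne function could then route its heavy numpy/BLAS calls through such a helper, so that the user-facing n_jobs value also bounds the implicit BLAS parallelism.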
Comments
Yes, it’s the sklearn behavior.
@larsoner thanks for the offer. I would love to, but I am afraid it would take some time. I already have a few PRs on my todo list, one of them already for mne, and I haven’t gotten to any of them yet… So if it can wait for a few weeks™, maybe. But I won’t mind if someone else does it before then.