[Bug] Under-utilization of CPU
I trained my model on a 36-core CPU, set n_jobs=-1,
and it worked:
import autosklearn.classification

automl = autosklearn.classification.AutoSklearnClassifier(
    n_jobs=-1  # use all available cores
)
However, htop shows that auto-sklearn occupies only one or two cores most of the time. Is there any way to improve CPU utilization?
- python 3.8.5
- auto-sklearn 0.14.2
- Ubuntu 20.04
Issue Analytics
- State:
- Created: 2 years ago
- Comments: 7 (2 by maintainers)
This is most likely due to https://github.com/automl/SMAC3/issues/774, which basically says that getting new configurations (i.e. which model with which hyperparameters to try next) is not executed in parallel. When running in parallel, if evaluating configurations is faster than the suggestion mechanism, you'll observe the pattern reported here, namely that auto-sklearn uses only a single core. Up to iteration 30, auto-sklearn suggests configurations via meta-learning (in a single batch), which explains why parallelism works in the beginning. Unfortunately, there is not really anything that can be done about this. In such cases you might be better off using random search, as it can make full use of the parallel setting.
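To illustrate why random search parallelizes fully, here is a minimal sketch using scikit-learn's RandomizedSearchCV rather than auto-sklearn's own API (an assumption for illustration only, and it requires scikit-learn to be installed): because random candidates are drawn independently, no worker has to wait on a sequential suggestion step the way SMAC's workers do.

```python
# Minimal sketch (NOT auto-sklearn's API): random search samples candidate
# configurations independently, so all cross-validation fits can run
# concurrently across cores without a sequential suggestion bottleneck.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

param_distributions = {
    "n_estimators": [10, 50, 100],
    "max_depth": [2, 4, 8, None],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=4,   # candidates are sampled up front...
    n_jobs=-1,  # ...so every fit can run in parallel
    cv=2,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

With n_jobs=-1 here, htop should show all cores busy for the duration of the search, which is the behavior the maintainer describes random search recovering.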
OK, I only know a little about meta-learning, but it sounds like this: after meta-learning has suggested its batch of configurations, the search no longer runs in parallel, does it?
In my tests, parallelism did not affect accuracy in my case, so (combined with my understanding above) I think this is not a real bug that needs attention. Therefore, I will close this issue in 3 days if there is no objection.
By the way, I actually encountered the bug mentioned in #1236 (though not every time). I think there may be some relationship between the two issues (meta-learning?).
Finally, thanks to the auto-sklearn team for your contribution. Auto-sklearn is really an awesome and powerful tool. 😄