
Runtime Error running HistGradientBoostingRegressor on GPU

See original GitHub issue

I’m getting a RuntimeError when I try to run a HistGradientBoostingRegressor on a CUDA GPU.

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Here is the short code snippet I run to reproduce the error:

from sklearn.datasets import make_regression
from hummingbird.ml import convert
from sklearn.ensemble import HistGradientBoostingRegressor

# Fit a scikit-learn model on synthetic regression data.
X, y = make_regression(n_samples=1000, n_features=8, n_informative=5, n_targets=1, random_state=42, shuffle=True)
clf = HistGradientBoostingRegressor(random_state=42)
clf.fit(X, y)

# Baseline prediction with plain scikit-learn.
prediction_sk = clf.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
print("Prediction sk: ", prediction_sk)

# Convert the fitted model into a PyTorch-backed Hummingbird container.
model = convert(clf, 'pytorch')

# Prediction with the converted model, still on the CPU.
prediction_pt_cpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
print("Prediction pt CPU: ", prediction_pt_cpu)

# Move the converted model to the GPU and predict again.
model.to('cuda')

prediction_pt_gpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
print("Prediction pt GPU: ", prediction_pt_gpu)

At the line prediction_pt_gpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]) I get the following error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Full Traceback:

Traceback (most recent call last):
  File "D:/Repositories/masterarbeit/RandForest_Test/Hummingbird_test.py", line 19, in <module>
    prediction_pt_gpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_container.py", line 256, in predict
    return self._run(self._predict, *inputs)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_container.py", line 83, in _run
    return function(*inputs)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_container.py", line 349, in _predict
    return self.model.forward(*inputs).cpu().numpy().ravel()
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_topology.py", line 475, in forward
    pytorch_outputs = pytorch_op(*(variable_map[input] for input in operator.input_full_names))
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\operator_converters\_tree_implementations.py", line 246, in forward
    output = self.aggregation(output)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\operator_converters\_tree_implementations.py", line 522, in aggregation
    return self.post_transform(output)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\operator_converters\_gbdt_commons.py", line 115, in apply
    x += base_prediction
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
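The final frame, x += base_prediction inside _gbdt_commons.py, points at the likely cause: a constant tensor created at conversion time is held as a plain Python attribute, so moving the module to the GPU relocates its parameters and registered buffers but leaves that constant on the CPU. Below is a minimal sketch of this failure mode in plain PyTorch; the class names are invented for illustration, and this is not Hummingbird’s actual code:

import torch

class Plain(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Plain attribute: Module.to() does NOT move this tensor.
        self.base_prediction = torch.zeros(1)

    def forward(self, x):
        return x + self.base_prediction

class Buffered(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Registered buffer: Module.to() moves it along with the module.
        self.register_buffer("base_prediction", torch.zeros(1))

    def forward(self, x):
        return x + self.base_prediction

if torch.cuda.is_available():
    x = torch.ones(3, device="cuda")
    print(Buffered().to("cuda")(x))  # works: both tensors live on cuda:0
    print(Plain().to("cuda")(x))     # RuntimeError: ... cuda:0 and cpu!

Registering such constants as buffers is the standard fix on the library side; the issue being closed, and the thanks in the comments below, suggest Hummingbird shipped that kind of fix.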

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8

Top GitHub Comments

1 reaction
speedfreakw commented on Dec 16, 2020

Haha, yes, the speed of Hummingbird is also incredible 😄 Currently it cuts prediction time from 400 ms to only 22 ms, and that is huge!
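(For reference, a before/after number like that can be measured by continuing the snippet from the issue with a quick timing sketch; the zero-filled batch and the repetition count here are arbitrary choices, not taken from the thread:)

import time
import numpy as np

batch = np.zeros((1, 8))

def mean_ms(predict, n=100):
    predict(batch)  # warm-up call so one-time setup cost is excluded
    start = time.perf_counter()
    for _ in range(n):
        predict(batch)
    return (time.perf_counter() - start) / n * 1000  # mean milliseconds per call

print("scikit-learn CPU: %.1f ms" % mean_ms(clf.predict))
print("Hummingbird GPU:  %.1f ms" % mean_ms(model.predict))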

1 reaction
interesaaat commented on Dec 16, 2020

I was hoping for "The speed with which you have fixed all the bugs I've found in Hummingbird is incredible!" 😄

Glad to hear we helped!

