Runtime Error running HistGradientBoostingRegressor on GPU
See original GitHub issue
I'm getting a RuntimeError when I try to run a HistGradientBoostingRegressor on a CUDA GPU.
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Here is a short code snippet that reproduces the error:
from sklearn.datasets import make_regression
from hummingbird.ml import convert
from sklearn.ensemble import HistGradientBoostingRegressor
X, y = make_regression(n_samples=1000, n_features=8, n_informative=5, n_targets=1, random_state=42, shuffle=True)
clf = HistGradientBoostingRegressor(random_state=42)
clf.fit(X, y)
prediction_sk = clf.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
print("Prediction sk: ", prediction_sk)
model = convert(clf, 'pytorch')
prediction_pt_cpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
print("Prediction pt CPU: ", prediction_pt_cpu)
model.to('cuda')
prediction_pt_gpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
print("Prediction pt GPU: ", prediction_pt_gpu)
At the line prediction_pt_gpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
I get the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Full Traceback:
Traceback (most recent call last):
  File "D:/Repositories/masterarbeit/RandForest_Test/Hummingbird_test.py", line 19, in <module>
    prediction_pt_gpu = model.predict([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_container.py", line 256, in predict
    return self._run(self._predict, *inputs)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_container.py", line 83, in _run
    return function(*inputs)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_container.py", line 349, in _predict
    return self.model.forward(*inputs).cpu().numpy().ravel()
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\_topology.py", line 475, in forward
    pytorch_outputs = pytorch_op(*(variable_map[input] for input in operator.input_full_names))
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\operator_converters\_tree_implementations.py", line 246, in forward
    output = self.aggregation(output)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\operator_converters\_tree_implementations.py", line 522, in aggregation
    return self.post_transform(output)
  File "D:\Repositories\masterarbeit\venvPycharm\lib\site-packages\hummingbird\ml\operator_converters\_gbdt_commons.py", line 115, in apply
    x += base_prediction
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
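Judging from the last frame (x += base_prediction), the likely mechanism is a PyTorch detail rather than anything specific to this model: nn.Module.to() only moves parameters and registered buffers, so a tensor stored as a plain Python attribute (as base_prediction appears to be here) stays on the CPU after model.to('cuda'). The minimal sketch below illustrates this with a dtype cast so it runs without a GPU; the Broken/Fixed module names are hypothetical, purely for illustration.

```python
import torch


class Broken(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Plain attribute: .to() will NOT move or cast this tensor.
        self.base_prediction = torch.zeros(1)


class Fixed(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Registered buffer: .to() moves/casts it along with the module.
        self.register_buffer("base_prediction", torch.zeros(1))


broken, fixed = Broken(), Fixed()
# Cast stands in for .to('cuda'); the mechanism is the same.
broken.to(torch.float64)
fixed.to(torch.float64)

print(broken.base_prediction.dtype)  # stays torch.float32 -- left behind
print(fixed.base_prediction.dtype)   # torch.float64 -- travels with the module
```

A possible workaround, if the version of Hummingbird in use supports it, might be to pass the target device at conversion time (e.g. convert(clf, 'pytorch', device='cuda')) instead of calling model.to('cuda') afterwards; I have not verified this against the version in the traceback.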
Issue Analytics
- Created: 3 years ago
- Comments: 8
Haha, yes, the speed of Hummingbird is also incredible 😄 It currently cuts the prediction time from 400 ms to only 22 ms, and that is huge!
I was hoping for "The speed with which you have fixed all the bugs I've found is incredible!" 😄 Glad to hear we helped!