Error when using RMSE loss function with TabNetRegressor
I want to use RMSE as a loss function for a regression problem, but when I try to do so it gives me an error. Below is the code:
```python
import torch
import torch.nn as nn
from pytorch_tabnet.tab_model import TabNetRegressor

# RMSE as the square root of the built-in MSE loss
def loss_fn(x, y):
    criterion = nn.MSELoss()
    return torch.sqrt(criterion(x, y))

clf1 = TabNetRegressor(
    optimizer_fn=torch.optim.Adam,
    cat_idxs=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    cat_dims=[2, 2, 2, 4, 4, 4, 8, 8, 7, 15],
    lambda_sparse=0,
    optimizer_params=dict(lr=2e-2),
    scheduler_params={"step_size": 10, "gamma": 0.9},
    scheduler_fn=torch.optim.lr_scheduler.StepLR,
    mask_type='entmax',
)

# fit the model
clf1.fit(
    x_train, y_train,
    eval_set=[(x_train, y_train), (x_val, y_val)], loss_fn=loss_fn,
    eval_name=['train', 'valid'],
    eval_metric=['mse', 'rmse'],
    max_epochs=50, patience=100,
    batch_size=100000, virtual_batch_size=50000,
)
```
Below is the error:
![Screenshot from 2021-08-27 16-11-28](https://user-images.githubusercontent.com/81755307/131115420-e6748758-3e70-414b-be3b-493d10da527c.png)
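The traceback is only available as a screenshot, so the exact failure is an assumption here. A common failure mode when RMSE is built as `torch.sqrt(nn.MSELoss()(...))` is that the derivative of the square root is unbounded at zero, which turns into NaN gradients whenever a batch is fit exactly. A minimal sketch of that behaviour, using hypothetical tensors rather than the data from this issue:

```python
import torch
import torch.nn as nn

# Hypothetical tensors chosen so the batch MSE is exactly zero
pred = torch.zeros(4, 1, requires_grad=True)
target = torch.zeros(4, 1)

loss = torch.sqrt(nn.MSELoss()(pred, target))
loss.backward()

print(loss.item())  # 0.0
print(pred.grad)    # all NaN: d/dx sqrt(x) blows up as x -> 0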
Issue Analytics
- Created: 2 years ago
- Comments: 11
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Well, I guess it's totally OK to switch with
`loss = loss - something`
but I'd be curious to understand what the problem behind that is.

I did some testing and it's a problem with torch.sqrt; other techniques work fine.
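A hedged workaround sketch, not taken from the thread: keep the MSE-based RMSE but add a small constant inside the square root so the gradient stays finite. The function name and the epsilon value are illustrative choices.

```python
import torch
import torch.nn as nn

def rmse_loss(y_pred, y_true, eps=1e-8):
    # Adding a small epsilon before the square root keeps the gradient finite
    # when the batch MSE is (close to) zero; eps=1e-8 is an illustrative choice.
    return torch.sqrt(nn.MSELoss()(y_pred, y_true) + eps)

# Passed to TabNet the same way as in the snippet above, e.g.:
# clf1.fit(x_train, y_train, ..., loss_fn=rmse_loss)
```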