
Evaluation Metric Modification

See original GitHub issue

I would like to use an evaluation metric as follows, where I compute the Pearson correlation coefficient (PCC) between y_true and y_score for a multi-output regression task. Since I perform the fitting on a low-dimensional output, I would like to use the PCA components to get back to the original dimensions and then compute the coefficient.

In the code snippet below, I use a variable captured from the enclosing scope, but I wanted to know how to pass additional arguments (like the PCA object) to the metric class.

# pca_y is captured from the enclosing scope and this class is defined locally,
# which kind of isn't the right way to do it

import numpy as np
from pytorch_tabnet.metrics import Metric


class PCC(Metric):
    def __init__(self):
        self._name = "pcc"
        self._maximize = True

    def __call__(self, y_true, y_score):
        # Project the reduced targets and predictions back to the original
        # output space using the fitted PCA components.
        y_true, y_score = y_true @ pca_y.components_, y_score @ pca_y.components_
        # Average the per-sample Pearson correlation coefficients.
        corrsum = 0
        for i in range(len(y_true)):
            corrsum += np.corrcoef(y_true[i], y_score[i])[1, 0]
        return corrsum / y_true.shape[0]

I will be passing this to model.fit(..., eval_metric=[PCC]). Since it takes classes as inputs, I am finding it difficult to understand how to pass the PCA object into the metric class. It could be a naive question, but I would like your input on it.
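For context, this is roughly how the custom class ends up being used. The sketch below is not part of the original issue; it assumes a TabNetRegressor and hypothetical data variables (X_train, y_low, X_valid, y_low_valid), where y_low denotes the PCA-reduced targets used for fitting:

from pytorch_tabnet.tab_model import TabNetRegressor

model = TabNetRegressor()
model.fit(
    X_train, y_low,                      # y_low: PCA-reduced training targets (hypothetical names)
    eval_set=[(X_valid, y_low_valid)],
    eval_metric=[PCC],                   # the class itself is passed, not an instance
)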

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6

Top GitHub Comments

1 reaction
Optimox commented, Oct 23, 2022

I think this is beyond the scope of the library and would probably create a lot of complexity for a very rare use case.

1 reaction
Optimox commented, Oct 19, 2022

You won’t be able to easily add more inputs to the MetricContainer: https://github.com/dreamquark-ai/tabnet/blob/4fa545da50796f0d16f49d0cb476d5a30c2a27c1/pytorch_tabnet/metrics.py#L122

But since the PCA transform is fixed, I think it would do the job to define it during the initialization of the class.

I’m sorry but I don’t think there is an easier way to do what you want.
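A minimal sketch of that suggestion, assuming the PCA was fitted and saved before training so the metric can load it itself during __init__ rather than receiving it as an argument; the pca_y.pkl path is a hypothetical placeholder:

import numpy as np
import joblib
from pytorch_tabnet.metrics import Metric


class PCC(Metric):
    def __init__(self):
        self._name = "pcc"
        self._maximize = True
        # Assumption: the fitted PCA was saved to disk beforehand, so the
        # metric can load it here instead of taking it as an argument.
        self._pca = joblib.load("pca_y.pkl")

    def __call__(self, y_true, y_score):
        # Map reduced targets and predictions back towards the original space.
        y_true = y_true @ self._pca.components_
        y_score = y_score @ self._pca.components_
        corrsum = 0
        for i in range(len(y_true)):
            corrsum += np.corrcoef(y_true[i], y_score[i])[1, 0]
        return corrsum / y_true.shape[0]

Because the PCA transform is fixed at training time, loading it once in __init__ keeps the metric self-contained and compatible with eval_metric=[PCC], which only accepts classes instantiated without arguments.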

Read more comments on GitHub >
