
Prediction issue (Predictions from evaluate and from predict don't line up)

See original GitHub issue

When I edit metrics.py to print labels and predictions inside the normalized_discounted_cumulative_gain function, the predictions I see aren’t the same as the ones I get when printing all values from the generator returned by estimator.predict. The labels also appear in a different order than the input order, even though I removed all shuffling from the input function, which is taken pretty much directly from the example files.

Is there a reason the labels would get shuffled and the predictions would look different? For example, when I have fewer examples than the list size, the predictions printed from evaluate have 0’s for the padded values, but the predictions given by estimator.predict all have different decimal values. I’m using a custom hook on evaluate and predict to warm start from the correct directory, since my warm start directory and model directory are different and the estimator has trouble reading the checkpoints if I just use the checkpoint_path= setting:

```python
import tensorflow as tf


class InitHook(tf.train.SessionRunHook):
    """Warm starts the session from the latest checkpoint in checkpoint_dir."""

    def __init__(self, checkpoint_dir):
        self.modelPath = checkpoint_dir
        self.initialized = False

    def begin(self):
        if not self.initialized:
            checkpoint = tf.train.latest_checkpoint(self.modelPath)
            tf.train.warm_start(checkpoint)
            print('warmstart from checkpoint {} from path {}'.format(checkpoint, self.modelPath))
            self.initialized = True
        else:
            pass  # already warm started; nothing to do
```

(Hook taken from: https://stackoverflow.com/questions/49846207/tensorflow-estimator-warm-start-from-and-model-dir)

This doesn’t seem to be causing issues though, so I’m not sure what’s going on.
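For context, the hook above is attached through the hooks argument that both evaluate and predict accept; a minimal sketch of the wiring (the estimator, input functions, and directory below are placeholders, not from the original setup):

```python
# Sketch only: `ranking_estimator`, `eval_input_fn`, `predict_input_fn`, and the
# directory are placeholder names for whatever the real setup uses.
warm_start_hook = InitHook(checkpoint_dir='/path/to/warm_start_dir')

metrics = ranking_estimator.evaluate(input_fn=eval_input_fn,
                                     hooks=[warm_start_hook])

predictions = ranking_estimator.predict(input_fn=predict_input_fn,
                                        hooks=[warm_start_hook])
for scores in predictions:
    print(scores)
```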

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
DianeHu commented, Jul 9, 2019

I figured it out. Inside _libsvm_generate in data.py, there’s an np.random.shuffle call before the doc list is trimmed to the max list size. I think this is meant to trim the doc list at random, but it becomes a problem on prediction if libsvm_generator is used in the input_fn, because the output predictions no longer match the input order within each query group. Since labels aren’t output on prediction, you can’t match the output probabilities back to the original query-document pairs.

On training or evaluation this isn’t an issue, because the features and predictions run through those functions together, so it doesn’t matter that the original order is jumbled.
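To make the failure mode concrete, the pattern looks roughly like the sketch below; this is a simplified stand-in for the shuffle-before-trim logic, not the actual tensorflow_ranking code:

```python
import numpy as np

LIST_SIZE = 3  # maximum number of documents kept per query


def generate_query(doc_features, labels):
    """Simplified sketch of a shuffle-before-trim generator (not the library code)."""
    order = np.arange(len(doc_features))
    np.random.shuffle(order)      # documents are reordered here ...
    order = order[:LIST_SIZE]     # ... and only then trimmed to the list size
    return [doc_features[i] for i in order], [labels[i] for i in order]

# On evaluation, features and labels travel through the generator together, so each
# prediction is still compared against its own (shuffled) label. On prediction, only
# the features go in, so the scores come back in the shuffled order and can no longer
# be mapped to the original documents.
```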

0 reactions
xuanhuiwang commented, Jul 9, 2019

Great to know that you got to the bottom of the issue. My general advice is to NOT use the LibSVM generator for now, since it is only for demo purposes. Given how many issues there are with the LibSVM generator, we are considering removing it in future versions.
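Until that happens, one workaround is to build a deterministic input_fn directly on tf.data instead of the demo generator; the sketch below assumes the features have already been parsed and padded in the original input order (all names are illustrative):

```python
import tensorflow as tf


def make_predict_input_fn(features, batch_size=32):
    """Sketch of an order-preserving input_fn for prediction (illustrative only).

    `features` is assumed to be a dict of numpy arrays shaped
    [num_queries, list_size, ...], padded in the original input order.
    """
    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices(features)
        return dataset.batch(batch_size)  # no shuffle, so order is preserved
    return input_fn
```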

Read more comments on GitHub >

Top Results From Across the Web

  • Different results between model.evaluate() and model.predict()
    I'm wondering whether the issue is different predictions or different metrics calculations. If you do this, which version does it match? If it …
  • Keras: model.prediction does not match model.evaluation loss
    It turns out that model.predict doesn't return predictions in the same order generator.labels does, and that is why MSE was much larger when …
  • Making Predictions with Regression Analysis - Statistics By Jim
    Learn how to use regression analysis to make predictions and determine whether they are both unbiased and precise.
  • How to Make Predictions with Keras - Machine Learning Mastery
    We can predict the class for new data instances using our finalized classification model in Keras using the predict_classes() function. …
  • Ways to Evaluate Regression Models - Towards Data Science
    We can understand the bias in prediction between two models using the arithmetic mean of the predicted values. For example, the mean of …
