Non-deterministic results in tf_ranking_tfrecord.py
Hello,
When I run tf_ranking_tfrecord.py, I get different nDCG metrics each time.
I have already tried the following:
- tf.random.set_seed(1)
- tf.compat.v1.random.set_seed(1)
- shuffle=False in _input_fn()

and I have not modified group_size=1.
Is it possible to make the results deterministic? And, if so, how?
Thanks.
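As a general illustration of why seeding TensorFlow alone may not be enough: every random source in the pipeline (such as shuffling in the input function) also has to be seeded or disabled. Here is a minimal, library-agnostic sketch in plain Python (the helper name is made up for illustration, not part of tf_ranking_tfrecord.py):

```python
import random

def shuffled_order(n, seed=None):
    # Toy stand-in for a shuffling input pipeline: without a seed the
    # order can differ between runs; with a fixed seed it is repeatable.
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    return order

# Same seed -> identical order on every call.
assert shuffled_order(10, seed=1) == shuffled_order(10, seed=1)
```

The same reasoning applies to training: any unseeded generator anywhere in the run (data order, weight initialization, dropout) can change the final metrics.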
Issue Analytics
- Created 3 years ago
- Comments: 14 (4 by maintainers)

@davidmosca one more thing I can think of is the tf_random_seed value in RunConfig:

```python
tf.estimator.RunConfig(
    model_dir=None,
    tf_random_seed=None,
    save_summary_steps=100,
    save_checkpoints_steps=_USE_DEFAULT,
    save_checkpoints_secs=_USE_DEFAULT,
    session_config=None,
    keep_checkpoint_max=5,
    keep_checkpoint_every_n_hours=10000,
    log_step_count_steps=100,
    train_distribute=None,
    device_fn=None,
    protocol=None,
    eval_distribute=None,
    experimental_distribute=None,
    experimental_max_worker_delay_secs=None,
    session_creation_timeout_secs=7200
)
```

https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig

@nishathussain Adding the tf_random_seed to RunConfig did the job - the results are now deterministic. Many thanks.
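Putting the suggestion together, a minimal sketch of the fix (the model_dir path and seed value here are illustrative, not taken from the original thread):

```python
import tensorflow as tf

# Setting tf_random_seed makes the Estimator seed graph-level randomness
# (weight initialization, dropout, etc.), so repeated runs over the same
# data order produce the same metrics.
run_config = tf.estimator.RunConfig(
    model_dir="/tmp/ranking_model",  # hypothetical path
    tf_random_seed=1,                # any fixed integer
)
```

For full determinism this is combined with shuffle=False (or a seeded shuffle) in the input function, so the data order is also fixed across runs.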