tf.summary.text fails to keep summaries
This issue was migrated from https://github.com/tensorflow/tensorflow/issues/10204:
I ran into the following issues when using tf.summary.text and viewing the summaries in TensorBoard:
- It shows text summaries in random order.
- It randomly removes existing summaries and shows me only a few. (Is there a configuration for the maximum number of summaries to keep?)
- I can usually see only around 5 summaries in TensorBoard even though I added summaries 100+ times.
- Other summaries work properly when I merge them like below:

```python
summary_op = tf.summary.merge(summaries)  # other scalar, distribution, histogram summaries
valid_summary_op = tf.summary.merge([valid_sentence_summary])  # text summary from tf.summary.text
```
I can reproduce this problem in two different environments.
- Ubuntu 14.04 / CUDA 8.0 / cuDNN 5.1 / TF 1.1.0rc2 / Bazel 0.4.5 / TITAN X Pascal (using 0-4 GPUs)
- macOS Sierra / TF 1.1.0rc2 / Bazel 0.4.5 / no GPU
Below is sample code to reproduce this issue.
```python
import tensorflow as tf

text_list = ['this is the first text', 'this is 2nd text', 'this is random text']
id2sent = {id: sent for id, sent in enumerate(text_list)}
sent2id = {sent: id for id, sent in id2sent.items()}

tf.reset_default_graph()

outer_string = tf.convert_to_tensor('This is string outside inner scope.')
outer_summary = tf.summary.text('outside_summary', outer_string)

with tf.name_scope('validation_sentences') as scope:
    id_list = tf.placeholder(tf.int32, shape=[3], name='sent_ids')
    valid_placeholder = tf.placeholder(tf.string, name='valid_summaries')
    inner_summary = tf.summary.text('sent_summary', valid_placeholder)

summaries = [outer_summary, inner_summary]
summary_op = tf.summary.merge(summaries)

sess = tf.Session()
summary_writer = tf.summary.FileWriter(logdir='./text_summary', graph=sess.graph)

for step in range(10):
    predicted_sents_ids = sess.run(id_list, feed_dict={id_list: [0, 1, 2]})
    # list of strings
    predicted_sents = [id2sent[id] for id in predicted_sents_ids]
    valid_summary = sess.run(summary_op,
                             feed_dict={valid_placeholder: predicted_sents})
    summary_writer.add_summary(valid_summary, global_step=step)
    # summary_writer.flush()  # flush() didn't help..
```
And below is the result in TensorBoard (screenshot not migrated).
Issue Analytics
- Created: 6 years ago
- Reactions: 5
- Comments: 8
Top GitHub Comments
FYI, PR #1138 added a --samples_per_plugin flag that can be used to set the number of samples retained on a per-plugin basis. So e.g. --samples_per_plugin=text=100 should set the text dashboard to retain 100 samples for each series.

@bsautermeister - the random skipping is a result of reservoir sampling applied to all the event file data that TensorBoard processes. The sampling process means that for each tag's tensors, TensorBoard displays only a random subsample of up to N values. The value of N varies - it's 1000 for the scalars dashboard charts, for example - but it's set to just 10 by default for the images, audio, and text dashboards (the former two are set explicitly in DEFAULT_TENSOR_SIZE_GUIDANCE, while the latter inherits the overall default from DEFAULT_SIZE_GUIDANCE): https://github.com/tensorflow/tensorboard/blob/0.4.0-rc3/tensorboard/backend/application.py#L50

I'm not sure if there's a good way right now to override those values; others on the project might know better - if not, we can at least create a feature request for that.
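To make the "random skipping" behavior concrete: reservoir sampling keeps a bounded, uniformly random subsample of a stream without knowing its length in advance. Below is a minimal sketch of the idea (Vitter's Algorithm R) in plain Python - this is an illustration only, not TensorBoard's actual implementation, and the function name is made up:

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of up to k items from a stream
    of unknown length, using O(k) memory (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # First k items always fit in the reservoir.
            reservoir.append(item)
        else:
            # Item i+1 replaces a random slot with probability k / (i + 1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# 100 text summaries written, but only ~10 survive the sampling:
steps = ['step %d' % i for i in range(100)]
kept = reservoir_sample(steps, k=10, seed=0)
print(len(kept))
```

This matches the symptom in the issue: with a reservoir size of 10 for the text dashboard, writing 100+ summaries still leaves only about 10 visible, and which ones survive is random.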
In terms of writing out argparse parameters, I'm guessing that the limit of 10 is coming from the reservoir size as discussed above, but I'd expect that to apply to a single tag over all steps, rather than to a single step across tags. How exactly are you calling tf.summary.text()? If you can show a minimal reproduction we might be able to diagnose more closely. If it does turn out to be the reservoir size limit again, one option there might be doing a single tf.summary.text() call, passing in a rank-1 tensor with a list of all your parameters, under a unique tag name (like "parameters") that you don't use for any other summary ops.
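Assuming you are running a TensorBoard build that includes the --samples_per_plugin flag mentioned above, raising the text dashboard's retention limit would look something like this (the logdir path is just the one from the repro script):

```shell
# Retain up to 100 text samples per tag; a value of 0 means "keep everything".
# The flag takes comma-separated plugin=count pairs, e.g. text=100,images=0.
tensorboard --logdir=./text_summary --samples_per_plugin=text=100
```

Note this only changes what TensorBoard retains when reading event files; all summaries are still written to disk regardless.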