Welcome to the OStack Knowledge Sharing Community for programmers and developers: Open, Learn and Share
Welcome To Ask or Share your Answers For Others


0 votes
886 views
in Technique by (71.8m points)

tensorflow - TensorBoard scalars and graphs duplicated

I'm using TensorBoard to visualize network metrics and graph.

I create a session sess = tf.InteractiveSession() and build the graph in a Jupyter notebook.

In the graph, I include two summary scalars:

with tf.variable_scope('summary') as scope:
    loss_summary = tf.summary.scalar('Loss', cross_entropy)
    train_accuracy_summary = tf.summary.scalar('Train_accuracy', accuracy)

I then create a summary_writer = tf.summary.FileWriter(logdir, sess.graph) and run:

_, loss_sum, train_accuracy_sum = sess.run([...], feed_dict=feed_dict)

I write the metrics:

summary_writer.add_summary(loss_sum, i)
summary_writer.add_summary(train_accuracy_sum, i) 
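Pieced together, the setup above looks roughly like this. This is a minimal sketch, not the asker's actual code: cross_entropy and accuracy are stubbed out with constants, the training op is omitted, and the TF 1.x API is used (spelled tf.compat.v1 on TF 2.x):

```python
import tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.Graph().as_default():
    cross_entropy = tf.constant(0.7)  # stub for the real loss tensor
    accuracy = tf.constant(0.9)       # stub for the real accuracy tensor
    with tf.variable_scope('summary'):
        loss_summary = tf.summary.scalar('Loss', cross_entropy)
        train_accuracy_summary = tf.summary.scalar('Train_accuracy', accuracy)

    logdir = tempfile.mkdtemp()       # stand-in for the real log directory
    with tf.Session() as sess:
        summary_writer = tf.summary.FileWriter(logdir, sess.graph)
        for i in range(3):            # stands in for the training loop
            loss_sum, train_accuracy_sum = sess.run(
                [loss_summary, train_accuracy_summary])
            summary_writer.add_summary(loss_sum, i)
            summary_writer.add_summary(train_accuracy_sum, i)
        summary_writer.close()
```

Pointing TensorBoard at logdir then shows the two scalar charts and the graph.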

I run the code three times.

Each time I run, I re-import TF and create a new interactive session.

But, in TensorBoard, a separate scalar window is created for each run:

[Screenshot: a separate scalar chart appears for each run]

Also, the graph appears to be duplicated if I check data for the last run:

[Screenshot: the graph visualization is duplicated]

How do I prevent duplication of the graph and scalar window each time I run?

  • I want all data to appear in the same scalar plots (with multiple series per plot).
  • I want each run to reference a single graph visualization.


1 Answer

0 votes
by (71.8m points)

I suspect the problem arises because you are running the code three times in the same process (same script, Jupyter notebook, or whatever), and those invocations share the same "default graph" in TensorFlow. TensorFlow needs to give each node in the graph a unique name, so it appends "_1" and "_2" to the names of the summary nodes in the second and third invocations.

How do you avoid this? The easiest way is to create a new graph each time you run the code. There are (at least) three ways to do this:

  • Wrap the code in a with tf.Graph().as_default(): block, which constructs a new tf.Graph object and sets it as the default graph for the extent of the with block.

  • If you construct your session before creating the graph, you can construct your session as sess = tf.InteractiveSession(graph=tf.Graph()). The newly constructed tf.Graph object remains as the default graph until you call sess.close().

  • Call tf.reset_default_graph() between invocations of the code.
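The collision and two of the fixes can be sketched as follows (TF 1.x-style API; on TF 2.x these calls live under tf.compat.v1):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Re-running the summary code in the same default graph renames the nodes:
tf.reset_default_graph()
first = tf.summary.scalar('Loss', tf.constant(0.5))
second = tf.summary.scalar('Loss', tf.constant(0.5))  # a second "run"
print(first.op.name, second.op.name)  # Loss Loss_1

# Fix: give each run its own graph, so the names stay stable.
for _ in range(2):
    with tf.Graph().as_default():
        s = tf.summary.scalar('Loss', tf.constant(0.5))
        print(s.op.name)  # Loss on every iteration

# Resetting the default graph between runs has the same effect.
tf.reset_default_graph()
s = tf.summary.scalar('Loss', tf.constant(0.5))
print(s.op.name)  # Loss
```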

The with-block approach is the "most structured" way to do things, and might be best if you are writing a standalone script. However, since you are using tf.InteractiveSession, I assume you are using an interactive REPL of some kind, and the other two approaches are probably more useful (e.g. for splitting the execution across multiple cells).
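As a sketch of the second option in an interactive setting: binding a brand-new graph to the session keeps repeated runs from piling nodes into one shared default graph (again TF 1.x-style API, tf.compat.v1 on TF 2.x):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

for run in range(2):  # stands in for re-running a notebook cell
    sess = tf.InteractiveSession(graph=tf.Graph())
    loss_summary = tf.summary.scalar('Loss', tf.constant(0.25))
    print(loss_summary.op.name)  # Loss -- no "_1" suffix on the second run
    sess.close()  # the fresh graph stops being the default here
```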


