Assuming I have a bunch of summaries defined like:
import tensorflow as tf

loss = ...
tf.scalar_summary("loss", loss)
# ... more summaries ...
summaries = tf.merge_all_summaries()
I can evaluate the summaries tensor every few steps on the training data and pass the result to a SummaryWriter.
The result will be noisy summaries, because they're only computed on one batch.
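For the training data, what I do looks roughly like this (train_op, sess, the feed dicts, num_steps, and the every-100-steps frequency are just placeholders for my actual setup):

writer = tf.train.SummaryWriter("/tmp/logs", sess.graph)
for step in range(num_steps):
    if step % 100 == 0:
        # evaluate the merged summaries on the current training batch
        _, summary_str = sess.run([train_op, summaries], feed_dict=train_batch_feed)
        writer.add_summary(summary_str, step)  # noisy: reflects a single batch only
    else:
        sess.run(train_op, feed_dict=train_batch_feed)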
However, I would like to compute the summaries on the entire validation dataset.
Of course, I can't pass the validation dataset as a single batch, because it would be too big.
So, I'll get summary outputs for each validation batch.
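Concretely, the per-batch evaluation looks roughly like this (validation_batches is just a placeholder for however the validation feed dicts are produced):

# one serialized summary per validation batch
val_summary_strs = []
for val_feed in validation_batches:
    val_summary_strs.append(sess.run(summaries, feed_dict=val_feed))
# each element reflects only its own batch; writing them all to the SummaryWriter
# would just give me many noisy points instead of one value for the whole set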
Is there a way to average those summaries so that it appears as if the summaries have been computed on the entire validation set?
question from:
https://stackoverflow.com/questions/40788785/how-to-average-summaries-over-multiple-batches