But now we have things like `tf.contrib.summary.record_summaries_every_n_global_steps`, which can be used like this:
```python
import tensorflow as tf
import tensorflow.contrib.summary as tfsum

summary_writer = tfsum.create_file_writer(logdir, flush_millis=3000)
summaries = []

# First we create one summary which runs every n global steps
with summary_writer.as_default(), tfsum.record_summaries_every_n_global_steps(30):
    summaries.append(tfsum.scalar("train/loss", loss))

# And then one that runs every single time?
with summary_writer.as_default(), tfsum.always_record_summaries():
    summaries.append(tfsum.scalar("train/accuracy", accuracy))

# Then create an optimizer which uses the global step
step = tf.train.create_global_step()
train = tf.train.AdamOptimizer().minimize(loss, global_step=step)
```
And now come a few questions:
1. If we just run `session.run(summaries)` in a loop, I assume that the accuracy summary would get written every single time, while the loss one wouldn't, because it only gets written if the global step is divisible by 30? (See the sketch after this list.)
2. Assuming the summary ops automatically evaluate their dependencies, I never need to run `session.run([accuracy, summaries])` but can just run `session.run(summaries)`, since they have a dependency in the graph, right?
3. If 2) holds, can I just add a control dependency from the summaries to the training op, so that the summaries get written on every training run? Or is that bad practice?
4. Why do `tf.contrib.summary.scalar` (and others) take in a `step` parameter?
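For question 1, here is a minimal sketch of such a loop, assuming TF 1.x graph mode. The stand-in `loss`/`accuracy` tensors, the log directory, and the explicit step increment are made up for illustration (normally the optimizer would bump the step), and `tfsum.initialize()` is needed once so the writer's ops actually run:

```python
import tensorflow as tf
import tensorflow.contrib.summary as tfsum

# Stand-in tensors so the snippet is self-contained (hypothetical model).
loss = tf.random_uniform([])
accuracy = tf.random_uniform([])

step = tf.train.create_global_step()
increment_step = tf.assign_add(step, 1)  # stands in for the optimizer's update

summary_writer = tfsum.create_file_writer("/tmp/tfsum_demo", flush_millis=3000)
summaries = []
with summary_writer.as_default(), tfsum.record_summaries_every_n_global_steps(30):
    summaries.append(tfsum.scalar("train/loss", loss))
with summary_writer.as_default(), tfsum.always_record_summaries():
    summaries.append(tfsum.scalar("train/accuracy", accuracy))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tfsum.initialize(session=sess)  # runs the summary writer's init op
    for _ in range(100):
        # Expectation from question 1: the accuracy summary is recorded on
        # every call, the loss summary only when step is divisible by 30.
        sess.run(summaries)
        sess.run(increment_step)
```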
By adding a control dependency in 3) I mean doing this:
```python
with tf.control_dependencies(summaries):
    train = tf.train.AdamOptimizer().minimize(loss, global_step=step)
```
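If 3) works, a plain `session.run(train)` should then be enough: TensorFlow runs everything listed in `tf.control_dependencies` before the ops created inside the block, so the summary ops would be evaluated as a side effect of every training step.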
To answer my own question: I played around with this a little bit, and it seems that if one combines `tf.control_dependencies` with `tfsum.record_summaries_every_n_global_steps`, it behaves as expected and the summary only gets recorded every nth step. But if the two are fetched together in one session call, such as `session.run([train, summaries])`, the summaries are stored every once in a while, but not exactly every nth step. I tested this with n=2: with the second approach the summary was often written at odd steps, while with the control-dependency approach it was always written on an even step.
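Here is a minimal sketch of that experiment, assuming TF 1.x graph mode; the toy regression model, the log directories, and the `run` helper are invented for illustration:

```python
import numpy as np
import tensorflow as tf
import tensorflow.contrib.summary as tfsum

def run(use_control_dependency, logdir):
    """Trains a toy model, recording a loss summary every 2nd global step."""
    tf.reset_default_graph()
    x = tf.placeholder(tf.float32, [None, 1])
    y = tf.placeholder(tf.float32, [None, 1])
    loss = tf.reduce_mean(tf.square(tf.layers.dense(x, 1) - y))

    step = tf.train.create_global_step()
    writer = tfsum.create_file_writer(logdir, flush_millis=3000)
    with writer.as_default(), tfsum.record_summaries_every_n_global_steps(2):
        summaries = [tfsum.scalar("train/loss", loss)]

    if use_control_dependency:
        # Approach 1: the summaries run as prerequisites of the train op.
        with tf.control_dependencies(summaries):
            train = tf.train.AdamOptimizer().minimize(loss, global_step=step)
    else:
        train = tf.train.AdamOptimizer().minimize(loss, global_step=step)

    feed = {x: np.random.randn(8, 1), y: np.random.randn(8, 1)}
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tfsum.initialize(session=sess)
        for _ in range(20):
            if use_control_dependency:
                sess.run(train, feed_dict=feed)
            else:
                # Approach 2: fetch the train op and the summaries together.
                sess.run([train] + summaries, feed_dict=feed)

run(True, "/tmp/summ_control_dep")  # summaries land exactly on even steps
run(False, "/tmp/summ_fetched")     # summaries land on odd steps too (as observed above)
```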