python - TensorFlow random_shuffle_queue is closed and has insufficient elements

I'm reading a batch of images from TFRecords (converted by this), following the idea from here.

My images are CIFAR images with shape [32, 32, 3], and as you can see in the log below, the shapes look normal while reading and batching the images (batch_size=100).
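
For context, the reading pipeline is essentially the standard TFRecord pattern sketched below (simplified, not my exact code; the feature names, filenames, and normalization are illustrative):

import tensorflow as tf

IMAGE_PIXELS = 32 * 32 * 3  # 3072

def read_and_decode(filename_queue):
    # Read one serialized example from the TFRecord file and parse it.
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        features={'image_raw': tf.FixedLenFeature([], tf.string),
                  'label': tf.FixedLenFeature([], tf.int64)})
    # The dtype passed to decode_raw has to match the dtype the data was
    # written with, otherwise the decoded length will not be 3072.
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    image.set_shape([IMAGE_PIXELS])
    image = tf.cast(image, tf.float32) * (1.0 / 255) - 0.5
    label = tf.cast(features['label'], tf.int32)
    return image, label

def inputs(batch_size=100):
    filename_queue = tf.train.string_input_producer(['cifar_train.tfrecords'])
    image, label = read_and_decode(filename_queue)
    images, labels = tf.train.shuffle_batch(
        [image, label], batch_size=batch_size,
        capacity=1000, min_after_dequeue=200)
    return images, labels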

The two most notable problems in the log, as far as I can tell, are:

  1. A shape of 12288, which I don't know where it comes from. All my tensors have shape [32, 32, 3] or [None, 3072].
  2. Running out of samples:

Compute status: Out of range: RandomSuffleQueue '_2_input/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)

How can I solve this?

Logs:

1- image shape is  TensorShape([Dimension(3072)])
1.1- images batch shape is  TensorShape([Dimension(100), Dimension(3072)])
2- images shape is  TensorShape([Dimension(100), Dimension(3072)])

W tensorflow/core/kernels/queue_ops.cc:79] Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
W tensorflow/core/common_runtime/executor.cc:1027] 0x7fa72abc89a0 Compute status: Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
     [[Node: input/shuffle_batch/random_shuffle_queue_enqueue = QueueEnqueue[Tcomponents=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/sub, input/Cast_1)]]
W tensorflow/core/kernels/queue_ops.cc:79] Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
W tensorflow/core/common_runtime/executor.cc:1027] 0x7fa72ab9d080 Compute status: Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
     [[Node: input/shuffle_batch/random_shuffle_queue_enqueue = QueueEnqueue[Tcomponents=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/sub, input/Cast_1)]]
W tensorflow/core/kernels/queue_ops.cc:79] Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
W tensorflow/core/common_runtime/executor.cc:1027] 0x7fa7285e55a0 Compute status: Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
     [[Node: input/shuffle_batch/random_shuffle_queue_enqueue = QueueEnqueue[Tcomponents=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/sub, input/Cast_1)]]
W tensorflow/core/kernels/queue_ops.cc:79] Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
W tensorflow/core/common_runtime/executor.cc:1027] 0x7fa72aadb080 Compute status: Invalid argument: Shape mismatch in tuple component 0. Expected [3072], got [12288]
     [[Node: input/shuffle_batch/random_shuffle_queue_enqueue = QueueEnqueue[Tcomponents=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/sub, input/Cast_1)]]
W tensorflow/core/common_runtime/executor.cc:1027] 0x7fa72ad499a0 Compute status: Out of range: RandomSuffleQueue '_2_input/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
     [[Node: input/shuffle_batch = QueueDequeueMany[component_types=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/shuffle_batch/n)]]
Traceback (most recent call last):
  File "/Users/HANEL/Documents/my_cifar_train.py", line 110, in <module>
    tf.app.run()
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/default/_app.py", line 11, in run
    sys.exit(main(sys.argv))
  File "/Users/HANEL/my_cifar_train.py", line 107, in main
    train()
  File "/Users/HANEL/my_cifar_train.py", line 76, in train
    _, loss_value = sess.run([train_op, loss])
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 345, in run
    results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 419, in _do_run
    e.code)
tensorflow.python.framework.errors.OutOfRangeError: RandomSuffleQueue '_2_input/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
     [[Node: input/shuffle_batch = QueueDequeueMany[component_types=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/shuffle_batch/n)]]
Caused by op u'input/shuffle_batch', defined at:
  File "/Users/HANEL/my_cifar_train.py", line 110, in <module>
    tf.app.run()
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/default/_app.py", line 11, in run
    sys.exit(main(sys.argv))
  File "/Users/HANEL/my_cifar_train.py", line 107, in main
    train()
  File "/Users/HANEL/my_cifar_train.py", line 39, in train
    images, labels = my_input.inputs()
  File "/Users/HANEL/my_input.py", line 157, in inputs
    min_after_dequeue=200)
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 453, in shuffle_batch
    return queue.dequeue_many(batch_size, name=name)
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 245, in dequeue_many
    self._queue_ref, n, self._dtypes, name=name)
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 319, in _queue_dequeue_many
    timeout_ms=timeout_ms, name=name)
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
    op_def=op_def)
  File "/Users
/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/Users/HANEL/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__
    self._traceback =

_extract_stack()
See Question&Answers more detail:os

与恶龙缠斗过久,自身亦成为恶龙;凝视深渊过久,深渊将回以凝视…
Welcome To Ask or Share your Answers For Others

1 Answer


I had a similar problem. Digging around the web, it turned out that if you use the num_epochs argument anywhere in your input pipeline, you also have to initialize the local variables, so your code should end up looking like this:

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    # do your stuff here

    coord.request_stop()
    coord.join(threads)

If you post some more code, maybe I could take a deeper look into it. In the meantime, HTH.
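
As far as I understand, the reason the local initializer matters is that num_epochs (for example in tf.train.string_input_producer) keeps its epoch counter in a local variable. If that variable is never initialized, the queue runner thread fails right away and closes the queue while it is still empty, and the dequeue in shuffle_batch then raises exactly the "closed and has insufficient elements (requested 100, current size 0)" error shown above.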

