own dataset: Shapes (64, 4, 4, 256) and (64, 3, 3, 256) are not compatible
Hi there,
When using my own dataset (28x28, grayscale), I get an error for this line in model.py:
g_optim = tf.train.AdamOptimizer(config.learning_rate, beta1=config.beta1) \
.minimize(self.g_loss, var_list=self.g_vars)
error:
Traceback (most recent call last):
  File "...tensorflow/python/framework/tensor_shape.py", line 575, in merge_with
    new_dims.append(dim.merge_with(other[i]))
  File "...tensorflow/python/framework/tensor_shape.py", line 133, in merge_with
    self.assert_is_compatible_with(other)
  File "...tensorflow/python/framework/tensor_shape.py", line 108, in assert_is_compatible_with
    % (self, other))
ValueError: Dimensions 4 and 3 are not compatible
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "main.py", line 74, in <module>
    tf.app.run()
  File "...tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv[:1] + flags_passthrough))
  File "main.py", line 59, in main
    dcgan.train(FLAGS)
  File "...DCGAN-tensorflow/model.py", line 139, in train
    .minimize(self.g_loss, var_list=self.g_vars)
  File "...tensorflow/python/training/optimizer.py", line 196, in minimize
    grad_loss=grad_loss)
  File "...tensorflow/python/training/optimizer.py", line 253, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "...tensorflow/python/ops/gradients.py", line 491, in gradients
    in_grad.set_shape(t_in.get_shape())
  File "...tensorflow/python/framework/ops.py", line 408, in set_shape
    self._shape = self._shape.merge_with(shape)
  File "...tensorflow/python/framework/tensor_shape.py", line 579, in merge_with
    (self, other))
ValueError: Shapes (64, 4, 4, 256) and (64, 3, 3, 256) are not compatible
Any advice on how to fix this?
Issue Analytics
- Created 7 years ago
- Comments: 5 (3 by maintainers)
Top GitHub Comments
@hercky Oh I see. The problem was that the input size is too small for a 4-layer network, and it isn't evenly divisible. I think it's a tricky problem: the depth of the network depends heavily on the input width and height, and there is no single answer for it. I'll just add a
raise Exception
if the input size is "too small" for the current network design and suggest that people change it. I'm not sure, but a fast simple bypass might be adding tf.image.resize_images right after
is defined.
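To make the diagnosis above concrete, here is a minimal pure-Python sketch of the size arithmetic (not taken from the repo; the helper name `conv_out_size_same` and the exact layer counts are assumptions based on how DCGAN-style code typically computes them). The discriminator's stride-2 convolutions with SAME padding round *up* at each layer, while generator code that plans sizes with integer division rounds *down*, and for a 28x28 input the two disagree at the third downsampling:

```python
import math

def conv_out_size_same(size, stride):
    # Spatial output size of a stride-`stride` conv with SAME padding.
    return int(math.ceil(float(size) / float(stride)))

height = 28  # the reporter's input height; same logic applies to width

# Four stride-2 convolutions, each rounding up:
conv_sizes = [height]
for _ in range(4):
    conv_sizes.append(conv_out_size_same(conv_sizes[-1], 2))
print(conv_sizes)       # [28, 14, 7, 4, 2]

# Planned sizes via integer division, each rounding down:
planned_sizes = [height, height // 2, height // 4, height // 8, height // 16]
print(planned_sizes)    # [28, 14, 7, 3, 1]

# ceil(7 / 2) == 4 but 28 // 8 == 3 -- the same 4-vs-3 disagreement as the
# reported "(64, 4, 4, 256) and (64, 3, 3, 256) are not compatible" error.
# Input sizes that are powers of two (e.g. 32 or 64) make both schemes agree,
# which is why resizing the images up front sidesteps the mismatch.
```

This also shows why there is no single fix in the network itself: any input height that is not divisible by 2**depth will produce a rounding disagreement at some layer.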