dimension of sample_inputs and self.inputs
I ran into the following error:
    tf.app.run()
  File "/home/jd730/p3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "main.py", line 114, in main
    dcgan.train(FLAGS)
  File "/home/jd730/Creative-Adversarial-Networks/model.py", line 402, in train
    self.y:sample_labels,
  File "/home/jd730/p3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/jd730/p3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1100, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (64, 64, 64, 3) for Tensor 'real_images:0', which has shape '(36, 64, 64, 3)'
I think it's because of the difference in dimensions between self.inputs and sample_inputs. In model.py, according to line 117, self.inputs should have shape [batch_size] + image_dims (e.g. 64 x 64 x 3):
self.inputs = tf.placeholder(tf.float32, [self.batch_size] + image_dims, name='real_images')
However, sample_inputs, which is fed to self.inputs on line 398, is defined on line 237 by:
sample = [
    get_image(sample_file,
              input_height=self.input_height,
              input_width=self.input_width,
              resize_height=self.output_height,
              resize_width=self.output_width,
              crop=self.crop,
              grayscale=self.grayscale) for sample_file in sample_files]
if (self.grayscale):
    sample_inputs = np.array(sample).astype(np.float32)[:, :, :, None]
else:
    sample_inputs = np.array(sample).astype(np.float32)
So its shape is [sample_num (= sample_size)] + image_dims.
I thought it should use batch_size rather than sample_num. But carpedm20's DCGAN also uses this code.
May I ask if you can explain this situation?
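For reference, the mismatch can be reproduced in isolation. This is a minimal sketch, assuming TF 1.x; batch_size=36 and sample_num=64 are illustrative values taken from the error message, not from model.py itself:

import numpy as np
import tensorflow as tf

batch_size, sample_num = 36, 64   # illustrative values from the traceback
image_dims = [64, 64, 3]

# Placeholder built from batch_size, as on line 117 of model.py
inputs = tf.placeholder(tf.float32, [batch_size] + image_dims, name='real_images')

# sample_inputs built from sample_num images, as on line 237
sample_inputs = np.zeros([sample_num] + image_dims, dtype=np.float32)

with tf.Session() as sess:
    # Raises: ValueError: Cannot feed value of shape (64, 64, 64, 3)
    # for Tensor 'real_images:0', which has shape '(36, 64, 64, 3)'
    sess.run(tf.reduce_mean(inputs), feed_dict={inputs: sample_inputs})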
If you pull the new code from the repository, which has been updated since you asked your question, and run train.sh with different values for batch_size and sample_size, the code should not throw an error and should behave as expected. There shouldn’t be any dimensionality issues (there weren’t when I tested it).
The original DCGAN code assumed that batch_size and sample_size were the same.
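One hypothetical way to let the two sizes differ (not necessarily how the repository's updated code handles it) is to leave the batch dimension of the placeholder unspecified, so feeds of either size are accepted:

import tensorflow as tf

image_dims = [64, 64, 3]
# Batch dimension left as None, so both a (36, 64, 64, 3) training batch and a
# (64, 64, 64, 3) sample batch can be fed without a shape error.
inputs = tf.placeholder(tf.float32, [None] + image_dims, name='real_images')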
@galoisgroupcn Hi, I am good.
Thank you. Jaedong