
Microbatch size error: only works when num_microbatches == 1

See original GitHub issue

I implemented a different neural network model and trained it with a loss minimized by dp_optimizer.DPGradientDescentGaussianOptimizer.

Training succeeds when num_microbatches is 1, but when num_microbatches is greater than 1, I get the following error:

Traceback (most recent call last):
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 686, in _call_cpp_shape_fn_impl
    input_tensors_as_shapes, status)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension size must be evenly divisible by 2 but is 1 for 'Reshape' (op: 'Reshape') with input shapes: [], [2] and with input tensors computed as partial shapes: input[1] = [2,?].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "autoencoder_dp.py", line 70, in <module>
    population_size=60000).minimize(cost)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 399, in minimize
    grad_loss=grad_loss)
  File "/home/madono/madono/test2/dpgan/privacy/optimizers/dp_optimizer.py", line 68, in compute_gradients
    microbatches_losses = tf.reshape(loss, [self._num_microbatches, -1])
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 5782, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3292, in create_op
    compute_device=compute_device)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3332, in _create_op_helper
    set_shapes_for_outputs(op)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2496, in set_shapes_for_outputs
    return _set_shapes_for_outputs(op)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2469, in _set_shapes_for_outputs
    shapes = shape_func(op)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2399, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/home/madono/.pyenv/versions/anaconda3-2018.12/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension size must be evenly divisible by 2 but is 1 for 'Reshape' (op: 'Reshape') with input shapes: [], [2] and with input tensors computed as partial shapes: input[1] = [2,?].
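The reshape in the traceback comes from compute_gradients in dp_optimizer.py, which splits the loss into one row per microbatch via tf.reshape(loss, [self._num_microbatches, -1]). That only works if the loss tensor still holds one value per example; a loss already reduced to a scalar cannot be split into two microbatches. A minimal sketch reproducing both cases (assumes TF 1.x graph mode, matching the traceback):

import tensorflow as tf

# A fully reduced cost is a scalar with a single element...
scalar_loss = tf.constant(0.5)  # shape: []
# ...so splitting it into 2 microbatches fails:
# tf.reshape(scalar_loss, [2, -1])  # InvalidArgumentError: not divisible by 2

# A per-example loss vector reshapes cleanly, one row per microbatch.
vector_loss = tf.constant([0.4, 0.6, 0.3, 0.7])       # shape: [4]
microbatch_losses = tf.reshape(vector_loss, [2, -1])  # shape: [2, 2]

with tf.Session() as sess:
    print(sess.run(microbatch_losses))  # [[0.4 0.6] [0.3 0.7]]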

My implementation looks like this:

optimizer = dp_optimizer.DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=1.1,
    num_microbatches=2,
    learning_rate=0.0002,
    population_size=60000).minimize(cost)
# optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)

# Initializing the variables (tf.initialize_all_variables is deprecated)
init = tf.global_variables_initializer()

# Explore trainable variables (weight_bias)
var = [v for v in tf.trainable_variables() if 'mimiciii/fc/autoencoder' in v.name] # (784, 128), (128,), (128, 784), (784,)
var_grad = tf.gradients(cost, var) # gradient of cost w.r.t. trainable variables, len(var_grad): 8, type(var_grad): list
norm_gradient_variables = []

# Launch the graph
with tf.Session() as sess:
    writer = tf.summary.FileWriter("./graph/my_graph", sess.graph)
    sess.run(init)
    total_batch = int(mnist.train.num_examples/batch_size)
    # Training cycle
    for epoch in range(training_epochs):
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
            var_grad_val = sess.run(var_grad, feed_dict={X: batch_xs})

            # var_grad_val = [var_grad_val[0], var_grad_val[2]] # no bias, change for different network
            if not isinstance(var_grad_val, list):  # a lone array means only one weight matrix
                var_grad_val = [var_grad_val]
            norm_gradient_variables.append(norm_w(var_grad_val))  # compute the norm of all trainable variables
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1),
                  "cost=", "{:.9f}".format(c))

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 10

Top GitHub Comments

1 reaction
schien1729 commented, Feb 18, 2019

The issue might be that by default, tf.losses.mean_squared_error aggregates the losses over all the examples it’s given. Can you try this instead?

cost = tf.losses.mean_squared_error(
    labels=y_true,
    predictions=y_pred,
    reduction=tf.losses.Reduction.NONE)
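
With reduction=tf.losses.Reduction.NONE the loss keeps the shape of its inputs (e.g. [batch_size, 784] for the autoencoder above) instead of collapsing to a scalar, so the tf.reshape(loss, [self._num_microbatches, -1]) inside the optimizer succeeds whenever the batch size is divisible by num_microbatches. A quick sanity check before training (a sketch; the names cost and batch_size come from the snippet above):

# A scalar loss has shape (); an unreduced loss keeps one value per element.
assert cost.shape.ndims > 0, "loss was reduced to a scalar; use Reduction.NONE"
# The DP optimizer splits the loss into microbatches, so the batch must divide evenly.
assert batch_size % 2 == 0  # num_microbatches=2 in this example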
0 reactions
Viserion-nlper commented, Jul 1, 2020

I ran into the same problem. How can it be solved?


Top Results From Across the Web

  • error in faster rcnn matlab code with the minibatch size
    Hi, I've seen this error (the one requiring a minimum minibatch of 4 instead of only 1) before when training faster RCNNs across...

  • When the data set size is not a multiple of the mini-batch size ...
    Is this true? If the number is different, more weight will be put on each of the individual samples...

  • The number of MiniBatchSize always 1 even when i change it ...
    But the number of Batch size in output is always 1 as mbq = minibatchqueue with 3 outputs and properties: Mini-batch creation: ...

  • Increasing Mini-batch Size without Increasing Memory - Medium
    The most popular technique used to train neural networks today is gradient descent, where the error of the network is minimized by ...

  • privacy from tensorflow - Giter VIP
    Minimicrobatch size error, only function when the minibatchsize = = 1. I implemented the other neural network model and take loss function by...
