
ValueError: 'total size of new array must be unchanged'

See original GitHub issue

Am I doing something wrong here:

net1 = NeuralNet(
    layers=[  # three layers: one hidden layer
        ('input', layers.InputLayer),
        ('conv1', layers.Conv2DLayer),
        ('pool1', layers.MaxPool2DLayer),
        ('dropout1', layers.DropoutLayer),
        ('hidden', layers.DenseLayer),
        ('output', layers.DenseLayer),
        ],
    # layer parameters:
    input_shape=(32, 1, 300, 400),  # 32 images per batch, 1 channel, 300x400 pixels
    hidden_num_units=100,  # number of units in hidden layer
    output_nonlinearity=None,  # output layer uses identity function
    output_num_units=len(classes),  # one output unit per class

    # optimization method:
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,

    regression=False,  # flag to indicate we're not dealing with a regression problem
    use_label_encoder=True,
    max_epochs=400,  # we want to train this many epochs
    verbose=1,
    batch_iterator=LoadBatchIterator(batch_size=32),

    conv1_num_filters=4, conv1_filter_size=(3, 3), pool1_ds=(2, 2),
    dropout1_p=0.1,
    )

leads to:

/home/ubuntu/git/nolearn/nolearn/lasagne.pyc in fit(self, X, y)
    155 
    156         try:
--> 157             self.train_loop(X, y)
    158         except KeyboardInterrupt:
    159             pdb.set_trace()

/home/ubuntu/git/nolearn/nolearn/lasagne.pyc in train_loop(self, X, y)
    193 
    194             for Xb, yb in self.batch_iterator(X_train, y_train):
--> 195                 batch_train_loss = self.train_iter_(Xb, yb)
    196                 train_losses.append(batch_train_loss)
    197 

/home/ubuntu/git/Theano/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
    603                     gof.link.raise_with_op(
    604                         self.fn.nodes[self.fn.position_of_error],
--> 605                         self.fn.thunks[self.fn.position_of_error])
    606                 else:
    607                     # For the c linker We don't have access from

/home/ubuntu/git/Theano/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
    593         t0_fn = time.time()
    594         try:
--> 595             outputs = self.fn()
    596         except Exception:
    597             if hasattr(self.fn, 'position_of_error'):

/home/ubuntu/git/Theano/theano/gof/op.pyc in rval(p, i, o, n)
    751 
    752         def rval(p=p, i=node_input_storage, o=node_output_storage, n=node):
--> 753             r = p(n, [x[0] for x in i], o)
    754             for o in node.outputs:
    755                 compute_map[o][0] = True

/home/ubuntu/git/Theano/theano/sandbox/cuda/basic_ops.pyc in perform(self, node, inp, out_)
   2349             else:
   2350                 raise ValueError("total size of new array must be unchanged",
--> 2351                                 x.shape, shp)
   2352 
   2353         out[0] = x.reshape(tuple(shp))

ValueError: ('total size of new array must be unchanged', (31, 4, 298, 398), array([128,   1, 298, 398]))
Apply node that caused the error: GpuReshape{4}(GpuElemwise{Composite{[mul(i0, add(i1, Abs(i1)))]},no_inplace}.0, TensorConstant{[128   1 298 398]})
Inputs types: [CudaNdarrayType(float32, 4D), TensorType(int64, vector)]
Inputs shapes: [(31, 4, 298, 398), (4,)]
Inputs strides: [(474416, 118604, 398, 1), (8,)]
Inputs values: ['not shown', array([128,   1, 298, 398])]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint of this apply node.
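The root of the traceback is Theano's reshape check: a reshape must preserve the total number of elements. Here the final batch of the epoch holds only 31 samples, while the target shape hard-codes a batch size of 128. A minimal NumPy reproduction of the same size mismatch (NumPy is used only for illustration; the original error is raised by Theano's GpuReshape):

```python
import numpy as np

# Shapes copied from the traceback: the final batch holds 31 samples,
# but the target shape hard-codes a batch size of 128.
actual = (31, 4, 298, 398)
wanted = (128, 1, 298, 398)

# The element counts differ, so no reshape can succeed:
print(int(np.prod(actual)))   # 14706896
print(int(np.prod(wanted)))   # 15181312

x = np.zeros(actual, dtype=np.float32)
try:
    x.reshape(wanted)
except ValueError as err:
    print("reshape refused:", err)
```

NumPy's error message differs in wording from Theano's, but it fails for exactly the same reason: 31 × 4 × 298 × 398 elements cannot fill a 128 × 1 × 298 × 398 array.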

Issue Analytics

  • State: closed
  • Created: 9 years ago
  • Reactions: 2
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

2 reactions
dnouri commented, Dec 29, 2014

Ah, it looks like you’re using the Conv2DLayer implementation that doesn’t like batches that aren’t exactly of size batch_size. The cuda_convnet-based implementation (used in the tutorial) doesn’t have this problem. See here for a discussion with two possible solutions.
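One common workaround for layers that only accept full batches is a batch iterator that yields nothing but full batches. This is a sketch, not necessarily either of the solutions from the linked discussion, and `iterate_even_batches` is a hypothetical helper name:

```python
def iterate_even_batches(X, y, batch_size=32):
    """Yield only full (X, y) batches, dropping a trailing partial batch.

    Every batch handed to the network then has exactly `batch_size` rows,
    which sidesteps layers compiled against a fixed batch size.
    """
    n_full = len(X) // batch_size
    for i in range(n_full):
        sl = slice(i * batch_size, (i + 1) * batch_size)
        yield X[sl], y[sl]
```

With 100 samples and `batch_size=32` this yields three batches of 32 and silently discards the last 4 samples, which is usually acceptable for training as long as the data is shuffled each epoch.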

1 reaction
dnouri commented, Jan 2, 2015

It turns out that there’s a much easier solution to this problem. In the tutorial, I mistakenly set the input layer’s shape[0] (the batch size) to 128. It should have been None. I verified that with this setting the “legacy” Theano convnet layer (for CPU) is happy, as is every other layer I tested.

So that means I could undo the forced_even change again. Please update your code.
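Applied to the snippet from the question, the fix changes only the batch dimension of `input_shape` (a fragment, not a full runnable script — it assumes the same imports and surrounding keyword arguments as the original code):

```python
net1 = NeuralNet(
    layers=[...],
    # None lets Theano infer the batch size at run time, so the final
    # partial batch (31 samples instead of 32) no longer breaks reshape:
    input_shape=(None, 1, 300, 400),
    ...
)
```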
