
Float64 dtype variable is created by nolearn

See original GitHub issue

I get this warning when trying to run a simple network:

    lib/python2.7/site-packages/nolearn-0.6a0.dev0-py2.7.egg/nolearn/lasagne/base.py:472:
    UserWarning: You are creating a TensorVariable with float64 dtype. You requested an
    action via the Theano flag warn_float64={ignore,warn,raise,pdb}.
      accuracy = T.mean(T.eq(predict, y_batch))

The input is numpy.float32, the target variable is uint8, and an identical network created directly in Lasagne does not raise this warning.
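For reference, the flag named in the warning can be escalated so that Theano stops at the exact spot where the float64 variable is created (a minimal sketch; setting the flag through THEANO_FLAGS before theano is imported is the most reliable way):

    import os
    os.environ['THEANO_FLAGS'] = 'warn_float64=raise'  # or 'warn_float64=pdb'

    import theano  # imported only after the flag is set
    # From here on, creating a float64 TensorVariable raises an error with a
    # traceback that points at the offending expression.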

This is the nolearn network definition:

    import numpy as np
    import lasagne
    import nolearn.lasagne


    def perform_nn_nolearn(data_train, label_train, data_test, label_test):

        def regularization_objective(layers, *args, **kwargs):
            # Reshape the target into a column vector before handing it to
            # the default nolearn objective.
            target = kwargs['target'][:, np.newaxis]
            kwargs['target'] = target
            losses = nolearn.lasagne.objective(layers, *args, **kwargs)
            return losses

        class Custom_TrainSplit(object):
            # Use a fixed, pre-defined validation set instead of splitting X/y.
            def __init__(self, test_data, test_labels):
                self.test_data = test_data
                self.test_labels = test_labels

            def __call__(self, X, y, net):
                X_train, y_train = X, y
                X_valid, y_valid = self.test_data, self.test_labels
                return X_train, X_valid, y_train, y_valid

        no_feats = data_train.shape[1]
        batch_size = 32

        nn_layers = [
            (lasagne.layers.InputLayer, {'shape': (batch_size, no_feats)}),
            (lasagne.layers.DenseLayer,
             {'num_units': 1, 'nonlinearity': lasagne.nonlinearities.sigmoid}),
        ]

        net = nolearn.lasagne.NeuralNet(
            layers=nn_layers,
            max_epochs=1,
            objective_loss_function=lasagne.objectives.binary_crossentropy,
            update=lasagne.updates.nesterov_momentum,
            update_learning_rate=0.01,
            update_momentum=0.9,
            objective=regularization_objective,
            regression=False,
            use_label_encoder=False,
            batch_iterator_train=nolearn.lasagne.BatchIterator(batch_size=batch_size),
            train_split=Custom_TrainSplit(data_test, label_test),
            verbose=2,
        )

        rez = net.fit(data_train, label_train)
        nn_pred_proba = net.predict_proba(data_test)
        print "Network output shape: {}".format(nn_pred_proba.shape)
        return nn_pred_proba

The warning is raised when fit() starts, before the network statistics are printed. There is also a long list of warnings, from theano/compile to lasagne/objectives; I isolated the one coming from nolearn because, again, an identical network and data run without warnings in “pure” Lasagne.
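To reproduce just the nolearn message, the unrelated warnings can be muted with a module filter (a sketch using the standard warnings module; the module pattern is an assumption about how the warning is attributed):

    import warnings

    # Hide all UserWarnings, then re-enable only those attributed to nolearn modules.
    # filterwarnings() inserts at the front of the filter list, so the nolearn
    # rule (added last) is matched first.
    warnings.filterwarnings('ignore', category=UserWarning)
    warnings.filterwarnings('default', category=UserWarning, module=r'nolearn\..*')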

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 6

Top GitHub Comments

1 reaction
BenjaminBossan commented, Apr 6, 2016

Multiplying or dividing float32 and int32 results in a float64 in theano, which is why you see these results. Still, as long as it does not cause any trouble, I would just ignore it.
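A minimal sketch of that promotion rule, plus the comparison that nolearn’s accuracy expression performs (the variable names are illustrative only):

    import theano.tensor as T

    x = T.fvector('x')      # float32 vector
    i = T.ivector('i')      # int32 vector
    print((x * i).dtype)    # 'float64' -- float32 combined with int32 is upcast

    # The line in the warning does something similar: T.eq() yields an int8
    # tensor, and taking its mean creates a float64 variable.
    acc = T.mean(T.eq(i, i))
    print(acc.dtype)        # 'float64'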

0 reactions
cristi-zz commented, Apr 15, 2016

Ok, thanks!

Read more comments on GitHub >

Top Results From Across the Web

Theano TensorType error - python 2.7 - Stack Overflow
When I am using nolearn to implement multi-label classification, I got this error: 'Bad input argument to theano function with name "/Users/lm/Documents/ ...

np.mean raises warning when input is float64 in jax 0.2.1 #4490
To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable.

Theano base.py Error: Int Object has no attribute DType
I am running some code using the NeuralNet() function from nolearn.lasagne however I am getting an error being thrown from Theano. Essentially this ...

Theano/Lasagne/Nolearn Neural Network Image Input
I am working on image classification tasks and decided to use Lasagne + Nolearn for neural networks prototype. All standard examples like MNIST ...

Input contains infinity or a value too large for dtype('float64').
linear_model import LinearRegression #initiate linear regression model model = LinearRegression() #define predictor and response variables X, y ...
