Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Errors when evaluating a tensor in custom loss function?

See original GitHub issue

I’ve been trying to print the shape of a tensor in my custom loss function, but Keras gives me this error at model.compile(loss=custom_loss, ...):

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dense_4_target' with dtype float
[[Node: dense_4_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]]]

and here is my code for it:

def custom_loss(y_true, y_pred):
    print("Reached")
    print(K.eval(K.shape(y_true)))  # this is the line that blows up
    print("Not reached")

I understand that a Tensor is a virtual placeholder that only has values (and shapes) when it is filled, but shouldn’t it be filled by the time the loss function is called? Thank you very much for your help!

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Reactions: 4
  • Comments: 13

Top GitHub Comments

15 reactions
JimBiardCics commented, Apr 17, 2018

@alyato There was/is no bug here. The problem is a lack of understanding. When you write a custom loss function, you are writing a function that generates a function. Your function is called before any data is prepared. The arguments to your function are placeholder objects, not actual data arrays. As an example, here is a custom loss function I wrote that applies class weights. (It’s ugly and depends on a global, but it shows what’s going on.)

from keras import backend

# __lossWeights is a module-level NumPy array of per-class weights, or None.

def categoricalCrossentropy(y_true, y_pred):
    '''
    Calculate the class-weighted categorical cross-entropy for the given
    predicted and true sets.
    
    y_true [in] The truth set to test against. This is a Tensor with a last
                dimension that contains a set of 1-of-N selections.
    y_pred [in] The predicted set to test against. This is a Tensor with a last
                dimension that contains a set of 1-of-N selections.
    returns     A symbolic Tensor that will calculate the weighted categorical
                cross-entropy on the inputs.
    '''
    
    # If weights are defined, multiply the truth values by the class weights.
    #
    if __lossWeights is not None:
        # Wrap the loss weights in a tensor object.
        #
        theWeights = backend.constant(__lossWeights, shape = __lossWeights.shape)
        
        y_true *= theWeights
        
    # Get the cross-entropy and return it.
    #
    crossEntropy = backend.categorical_crossentropy(y_true, y_pred)
    
    return crossEntropy

Keras only calls this function once, while compiling the model. It appears that a data array is being returned, but what is actually returned is a symbolic tensor that will be evaluated to do the actual calculation while the model is being run. Each statement in this function is, in essence, being recorded and used to build the function that will be called.
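To make the "called once" behavior concrete, here is a minimal sketch. It assumes the standalone Keras API of that era (TF 1.x backend); the model, data, and the traced_loss name are invented for illustration:

import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def traced_loss(y_true, y_pred):
    # Runs exactly once, at compile time. Both arguments are symbolic
    # tensors with no data attached yet.
    print('traced_loss called with:', y_true, y_pred)
    return K.mean(K.square(y_pred - y_true), axis=-1)

model = Sequential([Dense(1, input_dim=4)])
model.compile(optimizer='sgd', loss=traced_loss)  # the print fires here, once

# Training evaluates the graph that traced_loss helped build; the Python
# function itself is never called again.
model.fit(np.random.rand(8, 4), np.random.rand(8, 1), epochs=1, verbose=0)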

Here is another example. I used the Keras backend to write an application that calculates a bunch of metrics on some truth and predicted data. Here’s the function that created a custom categorical cross-entropy function that allowed for a spatial mask (locations where the truth data had no class selected).

from keras import backend

def create_categorical_crossentropy():
    '''
    Create the categorical cross-entropy function.
    
    returns     A backend function that will calculate the categorical
                cross-entropy on the inputs.
    '''
    
    # Create the truth and predicted input tensors.
    #
    pred  = backend.placeholder(ndim = 4, dtype = 'float32', name = 'pred')
    truth = backend.placeholder(ndim = 4, dtype = 'float32', name = 'truth')
    
    # Clip zeros to 1.0e-7 to avoid numerical instability.
    #
    pred2 = backend.clip(pred, 1.0e-7, 1.0)
    
    # Get the element-wise categorical cross-entropy tensor, then get the sum
    # of all the elements.
    #
    crossentropy = backend.categorical_crossentropy(truth, pred2)
    crossentropy = backend.sum(crossentropy)
    
    # Get the number of valid (non-zero) truth entries. The boolean mask is
    # cast to float so it can be summed on either backend.
    #
    mask = backend.any(truth, axis = -1, keepdims = True)
    
    valid = backend.sum(backend.cast(mask, 'float32'))
    
    # Get the mean cross-entropy value.
    #
    crossentropy /= valid
    
    # Return the backend function that will calculate the cross-entropy.
    # Inputs and outputs are passed as lists so this works on both the
    # Theano and TensorFlow backends.
    #
    return backend.function([truth, pred], [crossentropy])

Here’s how I called the function.

lossFunc  = create_categorical_crossentropy()

And here’s how I used the generated loss function.

theLoss = lossFunc([truthArray, predArray])[0]

The arguments truthArray and predArray are NumPy arrays; the generated function takes its inputs as a list and returns a list of outputs, hence the [0].

Notice that create_categorical_crossentropy takes no arguments. This is an extreme example, but I wrote it for a specific case where I knew the dimensionality of my inputs. The last statement in the function causes a Theano or TensorFlow function to be created that will perform the calculations called out by the preceding statements. The statements creating the tensor placeholders and creating the Theano or TensorFlow function are normally handled by Keras during model compilation.

I call create_categorical_crossentropy in order to get the function that will do the actual calculation. I call the generated function with actual arguments and get an actual result.
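The placeholder-plus-backend.function pattern above can be boiled down to a few lines. A minimal sketch, again assuming the keras.backend API of that era (the names x, doubled, and double_fn are invented for illustration):

import numpy as np
from keras import backend as K

# Build a tiny symbolic graph: a placeholder and an expression over it.
x = K.placeholder(ndim=2, dtype='float32', name='x')
doubled = 2.0 * x

# Compile the graph into a callable; inputs and outputs are lists.
double_fn = K.function([x], [doubled])

# Feed real NumPy data in, get real NumPy data back.
print(double_fn([np.array([[1.0, 2.0]], dtype='float32')])[0])  # [[2. 4.]]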

15 reactions
JimBiardCics commented, Feb 3, 2017

Keep in mind that the Python function you write (custom_loss) is called once to generate and compile a backend (Theano or TensorFlow) function. The compiled function is what is called during training. When your Python custom_loss function is called, the arguments are symbolic tensor objects that don’t have data attached to them. The K.eval call will fail, as will evaluating the result of K.shape. The only thing you can really know about your arguments is the number of dimensions. You must write your function so that it deals with things symbolically. Look at the source for the different loss functions provided by Keras for inspiration.
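In practice that means using only symbolic backend ops inside the loss. One possible fix for the original custom_loss, sketched against the keras.backend API: K.print_tensor returns a tensor that prints its value when the graph actually runs, so it can stand in where K.eval cannot.

from keras import backend as K

def custom_loss(y_true, y_pred):
    # No K.eval here: at compile time there is nothing to evaluate yet.
    # K.print_tensor defers the printing until real data flows through.
    y_true = K.print_tensor(y_true, message='y_true = ')
    return K.mean(K.square(y_pred - y_true), axis=-1)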

Read more comments on GitHub >

Top Results From Across the Web

  • Keras custom loss function not printing value of tensor
    The print statement is redundant. print_tensor will already print the values. From the documentation of print_tensor: "Note that ...
  • How To Build Custom Loss Functions In Keras For Any Use ...
    Evaluating a machine learning project is very essential. There are different types of evaluation metrics such as 'Mean Squared Error', 'Accuracy', 'Mean ...
  • PyTorch Loss Functions: The Ultimate Guide - neptune.ai
    The Mean Squared Error (MSE), also called L2 Loss, computes the average of the squared differences between actual values and predicted values...
  • Solving the TensorFlow Keras Model Loss Problem
    The problem is that the loss function must have the signature loss = fn(y_true, y_pred), where y_pred is one of the outputs...
  • Dummies Guide to Writing a Custom Loss Function in ...
    This article will teach us how to write a custom loss function in TensorFlow. We will write the custom code to implement the...
