Errors when evaluating a tensor in custom loss function?
I’ve been trying to print the shape of a tensor in a custom loss function, but Keras gives me this error at model.compile(loss=custom_loss, ...):
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dense_4_target' with dtype float [[Node: dense_4_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
and here is my code for it:
from keras import backend as K

def custom_loss(y_true, y_pred):
    print("Reached")
    print(K.eval(K.shape(y_true)))  # this is the line that raises the error
    print("Not reached")
I understand that a tensor is a virtual placeholder that only has values (and a shape) once it is fed, but shouldn’t it already be fed by the time the loss function is called? Thank you very much for your help!
@alyato There was/is no bug here. The problem is a lack of understanding. When you write a custom loss function, you are writing a function that generates a function. Your function is called before any data is prepared. The arguments to your function are placeholder objects, not actual data arrays. As an example, here is a custom loss function I wrote that applies class weights. (It’s ugly and depends on a global, but it shows what’s going on.)
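That snippet is referenced but not reproduced here, so here is a minimal sketch in the same spirit; the three-class CLASS_WEIGHTS global and all names are illustrative assumptions, not the author’s original code:

from keras import backend as K
import numpy as np

# The global dependency mentioned above; these weights are made-up values.
CLASS_WEIGHTS = K.constant(np.array([0.5, 2.0, 1.0], dtype='float32'))

def weighted_categorical_crossentropy(y_true, y_pred):
    # y_true and y_pred are symbolic tensors; every line here builds
    # graph nodes, nothing is computed yet.
    sample_weights = K.sum(y_true * CLASS_WEIGHTS, axis=-1)
    return K.categorical_crossentropy(y_true, y_pred) * sample_weights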
Keras only calls this function once, while compiling the model. It appears that a data array is being returned, but what comes back is actually a symbolic tensor that will be evaluated to do the actual calculation while the model runs. Each statement in this function is, in essence, recorded and used to build the function that will be called.
Here is another example. I used the Keras backend to write an application that calculates a bunch of metrics on some truth and predicted data. Here’s the function that created a custom categorical cross-entropy function that allowed for a spatial mask (locations where the truth data had no class selected).
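Since that function is only described in prose here, the following is a reconstruction sketch rather than the author’s code; the 4-D, ten-class placeholder shapes and the Keras 2 argument order of K.categorical_crossentropy are assumptions:

from keras import backend as K

def create_categorical_crossentropy():
    # Placeholders stand in for the truth and prediction arrays.
    truth = K.placeholder(shape=(None, None, None, 10))
    pred = K.placeholder(shape=(None, None, None, 10))
    # The mask is 0 wherever the truth data has no class selected.
    mask = K.sum(truth, axis=-1)
    crossentropy = K.categorical_crossentropy(truth, pred)
    masked = crossentropy * mask
    loss = K.sum(masked) / K.maximum(K.sum(mask), 1.0)
    # The last statement builds the backend (Theano/TensorFlow) function
    # that actually performs the six calculations above.
    return K.function([truth, pred], [loss])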
Here’s how I called the function.
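Assuming the sketch above, the call is simply:

lossFunc = create_categorical_crossentropy()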
And here’s how I used the generated loss function.
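Again under the same assumptions, with the backend function returning a one-element list:

result = lossFunc([truthArray, predArray])[0]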
The arguments truthArray and predArray are numpy arrays.
Notice that the first function has no arguments. This is an extreme example, but I wrote it for a specific case where I knew the dimensionality of my inputs. The last statement in the function causes a Theano or TensorFlow function to be created that will perform the calculations called out by the previous six statements. Creating the tensor placeholders and building the Theano or TensorFlow function are normally handled by Keras during model compilation.
I call create_categorical_crossentropy in order to get the function that will do the actual calculation. I call the generated function with actual arguments and get an actual result.
Keep in mind that the Python function you write (custom_loss) is called once to build and compile a backend function (with Theano this may literally be compiled C code; with TensorFlow it is a graph). That compiled function is what runs during training. When your Python custom_loss function is called, the arguments are tensor objects with no data attached to them. The K.eval call will therefore fail, and K.shape will only give you a symbolic shape tensor, not concrete numbers. About the only thing you can really know about your arguments is their number of dimensions. You must write your function so that it deals with things symbolically. Look at the source for the loss functions that ship with Keras for inspiration.
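For instance, here is a minimal sketch of a loss that stays symbolic and still lets you inspect values, using K.print_tensor; the MSE body is just an illustrative choice:

from keras import backend as K

def custom_loss(y_true, y_pred):
    # K.print_tensor inserts a print op into the graph, so values are
    # printed at run time, when real data flows through, not at compile time.
    y_true = K.print_tensor(y_true, message='y_true = ')
    return K.mean(K.square(y_pred - y_true), axis=-1)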