
Use of constant tensor in custom loss fails #224

Open

efournie opened this issue Jan 31, 2019 · 4 comments

@efournie

Hello,

I am trying to use a mask in a custom loss function, but with the MXNet backend the program fails with the following error:

Traceback (most recent call last):
  File "poc.py", line 17, in <module>
    model.compile(optimizer=Adam(0.001), loss=my_loss)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\keras\backend\mxnet_backend.py", line 5390, in compile
    fixed_param_names=self._fixed_weights)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\mxnet\module\bucketing_module.py", line 84, in __init__
    _check_input_names(symbol, fixed_param_names, "fixed_param", True)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\mxnet\module\base_module.py", line 53, in _check_input_names
    raise ValueError(msg)
ValueError: You created Module with Module(..., fixed_param_names=['loss/conv2d_1_loss/constant1']) but input with name 'loss/conv2d_1_loss/constant1' is not found in symbol.list_arguments(). Did you mean one of:
        /input_11
        conv2d_1/kernel1
        conv2d_1/bias1

Switching to the TensorFlow backend removes the error, and the program runs as expected.
The issue can be reproduced with this minimal example:

from keras.layers import Input, Conv2D
from keras.models import Model
from keras.optimizers import Adam
import keras.backend as K
import numpy as np

def my_loss(y_true, y_pred):
    # Keras passes loss arguments in the order (y_true, y_pred).
    mask = K.constant(np.ones((32, 32, 1)))  # standalone constant tensor triggers the failure
    return K.abs(y_pred * mask - y_true * mask)

g_input = Input((32,32,1))
g_output = Conv2D(1,3, padding="same")(g_input)
model = Model(g_input, g_output)
model.summary()
model.compile(optimizer=Adam(0.001), loss=my_loss)
print("done")

I also tried using Multiply layers in the model definition to work around the issue, without success. Unfortunately, I can't pinpoint the exact cause of the problem in the Keras backend. A sketch of another possible workaround is below.
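One possible workaround (a sketch only, assuming the failure comes from the standalone constant node; I have not verified this against the MXNet backend) is to build the mask in-graph with K.ones_like instead of K.constant:

import keras.backend as K

def my_loss(y_true, y_pred):
    # Derive the mask from y_true so it is part of the symbolic graph
    # rather than a named constant node that the backend might register
    # as a fixed parameter. All-ones here; adjust to mask regions out.
    mask = K.ones_like(y_true)
    return K.abs(y_pred * mask - y_true * mask)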

@efournie
Author

I can reproduce the problem both on Windows with CUDA and on Linux / CPU.

@roywei

roywei commented Mar 6, 2019

Hi @efournie, unfortunately custom loss functions are not supported at the moment because we can't register them with the MXNet backend. But you can always add yours to losses.py and import it in your script. Refer to this custom loss for an example: https://github.com/awslabs/keras-apache-mxnet/blob/master/keras/losses.py#L77
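For illustration, a minimal sketch of what that suggestion might look like (the name masked_mae is hypothetical; keras/losses.py already imports the backend as K, and the pattern follows the linked example):

# --- added to keras/losses.py, alongside the built-in losses ---
def masked_mae(y_true, y_pred):
    # Element-wise mask, then mean absolute error over the last axis,
    # matching the reduction style of the built-in losses.
    mask = K.ones_like(y_true)
    return K.mean(K.abs(y_pred * mask - y_true * mask), axis=-1)

# --- in the training script ---
from keras.losses import masked_mae
model.compile(optimizer=Adam(0.001), loss=masked_mae)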

@sdbonte

sdbonte commented Jul 17, 2019

I have the same problem. I added my custom loss function to losses.py, but I still get the following error message:

ValueError: You created Module with Module(..., fixed_param_names=['loss/activation_1_loss/variable1']) but input with name 'loss/activation_1_loss/variable1' is not found in symbol.list_arguments(). Did you mean one of:
/input_11
conv3d_1/kernel1
conv3d_1/bias1
batch_normalization_1/gamma1
batch_normalization_1/beta1
conv3d_2/kernel1
conv3d_2/bias1
batch_normalization_2/gamma1
batch_normalization_2/beta1
conv3d_3/kernel1
conv3d_3/bias1
batch_normalization_3/gamma1
batch_normalization_3/beta1
conv3d_4/kernel1
conv3d_4/bias1
batch_normalization_4/gamma1
batch_normalization_4/beta1
conv3d_5/kernel1
conv3d_5/bias1
batch_normalization_5/gamma1
batch_normalization_5/beta1
conv3d_6/kernel1
conv3d_6/bias1
batch_normalization_6/gamma1
batch_normalization_6/beta1
conv3d_7/kernel1
conv3d_7/bias1
batch_normalization_7/gamma1
batch_normalization_7/beta1
conv3d_8/kernel1
conv3d_8/bias1
batch_normalization_8/gamma1
batch_normalization_8/beta1
conv3d_9/kernel1
conv3d_9/bias1
batch_normalization_9/gamma1
batch_normalization_9/beta1
conv3d_10/kernel1
conv3d_10/bias1
batch_normalization_10/gamma1
batch_normalization_10/beta1
conv3d_11/kernel1
conv3d_11/bias1
batch_normalization_11/gamma1
batch_normalization_11/beta1
conv3d_12/kernel1
conv3d_12/bias1
batch_normalization_12/gamma1
batch_normalization_12/beta1
conv3d_13/kernel1
conv3d_13/bias1
batch_normalization_13/gamma1
batch_normalization_13/beta1
conv3d_14/kernel1
conv3d_14/bias1
batch_normalization_14/gamma1
batch_normalization_14/beta1
conv3d_15/kernel1
conv3d_15/bias1
batch_normalization_15/gamma1
batch_normalization_15/beta1
conv3d_16/kernel1
conv3d_16/bias1
conv3d_17/kernel1
conv3d_17/bias1
batch_normalization_16/gamma1
batch_normalization_16/beta1
conv3d_18/kernel1
conv3d_18/bias1
batch_normalization_17/gamma1
batch_normalization_17/beta1
conv3d_19/kernel1
conv3d_19/bias1
batch_normalization_18/gamma1
batch_normalization_18/beta1
conv3d_20/kernel1
conv3d_20/bias1
conv3d_21/kernel1
conv3d_21/bias1
batch_normalization_19/gamma1
batch_normalization_19/beta1
conv3d_22/kernel1
conv3d_22/bias1
batch_normalization_20/gamma1
batch_normalization_20/beta1
conv3d_23/kernel1
conv3d_23/bias1
batch_normalization_21/gamma1
batch_normalization_21/beta1
conv3d_24/kernel1
conv3d_24/bias1

@pranaydoshi

@roywei can you please elaborate on your solution here? I tried adding it to losses.py locally and am still running into the same issue.
