Cannot use type of tf.string in model
- Check that you are up-to-date with the master branch of Keras. You can update with:
  `pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps`
- If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
- If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
  `pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps`
- Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
The following code, which attempts to use tf.string as the dtype for a layer, fails:

```python
import numpy as np
import tensorflow as tf
from keras.models import Model
from keras.layers import Input
from keras.layers import Lambda

def decode_images(images):
    # Decode each JPEG-encoded string in the batch into a uint8 image tensor.
    return tf.map_fn(lambda x: tf.image.decode_jpeg(tf.squeeze(x), channels=3),
                     images, dtype=tf.uint8)

i = Input(batch_shape=(None, 1), dtype=tf.string, name="input")
o = Lambda(decode_images, dtype=tf.uint8)(i)
m = Model(inputs=i, outputs=o)

# Read the raw JPEG bytes and feed them as an object array of strings.
fnames = ["test01.jpg", "test02.jpg"]
images = []
for fname in fnames:
    with open(fname, mode='rb') as f:
        images.append(f.read())
encoded = np.array(images, dtype=object)
m.predict(encoded[:, np.newaxis])
```
The error is as follows:
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    509               as_ref=input_arg.is_ref,
--> 510               preferred_dtype=default_dtype)
    511         except TypeError as err:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx)
    925     if ret is None:
--> 926       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
    927

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py in _TensorTensorConversionFunction(t, dtype, name, as_ref)
    773         "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
--> 774         (dtype.name, t.dtype.name, str(t)))
    775     return t

ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("lambda_1_2/map_1/while/Squeeze:0", shape=(), dtype=float32)'
```
It appears that the request for tf.string was not honored, and the tensor was instead converted to float32. Note that the same code works fine with the version of Keras bundled inside TensorFlow. The backend does understand 'string' as a type:

```python
>>> from keras import backend as K
>>> import tensorflow as tf
>>> K.dtype(K.placeholder(dtype=tf.string))
'string'
```

so I'm not sure why it is getting converted to float32.
Issue Analytics
- Created: 6 years ago
- Reactions: 1
- Comments: 6 (2 by maintainers)
Note that I can work around this by explicitly casting the input to tf.string inside the squeeze:
```python
tf.squeeze(tf.cast(x, tf.string))
```
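For context, this is how the cast slots into the original decode_images function (a sketch of the workaround only, not a fix for the underlying dtype handling):

```python
import tensorflow as tf

def decode_images(images):
    # Workaround sketch: force each element back to tf.string before decoding,
    # since the Lambda layer appears to hand the function a float32-typed tensor.
    return tf.map_fn(
        lambda x: tf.image.decode_jpeg(tf.squeeze(tf.cast(x, tf.string)), channels=3),
        images,
        dtype=tf.uint8)
```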
I'm still unclear why this is necessary. I would think it would automatically get the tf.string type from the input layer.

Hi, I am getting a similar issue and have no idea how to resolve it. Please give me some hints.
Create the model in inference mode:

```python
with tf.device(DEVICE):
    model = modellib.MaskRCNN(mode="inference", model_dir=LOGS_DIR, config=config)
```
Error message:

```
ValueError                                Traceback (most recent call last)
/miniconda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    509               as_ref=input_arg.is_ref,
--> 510               preferred_dtype=default_dtype)
    511         except TypeError as err:

/miniconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx)
   1106     if ret is None:
-> 1107       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
   1108

/miniconda/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py in _autopacking_conversion_function(v, dtype, name, as_ref)
    959     return NotImplemented
--> 960   return _autopacking_helper(v, inferred_dtype, name or "packed")
    961

/miniconda/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py in _autopacking_helper(list_or_tuple, dtype, name)
    921         elems_as_tensors.append(
--> 922             constant_op.constant(elem, dtype=dtype, name=str(i)))
    923   return gen_array_ops.pack(elems_as_tensors, name=scope)

/miniconda/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name, verify_shape)
    195       tensor_util.make_tensor_proto(
--> 196           value, dtype=dtype, shape=shape, verify_shape=verify_shape))
    197   dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)

/miniconda/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape)
    423   if values is None:
--> 424     raise ValueError("None values not supported.")
    425   # if dtype is provided, forces numpy array to be the type

ValueError: None values not supported.
```