How to use "predict" to perform prediction on a single image?
I am trying to perform prediction on a single image using the following script:

```python
import argparse

import numpy as np
import tensorflow as tf


def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we can use again a convenient built-in function to import a graph_def into the
    # current default Graph
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(
            graph_def,
            input_map=None,
            return_elements=None,
            name="prefix",
            op_dict=None,
            producer_op_list=None
        )
    return graph


def getImage(path):
    with open(path, 'rb') as img_file:
        img = img_file.read()
    print(img)
    return img


frozen_model_filename = "exported-model/frozen_graph.pb"
graph = load_graph(frozen_model_filename)


def ocrImage(image):
    x = graph.get_tensor_by_name('prefix/input_image_as_bytes:0')
    y = graph.get_tensor_by_name('prefix/prediction:0')
    allProbs = graph.get_tensor_by_name('prefix/probability:0')
    img = getImage(image)

    with tf.Session(graph=graph) as sess:
        (y_out, probs_output) = sess.run([y, allProbs], feed_dict={
            x: [img]
        })
    # print(y_out)
    # print(allProbsToScore(probs_output))
    return {
        "predictions": [{
            "ocr": str(y_out),
            "confidence": probs_output
        }]
    }


if __name__ == '__main__':
    # Let's allow the user to pass the filename as an argument
    parser = argparse.ArgumentParser()
    # parser.add_argument("--frozen_model_filename", default="checkpoints_pruned/frozen_model.pb", type=str, help="Frozen model file to import")
    parser.add_argument("--image", default="0_15.png", type=str, help="Path to image")
    args = parser.parse_args()

    predictions = ocrImage(args.image)
    print(str(predictions))
```
When I run the script, I get the following error:

```
(/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env) ahmad@irs-aeye-ll-014:/data/work/tvs-part-rec/loc-aocr/aocr_50k$ python predict.py
WARNING:tensorflow:From predict.py:21: calling import_graph_def (from tensorflow.python.framework.importer) with op_dict is deprecated and will be removed in a future version.
Instructions for updating:
Please file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.
2019-08-06 14:47:13.028800: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-08-06 14:47:13.090489: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-06 14:47:13.090737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:01:00.0
totalMemory: 5.94GiB freeMemory: 5.72GiB
2019-08-06 14:47:13.090754: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-08-06 14:47:13.408287: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-06 14:47:13.408323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-08-06 14:47:13.408330: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-08-06 14:47:13.408416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5494 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
self._extend_graph()
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation prefix/Rank: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Registered kernels:
device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_HALF, DT_UINT32, DT_UINT64]
device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, ..., DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
device='XLA_CPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL]
device='XLA_GPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL, DT_BFLOAT16]
device='CPU'
device='GPU'; T in [DT_BOOL]
device='GPU'; T in [DT_INT32]
device='GPU'; T in [DT_VARIANT]
device='GPU'; T in [DT_COMPLEX128]
device='GPU'; T in [DT_COMPLEX64]
device='GPU'; T in [DT_INT8]
device='GPU'; T in [DT_UINT8]
device='GPU'; T in [DT_INT16]
device='GPU'; T in [DT_UINT16]
device='GPU'; T in [DT_INT64]
device='GPU'; T in [DT_DOUBLE]
device='GPU'; T in [DT_FLOAT]
device='GPU'; T in [DT_BFLOAT16]
device='GPU'; T in [DT_HALF]
[[{{node prefix/Rank}} = Rank[T=DT_STRING, _device="/device:GPU:0"](prefix/input_image_as_bytes)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "predict.py", line 61, in <module>
predictions = ocrImage(args.image)
File "predict.py", line 43, in ocrImage
x: [img]
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation prefix/Rank: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Registered kernels:
device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_HALF, DT_UINT32, DT_UINT64]
device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, ..., DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
device='XLA_CPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL]
device='XLA_GPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL, DT_BFLOAT16]
device='CPU'
device='GPU'; T in [DT_BOOL]
device='GPU'; T in [DT_INT32]
device='GPU'; T in [DT_VARIANT]
device='GPU'; T in [DT_COMPLEX128]
device='GPU'; T in [DT_COMPLEX64]
device='GPU'; T in [DT_INT8]
device='GPU'; T in [DT_UINT8]
device='GPU'; T in [DT_INT16]
device='GPU'; T in [DT_UINT16]
device='GPU'; T in [DT_INT64]
device='GPU'; T in [DT_DOUBLE]
device='GPU'; T in [DT_FLOAT]
device='GPU'; T in [DT_BFLOAT16]
device='GPU'; T in [DT_HALF]
[[node prefix/Rank (defined at predict.py:21) = Rank[T=DT_STRING, _device="/device:GPU:0"](prefix/input_image_as_bytes)]]
Caused by op 'prefix/Rank', defined at:
File "predict.py", line 32, in <module>
graph = load_graph(frozen_model_filename)
File "predict.py", line 21, in load_graph
producer_op_list=None
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3440, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3440, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3299, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Cannot assign a device for operation prefix/Rank: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Registered kernels:
device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_HALF, DT_UINT32, DT_UINT64]
device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, ..., DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
device='XLA_CPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL]
device='XLA_GPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL, DT_BFLOAT16]
device='CPU'
device='GPU'; T in [DT_BOOL]
device='GPU'; T in [DT_INT32]
device='GPU'; T in [DT_VARIANT]
device='GPU'; T in [DT_COMPLEX128]
device='GPU'; T in [DT_COMPLEX64]
device='GPU'; T in [DT_INT8]
device='GPU'; T in [DT_UINT8]
device='GPU'; T in [DT_INT16]
device='GPU'; T in [DT_UINT16]
device='GPU'; T in [DT_INT64]
device='GPU'; T in [DT_DOUBLE]
device='GPU'; T in [DT_FLOAT]
device='GPU'; T in [DT_BFLOAT16]
device='GPU'; T in [DT_HALF]
[[node prefix/Rank (defined at predict.py:21) = Rank[T=DT_STRING, _device="/device:GPU:0"](prefix/input_image_as_bytes)]]
```
Can someone explain the process of performing recognition on a single image?
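The key line in the traceback is the placement failure for `prefix/Rank`: the frozen graph pins that string-typed op to `/device:GPU:0`, but the "Registered kernels" list shows no GPU kernel for `DT_STRING`. One possible workaround, sketched below under the assumption that the rest of the script above stays unchanged, is to open the session with soft placement enabled so that such ops can fall back to the CPU:

```python
# Sketch (not part of the original script): allow TensorFlow to place an op
# on the CPU when the requested device has no kernel for it, e.g. Rank on
# DT_STRING pinned to /device:GPU:0 by the frozen graph.
config = tf.ConfigProto(allow_soft_placement=True)

with tf.Session(graph=graph, config=config) as sess:
    (y_out, probs_output) = sess.run([y, allProbs], feed_dict={
        x: [img]
    })
```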
Top GitHub Comments
Modify line 263 in the main.py file. Hope it helps.
Execute with this command (note the double underscores before and after "main"):

`python __main__.py predict`
There is some problem with your GPU config, or a kernel is not registered in TensorFlow for some operation, for example the data types used in static hash tables.
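If enabling soft placement is not enough, another approach worth trying (a sketch, not confirmed in this thread) is to strip the device assignments baked into the frozen graph before importing it, so the runtime is free to place every op on a device that actually has a registered kernel:

```python
import tensorflow as tf


def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Clear the device strings recorded in the frozen graph so that
    # placement is decided at runtime instead of being forced to GPU:0.
    for node in graph_def.node:
        node.device = ""

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="prefix")
    return graph
```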