JSON examples for SageMaker / TF serving
See original GitHub issue.

import tensorflow as tf

cols = [
    tf.feature_column.numeric_column("age"),
    tf.feature_column.categorical_column_with_vocabulary_list("gender", ["m", "f", "other"]),
    tf.feature_column.categorical_column_with_hash_bucket("city", hash_bucket_size=15000)
]

example_spec = tf.feature_column.make_parse_example_spec(cols)
print(example_spec)
# {'gender': VarLenFeature(dtype=tf.string), 'age': FixedLenFeature(shape=(1,), dtype=tf.float32, default_value=None), 'city': VarLenFeature(dtype=tf.string)}

srv_fun = tf.estimator.export.build_parsing_serving_input_receiver_fn(example_spec)()
print(str(srv_fun))
# ServingInputReceiver(features={'city': <tensorflow.python.framework.sparse_tensor.SparseTensor object at 0x7f83c7509ad0>, 'age': <tf.Tensor 'ParseExample_7/ParseExample:6' shape=(?, 1) dtype=float32>, 'gender': <tensorflow.python.framework.sparse_tensor.SparseTensor object at 0x7f83c75093d0>}, receiver_tensors={'examples': <tf.Tensor 'input_example_tensor_8:0' shape=(?,) dtype=string>}, receiver_tensors_alternatives=None)
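For completeness, a minimal export sketch under the same setup; it assumes a trained tf.estimator.Estimator named estimator and an export directory, neither of which appears in the snippet above:

# Assumed: `estimator` is a trained tf.estimator.Estimator built on `cols`.
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(example_spec)
estimator.export_savedmodel("export/", serving_fn)  # pass the fn itself, not its return value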
What format can we use to send predict requests through the SageMaker SDK for input functions like the one above?

The JSON serializer only handles arrays… so it seems like tf_estimator.predict({"city": "Paris", "gender": "m", "age": 22}) is out. I tried variations of array input and got cryptic errors from the TF Serving proxy client (that source code is not available, to my knowledge).

Looking at the TF Iris DNN example notebook, it uses a syntax like iris_predictor.predict([6.4, 3.2, 4.5, 1.5]), though the feature spec there is like {'input': IrisArrayData}. So perhaps the feature spec needs a top-level key?
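For reference, the receiver printed above takes serialized tf.train.Example protos on its examples tensor, so the record the model ultimately parses would look roughly like this (the feature values are placeholders):

import tensorflow as tf

# One record matching example_spec; values are placeholders.
example = tf.train.Example(features=tf.train.Features(feature={
    "age": tf.train.Feature(float_list=tf.train.FloatList(value=[22.0])),
    "gender": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"m"])),
    "city": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"Paris"])),
}))
serialized = example.SerializeToString()  # what the "examples" tensor expects to parse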
@nkconnor, predictor.predict should be able to handle a dictionary, so tf_estimator.predict({"city": "Paris", "gender": "m", "age": 22}) is the correct way to send requests, as you mentioned above. The predictor logic defined here needs to handle that case.

The container source code is not open sourced yet; we are planning on releasing it very soon. I will open a bug on the container side as well, to improve the logging when the prediction fails.
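For clarity, a sketch of how that should look end to end once the fix lands (the instance settings are placeholders):

# Sketch: deploy the trained SageMaker TensorFlow estimator, then send a plain dict.
predictor = tf_estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
result = predictor.predict({"city": "Paris", "gender": "m", "age": 22})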
I have a temporary solution to unblock you until this issue is fixed. The solution is to create an instance of a RealTimePredictor that takes a tensorflow_serving.apis request as a parameter instead of a dictionary:
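A rough sketch of that approach (the endpoint and model names, feature values, and serializer wiring below are placeholders/assumptions):

import tensorflow as tf
from sagemaker.predictor import RealTimePredictor
from tensorflow_serving.apis import classification_pb2

# One tf.train.Example matching the feature spec from the issue.
example = tf.train.Example(features=tf.train.Features(feature={
    "age": tf.train.Feature(float_list=tf.train.FloatList(value=[22.0])),
    "gender": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"m"])),
    "city": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"Paris"])),
}))

# Wrap it in a tensorflow_serving classification request.
request = classification_pb2.ClassificationRequest()
request.model_spec.name = "generic_model"  # placeholder model name
request.input.example_list.examples.extend([example])

# Send the request proto to the endpoint; a protobuf serializer/deserializer
# may need to be configured on the predictor for this to go through.
predictor = RealTimePredictor(endpoint="my-tensorflow-endpoint")
result = predictor.predict(request)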
The tensorflow_serving classification request above will be sent directly to the TensorFlow Serving process hosted inside the hosting container.

Hi @nkconnor,
Thanks for your report, I’ve confirmed that it is a bug. I’m investigating it and will give you a better update tomorrow.