Global partial dependence plots not working
Summary
I’m having trouble getting the global partial dependence plots to work, in either Jupyter or TensorBoard. However, the partial dependence plots do work when using “Selected datapoint” (screenshots below).
Info
- TensorBoard version: 1.12.0
- WitWidget version: couldn’t find a version number, but the latest available from `pip install --upgrade witwidget` as of 2019-02-01
- OS platform and version (from `uname -a`): Linux omitted.google.com 4.19.12-1rodete1-amd64 #1 SMP Debian 4.19.12-1rodete1 (2018-12-26) x86_64 GNU/Linux
- Python version: 2.7.15
Description
As per the summary above, the global partial dependence plots don’t work, while the ones for a selected datapoint do:
Global dependence plots: (screenshot omitted)
Selected-datapoint dependence plots: (screenshot omitted)
On mouse hover over the broken plots you get this kind of tooltip: (screenshot omitted)
The model I used is a canned `DNNLinearCombinedClassifier` estimator, and I tried two different serving input functions, but the result didn’t change:
import tensorflow as tf

# `featurizer` and `metadata` are project-local modules.

# first try: let TF build the parsing receiver fn from the feature spec
def what_if_serving_input_fn():
    feature_columns = featurizer.create_feature_columns()
    input_feature_columns = [
        feature_columns[feature_name]
        for feature_name in metadata.INPUT_FEATURE_NAMES]
    feat = tf.feature_column.make_parse_example_spec(input_feature_columns)
    # build_parsing_serving_input_receiver_fn returns a receiver *function*
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feat)

# second try: build the ServingInputReceiver by hand
def example_serving_input_fn():
    feature_columns = featurizer.create_feature_columns()
    input_feature_columns = [
        feature_columns[feature_name]
        for feature_name in metadata.INPUT_FEATURE_NAMES]
    example_bytestring = tf.placeholder(
        shape=[None],
        dtype=tf.string,
    )
    feature_scalars = tf.parse_example(
        example_bytestring,
        tf.feature_column.make_parse_example_spec(input_feature_columns)
    )
    features = {
        key: tensor
        for key, tensor in feature_scalars.iteritems()  # Python 2 dict iteration
    }
    return tf.estimator.export.ServingInputReceiver(
        features=process_features(features),
        receiver_tensors={'examples': example_bytestring}
    )
The export is performed by the following code (note that `what_if_serving_input_fn()` is called, since it returns a receiver fn, while `example_serving_input_fn` is passed as-is):
import os

# using the first input fn (which returns a serving_input_receiver_fn)
estimator.export_saved_model(
    export_dir_base=os.path.join(extended_estimator.model_dir, 'what_if'),
    serving_input_receiver_fn=input.what_if_serving_input_fn()
)

# or, if using the other input fn
estimator.export_saved_model(
    export_dir_base=os.path.join(extended_estimator.model_dir, 'what_if'),
    serving_input_receiver_fn=input.example_serving_input_fn
)
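To confirm that the export actually contains the `classification` signature configured for WIT below, one way (not from the original report) is to load the SavedModel and list its signatures; the `export_dir` path here is a placeholder for the timestamped directory that the export call creates:

import tensorflow as tf

# Placeholder: point this at the timestamped export directory.
export_dir = '/path/to/model_dir/what_if/1549000000'

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    # Print each signature name and its method name.
    for name, signature in meta_graph.signature_def.items():
        print(name, '->', signature.method_name)

Equivalently, `saved_model_cli show --dir <export_dir> --all` prints the same information from the command line.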
The model is deployed in a local Docker container (TensorFlow Serving) by running:
docker run -p 8500:8500 -p 8501:8501 --cpus=4 --memory=4g \
--mount type=bind,source=$MODEL_DIR_LOCAL/what_if,target=/models/what_if \
-e MODEL_NAME=what_if \
-e TF_CPP_MIN_VLOG_LEVEL=0 \
-t tensorflow/serving
Even when running the container with verbose logging, TensorFlow Serving returned no warnings or errors.
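As an extra sanity check (not part of the original report), the model’s load status can be queried through TensorFlow Serving’s REST port mapped above (8501), assuming the `requests` package is available:

import requests

# Model-status endpoint; the model name matches MODEL_NAME above.
response = requests.get('http://localhost:8501/v1/models/what_if')
print(response.json())  # the loaded version should report state AVAILABLE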
Finally, the Jupyter notebook code used to start the WitWidget is the following:
config_builder = WitConfigBuilder(examples) \
.set_inference_address('localhost:8500') \
.set_model_name('what_if') \
.set_model_signature('classification') \
.set_label_vocab(['low_affinity', 'high_affinity'])
WitWidget(config_builder, height=tool_height_in_px)
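For context, `examples` here is a list of `tf.train.Example` protos. A minimal sketch of how such a list can be built (the TFRecord path is a hypothetical placeholder, not from the original report):

import tensorflow as tf

# Hypothetical path to an evaluation TFRecord file.
record_path = 'data/eval.tfrecord'
examples = [
    tf.train.Example.FromString(record)
    for record in tf.python_io.tf_record_iterator(record_path)
]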
As mentioned above, the other parts of the What-If Tool work properly.
Top GitHub Comments
No worries. 😃
@jameswex: Perhaps we could display a warning when `NaN`s cause the plot to appear broken, to help people more easily understand the problem?

@sidSingla Is your question related to the What-If Tool? If not, can you open a separate issue and give some more details about the problem you are seeing, including logs to reproduce?
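Since the maintainer comment above suggests that `NaN` values can make the plots appear broken, a quick way to check whether any input examples carry `NaN` float features before handing them to WIT is a sketch like the following (the helper name is mine, not from the thread):

import math

def find_nan_features(examples):
    """Return (example_index, feature_name) pairs whose float values contain NaN."""
    hits = []
    for i, example in enumerate(examples):
        for name, feature in example.features.feature.items():
            if any(math.isnan(v) for v in feature.float_list.value):
                hits.append((i, name))
    return hits

print(find_nan_features(examples))  # an empty list means no NaN float features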