
Global partial dependence plots not working

See original GitHub issue

Summary
I’m having trouble getting the global partial dependence plots to work, from either Jupyter or TensorBoard. However, the partial dependence plots do work when using “Selected datapoint” (screenshots below).

Info

  • TensorBoard version: 1.12.0
  • WitWidget version: couldn’t find a version number, but it is the latest obtained from pip install --upgrade witwidget as of 2019-02-01.
  • OS Platform and version: from uname -a: Linux omitted.google.com 4.19.12-1rodete1-amd64 #1 SMP Debian 4.19.12-1rodete1 (2018-12-26) x86_64 GNU/Linux
  • Python version: 2.7.15

Description
As per the summary above, the global partial dependence plots don’t work, while the ones for the selected datapoint do:

Global partial dependence plots (broken): [screenshot]

Selected datapoint partial dependence plots (working): [screenshot]

Hovering over the broken plots shows this kind of tooltip: [screenshot]

The model I used is the canned estimator DNNLinearCombinedClassifier. I tried two different serving input functions, but the result didn’t change:


# first try: build_parsing_serving_input_receiver_fn already returns a
# receiver *function*, so this wrapper must be called at export time.
def what_if_serving_input_fn():
  feature_columns = featurizer.create_feature_columns()
  input_feature_columns = [
      feature_columns[feature_name] for feature_name in metadata.INPUT_FEATURE_NAMES]
  # Parse spec: maps each serialized tf.Example feature to a tensor spec.
  feature_spec = tf.feature_column.make_parse_example_spec(input_feature_columns)
  return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

# second try: build the ServingInputReceiver by hand.
def example_serving_input_fn():
    feature_columns = featurizer.create_feature_columns()
    input_feature_columns = [
        feature_columns[feature_name] for feature_name in metadata.INPUT_FEATURE_NAMES]

    # Placeholder for a batch of serialized tf.Example protos.
    example_bytestring = tf.placeholder(
        shape=[None],
        dtype=tf.string,
    )

    feature_scalars = tf.parse_example(
        example_bytestring,
        tf.feature_column.make_parse_example_spec(input_feature_columns)
    )

    # dict.iteritems() is Python 2 only; use .items() on Python 3.
    features = {
        key: tensor
        for key, tensor in feature_scalars.iteritems()
    }

    return tf.estimator.export.ServingInputReceiver(
        features=process_features(features),
        receiver_tensors={'examples': example_bytestring}
    )

And the export is performed by the following code:


estimator.export_saved_model(
    export_dir_base=os.path.join(extended_estimator.model_dir, 'what_if'),
    # Called here: what_if_serving_input_fn() *returns* the receiver fn.
    serving_input_receiver_fn=input.what_if_serving_input_fn()
)
# or, if using the other input fn, pass the function itself:
estimator.export_saved_model(
    export_dir_base=os.path.join(extended_estimator.model_dir, 'what_if'),
    serving_input_receiver_fn=input.example_serving_input_fn
)
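As a side note, export_saved_model writes each export into a timestamped, numerically named subdirectory under export_dir_base, and TF Serving picks up those numeric version directories. A minimal sketch (the function name is illustrative, not from the original code) for locating the newest export before mounting it into the serving container:

```python
import os

def latest_export_dir(export_dir_base):
    """Return the newest timestamped SavedModel subdirectory, or None."""
    versions = [
        d for d in os.listdir(export_dir_base)
        if d.isdigit() and os.path.isdir(os.path.join(export_dir_base, d))
    ]
    if not versions:
        return None
    return os.path.join(export_dir_base, max(versions, key=int))
```

This can help confirm that the directory mounted into Docker actually contains a fresh export rather than a stale one.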

The model is deployed in a local Docker container (TF Serving) by running:


    docker run -p 8500:8500 -p 8501:8501 --cpus=4 --memory=4g \
    --mount type=bind,source=$MODEL_DIR_LOCAL/what_if,target=/models/what_if \
    -e MODEL_NAME=what_if \
    -e TF_CPP_MIN_VLOG_LEVEL=0 \
    -t tensorflow/serving

Even when running the container with verbose logging, no warnings or errors were reported by TF Serving.
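As an extra sanity check (not part of the original report), TF Serving also exposes a REST model-status endpoint on the mapped port 8501; a small sketch for building the status URL:

```python
def model_status_url(host, port, model_name):
    """Build the TF Serving REST model-status URL (GET returns JSON)."""
    return 'http://{}:{}/v1/models/{}'.format(host, port, model_name)

# With the container above running, querying this URL
# (e.g. curl http://localhost:8501/v1/models/what_if)
# should report the model version as AVAILABLE.
url = model_status_url('localhost', 8501, 'what_if')
```

Confirming the model is AVAILABLE rules out loading problems before debugging the widget itself.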

Finally, the Jupyter notebook code used to start the WitWidget is the following:

config_builder = WitConfigBuilder(examples) \
    .set_inference_address('localhost:8500') \
    .set_model_name('what_if') \
    .set_model_signature('classification') \
    .set_label_vocab(['low_affinity', 'high_affinity'])
WitWidget(config_builder, height=tool_height_in_px)

But as mentioned before, the other parts of the What-If Tool work properly.

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
wchargin commented, Feb 1, 2019

Apologies for the spam. Of course I found the problem as soon as I posted.

No worries. 😃

@jameswex: Perhaps we could display a warning when NaNs cause the plot to appear broken, to help people more easily understand the problem?
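Since NaN feature values turned out to be what makes these plots appear broken, a pre-flight scan of the input examples can catch them early. A minimal plain-Python sketch (the feature-dict shape here is an assumption for illustration; the tool actually consumes serialized tf.Example protos):

```python
import math

def find_nan_features(examples):
    """Yield (index, feature_name) for every NaN float value.

    `examples` is assumed to be a list of {feature_name: value} dicts,
    not the serialized protos the What-If Tool is fed directly.
    """
    for i, example in enumerate(examples):
        for name, value in example.items():
            if isinstance(value, float) and math.isnan(value):
                yield i, name

bad = list(find_nan_features([
    {'age': 42.0, 'score': float('nan')},
    {'age': 31.0, 'score': 0.7},
]))
# bad == [(0, 'score')]
```

Running a check like this before building the WitConfigBuilder would surface the offending datapoints without waiting for the plots to silently break.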

0 reactions
manivaradarajan commented, May 29, 2019

@sidSingla Is your question related to the What-If Tool? If not, can you open a separate issue and give some more details as to the problem you are seeing, including logs to reproduce?

