
[docs] Issue on `rllib.rst`


Documentation Problem/Question/Comment

I’m getting started with the RLlib tutorial. I attempted to run this Python script:

from ray import tune
from ray.rllib.agents.ppo import PPOTrainer
tune.run(PPOTrainer, config={"env": "CartPole-v0"})  # "eager": True for eager execution

And I got the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_summary_ops.py", line 851, in write_summary
    tensor, tag, summary_metadata)
tensorflow.python.eager.core._FallbackException: This function does not handle the case of the path where all inputs are not already EagerTensors.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 540, in _process_trial
    result, terminate=(decision == TrialScheduler.STOP))
  File "/usr/local/lib/python3.7/site-packages/ray/tune/trial.py", line 386, in update_last_result
    self.result_logger.on_result(self.last_result)
  File "/usr/local/lib/python3.7/site-packages/ray/tune/logger.py", line 333, in on_result
    _logger.on_result(result)
  File "/usr/local/lib/python3.7/site-packages/ray/tune/logger.py", line 193, in on_result
    "/".join(path + [attr]), value, step=step)
  File "/usr/local/lib/python3.7/site-packages/tensorboard/plugins/scalar/summary_v2.py", line 65, in scalar
    metadata=summary_metadata)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/summary_ops_v2.py", line 646, in write
    _should_record_summaries_v2(), record, _nothing, name="summary_cond")
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/smart_cond.py", line 54, in smart_cond
    return true_fn()
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/summary_ops_v2.py", line 640, in record
    name=scope)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_summary_ops.py", line 856, in write_summary
    writer, step, tensor, tag, summary_metadata, name=name, ctx=_ctx)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_summary_ops.py", line 893, in write_summary_eager_fallback
    attrs=_attrs, ctx=_ctx, name=name)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 76, in quick_execute
    raise e
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 61, in quick_execute
    num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: create_file_writer/SummaryWriter:0

Not sure if this is a bad environment setup on my part or if the tutorial is outdated.

(Created directly from the docs)
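Since the question is whether the environment or the tutorial is at fault, one quick way to rule out environment setup is to compare installed package versions against what the tutorial was written for. Below is a minimal sketch using only the standard library; the helper names and the 0.8.0 threshold are illustrative assumptions, not part of the original report.

```python
# Hypothetical helpers for sanity-checking an environment before blaming
# the tutorial. Compares dotted version strings numerically; pre-release
# tags such as "dev6" are reduced to their digits.

def version_tuple(version: str) -> tuple:
    """Turn a string like "0.7.5" into (0, 7, 5) for numeric comparison."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)

def is_at_least(installed: str, required: str) -> bool:
    """True if the installed version meets the required minimum."""
    return version_tuple(installed) >= version_tuple(required)

# Example: ray 0.7.5 (the version mentioned in this thread) predates the
# 0.8.0 line where the TF2 incompatibility was reported fixed.
print(is_at_least("0.7.5", "0.8.0"))        # prints False
print(is_at_least("0.8.0.dev6", "0.8.0"))   # prints True
```

For real projects, `packaging.version.parse` handles pre-release ordering properly; this sketch deliberately ignores those subtleties.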

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
RedTachyon commented, Oct 16, 2019

I’m getting the same error with TF 2, but not with TF 1.14. I’m using ray 0.7.5.

0 reactions
RedTachyon commented, Oct 17, 2019

Like in #5934, upgrading to 0.8.0.dev6 fixes this issue.


(Original issue: ray-project/ray #7345)
