
Allow disabling all logging

See original GitHub issue

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • Ray installed from (source or binary): pip install ray==0.7.1
  • Ray version: 0.7.1
  • Python version: 3.6.7
  • Exact command to reproduce:
import ray.rllib.agents.ppo  # noqa: F401 -- makes rllib.agents.ppo importable
import ray.rllib as rllib
from ray import tune

config = rllib.agents.ppo.DEFAULT_CONFIG.copy()
config['env'] = 'CartPole-v0'
tune.run(
    run_or_experiment=rllib.agents.ppo.PPOTrainer,
    name='RegressionTest',
    stop={"timesteps_total": 10000},
    config=config,
    num_samples=1,
    local_dir='logs',
    checkpoint_at_end=True,
    max_failures=0,
    verbose=0,
    reuse_actors=False,
    resume=False,
)

Describe the problem

The Ray log messages shown below pollute the console when I run my tests. I want to be able to run the code above with nothing printed to the console (except for exceptions, of course). Attempts that did not work (a pytest-fixture version of the same idea is sketched right after the list):

# Each attempt assumes `import logging` (plus `import ray` for attempt 2).

# 1. Disable every record below CRITICAL, process-wide
logging.disable(logging.CRITICAL)

# 2. Raise Ray's own log level at init time
ray.init(logging_level=logging.FATAL)

# 3. Raise the root logger's level
logging.basicConfig(level=logging.CRITICAL)

# 4. Detach and disable every logger already registered
for logger_name in logging.root.manager.loggerDict:
    logger = logging.getLogger(logger_name)
    logger.propagate = False
    logger.disabled = True

# 5. The pytest caplog fixture
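For reference, here is how those logging-based attempts might be bundled into a single auto-use pytest fixture (assuming the tests run under pytest). This is only a sketch of what was tried, not a fix: it configures logging in the driver process alone, while the output shown further down comes from separate Ray worker processes whose stdout/stderr is streamed back to the driver, so it still gets through.

import logging

import pytest


@pytest.fixture(autouse=True)
def silence_driver_logging():
    # Drop every record below CRITICAL and detach all currently registered loggers.
    logging.disable(logging.CRITICAL)
    logging.basicConfig(level=logging.CRITICAL)
    for name in list(logging.root.manager.loggerDict):
        logger = logging.getLogger(name)
        logger.propagate = False
        logger.disabled = True
    yield
    # Undo the global disable after each test (individual loggers stay muted).
    logging.disable(logging.NOTSET)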

Source code / logs

(pid=30511) 2019-06-27 11:49:05,863	WARNING ppo.py:153 -- FYI: By default, the value function will not share layers with the policy model ('vf_share_layers': False).
(pid=30511) 2019-06-27 11:49:05,866	INFO policy_evaluator.py:311 -- Creating policy evaluation worker 0 on CPU (please ignore any CUDA init errors)
(pid=30511) 2019-06-27 11:49:05.866802: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=30511) 2019-06-27 11:49:05.873373: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
(pid=30511) 2019-06-27 11:49:05.873424: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] retrieving CUDA diagnostic information for host: fontana
(pid=30511) 2019-06-27 11:49:05.873434: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:170] hostname: fontana
(pid=30511) 2019-06-27 11:49:05.873481: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:194] libcuda reported version is: 390.116.0
(pid=30511) 2019-06-27 11:49:05.873501: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:198] kernel reported version is: 390.116.0
(pid=30511) 2019-06-27 11:49:05.873506: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:305] kernel version seems to match DSO: 390.116.0
(pid=30511) 2019-06-27 11:49:05,965	INFO dynamic_tf_policy.py:265 -- Initializing loss function with dummy input:
(pid=30511) 
(pid=30511) { 'action_prob': <tf.Tensor 'default_policy/action_prob:0' shape=(?,) dtype=float32>,
(pid=30511)   'actions': <tf.Tensor 'default_policy/actions:0' shape=(?,) dtype=int64>,
(pid=30511)   'advantages': <tf.Tensor 'default_policy/advantages:0' shape=(?,) dtype=float32>,
(pid=30511)   'behaviour_logits': <tf.Tensor 'default_policy/behaviour_logits:0' shape=(?, 2) dtype=float32>,
(pid=30511)   'dones': <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=bool>,
(pid=30511)   'new_obs': <tf.Tensor 'default_policy/new_obs:0' shape=(?, 4) dtype=float32>,
(pid=30511)   'obs': <tf.Tensor 'default_policy/observation:0' shape=(?, 4) dtype=float32>,
(pid=30511)   'prev_actions': <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>,
(pid=30511)   'prev_rewards': <tf.Tensor 'default_policy/prev_reward:0' shape=(?,) dtype=float32>,
(pid=30511)   'rewards': <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>,
(pid=30511)   'value_targets': <tf.Tensor 'default_policy/value_targets:0' shape=(?,) dtype=float32>,
(pid=30511)   'vf_preds': <tf.Tensor 'default_policy/vf_preds:0' shape=(?,) dtype=float32>}
(pid=30511) 
(pid=30511) /home/federico/Desktop/repos/trading-gym/.venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:112: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
(pid=30511)   "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
(pid=30511) 2019-06-27 11:49:06,548	INFO policy_evaluator.py:731 -- Built policy map: {'default_policy': <ray.rllib.policy.tf_policy_template.PPOTFPolicy object at 0x7f6a926613c8>}
(pid=30511) 2019-06-27 11:49:06,548	INFO policy_evaluator.py:732 -- Built preprocessor map: {'default_policy': <ray.rllib.models.preprocessors.NoPreprocessor object at 0x7f6a92661080>}
(pid=30511) 2019-06-27 11:49:06,548	INFO policy_evaluator.py:343 -- Built filter map: {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x7f6ceb1dce48>}
(pid=30511) 2019-06-27 11:49:06,562	INFO multi_gpu_optimizer.py:80 -- LocalMultiGPUOptimizer devices ['/cpu:0']
(pid=30504) 2019-06-27 11:49:06,566	INFO policy_evaluator.py:311 -- Creating policy evaluation worker 2 on CPU (please ignore any CUDA init errors)
(pid=30504) 2019-06-27 11:49:06.574578: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=30504) 2019-06-27 11:49:06.581124: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
(pid=30504) 2019-06-27 11:49:06.581160: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] retrieving CUDA diagnostic information for host: fontana
(pid=30504) 2019-06-27 11:49:06.581166: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:170] hostname: fontana
(pid=30504) 2019-06-27 11:49:06.581197: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:194] libcuda reported version is: 390.116.0
(pid=30504) 2019-06-27 11:49:06.581216: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:198] kernel reported version is: 390.116.0
(pid=30504) 2019-06-27 11:49:06.581221: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:305] kernel version seems to match DSO: 390.116.0
(pid=30506) 2019-06-27 11:49:06,562	INFO policy_evaluator.py:311 -- Creating policy evaluation worker 1 on CPU (please ignore any CUDA init errors)
(pid=30506) 2019-06-27 11:49:06.570708: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=30506) 2019-06-27 11:49:06.577054: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
(pid=30506) 2019-06-27 11:49:06.577090: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] retrieving CUDA diagnostic information for host: fontana
(pid=30506) 2019-06-27 11:49:06.577096: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:170] hostname: fontana
(pid=30506) 2019-06-27 11:49:06.577128: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:194] libcuda reported version is: 390.116.0
(pid=30506) 2019-06-27 11:49:06.577147: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:198] kernel reported version is: 390.116.0
(pid=30506) 2019-06-27 11:49:06.577153: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:305] kernel version seems to match DSO: 390.116.0
(pid=30506) 2019-06-27 11:49:06,707	INFO dynamic_tf_policy.py:265 -- Initializing loss function with dummy input:
(pid=30506) 
(pid=30506) { 'action_prob': <tf.Tensor 'default_policy/action_prob:0' shape=(?,) dtype=float32>,
(pid=30506)   'actions': <tf.Tensor 'default_policy/actions:0' shape=(?,) dtype=int64>,
(pid=30506)   'advantages': <tf.Tensor 'default_policy/advantages:0' shape=(?,) dtype=float32>,
(pid=30506)   'behaviour_logits': <tf.Tensor 'default_policy/behaviour_logits:0' shape=(?, 2) dtype=float32>,
(pid=30506)   'dones': <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=bool>,
(pid=30506)   'new_obs': <tf.Tensor 'default_policy/new_obs:0' shape=(?, 4) dtype=float32>,
(pid=30506)   'obs': <tf.Tensor 'default_policy/observation:0' shape=(?, 4) dtype=float32>,
(pid=30506)   'prev_actions': <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>,
(pid=30506)   'prev_rewards': <tf.Tensor 'default_policy/prev_reward:0' shape=(?,) dtype=float32>,
(pid=30506)   'rewards': <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>,
(pid=30506)   'value_targets': <tf.Tensor 'default_policy/value_targets:0' shape=(?,) dtype=float32>,
(pid=30506)   'vf_preds': <tf.Tensor 'default_policy/vf_preds:0' shape=(?,) dtype=float32>}
(pid=30506) 
(pid=30506) /home/federico/Desktop/repos/trading-gym/.venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:112: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
(pid=30506)   "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
(pid=30504) /home/federico/Desktop/repos/trading-gym/.venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:112: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
(pid=30504)   "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
(pid=30506) 2019-06-27 11:49:08,610	INFO policy_evaluator.py:437 -- Generating sample batch of size 200
(pid=30506) 2019-06-27 11:49:08,610	INFO sampler.py:308 -- Raw obs from env: { 0: { 'agent0': np.ndarray((4,), dtype=float64, min=-0.048, max=-0.002, mean=-0.023)}}
(pid=30506) 2019-06-27 11:49:08,610	INFO sampler.py:309 -- Info return from env: {0: {'agent0': None}}
(pid=30506) 2019-06-27 11:49:08,610	INFO sampler.py:407 -- Preprocessed obs: np.ndarray((4,), dtype=float64, min=-0.048, max=-0.002, mean=-0.023)
(pid=30506) 2019-06-27 11:49:08,610	INFO sampler.py:411 -- Filtered obs: np.ndarray((4,), dtype=float64, min=-0.048, max=-0.002, mean=-0.023)
(pid=30506) 2019-06-27 11:49:08,611	INFO sampler.py:525 -- Inputs to compute_actions():
(pid=30506) 
(pid=30506) { 'default_policy': [ { 'data': { 'agent_id': 'agent0',
(pid=30506)                                   'env_id': 0,
(pid=30506)                                   'info': None,
(pid=30506)                                   'obs': np.ndarray((4,), dtype=float64, min=-0.048, max=-0.002, mean=-0.023),
(pid=30506)                                   'prev_action': np.ndarray((), dtype=int64, min=0.0, max=0.0, mean=0.0),
(pid=30506)                                   'prev_reward': 0.0,
(pid=30506)                                   'rnn_state': []},
(pid=30506)                         'type': 'PolicyEvalData'}]}
(pid=30506) 
(pid=30506) 2019-06-27 11:49:08,611	INFO tf_run_builder.py:92 -- Executing TF run without tracing. To dump TF timeline traces to disk, set the TF_TIMELINE_DIR environment variable.
(pid=30506) 2019-06-27 11:49:08,624	INFO sampler.py:552 -- Outputs of compute_actions():
(pid=30506) 
(pid=30506) { 'default_policy': ( np.ndarray((1,), dtype=int64, min=0.0, max=0.0, mean=0.0),
(pid=30506)                       [],
(pid=30506)                       { 'action_prob': np.ndarray((1,), dtype=float32, min=0.5, max=0.5, mean=0.5),
(pid=30506)                         'behaviour_logits': np.ndarray((1, 2), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=30506)                         'vf_preds': np.ndarray((1,), dtype=float32, min=-0.0, max=-0.0, mean=-0.0)})}
(pid=30506) 
(pid=30506) 2019-06-27 11:49:08,640	INFO sample_batch_builder.py:161 -- Trajectory fragment after postprocess_trajectory():
(pid=30506) 
(pid=30506) { 'agent0': { 'data': { 'action_prob': np.ndarray((30,), dtype=float32, min=0.499, max=0.501, mean=0.5),
(pid=30506)                         'actions': np.ndarray((30,), dtype=int64, min=0.0, max=1.0, mean=0.433),
(pid=30506)                         'advantages': np.ndarray((30,), dtype=float32, min=0.996, max=26.03, mean=14.1),
(pid=30506)                         'agent_index': np.ndarray((30,), dtype=int64, min=0.0, max=0.0, mean=0.0),
(pid=30506)                         'behaviour_logits': np.ndarray((30, 2), dtype=float32, min=-0.002, max=0.002, mean=-0.0),
(pid=30506)                         'dones': np.ndarray((30,), dtype=bool, min=0.0, max=1.0, mean=0.033),
(pid=30506)                         'eps_id': np.ndarray((30,), dtype=int64, min=1799405065.0, max=1799405065.0, mean=1799405065.0),
(pid=30506)                         'infos': np.ndarray((30,), dtype=object, head={}),
(pid=30506)                         'new_obs': np.ndarray((30, 4), dtype=float32, min=-1.549, max=1.994, mean=-0.017),
(pid=30506)                         'obs': np.ndarray((30, 4), dtype=float32, min=-1.549, max=1.994, mean=-0.019),
(pid=30506)                         'prev_actions': np.ndarray((30,), dtype=int64, min=0.0, max=1.0, mean=0.4),
(pid=30506)                         'prev_rewards': np.ndarray((30,), dtype=float32, min=0.0, max=1.0, mean=0.967),
(pid=30506)                         'rewards': np.ndarray((30,), dtype=float32, min=1.0, max=1.0, mean=1.0),
(pid=30506)                         't': np.ndarray((30,), dtype=int64, min=0.0, max=29.0, mean=14.5),
(pid=30506)                         'unroll_id': np.ndarray((30,), dtype=int64, min=0.0, max=0.0, mean=0.0),
(pid=30506)                         'value_targets': np.ndarray((30,), dtype=float32, min=1.0, max=26.03, mean=14.101),
(pid=30506)                         'vf_preds': np.ndarray((30,), dtype=float32, min=-0.003, max=0.004, mean=0.001)},
(pid=30506)               'type': 'SampleBatch'}}
(pid=30506) 
(pid=30506) 2019-06-27 11:49:08,733	INFO policy_evaluator.py:474 -- Completed sample batch:
(pid=30506) 
(pid=30506) { 'data': { 'action_prob': np.ndarray((200,), dtype=float32, min=0.499, max=0.501, mean=0.5),
(pid=30506)             'actions': np.ndarray((200,), dtype=int64, min=0.0, max=1.0, mean=0.515),
(pid=30506)             'advantages': np.ndarray((200,), dtype=float32, min=0.996, max=26.77, mean=9.865),
(pid=30506)             'agent_index': np.ndarray((200,), dtype=int64, min=0.0, max=0.0, mean=0.0),
(pid=30506)             'behaviour_logits': np.ndarray((200, 2), dtype=float32, min=-0.002, max=0.002, mean=0.0),
(pid=30506)             'dones': np.ndarray((200,), dtype=bool, min=0.0, max=1.0, mean=0.055),
(pid=30506)             'eps_id': np.ndarray((200,), dtype=int64, min=97813177.0, max=1937143959.0, mean=1165894504.795),
(pid=30506)             'infos': np.ndarray((200,), dtype=object, head={}),
(pid=30506)             'new_obs': np.ndarray((200, 4), dtype=float32, min=-2.025, max=2.593, mean=-0.002),
(pid=30506)             'obs': np.ndarray((200, 4), dtype=float32, min=-2.025, max=2.247, mean=-0.001),
(pid=30506)             'prev_actions': np.ndarray((200,), dtype=int64, min=0.0, max=1.0, mean=0.485),
(pid=30506)             'prev_rewards': np.ndarray((200,), dtype=float32, min=0.0, max=1.0, mean=0.94),
(pid=30506)             'rewards': np.ndarray((200,), dtype=float32, min=1.0, max=1.0, mean=1.0),
(pid=30506)             't': np.ndarray((200,), dtype=int64, min=0.0, max=30.0, mean=9.605),
(pid=30506)             'unroll_id': np.ndarray((200,), dtype=int64, min=0.0, max=0.0, mean=0.0),
(pid=30506)             'value_targets': np.ndarray((200,), dtype=float32, min=0.998, max=26.77, mean=9.864),
(pid=30506)             'vf_preds': np.ndarray((200,), dtype=float32, min=-0.004, max=0.004, mean=-0.001)},
(pid=30506)   'type': 'SampleBatch'}
(pid=30506) 
(pid=30511) 2019-06-27 11:49:09,672	INFO multi_gpu_impl.py:146 -- Training on concatenated sample batches:
(pid=30511) 
(pid=30511) { 'inputs': [ np.ndarray((4000,), dtype=int64, min=0.0, max=1.0, mean=0.483),
(pid=30511)               np.ndarray((4000,), dtype=float32, min=0.0, max=1.0, mean=0.954),
(pid=30511)               np.ndarray((4000, 4), dtype=float32, min=-2.611, max=2.68, mean=-0.001),
(pid=30511)               np.ndarray((4000,), dtype=int64, min=0.0, max=1.0, mean=0.508),
(pid=30511)               np.ndarray((4000,), dtype=float32, min=-1.226, max=4.359, mean=0.0),
(pid=30511)               np.ndarray((4000, 2), dtype=float32, min=-0.003, max=0.004, mean=0.0),
(pid=30511)               np.ndarray((4000,), dtype=float32, min=0.996, max=52.941, mean=12.399),
(pid=30511)               np.ndarray((4000,), dtype=float32, min=-0.004, max=0.004, mean=-0.0)],
(pid=30511)   'placeholders': [ <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>,
(pid=30511)                     <tf.Tensor 'default_policy/prev_reward:0' shape=(?,) dtype=float32>,
(pid=30511)                     <tf.Tensor 'default_policy/observation:0' shape=(?, 4) dtype=float32>,
(pid=30511)                     <tf.Tensor 'default_policy/actions:0' shape=(?,) dtype=int64>,
(pid=30511)                     <tf.Tensor 'default_policy/advantages:0' shape=(?,) dtype=float32>,
(pid=30511)                     <tf.Tensor 'default_policy/behaviour_logits:0' shape=(?, 2) dtype=float32>,
(pid=30511)                     <tf.Tensor 'default_policy/value_targets:0' shape=(?,) dtype=float32>,
(pid=30511)                     <tf.Tensor 'default_policy/vf_preds:0' shape=(?,) dtype=float32>],
(pid=30511)   'state_inputs': []}
(pid=30511) 
(pid=30511) 2019-06-27 11:49:09,672	INFO multi_gpu_impl.py:191 -- Divided 4000 rollout sequences, each of length 1, among 1 devices.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 2
  • Comments: 9 (9 by maintainers)

Top GitHub Comments

3 reactions
stefanbschneider commented, Mar 9, 2021

Ah, log_to_driver is not an argument of tune.run() but of ray.init(). Setting ray.init(log_to_driver=False) solved the problem for me. Thanks for the pointer!

0 reactions
stefanbschneider commented, Mar 9, 2021

I don’t think there is a log_to_driver option: https://github.com/ray-project/ray/blob/master/python/ray/tune/tune.py#L71

I get TypeError: run() got an unexpected keyword argument 'log_to_driver' when trying it.
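Putting the two comments together: log_to_driver is an argument of ray.init(), not tune.run(). Here is a minimal sketch of the quiet setup, using only options mentioned in this thread (log_to_driver and logging_level on ray.init(), verbose=0 on tune.run()):

import logging

import ray
import ray.rllib.agents.ppo  # noqa: F401 -- binds rllib.agents.ppo
import ray.rllib as rllib
from ray import tune

# Stop worker stdout/stderr from being streamed back to the driver and raise
# Ray's own logger threshold before Tune starts anything.
ray.init(log_to_driver=False, logging_level=logging.ERROR)

config = rllib.agents.ppo.DEFAULT_CONFIG.copy()
config['env'] = 'CartPole-v0'

tune.run(
    run_or_experiment=rllib.agents.ppo.PPOTrainer,
    stop={"timesteps_total": 10000},
    config=config,
    verbose=0,  # suppress Tune's status table as well
)

Exceptions still propagate to the driver as usual; only the forwarded worker output and the informational logging disappear.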


Top Results From Across the Web

  • python - How to disable logging on the standard error stream?
    I found a solution for this: logger = logging.getLogger('my-logger') logger.propagate = False # now if you use logger it will not log to...

  • How to disable logging from imported modules in Python?
    To disable logging from imported modules in Python we need to use the getLogger() function. The getLogger() function. The name of the logger... (a short sketch of this pattern follows the list)

  • Question: Method to disable all logging · Issue #1382 - GitHub
    Is there a method to disable logging? We have different log types setup (e.g. database, file, etc.) and there are times we don't...

  • Enabling and disabling logging - IBM
    Open the administrative console. · Click Monitoring and Tuning > Request Metrics. · Under Request Metrics Destination, select the Standard Logs check box...

  • How can I enable/disable logging on specific devices?
    To enable or disable logging for a specific device, find the device and toggle the Enabled switch: You can also enable or disable...
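The pattern the first two results describe, silencing one imported library's logger via getLogger(), looks like the sketch below. The "ray" logger name follows the usual package-name convention and is an assumption here; as the thread above shows, it is not sufficient on its own for Ray, because much of the noise is worker-process output rather than driver-side log records.

import logging

ray_logger = logging.getLogger("ray")  # assumed logger name (package-name convention)
ray_logger.setLevel(logging.CRITICAL)  # drop everything below CRITICAL
ray_logger.propagate = False           # don't hand records up to the root logger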
