
[rllib] Pytorch missing custom loss


What is the problem?

When running the rllib/examples/custom_loss.py example, the TensorFlow implementation reports the custom loss, but the PyTorch implementation does not. Along the way, I also had to rename the model method in rllib/examples/models/custom_loss_model.py from custom_stats to metrics.
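The rename mentioned above can be sketched as follows. This is a hypothetical, dependency-free stand-in (not RLlib's actual model code): the class name, the stored value, and the dict key are all illustrative, but the point is that the stats hook is now called metrics() instead of custom_stats().

```python
# Hypothetical sketch of the renamed stats hook; names are illustrative
# and this does not import ray or torch.
class CustomLossModelSketch:
    """Stands in for the model in rllib/examples/models/custom_loss_model.py."""

    def __init__(self):
        # Assume the custom loss value was stored while computing the loss.
        self.imitation_loss = 0.25

    def metrics(self):  # formerly named custom_stats()
        # Extra scalars to surface alongside the policy's training stats.
        return {"custom_loss": self.imitation_loss}
```

With the old custom_stats name, RLlib never calls the hook, so the value silently disappears from the reported results.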

Ray version and other system information (Python version, TensorFlow version, OS):

  • Ray version: latest wheel
  • Python: 3.6.8
  • TF: 2.1
  • OS: RHEL 7.7

Reproduction (REQUIRED)

Please provide a script that can be run to reproduce the issue. The script should have no external library dependencies (i.e., use fake or mock data / environments):

Use existing example script in Rllib

If we cannot run your script, we cannot fix your issue.

  • [x] I have verified my script runs in a clean environment and reproduces the issue.
  • [x] I have verified the issue also occurs with the latest wheels.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments:14 (11 by maintainers)

Top GitHub Comments

2 reactions
sven1977 commented, Jun 25, 2020

This PR (hopefully) fixes the issue: https://github.com/ray-project/ray/pull/9142. I confirmed via our now-working example script: rllib/examples/custom_loss.py --torch.

Note that from the custom_loss function you need to either:

a) return an altered list of policy losses (same length), with the custom loss e.g. added into the policy loss(es); no extra custom-loss optimizer is created; or
b) return a list that contains the policy losses as well as the custom loss(es), where the list must have the same length as the number of optimizers in the torch policy.

See rllib/examples/models/custom_loss_model.py::TorchCustomLossModel for an example of both cases.
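The two return conventions above can be sketched in plain Python. This is a hypothetical illustration of the shape contract only: the real hook is ModelV2.custom_loss(self, policy_loss, loss_inputs) and operates on torch tensors, not floats, and the function names and weight parameter below are invented for the example.

```python
# Hypothetical sketch of the two valid custom_loss return shapes.
# Plain floats stand in for torch loss tensors.

def custom_loss_case_a(policy_losses, custom_loss, weight=1.0):
    # Case (a): fold the custom loss into each policy loss and return
    # a list of the SAME length -- no extra optimizer is involved.
    return [pl + weight * custom_loss for pl in policy_losses]

def custom_loss_case_b(policy_losses, custom_losses):
    # Case (b): return the policy losses plus the custom loss(es);
    # the combined list must match the number of optimizers in the
    # torch policy, one loss per optimizer.
    return list(policy_losses) + list(custom_losses)
```

Under this contract, returning a list whose length does not match the policy's optimizer count is what breaks the torch path.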

Closing this issue. Feel free to re-open should this not solve the problem on your end.

1 reaction
sven1977 commented, Jul 17, 2020

Yeah, that makes perfect sense. We’ll fix this as well. Thanks so much for your investigation into this! @cedros23


Top Results From Across the Web

How To Customize Policies — Ray 2.2.0
This section covers how to build a TensorFlow RLlib policy using tf_policy_template.build_tf_policy(). To start, you first have to define a loss function...

Models, Preprocessors, and Action Distributions — Ray 2.2.0
You can mix supervised losses into any RLlib algorithm through custom models. For example, you can add an imitation learning loss on expert...

RLlib Concepts and Custom Algorithms — the Ray documentation
This section covers how to build a TensorFlow RLlib policy using tf_policy_template.build_tf_policy(). To start, you first have to define a loss function...

Examples — Ray 2.2.0
Example of using a custom Keras- or PyTorch RNN model. Registering a custom model with supervised loss: Example of defining and registering a...

Source code for ray.rllib.models.modelv2
Returns: List of or scalar tensor for the customized loss(es) for this model. ... TODO: This is unnecessary for when no preprocessor is...
