Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might look while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

[rllib] AttributeError: 'list' object has no attribute 'float', when using dreamer

See original GitHub issue

What is the problem?

  • Ray version: 2.0.0.dev0
  • Python version: 3.8.5
  • OS: Ubuntu 20.04
  • PyTorch: 1.7.1

I'm the one who opened the issue about importing DREAMERTrainer (https://github.com/ray-project/ray/issues/13551#issue-788966521), and now I have a problem using Dreamer. After fixing the DREAMERTrainer import error, I tried to run Dreamer on my custom environment, but it didn't work. I then tested the dreamer config from the ray-project/rl-experiments repository, and the same error (AttributeError: 'list' object has no attribute 'float') occurs. I want to know whether the problem is in the module itself, and if not, I would like an example of how to use it.

Thank you

Reproduction (REQUIRED)

rllib train -f dreamer/dreamer-deepmind-control.yaml

Traceback (most recent call last):
  File "/home/sangbeom/ray/python/ray/tune/trial_runner.py", line 678, in _process_trial
    results = self.trial_executor.fetch_result(trial)
  File "/home/sangbeom/ray/python/ray/tune/ray_trial_executor.py", line 610, in fetch_result
    result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
  File "/home/sangbeom/ray/python/ray/_private/client_mode_hook.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/home/sangbeom/ray/python/ray/worker.py", line 1458, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(AttributeError): ray::Dreamer.train_buffered() (pid=114452, ip=172.27.183.141)
  File "/home/sangbeom/ray/python/ray/rllib/utils/threading.py", line 21, in wrapper
    return func(self, *a, **k)
  File "/home/sangbeom/ray/python/ray/rllib/policy/torch_policy.py", line 281, in _compute_action_helper
    torch.exp(logp.float())
AttributeError: 'list' object has no attribute 'float'

During handling of the above exception, another exception occurred:

ray::Dreamer.train_buffered() (pid=114452, ip=172.27.183.141)
  File "python/ray/_raylet.pyx", line 439, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 473, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 476, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 480, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 432, in ray._raylet.execute_task.function_executor
  File "/home/sangbeom/ray/python/ray/rllib/agents/trainer_template.py", line 107, in __init__
    Trainer.__init__(self, config, env, logger_creator)
  File "/home/sangbeom/ray/python/ray/rllib/agents/trainer.py", line 486, in __init__
    super().__init__(config, logger_creator)
  File "/home/sangbeom/ray/python/ray/tune/trainable.py", line 97, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/home/sangbeom/ray/python/ray/rllib/agents/trainer.py", line 654, in setup
    self._init(self.config, self.env_creator)
  File "/home/sangbeom/ray/python/ray/rllib/agents/trainer_template.py", line 134, in _init
    self.workers = self._make_workers(
  File "/home/sangbeom/ray/python/ray/rllib/agents/trainer.py", line 725, in _make_workers
    return WorkerSet(
  File "/home/sangbeom/ray/python/ray/rllib/evaluation/worker_set.py", line 90, in __init__
    self._local_worker = self._make_worker(
  File "/home/sangbeom/ray/python/ray/rllib/evaluation/worker_set.py", line 321, in _make_worker
    worker = cls(
  File "/home/sangbeom/ray/python/ray/rllib/evaluation/rollout_worker.py", line 479, in __init__
    self.policy_map, self.preprocessors = self._build_policy_map(
  File "/home/sangbeom/ray/python/ray/rllib/evaluation/rollout_worker.py", line 1111, in _build_policy_map
    policy_map[name] = cls(obs_space, act_space, merged_conf)
  File "/home/sangbeom/ray/python/ray/rllib/policy/policy_template.py", line 266, in __init__
    self._initialize_loss_from_dummy_batch(
  File "/home/sangbeom/ray/python/ray/rllib/policy/policy.py", line 622, in _initialize_loss_from_dummy_batch
    self.compute_actions_from_input_dict(input_dict, explore=False)
  File "/home/sangbeom/ray/python/ray/rllib/policy/torch_policy.py", line 207, in compute_actions_from_input_dict
    return self._compute_action_helper(input_dict, state_batches,
  File "/home/sangbeom/ray/python/ray/rllib/utils/threading.py", line 23, in wrapper
    raise AttributeError(
AttributeError: Object <ray.rllib.policy.policy_template.DreamerTorchPolicy object at 0x7efdd1c7e730> must have a `self._lock` property (assigned to a threading.Lock() object in its constructor)!
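The first traceback is the informative one: torch_policy.py calls logp.float(), which assumes logp is a torch.Tensor, and a plain Python list has no .float() method. A minimal standalone sketch of that failure mode (the values are invented for illustration, not taken from Dreamer):

import torch

logp = torch.tensor([-0.5, -1.2])
print(torch.exp(logp.float()))  # fine: logp is a Tensor

logp = [-0.5, -1.2]  # but if the policy hands back a plain Python list...
try:
    torch.exp(logp.float())
except AttributeError as e:
    print(e)  # 'list' object has no attribute 'float'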

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 7

Top GitHub Comments

2 reactions
ian-cannon commented, Mar 11, 2021

I am seeing the same issue with a different system configuration.

What is the problem?

  • Ray version: 1.2.0
  • Python version: 3.7.10
  • OS: Ubuntu 18.04
  • PyTorch: 1.7.0

Reproduction (REQUIRED)

rllib train -f dreamer/dreamer-deepmind-control.yaml

AttributeError: Object <ray.rllib.policy.policy_template.DreamerTorchPolicy object at 0x7f87d82a0b50> must have a `self._lock` property (assigned to a threading.Lock() object in its constructor)!
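This `_lock` error is probably a red herring. In the original traceback it is raised while handling the 'list' object has no attribute 'float' error, which suggests the with_lock wrapper in rllib/utils/threading.py converts any AttributeError raised inside the wrapped method into the `_lock` message, even when the lock exists. A simplified reconstruction of that pattern (an assumption inferred from the traceback, not RLlib's actual source):

import threading
from functools import wraps

def with_lock(func):
    # Assumed shape of the decorator in rllib/utils/threading.py,
    # inferred from the traceback above; simplified, not the real code.
    @wraps(func)
    def wrapper(self, *a, **k):
        try:
            with self._lock:
                return func(self, *a, **k)
        except AttributeError:
            raise AttributeError(
                "Object {} must have a `self._lock` property (assigned to a "
                "threading.Lock() object in its constructor)!".format(self))
    return wrapper

class DummyPolicy:
    def __init__(self):
        self._lock = threading.Lock()  # the lock IS present here

    @with_lock
    def compute(self):
        return [].float()  # the real bug: AttributeError on a list

try:
    DummyPolicy().compute()
except AttributeError as e:
    print(e)  # prints the misleading `_lock` message, not the `.float()` one

If that reading is right, fixing the underlying list-vs-tensor bug should make the `_lock` message disappear as well.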

1 reaction
ian-cannon commented, Mar 24, 2021

It actually causes more problems even without changing the algorithm. The Model's observe function seems to expect the state and action tensors to have more dimensions than they actually do:

embed = embed.permute(1, 0, 2)
action = action.permute(1, 0, 2)

when both have only 2 dimensions. Changing this to permute(1, 0) allows it to continue for a time, but does not remedy the problem either, as it then tries to cat prev_state[2] with prev_action, which also have different numbers of dimensions. I think something is messed up farther up the pipeline.
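A standalone sketch of both mismatches described above (the shapes are invented for illustration; prev_state_2 stands in for prev_state[2]):

import torch

embed = torch.zeros(4, 8)  # 2-D, but permute(1, 0, 2) expects a 3-D tensor
try:
    embed.permute(1, 0, 2)
except RuntimeError as e:
    print(e)  # number of dims don't match in permute

embed.permute(1, 0)  # the workaround mentioned above: swap the two dims

# the follow-on failure: torch.cat needs tensors with the same number of dims
prev_state_2 = torch.zeros(4, 1, 8)  # 3-D
prev_action = torch.zeros(4, 8)      # 2-D
try:
    torch.cat([prev_state_2, prev_action], dim=-1)
except RuntimeError as e:
    print(e)  # Tensors must have same number of dimensions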

Read more comments on GitHub.

Top Results From Across the Web

Algorithms — Ray 2.2.0 - the Ray documentation
RLlib's multi-GPU optimizer pins that data in GPU memory to avoid unnecessary transfers from host memory, substantially improving performance over a naive ...

Issue creating custom action mask environment - RLlib - Ray
Getting this to work with a more complex output (e.g., if the action ... OrderedDict' object has no attribute 'shape' Process finished with...

RLlib: Industry-Grade Reinforcement Learning — Ray 2.2.0
RLlib does not automatically install a deep-learning framework, but supports TensorFlow (both 1.x with static-graph and 2.x with eager mode) as well as ...

Trainer.compute_action Error with Dict type observation inputs
Action Masking with RLlib. ... self.high) AttributeError: 'dict' object has no attribute 'shape' During handling of the above exception, ...

RLlib: using evaluation workers on previously trained models
You have a checkpoint from that training, and want to run various ... AttributeError: 'PPO' object has no attribute 'evaluation_workers'.
