
[local mode] Actors are not handled correctly

See original GitHub issue

The script below fails with the following traceback:

Traceback (most recent call last):
  File "/Users/rliaw/Research/riselab/ray/doc/examples/parameter_server/failure.py", line 35, in <module>
    accuracies = run_sync_parameter_server()
  File "/Users/rliaw/Research/riselab/ray/doc/examples/parameter_server/failure.py", line 32, in run_sync_parameter_server
    current_weights = ps.get_weights.remote()
  File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 148, in remote
    return self._remote(args, kwargs)
  File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 169, in _remote
    return invocation(args, kwargs)
  File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 163, in invocation
    num_return_vals=num_return_vals)
  File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 588, in _actor_method_call
    function = getattr(worker.actors[self._ray_actor_id], method_name)
AttributeError: 'DataWorker' object has no attribute 'get_weights'

Reproduction script:

import ray

@ray.remote
class ParameterServer(object):
    def __init__(self, learning_rate):
        pass

    def apply_gradients(self, *gradients):
        pass

    def get_weights(self):
        pass

@ray.remote
class DataWorker(object):
    def __init__(self):
        pass

    def compute_gradient_on_batch(self, data, target):
        pass

    def compute_gradients(self, weights):
        pass


def run_sync_parameter_server():
    iterations = 50
    num_workers = 2
    ps = ParameterServer.remote(1e-4 * num_workers)
    # Create workers.
    workers = [DataWorker.remote() for i in range(num_workers)]
    current_weights = ps.get_weights.remote()

ray.init(ignore_reinit_error=True, local_mode=True)
accuracies = run_sync_parameter_server()
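For context on what the last traceback frame is doing: in local mode, actor method calls are dispatched in-process by looking up the live actor instance in a worker-local table (`worker.actors` in the frame above) and fetching the method with `getattr`. If that table maps the handle's actor ID to the wrong instance — here, a `DataWorker` where the `ParameterServer` was expected — the lookup raises exactly this `AttributeError`. A minimal stdlib-only sketch of that failure mode (hypothetical names, not Ray's actual implementation):

```python
class ParameterServer:
    def get_weights(self):
        return "weights"

class DataWorker:
    def compute_gradients(self, weights):
        return "grads"

# Worker-local registry mapping actor IDs to live instances,
# analogous to `worker.actors` in the traceback above.
actors = {}

def create_actor(actor_id, cls):
    actors[actor_id] = cls()

def call_actor_method(actor_id, method_name, *args):
    # Local-mode-style dispatch: fetch the instance, then getattr the method.
    method = getattr(actors[actor_id], method_name)
    return method(*args)

create_actor("ps", ParameterServer)
create_actor("worker-0", DataWorker)

# A correct ID-to-instance mapping works:
print(call_actor_method("ps", "get_weights"))  # -> weights

# A mismatched handle ID reproduces the reported error:
try:
    call_actor_method("worker-0", "get_weights")
except AttributeError as e:
    print(e)  # 'DataWorker' object has no attribute 'get_weights'
```

This is only an illustration of why the error message names the wrong class; the actual bug was in how local mode tracked actor handles.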

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 10 (5 by maintainers)

Top GitHub Comments

1 reaction
davidcotton commented, Oct 3, 2019

I’m a bit late, as I can see a PR is already in progress, but I was able to work around this local-mode issue by downgrading to Ray 0.7.3; it seems to have been introduced in 0.7.4.
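If you want to apply the downgrade the comment describes while waiting for the fix, the pin can be installed as follows (assuming a pip-managed environment; adjust for conda or other tooling):

```shell
pip install "ray==0.7.3"
```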

1 reaction
mawright commented, Sep 17, 2019

I get a similar error in this test case.

import ray
from ray import tune
config = {"env": "CartPole-v1"}
ray.init(local_mode=True)
tune.run("PPO", config=config)

This fails with:

Traceback (most recent call last):
  File "/home/matt/Code/ray/python/ray/tune/trial_runner.py", line 506, in _process_trial
    result = self.trial_executor.fetch_result(trial)
  File "/home/matt/Code/ray/python/ray/tune/ray_trial_executor.py", line 347, in fetch_result
    result = ray.get(trial_future[0])
  File "/home/matt/Code/ray/python/ray/worker.py", line 2349, in get
    raise value
ray.exceptions.RayTaskError: python test.py (pid=32468, host=Rocko2)
  File "/home/matt/Code/ray/python/ray/local_mode_manager.py", line 55, in execute
    results = function(*copy.deepcopy(args))
  File "/home/matt/Code/ray/python/ray/rllib/agents/trainer.py", line 395, in train
    w.set_global_vars.remote(self.global_vars)
  File "/home/matt/Code/ray/python/ray/actor.py", line 148, in remote
    return self._remote(args, kwargs)
  File "/home/matt/Code/ray/python/ray/actor.py", line 169, in _remote
    return invocation(args, kwargs)
  File "/home/matt/Code/ray/python/ray/actor.py", line 163, in invocation
    num_return_vals=num_return_vals)
  File "/home/matt/Code/ray/python/ray/actor.py", line 588, in _actor_method_call
    function = getattr(worker.actors[self._ray_actor_id], method_name)
AttributeError: 'PPO' object has no attribute 'set_global_vars'