Multiple GPUs per node could fail silently with KubeflowEnvironment
🐛 Bug
If the user tries to submit a DDP job to a Kubeflow env with multiple GPUs per node by following the multi-GPU docs and passing the right args (`num_nodes` and `devices`), one of the following happens:

1. `WORLD_SIZE` and `RANK` are set to the total number of processes -> the job gets stuck because `creates_processes_externally=True` doesn't let DDP launch the other processes.
2. `WORLD_SIZE` and `RANK` are set to the total number of nodes -> the job starts with only local rank 0 of each node participating in distributed training. The major issue here, apart from the idle GPUs, is that `DDPStrategy` still works correctly and passes the right number of replicas to the distributed sampler:
```python
...
self.cluster_environment.set_global_rank(self.node_rank * self.num_processes + self.local_rank)
self.cluster_environment.set_world_size(self.num_nodes * self.num_processes)
```
So each local rank 0 GPU gets 1/num_processes of the data, on the assumption that the other (idle) GPUs are processing the rest, while training actually runs only on the subset of the dataset assigned to local rank 0 of each node. The user is unaware of this, since they assume they passed `devices`/`gpus` and `num_nodes` to the Trainer correctly.
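To make the silent case visible, a quick check at train start helps. A rough sketch (assuming recent Trainer properties `num_nodes`/`num_devices`; adjust to your version):

```python
import os

import torch.distributed as dist
from pytorch_lightning.callbacks import Callback


class WorldSizeSanityCheck(Callback):
    """Fail loudly if fewer processes joined the group than the Trainer args imply."""

    def on_train_start(self, trainer, pl_module):
        env = {k: os.environ.get(k) for k in ("WORLD_SIZE", "RANK", "LOCAL_RANK", "NODE_RANK")}
        joined = dist.get_world_size() if dist.is_available() and dist.is_initialized() else 1
        expected = trainer.num_nodes * max(trainer.num_devices, 1)
        print(f"env vars: {env}, processes joined: {joined}, expected: {expected}")
        if joined != expected:
            raise RuntimeError(
                f"Only {joined} processes joined the process group but {expected} were requested; "
                "the remaining GPUs are idle and each rank only sees part of the dataset."
            )
```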
To Reproduce
N/A (it's how `KubeflowEnvironment` works)
Expected behavior
I'm not sure if this is the expected behavior. I am using Google Vertex AI, which runs Kubeflow under the hood. When a PyTorch Lightning job is submitted to Vertex, PyTorch Lightning automatically selects `KubeflowEnvironment` as the cluster environment.
Please let me know if the expectation is to have a separate cluster environment class for something like Vertex AI. I'd be happy to create a PR to add the new env. But the reasons why I decided to report this as a bug are:

- `KubeflowEnvironment` has two very specific requirements: a. nodes with a single GPU, and b. manual creation of the processes. Neither of these requirements is related to or enforced by Kubeflow. They are also not mentioned in the docs, so the user wouldn't know about them until they look at the code.
- The `detect` method of `KubeflowEnvironment` can be used for any Kubernetes env, and the rest of its methods basically implement a special case of `LightningEnvironment` where the user has to manually run the processes.
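For reference, the environment is picked automatically via `detect()`, but it can also be passed explicitly. A small sketch (attribute and argument names per recent PL versions, so treat the details as approximate):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.plugins.environments import LightningEnvironment

# Rely on auto-detection (on a PyTorchJob pod this resolves to KubeflowEnvironment):
trainer = Trainer(accelerator="gpu", devices=4, num_nodes=2, strategy="ddp")
print(type(trainer.strategy.cluster_environment).__name__)

# Or bypass detection and pass the cluster environment explicitly via `plugins`;
# note that LightningEnvironment derives node_rank from NODE_RANK/GROUP_RANK,
# which a PyTorchJob does not set by default, so this is not a drop-in fix.
trainer = Trainer(
    accelerator="gpu",
    devices=4,
    num_nodes=2,
    strategy="ddp",
    plugins=[LightningEnvironment()],
)
```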
cc @awaelchli
The PyTorchJob operator sets `WORLD_SIZE` to the total number of replicas by default (here and here), which is different from what torch and lightning expect. So `KubeflowEnvironment` should let `DDPStrategy` set global_rank/world_size and create processes externally if needed. Updating the following methods would be enough to make `KubeflowEnvironment` a generic env that's compatible with Trainer args and multi-GPU clusters:
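Roughly something like this (just a sketch, not a tested implementation; it subclasses `KubeflowEnvironment` and relies on the `ClusterEnvironment` interface methods `world_size`/`set_world_size`/`global_rank`/`set_global_rank`/`local_rank`/`node_rank` and the `creates_processes_externally` property):

```python
import os

from pytorch_lightning.plugins.environments import KubeflowEnvironment


class GenericKubeflowEnvironment(KubeflowEnvironment):
    """Hypothetical variant: let DDPStrategy own world_size/global_rank and spawn local processes."""

    def __init__(self) -> None:
        super().__init__()
        self._global_rank = 0
        self._world_size = 1

    @property
    def creates_processes_externally(self) -> bool:
        # Only treat processes as externally created when a launcher already
        # exported LOCAL_RANK; otherwise let the strategy spawn one per GPU.
        return "LOCAL_RANK" in os.environ

    def world_size(self) -> int:
        return self._world_size

    def set_world_size(self, size: int) -> None:
        # Accept num_nodes * num_processes from DDPStrategy instead of trusting
        # the WORLD_SIZE (= number of replicas) exported by the PyTorchJob operator.
        self._world_size = size

    def global_rank(self) -> int:
        return self._global_rank

    def set_global_rank(self, rank: int) -> None:
        self._global_rank = rank

    def local_rank(self) -> int:
        return int(os.environ.get("LOCAL_RANK", 0))

    def node_rank(self) -> int:
        # The PyTorchJob operator's RANK is the replica (node) index.
        return int(os.environ["RANK"])
```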
That said, this would make it very similar to `LightningEnvironment`. Not sure if that's a problem.

@neggert yes, `LOCAL_RANK` would be set for subprocesses spun up by PL. And what you said about the PyTorchJob's assumption makes sense; it's just that ideally the KubeflowEnv and LightningEnv should interpret `num_nodes` the same way.
@awaelchli I'll send a PR with the proposed changes soon. Thank you both!