F ./tensorflow/core/util/cuda_launch_config.h:127] Check failed: work_element_count > 0 (0 vs. 0)
See original GitHub issue

My computer has two GPUs. Everything runs normally if I don't add this line:

model = keras.utils.training_utils.multi_gpu_model(base_model, gpus=2)

but then only one GPU is used for computation. I don't understand what 'work_element_count > 0' means. Is it that I have not cleared the CUDA worker beforehand?
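The check fires when a CUDA kernel is launched with zero elements to process. With `multi_gpu_model`, each training batch is sliced into one sub-batch per GPU, so if a batch is smaller than the number of GPUs (a common case for the last, partial batch of an epoch), one slice can end up empty. A minimal sketch of that failure mode (`split_batch` is a hypothetical helper approximating the slicing; the real implementation uses Lambda layers inside the parallel model):

```python
def split_batch(batch_size, gpus):
    """Approximate how a batch is sliced across GPUs: each device
    gets a contiguous shard of roughly batch_size / gpus samples."""
    sizes = []
    for i in range(gpus):
        start = i * batch_size // gpus
        end = (i + 1) * batch_size // gpus
        sizes.append(end - start)
    return sizes

# A batch of 7 on 2 GPUs splits 3/4 -- both kernels have work to do.
print(split_batch(7, 2))  # [3, 4]

# A final partial batch of 1 sample on 2 GPUs leaves one shard empty,
# which is the zero-element kernel launch the check complains about.
print(split_batch(1, 2))  # [0, 1]
```

So the error is not about an uncleared CUDA worker; it is a shard of the input with zero elements reaching a GPU kernel.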
Issue Analytics
- State:
- Created 5 years ago
- Reactions:10
- Comments:16
Top Results From Across the Web
- Stack Overflow: Cuda Error Message: F ./tensorflow/core/util/cuda_launch_config.h:127] Check failed: work_element_count > 0 (0 vs. 0).
- GitHub: Check failed error in Keras distributed ... — in an attempt to train a simple Keras model in a distributed environment with TF 2.0 MultiWorkerMirroredStrategy, I encountered an error ...
- splunktool: f ./tensorflow/core/util/cuda_launch_config.h:127] check failed: work_element_count > 0 (0 vs. 0). Last Update: 2022-09-08 04:58 pm.

@mahaishou This issue was resolved when I upgraded tensorflow-gpu version to 1.9.0.
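A minimal sketch of the upgrade mentioned above (assuming a pip-managed environment; `tensorflow-gpu` is the separate GPU package name used by TensorFlow 1.x releases):

```shell
# Install the TensorFlow 1.x GPU build that resolved the issue for this user
pip install --upgrade tensorflow-gpu==1.9.0
```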
@qiuyinglin Control the amount of input per epoch. I use Keras to train the model, like this:

history = multi_model.fit(X_train, Y_train, batch_size=batch_size, epochs=1, validation_data=(X_test, Y_test))

X_train is my input data, so just control the length of X_train.
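One way to "control the length of X_train" is to drop trailing samples so every batch, including the last one, is full and divides cleanly across the GPUs. A minimal sketch (`trim_to_multiple` is a hypothetical helper; the array shapes are illustrative):

```python
import numpy as np

def trim_to_multiple(x, y, batch_size):
    """Drop trailing samples so every batch is full; an incomplete
    final batch can leave one GPU with a zero-element shard."""
    n = (len(x) // batch_size) * batch_size
    return x[:n], y[:n]

X_train = np.random.rand(1003, 8)
Y_train = np.random.rand(1003, 1)
X_train, Y_train = trim_to_multiple(X_train, Y_train, batch_size=32)
print(len(X_train))  # 992, a multiple of 32, so no partial batch remains
```

Alternatively, pick a batch size that divides the dataset length; either way, no kernel is launched with `work_element_count == 0`.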