Cuda appears to be in use even when setting gpu=False
I noticed this because I was already using up 95% of my GPU memory for another task. Then, when I ran reader.readtext, I got a RuntimeError: cuda runtime error (2) : out of memory.

This is how I create the reader:

self.reader = easyocr.Reader(['en'], gpu=False)

and I can confirm that I get the following warning:

Using CPU. Note: This module is much faster with a GPU.
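For reference, a minimal sketch of the setup described above; the image path 'example.jpg' is just a placeholder:

import easyocr

# Reader is created with gpu=False and prints the "Using CPU" warning,
# yet inference still ended up allocating GPU memory for the reporter.
reader = easyocr.Reader(['en'], gpu=False)
result = reader.readtext('example.jpg')
print(result)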
thanks for the report, we'll investigate this issue.

Just to add to this: in my case, when using the GPU, nvidia-smi reports 810MiB after loading the model. This amount of course increases during inference and depends on the image size and orientation, although I think you can bound it by setting canvas_size in the readtext() calls.

When setting gpu=False, things seem fine after loading the model, but once you run inference on an image the process takes 671MiB of VRAM. The only way I found to actually stop this from happening is to hide the GPU by setting the environment variable CUDA_VISIBLE_DEVICES to an empty string, e.g., os.environ['CUDA_VISIBLE_DEVICES'] = ''.

Hope this helps.
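For anyone hitting the same thing, a minimal sketch of that workaround, assuming the variable is set before CUDA is initialized (simplest is to set it before importing easyocr, since easyocr pulls in torch); the image path is again a placeholder:

import os

# Hide all GPUs from the CUDA runtime. This must happen before CUDA is
# initialized, so set it before importing easyocr (which imports torch).
os.environ['CUDA_VISIBLE_DEVICES'] = ''

import easyocr

reader = easyocr.Reader(['en'], gpu=False)
result = reader.readtext('example.jpg')  # no VRAM should be allocated now

With the GPU hidden this way, torch.cuda.is_available() returns False and inference stays entirely on the CPU.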