
CUDA appears to be in use even when setting gpu=False

See original GitHub issue

I noticed this because I was already using 95% of my GPU memory for another task. Then, when I ran reader.readtext, I got RuntimeError: cuda runtime error (2) : out of memory.

This is how I create the reader:

self.reader = easyocr.Reader(['en'], gpu=False)

and I can confirm that I get the following warning:

Using CPU. Note: This module is much faster with a GPU.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 4
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

3 reactions
rkcosmos commented, Apr 16, 2021

Thanks for the report, we’ll investigate this issue.

0 reactions
jose-alanaai commented, Feb 24, 2022

Just to add to this: in my case, when using the GPU, nvidia-smi reports 810 MiB after loading the model. That amount increases during inference and depends on the image size and orientation, although I think you can bound it by setting canvas_size in the readtext() calls.
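A rough sketch of why canvas_size bounds memory (this is an illustration of the scaling idea, not EasyOCR's exact code): the working image is scaled down so its longer side does not exceed canvas_size, which caps the resolution, and hence the VRAM, used per inference.

```python
# Sketch (assumption, not EasyOCR internals): scale an image so that its
# longer side fits within canvas_size, preserving the aspect ratio.
def bounded_shape(width, height, canvas_size=2560):
    longest = max(width, height)
    if longest <= canvas_size:
        return width, height          # already within the bound
    # Integer scaling keeps the example exact and avoids float rounding.
    return width * canvas_size // longest, height * canvas_size // longest

print(bounded_shape(4000, 3000, canvas_size=1280))  # -> (1280, 960)
```

A 4000x3000 input capped at canvas_size=1280 is processed at 1280x960, roughly a tenth of the pixels, which is why a smaller canvas_size keeps inference memory bounded.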

When setting gpu=False, things seem fine after loading the model, but once you run inference on an image the process takes 671 MiB of VRAM. The only way I found to actually stop this from happening is to hide the GPU by setting the environment variable CUDA_VISIBLE_DEVICES to an empty string, e.g. os.environ['CUDA_VISIBLE_DEVICES'] = ''.
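A minimal sketch of that workaround. The easyocr lines are commented out and assume the package is installed; the key point is that the variable must be set before torch/easyocr is first imported, because CUDA may otherwise already be initialized.

```python
import os

# Hide every GPU from the CUDA runtime. This must happen BEFORE the first
# import of torch/easyocr; once CUDA has been initialized, changing the
# variable has no effect on the current process.
os.environ['CUDA_VISIBLE_DEVICES'] = ''

# import easyocr                              # assumes easyocr is installed
# reader = easyocr.Reader(['en'], gpu=False)  # now genuinely CPU-only
# result = reader.readtext('image.jpg')       # no VRAM should be touched
```

An alternative with the same effect is to export the variable in the shell before launching Python, e.g. CUDA_VISIBLE_DEVICES='' python script.py, which avoids any ordering concerns inside the script.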

Hope this helps.


