
Is SeeKeR in interactive chat mode supposed to use the GPU?

See original GitHub issue

Hi, I just installed SeeKeR in a new environment for ParlAI v1.6.0 and have been testing its responses to interactive chat queries. It seems pretty slow, and I noticed that GPU memory usage does not appear to change when I run it. I am using this command…

parlai i -mf zoo:seeker/seeker_dialogue_3B/model -o gen/seeker_dialogue --search-server 127.0.0.1:8080

where the search server I am running is the same one that I use for blenderbot2. Is there a command option that needs to be set to use the GPU? I am running under Windows 10. Thanks!
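
For anyone debugging the same symptom, a minimal check (independent of ParlAI) of whether the PyTorch build in the environment can see a GPU at all; if it can't, the model will stay on the CPU no matter which command options are set:

import torch
print(torch.__version__)          # which torch build is installed
print(torch.cuda.is_available())  # False means everything runs on the CPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))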

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 11 (6 by maintainers)

Top GitHub Comments

1 reaction
sjscotti commented, Aug 17, 2022

That fixed it!
But I need to note that on Windows you can't just do a standard pip install parlai.

I needed to make a local copy of the requirements.txt file (I called it requirements-local.txt) in which I commented out the sh==1.12.14 package to avoid an error (since sh is Linux-only) and changed pyzmq==18.1.0 to pyzmq==18.1.1 to avoid an invalid wheel error. Then it is installed as

pip install --no-deps -r requirements-local.txt parlai
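
For reference, the relevant edits in requirements-local.txt looked roughly like this (exact pins may differ between ParlAI releases):

# sh==1.12.14     # commented out, since sh is Linux-only
pyzmq==18.1.1     # bumped from 18.1.0 to avoid the invalid wheel error on Windows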

But there were still missing packages that I needed to install…

pip install iopath
pip install charset-normalizer
pip install idna
pip install certifi
pip install packaging

before the parlai command would work. The parlai install did pull in torch, but no cudatoolkit, so to be safe I also reinstalled PyTorch with the toolkit using the command from the PyTorch website.

conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge

For good measure, I also installed cudnn, but I don’t know if it is needed.
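
A quick way to confirm that the CUDA-enabled build (and cuDNN) actually took after the conda reinstall, as a minimal sketch:

import torch
print(torch.version.cuda)                   # e.g. 11.6; None would mean a CPU-only build
print(torch.backends.cudnn.is_available())  # True if cuDNN was picked up
print(torch.ones(1, device="cuda").device)  # raises an error if there is no usable GPU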

0 reactions
klshuster commented, Aug 18, 2022

Glad that worked, and thank you for the Windows install instructions! I'll go ahead and close this for now, but please reopen if there are lingering concerns.
