Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might look while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

[Inference] Support GPT-J-6B

See original GitHub issue

Is your feature request related to a problem? Please describe.
With the new release of transformers, the GPT-J-6B model will be available to the public: https://github.com/huggingface/transformers/pull/13022

Currently,

import os
import deepspeed
import torch
from transformers import pipeline

local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))

print(local_rank)
print(world_size)

# (Illustrative: the issue does not show how `pipeline` was built.)
pipeline = pipeline('text-generation', model='EleutherAI/gpt-j-6B', device=local_rank)

# Wrap the underlying model with DeepSpeed-Inference.
pipeline.model = deepspeed.init_inference(
    pipeline.model,
    mp_size=1,               # model-parallel degree
    dtype=torch.float16,
    replace_method='auto',   # let DeepSpeed detect which modules to replace
)

prints only

0
1
[2021-08-28 15:05:58,839] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.4.4, git-hash=unknown, git-branch=unknown
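The snippet relies on the `LOCAL_RANK` and `WORLD_SIZE` environment variables that distributed launchers such as `deepspeed` or `torchrun` export for each worker process; run without a launcher, it falls back to a single-process default, which is why `0` and `1` are printed above. A minimal, dependency-free sketch of that convention (the helper name is my own, not part of DeepSpeed):

```python
import os

def dist_env(env=None):
    """Read launcher-provided rank/world-size, defaulting to a single process."""
    env = os.environ if env is None else env
    local_rank = int(env.get('LOCAL_RANK', '0'))
    world_size = int(env.get('WORLD_SIZE', '1'))
    return local_rank, world_size

# No launcher: single-process defaults, matching the 0 / 1 printed above.
print(dist_env({}))  # (0, 1)

# Under e.g. `deepspeed --num_gpus 2`, each worker sees something like:
print(dist_env({'LOCAL_RANK': '1', 'WORLD_SIZE': '2'}))  # (1, 2)
```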

DeepSpeed already supports the smaller GPT-Neo variants, so adding GPT-J-6B would be a natural next step.

Additional context
If there is anything I could do (e.g., create a PR) with some guidance, I'd be happy to work on the issue and contribute as well.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 37 (14 by maintainers)

Top GitHub Comments

6 reactions
RezaYazdaniAminabadi commented, Nov 22, 2021

Hi @joehoover,

I am going to be more focused on this through next week. I would say it is ready by early December. Thanks, Reza

4 reactions
yovizzle commented, Sep 30, 2021

I’m also very interested in this one.


Top Results From Across the Web

  • Deploy GPT-J 6B for inference using Hugging Face ... — In this blog post, you will learn how to easily deploy GPT-J using Amazon SageMaker and the Hugging Face Inference Toolkit with a...
  • [Inference] Support GPT-J-6B · Issue #245 · bytedance/lightseq — I want to use LightSeq to speed up the inference of a large transformer model GPT-J-6B, which has been available for the public:...
  • Inference with GPT-J-6B.ipynb - Colaboratory - Google Colab — In this notebook, we are going to perform inference (i.e. generate new text) with EleutherAI's GPT-J-6B model, which is a 6 billion parameter...
  • Accelerate GPT-J inference with DeepSpeed-Inference on GPUs — Learn how to optimize GPT-J for GPU inference with a 1-line of code using Hugging ... We are going to optimize GPT-j 6B...
  • GPT-J-6B Inference Demo — First we download the model and install some dependencies. This step takes at least 5 minutes (possibly longer depending on server load). Make ...
