
GPT-J-6B endpoint

See original GitHub issue

I’m kicking off a TRC (TPU Research Cloud) subscription to see how far we can get with a custom GPT-J endpoint.

https://github.com/kingoflolz/mesh-transformer-jax

[image attached to the original issue]

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

1 reaction
cirecrux commented, Sep 21, 2021

Just going to leave this comment here for anyone with the same problem:

RuntimeError: Resource exhausted: Failed to allocate request for 32.00MiB (33554432B) on device ordinal 0: while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).

Assuming that you’re following the fine-tuning guide from mesh-transformer-jax like I am, the problem is that the TPU has to be set up as a v3-8, not a v2-8.

I wasted a whole day trying to figure out what I’d gotten wrong, since I only know enough about code to copy-paste error messages into the search bar. The author of the issue (avaer) is a genius for somehow getting it to work on a v2-8. Anyways, good luck out there.
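A rough back-of-the-envelope check (my own numbers, not from the thread) shows why the smaller TPU hits the allocation error: a TPU v2 core has 8 GiB of HBM versus 16 GiB on a v3 core, and GPT-J’s weights alone blow past the v2 budget even in fp16:

// Back-of-the-envelope memory estimate (assumptions: fp16 weights only,
// ignoring activations, optimizer state, and XLA scratch space).
const params = 6e9;        // GPT-J parameter count, ~6 billion
const bytesPerParam = 2;   // fp16 = 2 bytes per parameter
const weightGiB = (params * bytesPerParam) / 2 ** 30;
console.log(weightGiB.toFixed(1)); // ~11.2 GiB just for the raw weights

So even before any activations or optimizer state, ~11 GiB of weights has to be sharded across cores with only 8 GiB of HBM each on a v2, which is exactly where the "Resource exhausted" allocation fails.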

0 reactions
lalalune commented, May 24, 2022

Hosted inference on Hugging Face is a snap; pin the model to keep it running hot.

https://huggingface.co/bigscience/T0pp

https://huggingface.co/EleutherAI/gpt-j-6B

async function query(data) {
	// NOTE: replace the masked token below with your own Hugging Face API token.
	const response = await fetch(
		"https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B",
		{
			headers: { Authorization: "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" },
			method: "POST",
			body: JSON.stringify(data),
		}
	);
	// The API returns JSON: generated text on success, an error body otherwise.
	const result = await response.json();
	return result;
}

query({ "inputs": "Can you please let us know more details about your " }).then((response) => {
	console.log(JSON.stringify(response));
});
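One caveat with the hosted API: while a large model like GPT-J is still loading, it responds with an error body carrying an estimated_time field rather than generated text. A minimal sketch of how you might detect that and retry (shouldRetry is a hypothetical helper name of mine, not part of the Hugging Face API):

// Hedged sketch: decide whether an Inference API response body means
// "model still loading, try again later" rather than a real failure.
function shouldRetry(result) {
	// While loading, the API returns a body shaped roughly like
	// { "error": "Model ... is currently loading", "estimated_time": 20.0 }
	if (result && result.error && typeof result.estimated_time === "number") {
		return { retry: true, waitMs: Math.ceil(result.estimated_time * 1000) };
	}
	return { retry: false, waitMs: 0 };
}

Wrapped around query(), this lets you poll until the model is warm instead of treating the cold-start response as a hard failure.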
Read more comments on GitHub >

Top Results From Across the Web

  • Deploy GPT-J 6B for inference using Hugging Face ...
    Create model.tar.gz for the Amazon SageMaker real-time endpoint. Since we can load our model quickly and run inference on it let's deploy it ...
  • How to Build Your Own GPT-J Playground | by Heiko Hotz
    Creating a web interface. Setting up EC2. Once the model is deployed on a SageMaker endpoint we can run inference request right there ...
  • AWS Marketplace: GPT-J 6B (GPT-3) | Text Generation
    For model deployment as Real-time endpoint in Amazon SageMaker, the software is priced based on hourly pricing that can vary by instance type ...
  • Forefront: Powerful NLP Tools A Click Away
    Use the standard GPT-J-6B model by EleutherAI or fine-tune on your dataset. ... Instantly use any of your models with an HTTP endpoint ...
  • How to use GPT-3, GPT-J and GPT-NeoX, with few-shot learning
    Below, we're showing you examples obtained using the GPT-J endpoint of NLP Cloud on GPU, with the Python client. If you want to ...
