
Can't load model estimator after training


I was trying to follow the SageMaker instructions here to load the model I just trained and test a prediction, but I get the error NotImplementedError: Creating model with HuggingFace training job is not supported. Can someone share some sample code for doing this? Here is the basic thing I am trying to do:

from sagemaker.estimator import Estimator

# name of the finished training job to re-attach
old_training_job_name = 'huggingface-sdk-extension-2021-04-02-19-10-00-242'

# attach the old training job to a new estimator
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)

# S3 URI of the model artifact produced by the training job
testModel = huggingface_estimator_loaded.model_data

# deploy the re-attached estimator to a real-time endpoint
ner_classifier = huggingface_estimator_loaded.deploy(initial_instance_count=1,
                                                     instance_type='ml.m4.xlarge')

I also tried some things with .deploy() and endpoints but didn’t have any luck there either.
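A workaround often suggested for this error (an assumption on my part, not confirmed in this thread) is to skip deploying the re-attached Estimator entirely and instead wrap its S3 model artifact in a `sagemaker.huggingface.HuggingFaceModel`, which knows how to build a HuggingFace inference container. The IAM role, container versions, and the `model_artifact_uri` helper below are placeholders:

```python
# Hedged sketch: deploy a finished HuggingFace training job by wrapping its
# S3 artifact in a HuggingFaceModel instead of calling deploy() on the
# re-attached Estimator (which raises NotImplementedError).

def model_artifact_uri(estimator):
    # Stand-in for huggingface_estimator_loaded.model_data: any object with a
    # .model_data attribute works, so the flow can be followed without AWS.
    return estimator.model_data

if __name__ == "__main__":
    from sagemaker.estimator import Estimator
    from sagemaker.huggingface import HuggingFaceModel

    est = Estimator.attach("huggingface-sdk-extension-2021-04-02-19-10-00-242")
    model = HuggingFaceModel(
        model_data=model_artifact_uri(est),
        role="my-sagemaker-role",      # placeholder IAM role
        transformers_version="4.6",    # match the versions used for training
        pytorch_version="1.7",
        py_version="py36",
    )
    predictor = model.deploy(initial_instance_count=1,
                             instance_type="ml.m4.xlarge")
```

The AWS-side calls are kept behind the `__main__` guard since they require live SageMaker credentials; only the helper is directly exercisable offline.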

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9 (2 by maintainers)

Top GitHub Comments

1 reaction
C24IO commented, Apr 8, 2021

Hey @gwc4github, you would have to implement a model-loading and inference handler for this to get set up within a SageMaker endpoint. Would you mind sharing the framework (TF/PyTorch), version, and CPU/GPU for your use case? I can send you the recipe for writing a model_handler after that.

Here is how it will look from within the SageMaker endpoint:

    from transformers import RobertaModel, RobertaTokenizerFast

    def initialize(self, ctx):
        self.manifest = ctx.manifest
        properties = ctx.system_properties
        self.device = 'cpu'
        model_dir = properties.get('model_dir')

        # load the fine-tuned model and tokenizer from the model directory
        self.model = RobertaModel.from_pretrained(model_dir)
        self.tokenizer = RobertaTokenizerFast.from_pretrained(model_dir)
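The snippet above only covers loading; the inference side of such a handler usually follows the TorchServe custom-handler convention of preprocess/inference/postprocess. The sketch below is an assumption about what that recipe might look like, not code from this thread; the `ModelHandler` name is hypothetical, and the model is injected as a plain callable so the request flow is visible without a real Roberta model:

```python
import json

class ModelHandler:
    """Hypothetical TorchServe-style handler. In a real endpoint,
    initialize(ctx) would load the model/tokenizer as shown above."""

    def __init__(self, model=None):
        # Injected callable standing in for tokenizer(...) + model(...).
        self.model = model

    def preprocess(self, request):
        # TorchServe hands handle() a batch of {"body": <bytes>} records.
        return [json.loads(r["body"])["inputs"] for r in request]

    def inference(self, texts):
        # Placeholder for the actual forward pass.
        return [self.model(t) for t in texts]

    def postprocess(self, outputs):
        return [json.dumps({"prediction": o}) for o in outputs]

    def handle(self, request, ctx=None):
        return self.postprocess(self.inference(self.preprocess(request)))
```

The three-stage split mirrors how TorchServe's default handlers are organized, which makes each stage easy to override independently.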
0 reactions
gwc4github commented, Jul 12, 2021

Thanks for the update Philipp! I’ll take a look!


Top Results From Across the Web

can't predict after saving then loading model · Issue #96 - GitHub
I examine a recommendation model based on tfrs; after that, I fit, predict, and save the model OK, but when loading the model with tf.keras.models.load_model, ...

Loading a trained Keras model and continue training
I was wondering if it was possible to save a partly trained Keras model and continue the training after loading the model again. ...

How to Save and Load Your Keras Deep Learning Model
In this post, you will discover how to save your Keras models to files and load them up again to make predictions. ...

Save and load models | TensorFlow Core
Model progress can be saved during and after training. This means a model can resume where it left off and avoid long training ...

How to load a partially trained deep learning model ... - YouTube
Code generated in the video can be downloaded from here: https://github.com/bnsreenu/python_for_microscopists.
