Can't load model estimator after training
I was trying to follow the SageMaker instructions here to load the model I just trained and run a test prediction. I get the error message: NotImplementedError: Creating model with HuggingFace training job is not supported. Can someone share some sample code for doing this? Here is the basic thing I am trying to do:
from sagemaker.estimator import Estimator

# name of the finished training job to attach to the estimator
old_training_job_name = 'huggingface-sdk-extension-2021-04-02-19-10-00-242'

# attach the old training job
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)

# get the model artifact's S3 location from the training job
testModel = huggingface_estimator_loaded.model_data

# deploy the attached estimator as an endpoint
ner_classifier = huggingface_estimator_loaded.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
I also tried some things with .deploy() and endpoints but didn’t have any luck there either.
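For reference, the approach that generally works with a recent SageMaker Python SDK (one that ships the Hugging Face inference containers) is to wrap the training job's model artifact in a HuggingFaceModel and deploy that, rather than calling deploy() on the attached generic Estimator. This is only a minimal sketch: the container versions and the sample payload are placeholders, and you should match the versions used for training.

from sagemaker.estimator import Estimator
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

# attach the finished job just to look up where its model artifact landed in S3
old_training_job_name = 'huggingface-sdk-extension-2021-04-02-19-10-00-242'
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)

# wrap the artifact in a HuggingFaceModel instead of deploying the attached estimator
huggingface_model = HuggingFaceModel(
    model_data=huggingface_estimator_loaded.model_data,
    role=sagemaker.get_execution_role(),
    transformers_version='4.6',  # placeholder versions -- use the ones your training container ran
    pytorch_version='1.7',
    py_version='py36',
)

# deploy the model and run a quick test prediction (sample input is a placeholder)
ner_classifier = huggingface_model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
print(ner_classifier.predict({'inputs': 'My name is Sarah and I live in London'}))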
Issue Analytics
- Created 2 years ago
- Comments: 9 (2 by maintainers)
Top Results From Across the Web
- can't predict after saving then loading model · Issue #96 - GitHub
  I examine a recommendation model based on tfrs; after that, I fit, predict and save the model OK, but when loading the model with tf.keras.models.load_model, ...
- Loading a trained Keras model and continue training
  I was wondering if it was possible to save a partly trained Keras model and continue the training after loading the model again...
- How to Save and Load Your Keras Deep Learning Model
  In this post, you will discover how to save your Keras models to files and load them up again to make predictions...
- Save and load models | TensorFlow Core
  Model progress can be saved during and after training. This means a model can resume where it left off and avoid long training...
- How to load a partially trained deep learning model ... - YouTube
  Code generated in the video can be downloaded from here: https://github.com/bnsreenu/python_for_microscopists.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hey @gwc4github, you would have to implement a model-loading and inference handler for this to get set up within a SageMaker endpoint. Would you mind sharing the framework (TF/PyTorch), version, and CPU/GPU for your use case? I can send you the recipe for writing a model_handler after that.
Here is how it will look from within a SageMaker endpoint:
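The snippet originally posted with this comment was not preserved in this copy of the issue. As a rough sketch only, a custom handler for a PyTorch token-classification model usually looks something like the following: an inference.py following the model_fn/predict_fn conventions of the SageMaker inference toolkit. The file name, function bodies, and payload shape are illustrative assumptions, not code taken from this thread.

# inference.py -- rough sketch of a model-loading and inference handler,
# assuming a token-classification model saved with save_pretrained()
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline


def model_fn(model_dir):
    """Called once per worker to load the model from the unpacked model.tar.gz."""
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForTokenClassification.from_pretrained(model_dir)
    return pipeline('ner', model=model, tokenizer=tokenizer)


def predict_fn(data, ner_pipeline):
    """Run NER on the request payload, e.g. {"inputs": "some text"}."""
    text = data.get('inputs', data) if isinstance(data, dict) else data
    entities = ner_pipeline(text)
    # cast numpy floats to plain Python floats so the default JSON serializer can handle them
    return [{**entity, 'score': float(entity['score'])} for entity in entities]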
Thanks for the update Philipp! I’ll take a look!