How to use model.save() in tf2 when using TFBertModel
See original GitHub issue
tensorflow==2.3.1
transformers==4.2.1
My model is defined as:
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras import backend as K  # needed for K.reshape below
from tensorflow.keras.layers import *
from transformers import TFAutoModel
input_ids = Input(shape=(3000,), name='INPUT_input_ids', dtype=tf.int32)
input_mask = Input(shape=(3000,), name='INPUT_input_mask', dtype=tf.int32)
segment_ids = Input(shape=(3000,), name='INPUT_segment_ids', dtype=tf.int32)
passage_mask = Input(shape=(10,), name='INPUT_passage_mask', dtype=tf.int32)
# Split each 3000-token sequence into 10 passages of 300 tokens.
input_ids_reshape = K.reshape(input_ids, (-1, 300))
input_mask_reshape = K.reshape(input_mask, (-1, 300))
segment_ids_reshape = K.reshape(segment_ids, (-1, 300))
transformer = TFAutoModel.from_pretrained('hfl/chinese-roberta-wwm-ext', from_pt=False)
transformer_output = transformer([input_ids_reshape, input_mask_reshape, segment_ids_reshape])[0]
......
model = Model(
inputs = [input_ids, input_mask, segment_ids, passage_mask],
outputs = [start_prob, end_prob]
)
I try to save the model this way:
model.save(path)
but I get this error:
/lib/python3.6/site-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
364 output[tensor_name] = input
365 else:
--> 366 output[parameter_names[i]] = input
367 elif isinstance(input, allowed_types) or input is None:
368 output[parameter_names[i]] = input
IndexError: list index out of range
model.predict() and model.save_weights() both work.
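Since model.save_weights() works, one fallback is to checkpoint only the weights and rebuild the same graph before loading them. A minimal sketch, assuming the model construction is factored into a reusable function; build_model() below is a hypothetical helper, not part of the original code:

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Input
from transformers import TFAutoModel

def build_model():
    # Hypothetical helper: rebuilds the same architecture each time it is called.
    input_ids = Input(shape=(300,), dtype=tf.int32, name='input_ids')
    input_mask = Input(shape=(300,), dtype=tf.int32, name='input_mask')
    transformer = TFAutoModel.from_pretrained('hfl/chinese-roberta-wwm-ext')
    sequence_output = transformer(input_ids, attention_mask=input_mask)[0]
    start_prob = Dense(1, activation='sigmoid', name='start_prob')(sequence_output)
    return Model(inputs=[input_ids, input_mask], outputs=start_prob)

model = build_model()
model.save_weights('roberta_qa.ckpt')     # checkpoints only the weights (works per the report)

restored = build_model()                   # recreate the identical graph
restored.load_weights('roberta_qa.ckpt')   # then restore the trained weights

The trade-off is that the architecture itself is not serialized, so the exact build code has to be available wherever the weights are loaded.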
How do I use model.save() with huggingface-transformers? Or, how should I write the model with huggingface-transformers? I just want to use a transformer as a Keras layer in my model.
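One change that has been reported to help with this kind of input_processing error is to call the transformer with a dict of named tensors instead of a positional list, so inputs are matched by name rather than by index. A sketch only, not verified against tensorflow 2.3.1 / transformers 4.2.1:

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Input
from transformers import TFAutoModel

input_ids = Input(shape=(300,), dtype=tf.int32, name='input_ids')
input_mask = Input(shape=(300,), dtype=tf.int32, name='input_mask')
segment_ids = Input(shape=(300,), dtype=tf.int32, name='segment_ids')

transformer = TFAutoModel.from_pretrained('hfl/chinese-roberta-wwm-ext')

# Pass a single dict keyed by parameter name instead of a positional list,
# so input_processing receives named inputs and does not index parameter_names.
sequence_output = transformer({
    'input_ids': input_ids,
    'attention_mask': input_mask,
    'token_type_ids': segment_ids,
})[0]

start_prob = Dense(1, activation='sigmoid', name='start_prob')(sequence_output)

model = Model(inputs=[input_ids, input_mask, segment_ids], outputs=start_prob)
model.save('saved_model_dir')  # SavedModel format; reload with tf.keras.models.load_model

If load_model() later complains about unknown layers, the transformer class may need to be passed via custom_objects.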
Issue Analytics
- Created 2 years ago
- Comments: 5 (1 by maintainers)
I have the same problem. How can I fix it?
Hi, did you fix this problem? If so, how? Thanks.