Bert Embedding still not importing and AttributeError: 'ModelCheckpoint' object has no attribute 'on_train_batch_begin' while training
Hi! Sorry, I can't reopen the last issue, so I'm opening a new one. Thank you for your previous answers, but the problem where, for some unknown reason, none of the BERT embeddings can be imported is still not solved. The error still arises:
folder = 'multi_cased_L-12_H-768_A-12'
download_url = '/home/karina/bert/multi_cased_L-12_H-768_A-12.zip'
print('Unpacking model...')
zip_path = '{}.zip'.format(folder)
!test -d $folder || (tar xvzf '/home/karina/bert/multi_cased_L-12_H-768_A-12.zip')
config_path = folder+'/bert_config.json'
checkpoint_path = folder+'/bert_model.ckpt'
vocab_path = folder+'/vocab.txt'
from kashgari.embeddings import BERTEmbedding
#embedding = BERTEmbedding('rubert_cased_L-12_H-768_A-12_v1', 200)
embedding = BERTEmbedding('multi_cased_L-12_H-768_A-12', 200)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-49-0fecd0409dd5> in <module>
1 from kashgari.embeddings import BERTEmbedding
2 #embedding = BERTEmbedding('rubert_cased_L-12_H-768_A-12_v1', 200)
----> 3 embedding = BERTEmbedding('multi_cased_L-12_H-768_A-12', 200)
/usr/local/lib/python3.7/site-packages/kashgari/embeddings/bert_embedding.py in __init__(self, model_folder, layer_nums, trainable, task, sequence_length, processor, from_saved_model)
71 embedding_size=0,
72 processor=processor,
---> 73 from_saved_model=from_saved_model)
74
75 self.processor.token_pad = '[PAD]'
/usr/local/lib/python3.7/site-packages/kashgari/embeddings/base_embedding.py in __init__(self, task, sequence_length, embedding_size, processor, from_saved_model)
75 self.processor = LabelingProcessor()
76 else:
---> 77 raise ValueError()
78 else:
79 self.processor = processor
ValueError:
Plus, a new error arose while fitting the model. Actually, it also arose when I tried to fit the custom model from the tutorial. I don't see anything I could have forgotten to code, and it looks like some problem with batches, even though I don't use them explicitly; it still arises.
Look:
model.fit(X_train, y_train, x_validate=X_test, y_validate=y_test, epochs = 20, callbacks=callbacks_list)
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input (InputLayer) [(None, 660)] 0
_________________________________________________________________
layer_embedding (Embedding) (None, 660, 100) 28400
_________________________________________________________________
lstm_1 (LSTM) (None, 660, 150) 150600
_________________________________________________________________
layer_dropout (Dropout) (None, 660, 150) 0
_________________________________________________________________
dense_1 (Dense) (None, 660, 281) 42431
_________________________________________________________________
activation_1 (Activation) (None, 660, 281) 0
=================================================================
Total params: 221,431
Trainable params: 221,431
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-46-6f7d63df610b> in <module>
----> 1 model.fit(X_train, y_train, x_validate=X_test, y_validate=y_test, epochs = 20, callbacks=callbacks_list)
/usr/local/lib/python3.7/site-packages/kashgari/tasks/base_model.py in fit(self, x_train, y_train, x_validate, y_validate, batch_size, epochs, callbacks, fit_kwargs)
285 validation_steps=validation_steps,
286 callbacks=callbacks,
--> 287 **fit_kwargs)
288
289 def fit_without_generator(self,
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1431 shuffle=shuffle,
1432 initial_epoch=initial_epoch,
-> 1433 steps_name='steps_per_epoch')
1434
1435 def evaluate_generator(self,
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_generator.py in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
258 # Callbacks batch begin.
259 batch_logs = {'batch': step, 'size': batch_size}
--> 260 callbacks._call_batch_hook(mode, 'begin', step, batch_logs)
261 progbar.on_batch_begin(step, batch_logs)
262
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
245 t_before_callbacks = time.time()
246 for callback in self.callbacks:
--> 247 batch_hook = getattr(callback, hook_name)
248 batch_hook(batch, logs)
249 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
AttributeError: 'ModelCheckpoint' object has no attribute 'on_train_batch_begin'
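For reference, this AttributeError usually means a ModelCheckpoint from the standalone keras package was passed to a tf.keras training loop (which kashgari uses internally): tf.keras callbacks implement on_train_batch_begin, while the standalone keras ones do not. Below is a minimal sketch of a callbacks_list built entirely from tf.keras; the checkpoint path is hypothetical, since the original callbacks_list is not shown in the report.

import tensorflow as tf

# Build every callback from tf.keras so it matches the tf.keras training loop
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'best_model.h5',            # hypothetical path, not from the report
    monitor='val_loss',
    save_best_only=True,
    save_weights_only=True)
callbacks_list = [checkpoint]

model.fit(X_train, y_train, x_validate=X_test, y_validate=y_test,
          epochs=20, callbacks=callbacks_list)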
Top GitHub Comments
Yep, it works! Thank you very much for everything.
This is because of the difference between the two versions. In the tf.keras version, you need to set the task parameter; here is the document link.
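For concreteness, here is a minimal sketch of constructing the embedding with the task parameter in the tf.keras version of kashgari. Note that, per the __init__ signature shown in the traceback above, the second positional argument is layer_nums, so sequence_length has to be passed by keyword; the task constant chosen here (labeling) is an assumption.

import kashgari
from kashgari.embeddings import BERTEmbedding

# Pass the task explicitly; kashgari.CLASSIFICATION is the other option
embedding = BERTEmbedding('multi_cased_L-12_H-768_A-12',
                          task=kashgari.LABELING,
                          sequence_length=200)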