How to save wrapped DistilBERT without using `save_pretrained`?

See original GitHub issue

Environment info

  • transformers version: 4.9.2
  • Platform: Ubuntu 20
  • Python version: 3.8
  • PyTorch version (GPU?):
  • Tensorflow version (GPU?): 2.6
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Who can help

@Rocketknight1

Information

Model I am using (Bert, XLNet …): DistilBERT (TFDistilBertModel)

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

Simply run the code below:

import tensorflow as tf
from transformers import (
    TFDistilBertModel,
    DistilBertTokenizerFast,
    DistilBertConfig,
)


def build_classifier_model():
    # Variable-length integer inputs for token ids and attention mask
    input_ids = tf.keras.layers.Input(shape=(None,), name="input_ids", dtype=tf.int32)
    attention_mask = tf.keras.layers.Input(
        shape=(None,), name="attention_mask", dtype=tf.int32
    )

    config = DistilBertConfig(
        dropout=0.2,
        attention_dropout=0.2,
        output_attentions=True,
        output_hidden_states=False,
        return_dict=False,
    )
    # Frozen pretrained DistilBERT backbone
    transformer = TFDistilBertModel.from_pretrained(
        "distilbert-base-uncased", config=config
    )
    transformer.trainable = False

    # Call the transformer with a positional list of inputs; take the last hidden state
    last_hidden_state = transformer(
        [input_ids, attention_mask],
    )[0]

    # Use the [CLS] token representation as the pooled feature
    x = last_hidden_state[:, 0, :]
    x = tf.keras.layers.Dense(768, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.2)(x)

    # One independent sigmoid head per label
    outputs = {
        label_name: tf.keras.layers.Dense(1, activation="sigmoid", name=label_name)(x)
        for label_name in ['A', 'B', 'C']
    }

    return tf.keras.Model([input_ids, attention_mask], outputs)


model = build_classifier_model()
model.save('./dump/savedmodel')  # saving in SavedModel format raises the error below

Expected behavior

I expect this to generate artifacts containing the model in SavedModel format, but instead I got the following error:

~/miniforge3/envs/folder/lib/python3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py in call(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
    561         **kwargs,
    562     ):
--> 563         inputs = input_processing(
    564             func=self.call,
    565             config=self.config,

~/miniforge3/envs/folder/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
    376                     output[tensor_name] = input
    377                 else:
--> 378                     output[parameter_names[i]] = input
    379             elif isinstance(input, allowed_types) or input is None:
    380                 output[parameter_names[i]] = input

IndexError: list index out of range
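
A workaround sometimes suggested for this class of input_processing failure (an editorial sketch, not a fix confirmed in this thread) is to call the transformer with a dictionary of named inputs rather than a positional list, so nothing has to be matched to parameter names by position:

# Hedged sketch: replace the positional-list call inside build_classifier_model()
# with a dict keyed by argument name. Whether this survives model.save() depends
# on the transformers/TensorFlow versions in use.
last_hidden_state = transformer(
    {"input_ids": input_ids, "attention_mask": attention_mask}
)[0]

Later transformers releases also reworked this TF input handling, so upgrading may be worth trying before changing the wrapper.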

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
hardianlawi commented, Oct 26, 2021

I saw that others have posted similar issues (https://github.com/huggingface/transformers/issues/13610 and https://github.com/huggingface/transformers/issues/13742). However, since I am wrapping the model in tf.keras.Model, save_pretrained isn’t a viable solution. Are there any workarounds?
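
Editorial note, not something proposed in the thread: because build_classifier_model() describes the whole architecture in code, one workaround is to skip SavedModel export entirely and persist only the Keras weights, rebuilding the wrapper at load time. A minimal sketch, assuming build_classifier_model() is importable wherever the model is restored:

# Hedged sketch: save only the weights (TF checkpoint format) instead of the
# full SavedModel graph.
model = build_classifier_model()
model.save_weights("./dump/classifier_weights")

# ... later, in a fresh process: rebuild the architecture (from_pretrained runs
# again here) and overwrite its weights with the saved checkpoint.
restored = build_classifier_model()
restored.load_weights("./dump/classifier_weights")

This sidesteps the failing serialization path because nothing is traced or exported beyond the weight tensors themselves.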

0 reactions
hardianlawi commented, Nov 22, 2022

@kapilkd13 @Zjq9409 I completely switched to PyTorch and PyTorch Lightning since they made my life easier 😂
