
Wav2Vec fine-tune code


🚀 Feature request

@patrickvonplaten

Hi, I have the following dataset I want to use to fine-tune Wav2Vec2: cv-valid-train.zip

I’m using the current transformers library from GitHub (4.4.0 dev), and I wrote the following code based on the code in PR https://github.com/huggingface/transformers/pull/10145:

  1. ctc_trainer.py
from typing import Dict, Union, Any

import torch
from transformers import Trainer


class CTCTrainer(Trainer):
    def training_step(self, model: torch.nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor:
        """
        Perform a training step on a batch of inputs.
        Subclass and override to inject custom behavior.
        Args:
            model (:obj:`nn.Module`):
                The model to train.
            inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`):
                The inputs and targets of the model.
                The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
                argument :obj:`labels`. Check your model's documentation for all accepted arguments.
        Return:
            :obj:`torch.Tensor`: The tensor with training loss on this batch.
        """

        model.train()
        inputs = self._prepare_inputs(inputs)
        loss = self.compute_loss(model, inputs)

        # under torch.nn.DataParallel (n_gpu > 1) the config lives on model.module
        if self.args.n_gpu > 1:
            if model.module.config.ctc_loss_reduction == "mean":
                loss = loss.mean()
            elif model.module.config.ctc_loss_reduction == "sum":
                loss = loss.sum() / (inputs["labels"] >= 0).sum()
            else:
                raise ValueError(
                    f"{model.module.config.ctc_loss_reduction} is not valid. Choose one of ['mean', 'sum']"
                )

        if self.args.gradient_accumulation_steps > 1:
            loss = loss / self.args.gradient_accumulation_steps

        loss.backward()

        return loss.detach()
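
For context on the ctc_loss_reduction branch above: it reads a field of Wav2Vec2Config, so it is controlled when the model is loaded. A minimal sketch, assuming a transformers version in which Wav2Vec2Config exposes ctc_loss_reduction (it defaults to "sum"):

from transformers import Wav2Vec2ForCTC

# config overrides can be passed straight through from_pretrained; "mean"
# averages the CTC loss, "sum" adds it up (the trainer above then divides
# the sum by the number of non-padding label tokens)
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base-960h",
    ctc_loss_reduction="mean",
)
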
  2. data_collector.py
from dataclasses import dataclass
from typing import Union, Optional, List, Dict

import torch
from transformers import Wav2Vec2Processor


@dataclass
class DataCollatorCTCWithPadding:
    """
    Data collator that will dynamically pad the inputs received.
    Args:
        processor (:class:`~transformers.Wav2Vec2Processor`)
            The processor used for processing the data.
        padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
            Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
            among:
            * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
              sequence is provided).
            * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
              maximum acceptable input length for the model if that argument is not provided.
            * :obj:`False` or :obj:`'do_not_pad'`: No padding (i.e., can output a batch with sequences of
              different lengths).
        max_length (:obj:`int`, `optional`):
            Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
        max_length_labels (:obj:`int`, `optional`):
            Maximum length of the ``labels`` returned list and optionally padding length (see above).
        pad_to_multiple_of (:obj:`int`, `optional`):
            If set will pad the sequence to a multiple of the provided value.
            This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
            7.5 (Volta).
    """

    processor: Wav2Vec2Processor
    padding: Union[bool, str] = True
    max_length: Optional[int] = None
    max_length_labels: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    pad_to_multiple_of_labels: Optional[int] = None

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need
        # different padding methods
        input_features = [{"input_values": feature["input_values"]} for feature in features]
        label_features = [{"input_ids": feature["labels"]} for feature in features]

        batch = self.processor.pad(
            input_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )
        with self.processor.as_target_processor():
            labels_batch = self.processor.pad(
                label_features,
                padding=self.padding,
                max_length=self.max_length_labels,
                pad_to_multiple_of=self.pad_to_multiple_of_labels,
                return_tensors="pt",
            )

        # replace padding with -100 to ignore loss correctly
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        batch["labels"] = labels

        return batch
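
A quick smoke test of the collator; this is illustrative only (the input values and label ids below are made up) and assumes a Wav2Vec2Processor loaded as in the training script:

from transformers import Wav2Vec2Processor

from data_collector import DataCollatorCTCWithPadding

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
collator = DataCollatorCTCWithPadding(processor=processor, padding=True)

# two examples of different lengths are padded into one batch;
# padded label positions become -100 so the CTC loss ignores them
batch = collator([
    {"input_values": [0.1, 0.2, 0.3], "labels": [5, 8]},
    {"input_values": [0.4, 0.5], "labels": [7]},
])
print(batch["input_values"].shape)  # torch.Size([2, 3])
print(batch["labels"])              # second row ends with -100
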
  3. fine tune model.py
from pathlib import Path

import datasets
import librosa
import numpy
import pandas
import torch
from sklearn.model_selection import train_test_split
from torch.utils.data import TensorDataset
from tqdm import tqdm
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, TrainingArguments

from ctc_trainer import CTCTrainer
from data_collector import DataCollatorCTCWithPadding


def map_to_array(batch):
    # resolve the audio path relative to this script and resample to 16 kHz
    input_audio, _ = librosa.load(
        Path(__file__).parents[0].joinpath(batch["filename"]), sr=16000)
    return input_audio


def convert_to_dataset_torch(x: pandas.DataFrame, y: pandas.DataFrame) -> TensorDataset:
    input_values = []
    labels = []
    for _, row in tqdm(x.iterrows(), total=x.shape[0]):
        input_values.append(row["input_values"])
    for _, row in tqdm(y.iterrows(), total=y.shape[0]):
        labels.append(row["labels"])
    return TensorDataset(torch.cat(input_values, dim=0), torch.cat(labels, dim=0))


if __name__ == '__main__':
    dataset = pandas.read_csv(Path(__file__).parents[0].joinpath("cv-valid-train.csv"))
    X_train, X_test, y_train, y_test = train_test_split(dataset[["filename"]], dataset[["text"]], test_size=0.2,
                                                        random_state=42)
    X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.2, random_state=42)

    model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
    processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
    wer_metric = datasets.load_metric("wer")

    X_train["speech"] = X_train.apply(map_to_array, axis=1)
    X_train["input_values"] = X_train.apply(lambda row: processor(row["speech"], sampling_rate=16000).input_values,
                                            axis=1)
    X_validation["speech"] = X_validation.apply(map_to_array, axis=1)
    X_validation["input_values"] = X_validation.apply(
        lambda row: processor(row["speech"], sampling_rate=16000).input_values,
        axis=1)
    X_test["speech"] = X_test.apply(map_to_array, axis=1)
    X_test["input_values"] = X_test.apply(lambda row: processor(row["speech"], sampling_rate=16000).input_values,
                                          axis=1)
    with processor.as_target_processor():
        y_train["labels"] = y_train.apply(lambda row: processor(row["text"]).input_ids, axis=1)
        y_validation["labels"] = y_validation.apply(lambda row: processor(row["text"]).input_ids, axis=1)
        y_test["labels"] = y_test.apply(lambda row: processor(row["text"]).input_ids, axis=1)

    data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)


    def compute_metrics(pred):
        pred_logits = pred.predictions
        pred_ids = numpy.argmax(pred_logits, axis=-1)

        pred.label_ids[pred.label_ids == -100] = 0

        pred_str = processor.batch_decode(pred_ids)
        # we do not want to group tokens when computing the metrics
        label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

        wer = wer_metric.compute(predictions=pred_str, references=label_str)

        return {"wer": wer}


    training_args = TrainingArguments(
        output_dir='./results',  # output directory
        num_train_epochs=2,  # total number of training epochs
        per_device_train_batch_size=16,  # batch size per device during training
        per_device_eval_batch_size=64,  # batch size for evaluation
        warmup_steps=500,  # number of warmup steps for learning rate scheduler
        weight_decay=0.01,  # strength of weight decay
        logging_dir='./logs',  # directory for storing logs
        logging_steps=10,
    )
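
    # Note (illustrative): gradient_accumulation_steps is another TrainingArguments
    # parameter (it defaults to 1), so it can simply be added to the call above.
    # n_gpu is not an argument: the Trainer derives it from the visible devices,
    # e.g. via the CUDA_VISIBLE_DEVICES environment variable.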

    trainer = CTCTrainer(
        model=model,
        data_collator=data_collator,
        args=training_args,
        compute_metrics=compute_metrics,
        train_dataset=convert_to_dataset_torch(X_train, y_train),
        eval_dataset=convert_to_dataset_torch(X_validation, y_validation),
        tokenizer=processor.feature_extractor,
    )

    trainer.train()

In the method convert_to_dataset_torch I’m unable to create the TensorDataset. I get the following error: TypeError: expected Tensor as element 0 in argument 0, but got numpy.ndarray

  1. How can I convert the 2-D numpy arrays to torch tensors (see the sketch after this list)?
  2. How can I control arguments such as n_gpu and gradient_accumulation_steps?
  3. What is model.module.config.ctc_loss_reduction, how can it be controlled, and what is best for an ASR task?
  4. Are there any remarks about the code?
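
For question 1, here is a minimal sketch of the numpy-to-torch conversion (torch.from_numpy shares memory with the source array; torch.as_tensor also accepts nested lists):

import numpy
import torch

arr = numpy.zeros((4, 16000), dtype=numpy.float32)  # stand-in for 2-D input_values
tensor = torch.from_numpy(arr)                      # zero-copy numpy -> torch
print(tensor.shape)                                 # torch.Size([4, 16000])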

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
patrickvonplaten commented, May 2, 2022

Hey @kasrasehat,

Could you please open a new issue?

1 reaction
patrickvonplaten commented, Mar 3, 2021

Hey @idanmoradarthas,

I will soon release a notebook that explains in detail how to fine-tune a Wav2Vec2 model (~1 week).

It’s quite time-consuming for me to debug user-specific code, such as convert_to_dataset_torch, so I can only give you some tips here:

  • Try to convert your dataset to PyTorch tensors instead of np.ndarrays. This means you should change all of your lines that do processor(row["speech"], sampling_rate=16000) to processor(row["speech"], sampling_rate=16000, return_tensors="pt")
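
Applied to the script above, that tip would look like the sketch below (illustrative, not a complete fix): with return_tensors="pt" the processor returns torch tensors of shape (1, num_samples) directly, which convert_to_dataset_torch can then torch.cat.

X_train["input_values"] = X_train.apply(
    lambda row: processor(
        row["speech"], sampling_rate=16000, return_tensors="pt"
    ).input_values,  # torch.Tensor instead of numpy.ndarray
    axis=1,
)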