IndexError: tuple index out of range on spec_augment_pytorch
Any idea how to resolve this problem?
from SpecAugment import spec_augment_pytorch as spec_augment_l
import librosa
import numpy as np

# Load and trim the audio, then build a log-scaled mel spectrogram.
y, sr = librosa.load(dataset_audio_path.joinpath(audio_path), sr=16000)
y, _ = librosa.effects.trim(y, top_db=15)
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
S = librosa.power_to_db(S, ref=np.max)
S = spec_augment_l.spec_augment(mel_spectrogram=S)
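At this point S is still a plain 2-D NumPy array of shape (n_mels, n_frames), for example:

print(type(S), S.ndim, S.shape)   # e.g. <class 'numpy.ndarray'> 2 (128, 94) -- frame count depends on the clip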
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-10-9e6d8eca1293> in <module>
14 plt.pause(0.001) # pause a bit so that plots are updated
15
---> 16 inputs, classes = next(iter(train_dataloader))
17 out = make_grid(inputs)
18 class_names = train_dataset.classes
~\anaconda3\envs\torch\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
343
344 def __next__(self):
--> 345 data = self._next_data()
346 self._num_yielded += 1
347 if self._dataset_kind == _DatasetKind.Iterable and \
~\anaconda3\envs\torch\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
383 def _next_data(self):
384 index = self._next_index() # may raise StopIteration
--> 385 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
386 if self._pin_memory:
387 data = _utils.pin_memory.pin_memory(data)
~\anaconda3\envs\torch\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
~\anaconda3\envs\torch\lib\site-packages\torch\utils\data\_utils\fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
~\anaconda3\envs\torch\lib\site-packages\torchvision\datasets\folder.py in __getitem__(self, index)
133 """
134 path, target = self.samples[index]
--> 135 sample = self.loader(path)
136 if self.transform is not None:
137 sample = self.transform(sample)
<ipython-input-9-9aca4872e8ea> in spec_augment_loader(audio_path)
40 S = librosa.feature.melspectrogram(y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
41 S = librosa.power_to_db(S, ref=np.max)
---> 42 S = spec_augment_l.spec_augment(mel_spectrogram=S)
43 fig = plt.Figure(figsize=figsize, dpi=dpi, frameon=False)
44 fig.patch.set_visible(False)
~\anaconda3\envs\torch\lib\site-packages\SpecAugment\spec_augment_pytorch.py in spec_augment(mel_spectrogram, time_warping_para, frequency_masking_para, time_masking_para, frequency_mask_num, time_mask_num)
87 """
88 v = mel_spectrogram.shape[1]
---> 89 tau = mel_spectrogram.shape[2]
90
91 # Step 1 : Time warping
IndexError: tuple index out of range
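The last frame of the traceback shows the cause: line 89 of spec_augment_pytorch.py reads mel_spectrogram.shape[2], but a 2-D spectrogram's shape tuple only has indices 0 and 1. A minimal sketch of just that failing lookup (the array values are placeholders):

import numpy as np

S = np.random.rand(128, 400)   # stand-in for a 2-D (n_mels, n_frames) mel spectrogram
v = S.shape[1]                 # fine: 400
tau = S.shape[2]               # raises IndexError: tuple index out of range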
Top GitHub Comments
Hi, you can use the example in the tests/ folder; there it converts the spectrogram to [batch_size, time, frequency, 1].
Hi, thanks for the reply. It is working for me.
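For completeness, a minimal sketch of the reshaping the first comment describes, assuming only that spec_augment needs a leading batch axis so that shape[1] and shape[2] exist; the exact layout and tensor type expected by your version of the package are best taken from the example in its tests/ folder, and the array values below are placeholders:

import numpy as np
import torch
from SpecAugment import spec_augment_pytorch as spec_augment_l

# Stand-in for the 2-D log-mel spectrogram built in the snippet above: (n_mels, n_frames).
S = np.random.rand(128, 400).astype(np.float32)

# Add a leading batch dimension, (128, 400) -> (1, 128, 400), so that
# mel_spectrogram.shape[1] and mel_spectrogram.shape[2] are both defined.
S_batched = torch.from_numpy(S).unsqueeze(0)

S_aug = spec_augment_l.spec_augment(mel_spectrogram=S_batched)
print(S_aug.shape)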