
Issue with DataLoader with lr_finder.range_test

See original GitHub issue

I tried to use:

class CustomTrainIter(TrainDataLoaderIter):
    def inputs_labels_from_batch(self, batch_data):
        return batch_data["img"], batch_data["target"]

to make it work with my DataLoader for lr_finder.range_test(), but I still got the error: TypeError: list indices must be integers or slices, not str

TypeError                                 Traceback (most recent call last)
<ipython-input-60-b2a8b27d6c88> in <module>()
      3 optim = torch.optim.Adam(model_ft.parameters(), lr=1e-7, weight_decay=1e-2)
      4 lr_finder = LRFinder(model_ft,optim, criterion, device='cuda')
----> 5 lr_finder.range_test( custom_train_iter ,end_lr=100,num_iter=100)
      6 lr_finder.plot()
      7 lr_finder.reset()

3 frames
/usr/local/lib/python3.7/dist-packages/torch_lr_finder/lr_finder.py in range_test(self, train_loader, val_loader, start_lr, end_lr, num_iter, step_mode, smooth_f, diverge_th, accumulation_steps, non_blocking_transfer)
    318                 train_iter,
    319                 accumulation_steps,
--> 320                 non_blocking_transfer=non_blocking_transfer,
    321             )
    322             if val_loader:

/usr/local/lib/python3.7/dist-packages/torch_lr_finder/lr_finder.py in _train_batch(self, train_iter, accumulation_steps, non_blocking_transfer)
    369         self.optimizer.zero_grad()
    370         for i in range(accumulation_steps):
--> 371             inputs, labels = next(train_iter)
    372             inputs, labels = self._move_to_device(
    373                 inputs, labels, non_blocking=non_blocking_transfer

/usr/local/lib/python3.7/dist-packages/torch_lr_finder/lr_finder.py in __next__(self)
     57         try:
     58             batch = next(self._iterator)
---> 59             inputs, labels = self.inputs_labels_from_batch(batch)
     60         except StopIteration:
     61             if not self.auto_reset:

<ipython-input-58-f89d28995874> in inputs_labels_from_batch(self, batch_data)
      4 
      5 
----> 6         return batch_data["img"], batch_data["target"]
      7 
      8 custom_train_iter = CustomTrainIter(train_dl)

TypeError: list indices must be integers or slices, not str

Any suggestions? Thanks!

Issue Analytics

  • State: open
  • Created 2 years ago
  • Comments:6 (4 by maintainers)

Top GitHub Comments

1 reaction
phongvu009 commented, May 20, 2021

My dataset is like the first one you mentioned:

# assume that `batch_data` is in the form of:
# [
#     [fn_img_01, fn_img_02, ..., fn_img_n],
#     [label_01, label_02, ..., label_n],
# ]
def inputs_labels_from_batch(self, batch_data):
    img, target = batch_data
    return img, target

It works perfectly now. I appreciate your help. Is it like the abstract class concept? I have only been using Python for a few months for a deep learning project.
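
For anyone landing on this thread with the same traceback, a minimal end-to-end sketch of the working setup might look like the one below. It is only a sketch: model_ft, criterion, and train_dl are assumed to be the model, loss function, and DataLoader from the snippets above, and the hyperparameters are the ones used in the original post.

import torch
from torch_lr_finder import LRFinder, TrainDataLoaderIter

# train_dl yields batches of the form [imgs, targets], so plain unpacking works.
class CustomTrainIter(TrainDataLoaderIter):
    def inputs_labels_from_batch(self, batch_data):
        img, target = batch_data
        return img, target

custom_train_iter = CustomTrainIter(train_dl)

optim = torch.optim.Adam(model_ft.parameters(), lr=1e-7, weight_decay=1e-2)
lr_finder = LRFinder(model_ft, optim, criterion, device="cuda")
lr_finder.range_test(custom_train_iter, end_lr=100, num_iter=100)
lr_finder.plot()
lr_finder.reset()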

0 reactions
NaleRaphael commented, May 20, 2021

My pleasure 😃

It’s simply class inheritance here. The reason we have to wrap a torch.Dataset or torch.DataLoader with a DataLoaderIter is that we need to make it:

  1. flexible for customization under the restriction of a fixed input format: As you can see in the following code, most PyTorch models follow this convention to build a forward pass. https://github.com/davidtvs/pytorch-lr-finder/blob/acc5e7ee7711a460bf3e1cc5c5f05575ba1e1b4b/torch_lr_finder/lr_finder.py#L376-L378 But since it’s just a convention rather than a syntax-level design, we want to minimize the effort of rewriting code when the inputs of model.forward() or the outputs of dataset.__getitem__() are more complicated. In that case, users can implement their own inputs_labels_from_batch() to make things work with the current implementation of LRFinder without modifying it (a small sketch follows this list).

  2. able to iterate the training dataset infinitely: Since we cannot guarantee that the length of the dataset is sufficient for running a learning rate range test with the given settings (related to num_iter and batch_size), we have to keep the dataset accessible until the range test is finished. That’s why we made TrainDataLoaderIter like this: https://github.com/davidtvs/pytorch-lr-finder/blob/acc5e7ee7711a460bf3e1cc5c5f05575ba1e1b4b/torch_lr_finder/lr_finder.py#L51-L67 (a simplified sketch of this idea also follows below).
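
To make point 1 concrete, here is a small self-contained sketch. The dataset and class names (DictDataset, DictTrainIter) are made up for illustration; the point is that when Dataset.__getitem__() returns a dict, the default collate_fn yields a dict of batched tensors, and overriding inputs_labels_from_batch() is what lets LRFinder keep its fixed (inputs, labels) interface.

import torch
from torch.utils.data import Dataset, DataLoader
from torch_lr_finder import TrainDataLoaderIter

# Hypothetical dataset whose __getitem__ returns a dict rather than a tuple.
class DictDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {"img": torch.randn(3, 8, 8), "target": torch.tensor(idx % 2)}

# Each collated batch looks like {"img": Tensor[B, 3, 8, 8], "target": Tensor[B]},
# so the override simply picks out the right keys.
class DictTrainIter(TrainDataLoaderIter):
    def inputs_labels_from_batch(self, batch_data):
        return batch_data["img"], batch_data["target"]

loader = DataLoader(DictDataset(), batch_size=4)
train_iter = DictTrainIter(loader)
inputs, labels = next(train_iter)
print(inputs.shape, labels.shape)  # torch.Size([4, 3, 8, 8]) torch.Size([4])

And for point 2, the wrap-around behaviour can be pictured with the simplified sketch below. This is not the library’s actual code (the real TrainDataLoaderIter is linked above and also supports auto_reset=False); it only illustrates the idea that the underlying DataLoader is restarted whenever it is exhausted, so range_test() can always draw num_iter batches.

# Simplified sketch of the wrap-around idea, not the actual implementation.
class InfiniteIter:
    def __init__(self, data_loader):
        self.data_loader = data_loader
        self._iterator = iter(data_loader)

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self._iterator)
        except StopIteration:
            # The DataLoader is exhausted: start a fresh pass and keep going.
            self._iterator = iter(self.data_loader)
            return next(self._iterator)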

Hope this information helps, and I wish you good luck on this learning journey. 😉

Read more comments on GitHub >

Top Results From Across the Web

Have I implemented implemenation of learning rate finder ...
Example: >>> lr_finder = LRFinder(net, optimizer, criterion, ... DataLoader, optional): if `None` the range test will only use the training ...
Read more >
torch-lr-finder - PyPI
The learning rate range test is a test that provides valuable information about the optimal learning rate. During a pre-training run, the learning...
Read more >
Fueling up your neural networks with the power of cyclical ...
Now we are all set to train our model. Training using cyclic learning rate. On each iteration through the data loader, we will...
Read more >
Finding good learning rate for your neural nets using PyTorch ...
Nowadays, many libraries implement LR Finder or “LR Range Test”. ... you can imagine neural network adapting to the problem based on ...
Read more >
torch.utils.data — PyTorch 1.13 documentation
The most important argument of DataLoader constructor is dataset , which ... Check out issue #13246 for more details on why this occurs...
Read more >
