TypeError: forward() missing 1 required positional argument: 'labels'
See original GitHub issue

I've been following along and making all the changes required to run `lr_finder.range_test()`. However, I'm still facing this error!
Here’s my code defining the Dataset class:
```python
class HappyWhaleDataset(Dataset):
    def __init__(self, df, transforms=None):
        self.df = df
        self.file_names = df['file_path'].values
        self.labels = df['individual_id'].values
        self.transforms = transforms

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        img_path = self.file_names[index]
        img = cv2.imread(img_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        label = self.labels[index]

        if self.transforms:
            img = self.transforms(image=img)["image"]

        return {
            'image': img,
            'label': torch.tensor(label, dtype=torch.long)
        }
```
```python
def prepare_loaders(df, fold):
    df_train = df[df.kfold != fold].reset_index(drop=True)
    df_valid = df[df.kfold == fold].reset_index(drop=True)

    train_dataset = HappyWhaleDataset(df_train, transforms=data_transforms["train"])
    valid_dataset = HappyWhaleDataset(df_valid, transforms=data_transforms["valid"])

    train_loader = DataLoader(train_dataset, batch_size=CONFIG['train_batch_size'],
                              num_workers=2, shuffle=True, pin_memory=True, drop_last=True)
    valid_loader = DataLoader(valid_dataset, batch_size=CONFIG['valid_batch_size'],
                              num_workers=2, shuffle=False, pin_memory=True)

    return train_loader, valid_loader

train_loader, valid_loader = prepare_loaders(df, fold=0)
```
Note: model training runs without error when I simply use the train_loader created by the code above.
```python
class CustomTrainIter(TrainDataLoaderIter):
    def inputs_labels_from_batch(self, batch_data):
        return batch_data["image"], batch_data["label"]

custom_loader = CustomTrainIter(train_loader)

lr_finder = LRFinder(model, optimizer, criterion, device=CONFIG['device'])
lr_finder.range_test(custom_loader, end_lr=1, num_iter=100, step_mode="linear")
lr_finder.plot(log_lr=False)
lr_finder.reset()
```
```
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_34/1446799792.py in <module>
      6
      7 lr_finder = LRFinder(model, optimizer, criterion, device=CONFIG['device'])
----> 8 lr_finder.range_test(custom_loader, end_lr=1, num_iter=100, step_mode="linear")
      9 lr_finder.plot(log_lr=False)
     10 lr_finder.reset()

/opt/conda/lib/python3.7/site-packages/torch_lr_finder/lr_finder.py in range_test(self, train_loader, val_loader, start_lr, end_lr, num_iter, step_mode, smooth_f, diverge_th, accumulation_steps, non_blocking_transfer)
    318                 train_iter,
    319                 accumulation_steps,
--> 320                 non_blocking_transfer=non_blocking_transfer,
    321             )
    322             if val_loader:

/opt/conda/lib/python3.7/site-packages/torch_lr_finder/lr_finder.py in _train_batch(self, train_iter, accumulation_steps, non_blocking_transfer)
    375
    376             # Forward pass
--> 377             outputs = self.model(inputs)
    378             loss = self.criterion(outputs, labels)
    379

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

TypeError: forward() missing 1 required positional argument: 'labels'
```
Issue Analytics
- Created: 2 years ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
Thanks a lot for your awesome explanation @NaleRaphael, this really helped me get more clarity. `return (images, labels), labels` is the part I was missing. You did more than just help me solve the error! Best wishes to you.

Got it! This actually can be achieved by modifying your `CustomTrainIter` slightly.
Here is the explanation. The following code snippet shows how the forward pass is implemented in `LRFinder._train_batch()`:
https://github.com/davidtvs/pytorch-lr-finder/blob/acc5e7ee7711a460bf3e1cc5c5f05575ba1e1b4b/torch_lr_finder/lr_finder.py#L371-L378

We can denote those variables with simpler ones. So here is the simplified code showing how the forward pass works:
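A sketch of that simplified version, with `X` standing in for the batch inputs and `y` for the labels:

```python
# One step inside LRFinder._train_batch(), stripped down:
X, y = next(train_iter)        # X, y are whatever inputs_labels_from_batch() returned
outputs = model(X)             # X is passed to the model as a single argument
loss = criterion(outputs, y)
loss.backward()
```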
Since your model needs to get 2 input arguments, `(images, labels)`, it means the `X` above is actually a 2-value tuple `(images, labels)`. So the code above can be written as below:
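Sketching it with the same placeholder names:

```python
# X is now the 2-value tuple your forward() expects, y is still the labels
X = (images, labels)
y = labels

outputs = model(X)             # i.e. model((images, labels)) -- one positional argument
loss = criterion(outputs, y)
```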
Now we know that `train_iter` has to return `(images, labels), labels` in each iteration, so that means you can modify your `CustomTrainIter` as below:
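A sketch of that modification, using the same dictionary keys as the dataset above:

```python
from torch_lr_finder import TrainDataLoaderIter

class CustomTrainIter(TrainDataLoaderIter):
    def inputs_labels_from_batch(self, batch_data):
        images = batch_data["image"]
        labels = batch_data["label"]
        # Inputs become the (images, labels) pair; labels are still passed to the criterion
        return (images, labels), labels
```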
But since your model takes 2 input arguments rather than 1, the invocation of `model.forward()` in that forward pass no longer matches its signature. Therefore, you have to create a wrapper for the model that unpacks the tuple `(images, labels)` into 2 separate arguments. That is:
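A minimal sketch of such a wrapper (the class name `ModelWrapper` is just illustrative):

```python
import torch.nn as nn

class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, inputs):
        # Unpack the tuple produced by CustomTrainIter and call the real forward()
        images, labels = inputs
        return self.model(images, labels)
```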
That's it! So this should be how `LRFinder` runs in your case:
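A sketch putting the pieces together, reusing the names from the question (the optimizer still sees the same underlying parameters, since the wrapper only forwards calls to the original model):

```python
custom_loader = CustomTrainIter(train_loader)
wrapped_model = ModelWrapper(model)   # ModelWrapper as sketched above

lr_finder = LRFinder(wrapped_model, optimizer, criterion, device=CONFIG['device'])
lr_finder.range_test(custom_loader, end_lr=1, num_iter=100, step_mode="linear")
lr_finder.plot(log_lr=False)
lr_finder.reset()
```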