Error while computing the loss function for SemanticSegmentationTask with num_classes=2
See original GitHub issue

model = SemanticSegmentationTask(
    segmentation_model="unet",
    encoder_name="resnet18",
    encoder_weights=None,
    in_channels=10,
    num_classes=2,
    num_filters=64,
    loss="jaccard",
    ignore_zeros=False,
    learning_rate=0.1,
    learning_rate_schedule_patience=5,
)
trainer = pl.Trainer(
gpus=1,
max_epochs=10,
precision=16,
log_every_n_steps=1,
max_steps=2,
)
trainer.fit(model,dl)
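The dataloader `dl` is not shown in the issue. As a minimal sketch (the class, key names, and sizes are hypothetical), a DataLoader whose masks keep a singleton channel dimension matches the behaviour reported below; TorchGeo's trainers expect batches to be dicts with "image" and "mask" keys:

import torch
from torch.utils.data import DataLoader, Dataset

class DummyDataset(Dataset):
    # Illustrative stand-in for the dataset behind `dl`.
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {
            "image": torch.rand(10, 64, 64),           # (channels, height, width)
            "mask": torch.randint(0, 2, (1, 64, 64)),   # (1, height, width): note the extra dim
        }

dl = DataLoader(DummyDataset(), batch_size=2)

With masks batched to (batch size, 1, height, width), trainer.fit produces output and a traceback like the following.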
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/configuration_validator.py:133: UserWarning: You defined a `validation_step` but have no `val_dataloader`. Skipping val loop.
  rank_zero_warn("You defined a `validation_step` but have no `val_dataloader`. Skipping val loop.")
Missing logger folder: /content/lightning_logs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
  | Name          | Type             | Params
-----------------------------------------------
0 | model         | Unet             | 14.4 M
1 | loss          | JaccardLoss      | 0
2 | train_metrics | MetricCollection | 0
3 | val_metrics   | MetricCollection | 0
4 | test_metrics  | MetricCollection | 0
-----------------------------------------------
14.4 M    Trainable params
0         Non-trainable params
14.4 M    Total params
28.701    Total estimated model params size (MB)

Epoch 0:   0% 0/4 [00:00<?, ?it/s]
ValueError                                Traceback (most recent call last)
<ipython-input-15-c68a00784715> in <module>()
     21 )
     22 
---> 23 trainer.fit(model,dl)

36 frames
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/checks.py in _check_shape_and_type_consistency(preds, target)
    113         else:
    114             raise ValueError(
--> 115                 "Either `preds` and `target` both should have the (same) shape (N, ...), or `target` should be (N, ...)"
    116                 " and `preds` should be (N, C, ...)."
    117             )

ValueError: Either `preds` and `target` both should have the (same) shape (N, ...), or `target` should be (N, ...) and `preds` should be (N, C, ...).
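The check that raises this error lives in torchmetrics and fires during the metric update, not in the loss itself. The task updates its metrics with hard predictions (an argmax over the class dimension), so by that point preds have shape (N, H, W); if the mask still carries a channel dimension, neither accepted shape combination applies. A rough sketch of the shapes involved (sizes are illustrative):

import torch

batch_size, num_classes, height, width = 4, 2, 64, 64

logits = torch.randn(batch_size, num_classes, height, width)  # raw U-Net output: (N, C, H, W)
preds = logits.argmax(dim=1)                                   # hard predictions: (N, H, W)

target_ok = torch.randint(0, num_classes, (batch_size, height, width))  # (N, H, W): accepted
target_bad = target_ok.unsqueeze(1)                                     # (N, 1, H, W): preds.ndim (3) is neither
                                                                        # target_bad.ndim (4) nor target_bad.ndim + 1,
                                                                        # so torchmetrics raises the ValueError above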
Top GitHub Comments
Thank you so much @calebrob6, I think it worked well.
I think you need to change your dataloader to return masks of size (batch size, height, width).
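A minimal sketch of that fix, assuming the masks currently come out of the dataset with a singleton channel dimension (the class and attribute names are illustrative):

import torch
from torch.utils.data import Dataset

class FixedSegmentationDataset(Dataset):
    # Hypothetical dataset that returns masks without a channel dimension.
    def __init__(self, images, masks):
        self.images = images  # each image: (10, height, width) float tensor
        self.masks = masks    # each mask: (1, height, width) or (height, width) integer tensor

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        mask = self.masks[idx]
        if mask.ndim == 3:
            mask = mask.squeeze(0)  # (1, height, width) -> (height, width)
        return {
            "image": self.images[idx].float(),
            "mask": mask.long(),    # integer class indices
        }

The default collate then stacks the masks to (batch size, height, width), which satisfies both the Jaccard loss and the torchmetrics shape check.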