PatchCore: test batch size > 1 is not supported.
I have another problem after dealing with #243. The training run now fails with:
```
Epoch 0: 100%|██████████| 34/34 [09:17<00:00, 16.39s/it, loss=nan]
ValueError: Either `preds` and `target` both should have the (same) shape (N, ...), or `target` should be (N, ...) and `preds` should be (N, C, ...).
```

The error is raised from torchmetrics' input checks:

```
File "/home/devadmin/haobo/anomalib_venv/lib/python3.8/site-packages/torchmetrics/utilities/checks.py", line 269, in _check_classification_inputs
    case, implied_classes = _check_shape_and_type_consistency(preds, target)
File "/home/devadmin/haobo/anomalib_venv/lib/python3.8/site-packages/torchmetrics/utilities/checks.py", line 115, in _check_shape_and_type_consistency
```
Then I printed `preds` and `target`:

```
Aggregating the embedding extracted from the training set.
Creating CoreSet Sampler via k-Center Greedy
Getting the coreset from the main embedding.
Assigning the coreset as the memory bank.
Epoch 0: 100%|██████████| 34/34 [08:59<00:00, 15.85s/it, loss=nan]
100%|██████████| 11/11 [08:48<00:00, 48.02s/it]
preds is: tensor([1.4457])
target is: tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=torch.int32)
```
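The mismatch is visible in the printout: `preds` carries a single image-level score, while `target` carries one label per image in the 16-sample test batch. The same shapes reproduce the failing check outside anomalib (a minimal sketch using the torchmetrics 0.x classification `AUROC`; newer torchmetrics releases require a `task` argument and word the error slightly differently):

```python
import torch
from torchmetrics import AUROC  # torchmetrics 0.x classification API

metric = AUROC()
preds = torch.tensor([1.4457])                   # one score for the whole batch
target = torch.ones((1, 16), dtype=torch.int32)  # 16 labels in the same batch

metric.update(preds, target)  # raises the ValueError quoted above
```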
My PatchCore `config.yaml` is:
```yaml
dataset:
  name: wafer_line # options: [mvtec, btech, folder]
  format: folder
  path: ../data/wafer_line/
  normal_dir: "train/Negative"
  abnormal_dir: "test/Positive"
  normal_test_dir: "test/Negative"
  task: segmentation
  mask: ../data/wafer_line/ground_truth/Positive
  extensions: ".jpg"
  split_ratio: 0.1
  seed: 0
  image_size: 256
  train_batch_size: 16
  test_batch_size: 16
  num_workers: 20
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

trainer:
  accelerator: "gpu" # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  accumulate_grad_batches: 1
  amp_backend: native
  auto_lr_find: false
  auto_scale_batch_size: false
  auto_select_gpus: true
  benchmark: false
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  default_root_dir: null
  detect_anomaly: false
  deterministic: false
  enable_checkpointing: true
  enable_model_summary: true
  enable_progress_bar: true
  fast_dev_run: false
  gpus: null # Set automatically
  gradient_clip_val: 0
  ipus: null
  limit_predict_batches: 1.0
  limit_test_batches: 1.0
  limit_train_batches: 0.05
  limit_val_batches: 1.0
  log_every_n_steps: 50
  log_gpu_memory: null
  max_epochs: 10000
  max_steps: -1
  max_time: null
  min_epochs: null
  min_steps: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  num_nodes: 1
  num_processes: 1
  num_sanity_val_steps: 0
  overfit_batches: 0.0
  plugins: null
  precision: 32
  profiler: null
  reload_dataloaders_every_n_epochs: 0
  replace_sampler_ddp: true
  strategy: null
  sync_batchnorm: false
  tpu_cores: null
  track_grad_norm: -1
  val_check_interval: 1.0 # Don't validate before extracting features.
```
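With `test_batch_size: 16`, each test step yields 16 labels but apparently only one score. One plausible way that could happen (an illustrative guess, not the actual anomalib code) is an image-level score reduced over the whole batch of anomaly maps instead of per image:

```python
import torch

anomaly_maps = torch.rand(16, 1, 256, 256)  # one anomaly map per image in the batch

per_batch = anomaly_maps.max()                         # shape (): one scalar for all 16 images
per_image = anomaly_maps.flatten(1).max(dim=1).values  # shape (16,): one score per image

print(per_batch.shape, per_image.shape)  # torch.Size([]) torch.Size([16])
```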
Thank you for your patience in reading and answering!
Issue Analytics
- Created: a year ago
- Comments: 8 (4 by maintainers)
Top GitHub Comments
@fujikosu, we have just merged #580, which should address the multiple batch size problem. Let us know if you still encounter any issues. Thanks!
Yes, if you set `test_batch_size: 1`, it would work. We'll investigate why it doesn't work for batch sizes larger than 1.
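Until you are on a release that includes #580, the workaround from the comment above is to drop the test batch size in the dataset section of the config:

```yaml
dataset:
  test_batch_size: 1  # single-sample test batches sidestep the shape mismatch
```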