
Patchcore multiple test batch size is not supported.


I have another problem after dealing with #243. This time it is:

ValueError: Either preds and target both should have the (same) shape (N, …), or target should be (N, …) and preds should be (N, C, …).

Epoch 0: 100%|██████████| 34/34 [09:17<00:00, 16.39s/it, loss=nan]

The relevant part of the traceback:

File "/home/devadmin/haobo/anomalib_venv/lib/python3.8/site-packages/torchmetrics/utilities/checks.py", line 269, in _check_classification_inputs
    case, implied_classes = _check_shape_and_type_consistency(preds, target)
File "/home/devadmin/haobo/anomalib_venv/lib/python3.8/site-packages/torchmetrics/utilities/checks.py", line 115, in _check_shape_and_type_consistency

Then I printed preds and target:

Aggregating the embedding extracted from the training set.
Creating CoreSet Sampler via k-Center Greedy
Getting the coreset from the main embedding.
Assigning the coreset as the memory bank.
Epoch 0: 100%|██████████| 34/34 [08:59<00:00, 15.85s/it, loss=nan]
100%|██████████| 11/11 [08:48<00:00, 48.02s/it]
preds is: tensor([1.4457])
target is: tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=torch.int32)
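For context, these shapes break the rule stated in the error message: torchmetrics accepts preds and target with identical shapes (N, …), or target of shape (N, …) with preds of shape (N, C, …). A minimal sketch of that check in plain torch, using the tensors from the printout above (not code from anomalib or torchmetrics):

```python
import torch

# Values from the printout above: one image-level score, but labels for 16 images.
preds = torch.tensor([1.4457])                    # shape (1,)
target = torch.ones((1, 16), dtype=torch.int32)   # shape (1, 16)

# Either the shapes match exactly, or preds carries one extra class dimension.
same_shape = preds.shape == target.shape                                  # False
preds_has_class_dim = (
    preds.ndim == target.ndim + 1 and preds.shape[0] == target.shape[0]
)                                                                         # False
print(same_shape, preds_has_class_dim)  # neither case holds -> ValueError
```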

my patchcore config.yaml is:

dataset:
  name: wafer_line #options: [mvtec, btech, folder]
  format: folder
  path: ../data/wafer_line/
  normal_dir: "train/Negative"
  abnormal_dir: "test/Positive"
  normal_test_dir: "test/Negative"
  task: segmentation
  mask: ../data/wafer_line/ground_truth/Positive
  extensions: ".jpg"
  split_ratio: 0.1
  seed: 0
  image_size: 256
  train_batch_size: 16
  test_batch_size: 16
  num_workers: 20
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16
trainer:
  accelerator: "gpu" # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  accumulate_grad_batches: 1
  amp_backend: native
  auto_lr_find: false
  auto_scale_batch_size: false
  auto_select_gpus: true
  benchmark: false
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  default_root_dir: null
  detect_anomaly: false
  deterministic: false
  enable_checkpointing: true
  enable_model_summary: true
  enable_progress_bar: true
  fast_dev_run: false
  gpus: null # Set automatically
  gradient_clip_val: 0
  ipus: null
  limit_predict_batches: 1.0
  limit_test_batches: 1.0
  limit_train_batches: 0.05
  limit_val_batches: 1.0
  log_every_n_steps: 50
  log_gpu_memory: null
  max_epochs: 10000
  max_steps: -1
  max_time: null
  min_epochs: null
  min_steps: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  num_nodes: 1
  num_processes: 1
  num_sanity_val_steps: 0
  overfit_batches: 0.0
  plugins: null
  precision: 32
  profiler: null
  reload_dataloaders_every_n_epochs: 0
  replace_sampler_ddp: true
  strategy: null
  sync_batchnorm: false
  tpu_cores: null
  track_grad_norm: -1
  val_check_interval: 1.0 # Don't validate before extracting features.
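The two settings relevant to this issue are dataset.train_batch_size and dataset.test_batch_size. A small sketch for confirming what a run actually picks up, assuming the file above is saved as config.yaml and OmegaConf is installed (the config is plain YAML, so any YAML loader would also do):

```python
from omegaconf import OmegaConf

cfg = OmegaConf.load("config.yaml")      # hypothetical path to the YAML above
print(cfg.dataset.train_batch_size)      # 16
print(cfg.dataset.test_batch_size)       # 16 - the value that triggers the error
```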

Thank you for your patience in reading and answering!

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

1 reaction
samet-akcay commented, Sep 26, 2022

@fujikosu, we have just merged #580, which should address the multiple batch size problem. Let us know if you still encounter any issues. Thanks!
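If you want to confirm that your installed anomalib already contains that merge before retrying with test_batch_size > 1, a quick generic check (a sketch, not from the thread) is to print the installed version and compare it against the release notes:

```python
from importlib.metadata import version

# Compare this against the anomalib release that includes the fix referenced above.
print(version("anomalib"))
```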

1 reaction
samet-akcay commented, Apr 23, 2022

"should I set test_batch_size=1?"

Yes, if you set test_batch_size: 1, it would work. We’ll investigate why it doesn’t work for multiple batch sizes.
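For reference, the workaround amounts to one change in the dataset section of the config above. A small sketch of applying it programmatically with OmegaConf (hypothetical file names; editing the YAML by hand works just as well):

```python
from omegaconf import OmegaConf

cfg = OmegaConf.load("config.yaml")       # the PatchCore config shown above
cfg.dataset.test_batch_size = 1           # single test image per batch
OmegaConf.save(cfg, "config_bs1.yaml")    # point the training run at this file
```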
