PaDiM: model outputs NaN when running inference on images sequentially
Describe the bug
- I'm planning to run inference on webcam input. Inference works on the first frame, but fails from the second frame onward.
- Here are the image outputs and the console outputs (from print(predictions)).
Code (webcam input)
from argparse import ArgumentParser, Namespace
from importlib import import_module
from pathlib import Path

import cv2

from anomalib.config import get_configurable_parameters
from anomalib.deploy.inferencers.base import Inferencer
from anomalib.post_processing import superimpose_anomaly_map


def get_args() -> Namespace:
    parser = ArgumentParser()
    parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
    parser.add_argument("--weight_path", required=True, help="Path to the trained model weights")
    # parser.add_argument("--meta_data", default="")
    return parser.parse_args()


def main():
    args = get_args()
    config = get_configurable_parameters(config_path=args.config)

    cap = cv2.VideoCapture(0)

    # Load the Torch inferencer dynamically
    inferencer: Inferencer
    module = import_module("anomalib.deploy.inferencers.torch")
    TorchInferencer = getattr(module, "TorchInferencer")
    inferencer = TorchInferencer(config=config, model_source=args.weight_path, meta_data_path=None)

    meta_data = getattr(inferencer, "meta_data", {})

    while True:
        ret, frame = cap.read()
        if not ret:  # the camera returned no frame
            break
        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        meta_data["image_shape"] = image.shape[:2]

        processed_image = inferencer.pre_process(image)
        predictions = inferencer.forward(processed_image)
        print(predictions)

        anomaly_map, pred_scores = inferencer.post_process(predictions, meta_data=meta_data)
        print(f"Anomaly_score: {pred_scores}")

        # Overlay segmentation mask using raw predictions
        if meta_data is not None:
            image = inferencer._superimpose_segmentation_mask(meta_data, anomaly_map, image)
        anomaly_map = superimpose_anomaly_map(anomaly_map, image)

        cv2.imshow("Anomaly map", anomaly_map)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
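For reference, a typical invocation would look like this (the script name and paths are illustrative, not from the original report):

python webcam_inference.py --config anomalib/models/padim/config.yaml --weight_path results/padim/weights/model.ckpt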
Code output
tensor([[[[41.7540, 41.4636, 40.6263, ..., 34.3366, 34.7471, 34.8892],
[41.5304, 41.2426, 40.4131, ..., 34.1754, 34.5826, 34.7236],
[40.8823, 40.6022, 39.7947, ..., 33.7111, 34.1088, 34.2464],
...,
[19.1933, 19.1503, 19.0268, ..., 18.9357, 19.0051, 19.0278],
[19.4248, 19.3804, 19.2529, ..., 19.1242, 19.1945, 19.2177],
[19.5046, 19.4598, 19.3309, ..., 19.1878, 19.2586, 19.2818]]]])
Anomaly_score: 1.0
tensor([[[[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
...,
[18.6857, 18.6415, 18.5146, ..., 18.1626, 18.2075, 18.2221],
[18.9330, 18.8873, 18.7560, ..., 18.3440, 18.3900, 18.4050],
[19.0184, 18.9721, 18.8394, ..., 18.4052, 18.4517, 18.4668]]]])
Anomaly_score: nan
tensor([[[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]]])
Anomaly_score: nan
Expected behavior
- Inference returns finite predictions and anomaly scores for every frame, not just the first one.
Screenshots
- 1st input (screenshot attached in the original issue)
- 2nd input (screenshot attached in the original issue)
Hardware and Software Configuration
- Hardware: NVIDIA Jetson Nano
- OS: Ubuntu 18.04 (JetPack 4.6.1)
- CUDA Version: 11.4.166
- Docker: nvcr.io/nvidia/l4t-pytorch:r34.1.1-pth1.11-py3
Additional context
- The same issue also occurs when running on WSL2.
Top GitHub Comments
I had similar problems; maybe this helps. The source of the problem was torch.sqrt(distances) in AnomalyMapGenerator.compute_distance. With a custom dataset and backbone, distances sometimes contains small negative numbers, possibly due to an imperfect pseudo-inverse calculation. The solution was to clip distances to zero before the sqrt.
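For reference, a minimal sketch of that clipping fix. The reshape/matmul body mirrors anomalib's PaDiM AnomalyMapGenerator.compute_distance from around the time of this issue and may differ slightly in your installed version, so check it before patching:

import torch
from torch import Tensor

def compute_distance(embedding: Tensor, stats: list) -> Tensor:
    # embedding: (batch, channel, height, width) patch features
    batch, channel, height, width = embedding.shape
    embedding = embedding.reshape(batch, channel, height * width)

    # Mahalanobis distance per patch: delta^T @ inv_covariance @ delta
    mean, inv_covariance = stats
    delta = (embedding - mean).permute(2, 0, 1)
    distances = (torch.matmul(delta, inv_covariance) * delta).sum(2).permute(1, 0)
    distances = distances.reshape(batch, height, width)

    # Fix: an imperfect pseudo-inverse of the covariance can make some
    # squared distances slightly negative, and sqrt of a negative value
    # is NaN. Clamp to zero before taking the square root.
    distances = torch.sqrt(distances.clamp(min=0))

    return distances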
Good to know, thanks. I thought anomalib required Python 3.8.
Another question: which PyTorch version do you use? I'm struggling to install anomalib on my Jetson Xavier NX.