inference_detector with multiple images?
Is it possible to call inference_detector (mmdet/apis/inference.py) with multiple images (a list of image paths)? The documentation says yes, but calling the function with a list of image paths throws an exception. Does anyone have an idea what to change so that multiple images work?
# build the data pipeline
test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:]
test_pipeline = Compose(test_pipeline)
# prepare data
data = dict(img=img)
data = test_pipeline(data)
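# collate into a batch of size 1 and move the tensors onto the target device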
data = scatter(collate([data], samples_per_gpu=1), [device])[0]
For a Python newbie, these few lines of code are very confusing.
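In short, these lines build the test-time pipeline with LoadImage in place of the annotation-loading step, run the image through it, collate it into a batch of size one, and scatter the tensors onto the GPU. As long as inference_detector only accepts a single image, the simplest workaround is to keep it single-image and loop over the paths yourself. A minimal sketch (the config, checkpoint, and image paths below are placeholders):

from mmdet.apis import init_detector, inference_detector

config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'  # placeholder config
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'     # placeholder weights
image_paths = ['demo/demo1.jpg', 'demo/demo2.jpg']                  # placeholder images

# Build the model once, then reuse it for every image.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

results = []
for img_path in image_paths:
    # inference_detector handles one image (path or ndarray) per call here.
    results.append(inference_detector(model, img_path))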
Issue Analytics
- State:
- Created 4 years ago
- Reactions: 11
- Comments: 7 (3 by maintainers)
Top Results From Across the Web
Inference and train with existing models and standard datasets
Note: inference_detector only supports single-image inference for now. ... MMDetection supports multiple public datasets including COCO, Pascal VOC, ...
Image and Video Inference using MMDetection - DebuggerCafe
In this post, we explore the MMDetection object detection library to run image and video inference using different models.
Read more >Extracting object identification results and cropping images ...
A quick tutorial on how to parse through the results of an MMDetection inference detector and get the labels and bounding box coordinates....
Python Examples of mmdet.apis.inference_detector
This page shows Python examples of mmdet.apis.inference_detector. ... test a single image result = inference_detector(model, args.img) # show the results ...
mmdet.apis.inference_detector Example - Program Talk
COLOR_RGB2BGR) result = inference_detector(self.model, image) return result ... is used for concurrent inference of multiple images streamqueue = asyncio.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
My code for running inference on multiple images in one directory, just for your reference:
from mmdet.apis import init_detector, inference_detector, show_result_pyplot
import mmcv
# import matplotlib.pyplot as plt
import cv2
import glob
import os
import sys
import time
import argparse

'''
python tools/my_inference.py
    --cfg configs/my_config/htc_without_semantic_r50_fpn_1x_coco_linux.py
    --output-dir work_dirs/htc/20200608/infer_paper/
    --wts work_dirs/htc/20200608/epoch_200.pth
    …/coco/infer_img_dir
'''

def parse_args():
    parser = argparse.ArgumentParser(description='End-to-end inference')
    parser.add_argument('--cfg', dest='cfg',
                        help='cfg model file (/path/to/model_config.yaml)',
                        default=None, type=str)
    parser.add_argument('--wts', dest='weights',
                        help='weights model file (/path/to/model_weights.pkl)',
                        default=None, type=str)
    parser.add_argument('--output-dir', dest='output_dir',
                        help='directory for visualization pdfs (default: /tmp/infer_simple)',
                        default='/tmp/infer_simple', type=str)
    parser.add_argument('--image-ext', dest='image_ext',
                        help='image file name extension (default: jpg)',
                        default='jpg', type=str)
    parser.add_argument('--always-out', dest='out_when_no_box',
                        help='output image even when no object is found',
                        action='store_true')
    parser.add_argument('--output-ext', dest='output_ext',
                        help='output image file format (default: jpg)',
                        default='jpg', type=str)
    parser.add_argument('--thresh', dest='thresh',
                        help='Threshold for visualizing detections',
                        default=0.3, type=float)
    parser.add_argument('im_or_folder',
                        help='image or folder of images',
                        default=None)
    if len(sys.argv) == 1:
        parser.print_help()
        sys.exit(1)
    return parser.parse_args()


def main(args):
    # Body not shown in the original comment; one possible implementation
    # is sketched after this snippet.
    pass


if __name__ == '__main__':
    args = parse_args()
    main(args)
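Purely as a hypothetical sketch, one way to fill in main(), given the arguments defined by parse_args() above and assuming the MMDetection 2.x model.show_result API for saving visualizations, would be:

def main(args):
    # (reuses the imports from the script above)
    # Build the detector once from the config and checkpoint.
    model = init_detector(args.cfg, args.weights, device='cuda:0')

    # Collect the inputs: either a single image or every image in a folder.
    if os.path.isdir(args.im_or_folder):
        im_list = sorted(glob.glob(
            os.path.join(args.im_or_folder, '*.' + args.image_ext)))
    else:
        im_list = [args.im_or_folder]

    os.makedirs(args.output_dir, exist_ok=True)

    for im_path in im_list:
        start = time.time()
        result = inference_detector(model, im_path)
        print('{}: {:.3f}s'.format(im_path, time.time() - start))

        # Save a visualization of the detections above the score threshold.
        out_file = os.path.join(
            args.output_dir,
            os.path.splitext(os.path.basename(im_path))[0] + '.' + args.output_ext)
        model.show_result(im_path, result, score_thr=args.thresh, out_file=out_file)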
Why is batch inference only slightly faster instead of multiple times faster? Is that because of the GPU hardware, the CUDA version, a Python limitation, or the PyTorch/MMDetection implementation? Could you share more insights? Thanks!
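One way to investigate is to time a per-image loop against a single list call, assuming a recent MMDetection release in which inference_detector accepts a list of images and returns a list of results (the config, checkpoint, and image paths below are placeholders):

import time
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'  # placeholder config
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'     # placeholder weights
image_paths = ['demo/demo.jpg'] * 16                                # placeholder images

model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Sequential: one forward pass per image.
t0 = time.perf_counter()
for p in image_paths:
    inference_detector(model, p)
t_seq = time.perf_counter() - t0

# Batched: pass the whole list at once.
t0 = time.perf_counter()
inference_detector(model, image_paths)
t_batch = time.perf_counter() - t0

print(f'sequential: {t_seq:.2f}s  batched: {t_batch:.2f}s')

If the two numbers come out close on your hardware, the per-image overhead (CPU-side preprocessing and host-to-device transfer) rather than the GPU forward pass is likely the dominating cost.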