
inference_detector with multiple images?

See original GitHub issue

Is it possible to call inference_detector (mmdet/apis/inference.py) with multiple images (a list of image paths)? The docs say yes, but calling the function with a list of image paths throws an exception. Does anyone have an idea what to change so that multiple images work?

# imports as used in mmdet/apis/inference.py (mmdet 2.x era)
from mmcv.parallel import collate, scatter
from mmdet.datasets.pipelines import Compose

# build the data pipeline: swap in LoadImage for the original loading step
test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:]
test_pipeline = Compose(test_pipeline)
# prepare data: run a single-image dict through the pipeline
data = dict(img=img)
data = test_pipeline(data)
# collate into a batch of one and move it to the target device
data = scatter(collate([data], samples_per_gpu=1), [device])[0]

For a Python newbie, these few lines of code are very confusing.
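To make those lines less mysterious: Compose simply chains a list of transforms, where each transform is a callable that takes a dict and returns a (modified) dict; collate then stacks such dicts into a batch. Here is a minimal, framework-free sketch of that idea — the transform names and dict keys below are made up for illustration, not mmdet's real ones:

```python
class Compose:
    """Chain transforms: each callable takes a dict and returns a dict."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, data):
        for t in self.transforms:
            data = t(data)
            if data is None:  # a transform may drop a sample entirely
                return None
        return data

# Hypothetical stand-ins for LoadImage and a resize step:
def load_image(data):
    data['img'] = 'pixels-of-' + data['img']  # pretend we read the file
    return data

def resize(data):
    data['shape'] = (800, 1333)               # pretend we resized
    return data

pipeline = Compose([load_image, resize])
data = pipeline(dict(img='demo.jpg'))
print(data)  # {'img': 'pixels-of-demo.jpg', 'shape': (800, 1333)}
```

The real pipeline works the same way, except the transforms load, resize, normalize, and pad actual image tensors.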

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 11
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

2 reactions
minan19605 commented, Jun 27, 2020

My code for running inference on multiple images in one directory, just for your reference:

from mmdet.apis import init_detector, inference_detector, show_result_pyplot
import mmcv
#import matplotlib.pyplot as plt
import cv2
import glob
import os
import sys
import time
import argparse

''' Usage:
python tools/my_inference.py
    --cfg configs/my_config/htc_without_semantic_r50_fpn_1x_coco_linux.py
    --output-dir work_dirs/htc/20200608/infer_paper/
    --wts work_dirs/htc/20200608/epoch_200.pth
    ../coco/infer_img_dir
'''

def parse_args():
    parser = argparse.ArgumentParser(description='End-to-end inference')
    parser.add_argument(
        '--cfg', dest='cfg',
        help='cfg model file (/path/to/model_config.yaml)',
        default=None, type=str)
    parser.add_argument(
        '--wts', dest='weights',
        help='weights model file (/path/to/model_weights.pkl)',
        default=None, type=str)
    parser.add_argument(
        '--output-dir', dest='output_dir',
        help='directory for visualization pdfs (default: /tmp/infer_simple)',
        default='/tmp/infer_simple', type=str)
    parser.add_argument(
        '--image-ext', dest='image_ext',
        help='image file name extension (default: jpg)',
        default='jpg', type=str)
    parser.add_argument(
        '--always-out', dest='out_when_no_box',
        help='output image even when no object is found',
        action='store_true')
    parser.add_argument(
        '--output-ext', dest='output_ext',
        help='output image file format (default: jpg)',
        default='jpg', type=str)
    parser.add_argument(
        '--thresh', dest='thresh',
        help='Threshold for visualizing detections',
        default=0.3, type=float)
    parser.add_argument(
        'im_or_folder', help='image or folder of images', default=None)
    if len(sys.argv) == 1:
        parser.print_help()
        sys.exit(1)
    return parser.parse_args()

def main(args):
    config_file = args.cfg
    checkpoint_file = args.weights
    thresh = args.thresh
    model = init_detector(config_file, checkpoint_file, device='cuda:0')
    #model = init_detector(config_file, checkpoint_file, device='cpu')

    if os.path.isdir(args.im_or_folder):
        im_list = glob.iglob(args.im_or_folder + '/*.' + args.image_ext)
    else:
        im_list = [args.im_or_folder]

    total_t = time.time()
    for i, im_name in enumerate(im_list):
        print('Get img {}'.format(im_name))
        one_t = time.time()
        if args.image_ext == args.output_ext:
            output_name = os.path.join(
                args.output_dir, '{}'.format(os.path.basename(im_name)))
        else:
            output_name = os.path.join(
                args.output_dir,
                '{}'.format(os.path.basename(im_name) + '.' + args.output_ext))

        result = inference_detector(model, im_name)
        if hasattr(model, 'module'):
            model = model.module
        pred_img = model.show_result(im_name, result, score_thr=thresh, show=False)
        cv2.imwrite(output_name, pred_img)
        print('write {}'.format(output_name))
        print('One image time {:.3f}s'.format(time.time() - one_t))
    print("Total inference time is {:.3f}s".format(time.time() - total_t))

if __name__ == '__main__':
    args = parse_args()
    main(args)
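Note that this script still calls inference_detector once per image. If your MMDetection version accepts a list of images in a single call (later releases do, at least for same-sized inputs), you could feed it fixed-size batches instead; a minimal, framework-free chunking helper for that (the batch size of 4 and the batched call are illustrative, not from the script above):

```python
def chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

paths = ['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg', 'e.jpg']
for batch in chunks(paths, 4):
    # results = inference_detector(model, batch)  # hypothetical batched call
    print(batch)  # ['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg'] then ['e.jpg']
```

Whether this is actually faster is a separate question — see the next comment.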

0 reactions
leemengwei commented, Jul 1, 2021

@Kaeseknacker Actually it won't be "much" faster. We have tried forwarding the network with multiple images, but it is only slightly faster under some circumstances (e.g., when the input images have the same resolution), and sometimes even slower.

Why is batch inference only slightly faster instead of several times faster? Is it due to GPU hardware, the CUDA version, a Python limitation, or the PyTorch/MMDetection implementation? Could you share more insights? Thanks!


Top Results From Across the Web

Inference and train with existing models and standard datasets
Note: inference_detector only supports single-image inference for now. ... MMDetection supports multiple public datasets including COCO, Pascal VOC, ...

Image and Video Inference using MMDetection - DebuggerCafe
In this post, we explore the MMDetection object detection library to run image and video inference using different models.

Extracting object identification results and cropping images ...
A quick tutorial on how to parse through the results of an MMDetection inference detector and get the labels and bounding box coordinates....

Python Examples of mmdet.apis.inference_detector
This page shows Python examples of mmdet.apis.inference_detector. ... test a single image result = inference_detector(model, args.img) # show the results ...

mmdet.apis.inference_detector Example - Program Talk
COLOR_RGB2BGR) result = inference_detector(self.model, image) return result ... is used for concurrent inference of multiple images streamqueue = asyncio.
