
Inference is not working

See original GitHub issue

Here is my inference script:

```python
import mmcv
import os
import numpy as np
from tqdm import tqdm
import json
import cv2
from mmdet.apis import init_detector, inference_detector  # this import was missing in the original snippet

def show_result(result):
    # mmdetection returns one (N, 5) array of [x1, y1, x2, y2, score] per class
    bbox_result = result
    bboxes = np.vstack(bbox_result)
    labels = [
        np.full(bbox.shape[0], i, dtype=np.int32)
        for i, bbox in enumerate(bbox_result)
    ]
    labels = np.concatenate(labels)
    return bboxes, labels

config_file = '/data/lzy_intern/mmdetection/configs/my_config/image_ann_cascade_rcnn_x101_64x4d_fpn_1x.py'
checkpoint_file = '/data/lzy_intern/mmdetection/models/cascade_rcnn_x101_64x4d_fpn_1x.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:2')

image = mmcv.imread('/data/lzy_intern/dataset/coco/train2017/000000174601.jpg')
result = inference_detector(model, image)
bbox, label = show_result(result)
print(len(bbox))
print(len(label))
```

The checkpoint was downloaded from https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/cascade_rcnn_x101_64x4d_fpn_1x_20181218-e2dc376a.pth and the config file is the default one from mmdetection, but the output of both len(bbox) and len(label) is 0. Is the inference not working?
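For context: in the mmdetection 1.x API used in this thread, `inference_detector` returns, per image, a list with one `(N, 5)` array per class, each row being `[x1, y1, x2, y2, score]`. An output of 0 therefore means every per-class array has zero rows. A quick way to sanity-check what came back, using a hypothetical `summarize_result` helper and a simulated result (not the author's actual data):

```python
import numpy as np

def summarize_result(result, score_thr=0.3):
    """Count total detections and those above a score threshold.
    `result` is an mmdetection-1.x-style list of (N, 5) arrays, one per class."""
    total = sum(len(b) for b in result)
    kept = sum(int((b[:, 4] >= score_thr).sum()) for b in result if len(b))
    return total, kept

# Simulated result for a 3-class model: only class 1 has detections.
fake_result = [
    np.zeros((0, 5), dtype=np.float32),
    np.array([[10, 10, 50, 50, 0.9],
              [20, 20, 60, 60, 0.2]], dtype=np.float32),
    np.zeros((0, 5), dtype=np.float32),
]
print(summarize_result(fake_result))  # (2, 1): two boxes, one above 0.3
```

If `summarize_result` reports zero on a COCO training image with a pretrained COCO checkpoint, something upstream (device selection, as it turns out below, or a config/checkpoint mismatch) is wrong, not the post-processing.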

Issue Analytics

  • State:closed
  • Created 4 years ago
  • Comments:6 (1 by maintainers)

Top GitHub Comments

1 reaction
hellock commented, Jan 20, 2020

You may try device = 'cuda:0'.

0 reactions
AlaylmYC commented, Feb 9, 2020

The solution is very simple, but it bothered me for a few days -_-||. I wanted to use the 5th GPU, so I set device = 'cuda:5' and ran my test script with python test.py; this did not work and returned nothing. The right way is to use device = 'cuda:0' and run it like this: CUDA_VISIBLE_DEVICES=5 python test.py. This works for me, but I'm confused. Can you explain it in detail, @hellock?

I met the same problem, and it bothered me a lot. In the end I found that it only works with device = 'cuda:0'; the result becomes abnormal with any other device, such as device = 'cuda:1' or device = 'cuda:2'. Could you explain it in detail? Thanks! @hellock
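The remapping behavior of `CUDA_VISIBLE_DEVICES` explains why the workaround behaves this way: the variable hides all other GPUs from the process and renumbers the visible ones starting from 0, so with `CUDA_VISIBLE_DEVICES=5`, `cuda:0` inside the process is physical GPU 5. (Why `device='cuda:5'` alone produced empty or abnormal results is likely an mmdetection 1.x quirk where some tensors or ops stayed on device 0; that part is an assumption, not confirmed in this thread.) A minimal sketch of the renumbering rule, using a hypothetical helper:

```python
def visible_to_physical(device_index, cuda_visible_devices):
    """Map an in-process CUDA device index (the 0 in 'cuda:0') to a
    physical GPU id, given a CUDA_VISIBLE_DEVICES value like '5' or '2,5'.
    Hypothetical helper that mirrors the driver's renumbering rule."""
    physical_ids = [int(x) for x in cuda_visible_devices.split(",")]
    return physical_ids[device_index]

# CUDA_VISIBLE_DEVICES=5 python test.py -> the process sees one GPU,
# and 'cuda:0' refers to physical GPU 5:
print(visible_to_physical(0, "5"))    # 5

# With CUDA_VISIBLE_DEVICES=2,5, 'cuda:1' would be physical GPU 5:
print(visible_to_physical(1, "2,5"))  # 5
```

Note that `CUDA_VISIBLE_DEVICES` must be set before the CUDA runtime initializes in the process, which is why setting it on the command line (as in the comment above) works reliably, while setting it from Python after importing torch may not.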


