
Different results between inference using .pb and inference using TRTIS


Hi everyone. After converting YOLOv3 to a frozen TensorFlow model, I run inference in two ways. CASE 1: inference directly from the .pb file. My model draws 4 bboxes on the 4 cars in the image.

##############code#################

import numpy as np
import tensorflow as tf
from PIL import Image
from core import utils


IMAGE_H, IMAGE_W = 416, 416
classes = utils.read_coco_names('coco.names')
image_path = "image.jpg"
img = Image.open(image_path)

# Resize to the network input size and scale pixels to [0, 1].
img_resized = np.array(img.resize(size=(IMAGE_W, IMAGE_H)), dtype=np.float32)
img_resized = img_resized / 255.

gpu_nms_graph = tf.Graph()

# The gpu_nms graph already contains score filtering and NMS, so
# concat_10 / concat_11 / concat_12 are the final boxes, scores and labels.
input_tensor, output_tensors = utils.read_pb_return_tensors(
    gpu_nms_graph, "./checkpoint/yolov3_gpu_nms.pb",
    ["Placeholder:0", "concat_10:0", "concat_11:0", "concat_12:0"])
print("output_tensors: ", output_tensors)

with tf.Session(graph=gpu_nms_graph) as sess:
    boxes, scores, labels = sess.run(
        output_tensors,
        feed_dict={input_tensor: np.expand_dims(img_resized, axis=0)})
    print("boxes, scores, labels: ", boxes, scores, labels)
    image = utils.draw_boxes(img, boxes, scores, labels, classes,
                             [IMAGE_H, IMAGE_W], show=True)

CASE 2: inference through TRTIS. My model draws just 1 bbox on 1 car (the image contains 4 cars).

##############code##################

import numpy as np
import cv2
from multiprocessing import Process, Queue
from tensorrtserver.api import *


def read_coco_names(class_file_name):
    # Map class IDs to names, one name per line in the file.
    names = {}
    with open(class_file_name, 'r') as data:
        for ID, name in enumerate(data):
            names[ID] = name.strip('\n')
    return names


def vehicle_detection(img, queue_ob_class, queue_bbox, queue_ob_scores):
    protocol = ProtocolType.from_str('http')

    input_name = 'Placeholder'
    boxes = 'concat_10'
    scores = 'concat_11'
    labels = 'concat_12'

    model_name = 'vehicle-detector'

    # -1 selects the latest version of the model; False = no verbose logging.
    ctx = InferContext('localhost:8000', protocol, model_name, -1, False)
    coconames = 'data/vehicle-detector/voc.names'

    # Preprocess: resize to 416x416, convert BGR -> RGB, scale to [0, 1].
    img_resized = cv2.resize(img, (416, 416))
    img_resized = cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB)
    img_resized = np.float32(img_resized)
    img_resized = np.multiply(img_resized, 1.0 / 255.0)
    classes = read_coco_names(coconames)

    image_data = [img_resized]

    request_ids = []
    request_ids.append(ctx.async_run(
        {input_name: image_data},
        {boxes: InferContext.ResultFormat.RAW,
         scores: InferContext.ResultFormat.RAW,
         labels: InferContext.ResultFormat.RAW},
        1))  # batch size 1

    bbox = []
    ob_scores = []
    ob_class = []
    print("len(request_ids): ", len(request_ids))
    for i, request_id in enumerate(request_ids):
        result = ctx.get_async_run_results(request_id, True)
        yolov3_classes = ['person', 'bicycle', 'car', 'motorbike', 'aeroplane',
                          'bus', 'train', 'truck', 'boat', 'traffic light',
                          'fire hydrant', 'stop sign']
        # Only element [0] of each output is examined, so at most one
        # detection is kept per request.
        v_class = yolov3_classes[result['concat_12'][0]]
        if v_class in ['car', 'truck', 'motorbike']:
            bbox.append(np.asarray(result['concat_10'][0]))
            ob_scores.append(np.asarray(result['concat_11'][0]))
            ob_class.append(np.asarray(result['concat_12'][0]))

    # Hand results back through the queues (Queue.put returns None).
    queue_ob_class.put(ob_class)
    queue_bbox.put(bbox)
    queue_ob_scores.put(ob_scores)
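Note that in the loop above only element [0] of each output is taken, so at most one box can ever be drawn no matter how many cars the model found. If the outputs really are arrays over detections (shapes [N, 4], [N], [N], as the .pb run returns), a sketch of a filtering block that keeps every detection would be:

import numpy as np

# Sketch: replaces the per-request filtering block above. Assumes the three
# outputs are arrays over all N detections, matching the .pb run's shapes.
out_boxes = np.asarray(result['concat_10'][0]).reshape(-1, 4)
out_scores = np.asarray(result['concat_11'][0]).reshape(-1)
out_labels = np.asarray(result['concat_12'][0]).reshape(-1)

for box, score, label in zip(out_boxes, out_scores, out_labels):
    if yolov3_classes[int(label)] in ['car', 'truck', 'motorbike']:
        bbox.append(box)
        ob_scores.append(score)
        ob_class.append(label)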

The result is shown below.
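Before suspecting the server, it is also worth ruling out preprocessing drift: CASE 1 resizes an RGB PIL image, while CASE 2 resizes a BGR OpenCV frame and then converts it, and the two libraries use different default interpolation filters. A quick sanity check (a sketch; image.jpg stands in for the actual test frame):

import numpy as np
import cv2
from PIL import Image

image_path = "image.jpg"  # same test image fed to both backends

# CASE 1 preprocessing: PIL resize on an RGB image.
a = np.array(Image.open(image_path).resize((416, 416)), dtype=np.float32) / 255.0

# CASE 2 preprocessing: cv2 resize on a BGR frame, then BGR -> RGB.
b = cv2.resize(cv2.imread(image_path), (416, 416))
b = cv2.cvtColor(b, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0

# A large difference here means the two runs never saw the same input tensor.
print("max abs diff:", np.abs(a - b).max())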

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
DimaK-tracxpoint commented, May 28, 2019

I had a similar issue with TensorRT 4. It was related to implementation differences in the “max_pooling” and “flatten” layers between TF and TRT. I found the cause by running both variants and dumping and comparing all intermediate layer result vectors. Then our researchers made some tweaks to the model to work around the problem.
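For anyone who wants to reproduce that debugging approach on this model, a rough TF1-style sketch follows; the probe tensor names are hypothetical and should be replaced with real names listed by graph.get_operations():

import numpy as np
import tensorflow as tf

# Hypothetical intermediate tensors to probe; pull real names from the graph.
probe_tensors = ["conv0/BiasAdd:0", "maxpool0/MaxPool:0"]

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("./checkpoint/yolov3_gpu_nms.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    feed = {"Placeholder:0": np.zeros((1, 416, 416, 3), np.float32)}
    for name in probe_tensors:
        value = sess.run(graph.get_tensor_by_name(name), feed_dict=feed)
        # Dump each layer; compare against the other backend's dump with
        # np.allclose to find the first layer that diverges.
        np.save(name.replace("/", "_").replace(":", "_"), value)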

0 reactions
srijiths commented, May 9, 2019

@ThiagoMateo Can you help us with the model file configuration for YOLOv3 that you are using?
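For reference, a TRTIS config.pbtxt for a TensorFlow graphdef model of this shape might look roughly like the following; the dims and data types are assumptions inferred from the tensors used in the scripts above, not the asker's actual configuration:

name: "vehicle-detector"
platform: "tensorflow_graphdef"
max_batch_size: 1
input [
  {
    name: "Placeholder"
    data_type: TYPE_FP32
    format: FORMAT_NHWC
    dims: [ 416, 416, 3 ]
  }
]
output [
  {
    name: "concat_10"
    data_type: TYPE_FP32
    dims: [ -1, 4 ]    # assumed: [num_detections, 4] boxes
  },
  {
    name: "concat_11"
    data_type: TYPE_FP32
    dims: [ -1 ]       # assumed: per-detection scores
  },
  {
    name: "concat_12"
    data_type: TYPE_INT32
    dims: [ -1 ]       # assumed: per-detection class IDs
  }
]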

