
Don't know how to run inference


Hello @snosov1 @alexey-trushkov @alexey-sidnev @AlexanderDokuchaev and everyone, thank you for your code. I have already trained the model and exported it to a frozen model (.pb) without optimization. Then I used the frozen model with the TensorRT Inference Server, but I got the wrong result compared to using the infer.py file (checkpoints).

I don’t know if my exported .pb file is incorrect or my client code is wrong.

#### MY CLIENT CODE ####
import cv2
import numpy as np
from tensorrtserver.api import InferContext, ProtocolType


if __name__ == '__main__':
    input_name = 'input'
    output_name = 'd_predictions'
    model_name = 'lp-recognitor'
    protocol = ProtocolType.from_str('http')
    ctx = InferContext('localhost:8000', protocol, model_name, -1, False)

    # Read the test image, cast to float32, and resize
    img = cv2.imread("/data/tmp/plate/000111.png")
    img = np.float32(img)
    img = cv2.resize(img, (24, 94))

    in_frame = img.reshape((24, 94, 3))

    input_data = [in_frame]

    # Run inference and print the raw result
    results = []
    results.append(ctx.run(
            { input_name : input_data },
            { output_name : InferContext.ResultFormat.RAW }))
    print("****************results*********************", results)
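
(An aside for readers: cv2.resize takes its target size as (width, height), not (height, width). A quick sanity check with a hypothetical array shows why the reshape above silently scrambles the pixels instead of raising an error; this turns out to be relevant to the answer below.)

    import cv2
    import numpy as np

    img = np.zeros((100, 200, 3), dtype=np.float32)  # hypothetical 100x200 test array
    out = cv2.resize(img, (24, 94))                  # dsize = (width=24, height=94)
    print(out.shape)                                 # -> (94, 24, 3), not (24, 94, 3)
    # out.reshape((24, 94, 3)) succeeds (same element count) but reorders the pixels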

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 9 (4 by maintainers)

Top GitHub Comments

AlexanderDokuchaev commented, Apr 3, 2019 (2 reactions)

Hi @tienduchoang, you should add preprocessing for the input data in your code:

    img = cv2.resize(img, (94, 24))             # dsize is (width, height)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR (OpenCV default) -> RGB
    img = np.float32(img)
    img = np.multiply(img, 1.0 / 255.0)         # scale pixel values to [0, 1]

For the IE (Inference Engine) model, this is done by arguments passed to mo.py.
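
For completeness, here is a minimal sketch of the client with that preprocessing folded in. The model name, input/output names, image path, and HWC input shape are copied from the question above; whether the served model really expects an unbatched (24, 94, 3) RGB tensor in [0, 1] is an assumption based on this comment, so treat it as a starting point rather than a verified fix:

    import cv2
    import numpy as np
    from tensorrtserver.api import InferContext, ProtocolType

    # Names copied from the question; adjust to your deployment
    input_name = 'input'
    output_name = 'd_predictions'
    model_name = 'lp-recognitor'

    ctx = InferContext('localhost:8000', ProtocolType.from_str('http'),
                       model_name, -1, False)

    # Preprocessing per AlexanderDokuchaev's comment (assumed to match training)
    img = cv2.imread("/data/tmp/plate/000111.png")
    img = cv2.resize(img, (94, 24))             # dsize is (width, height)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR (OpenCV default) -> RGB
    img = np.float32(img)
    img = np.multiply(img, 1.0 / 255.0)         # scale pixel values to [0, 1]

    in_frame = img.reshape((24, 94, 3))         # already (height, width, channels)

    results = ctx.run(
        { input_name : [in_frame] },
        { output_name : InferContext.ResultFormat.RAW })
    print("results:", results)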

snosov1 commented, Apr 3, 2019 (1 reaction)

I’ve heard from @AlexanderDokuchaev that he sees the same (strange) behavior. We’ll take a closer look and let you know when we have something.


