
Can't get the shape right


Description

So we're using an EfficientNet model, and off the shelf its config looks like this (note the input dimensions):

{
    "name": "super-grouper",
    "platform": "tensorflow_savedmodel",
    "backend": "tensorflow",
    "version_policy": {
        "latest": {
            "num_versions": 1
        }
    },
    "max_batch_size": 0,
    "input": [
        {
            "name": "image_tensor",
            "data_type": "TYPE_FP32",
            "format": "FORMAT_NONE",
            "dims": [
                -1,
                456,
                456,
                3
            ],
            "is_shape_tensor": false,
            "allow_ragged_batch": false
        }
    ],
    "output": [
        {
            "name": "logits",
            "data_type": "TYPE_FP32",
            "dims": [
                -1,
                152
            ],
            "label_filename": "",
            "is_shape_tensor": false
        }
    ],
    "optimization": {
        "priority": "PRIORITY_DEFAULT",
        "input_pinned_memory": {
            "enable": true
        },
        "output_pinned_memory": {
            "enable": true
        }
    },
    "instance_group": [
        {
            "name": "super-grouper",
            "kind": "KIND_GPU",
            "count": 1,
            "gpus": [
                0
            ],
            "profile": []
        }
    ],
    "default_model_filename": "model.savedmodel",
    "cc_model_filenames": {},
    "metric_tags": {},
    "parameters": {},
    "model_warmup": []
}

Triton Information

Triton client 2.2.0

To Reproduce

So I can't seem to get the input shape into a form that is accepted. First I tried a batch of 8 images, following the image_client.py example, and np.stack()'ed them before sending (a minimal sketch of the call pattern follows the errors below):

tritonclientutils.InferenceServerException: got unexpected numpy array shape [8, 456, 456, 3], expected [-1, 456, 456, 3]

Then I tried using np.expand_dims first, as this worked in v1; still no joy:

… got unexpected numpy array shape [8, 1, 456, 456, 3], expected [-1, 456, 456, 3]

Then I reduced the batch to a single image (our use case anyway):

… got unexpected numpy array shape [1, 456, 456, 3], expected [-1, 456, 456, 3]
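
Roughly, the failing call pattern looks like this (dummy data here; the key detail is that the shape passed to InferInput is taken straight from the config's dims, -1 included, rather than from the stacked array's shape):

import numpy as np
import tritongrpcclient

# Eight dummy 456x456x3 float32 images stand in for the real preprocessed batch.
images = [np.zeros((456, 456, 3), dtype=np.float32) for _ in range(8)]
batched = np.stack(images, axis=0)  # shape (8, 456, 456, 3)

# Shape copied from the model config above: [-1, 456, 456, 3]
infer_input = tritongrpcclient.InferInput("image_tensor", [-1, 456, 456, 3], "FP32")

# Fails in the client library with:
#   got unexpected numpy array shape [8, 456, 456, 3], expected [-1, 456, 456, 3]
infer_input.set_data_from_numpy(batched)

Declaring the actual array shape (batched.shape) instead gets past this client-side check; whether the server then accepts the request is the max_batch_size / dims question discussed in the comments below.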

Expected behavior

One of these methods to work!

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 9 (3 by maintainers)

Top GitHub Comments

1 reaction
ghost commented, Sep 10, 2020

Thanks @tanmayv25. I realized that once I set max_batch_size > 0, it removes the -1 dim from the configuration JSON for me (which I was using for setting the input dims). Thanks again!
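
For reference, a sketch of what the relevant parts of the config look like after that change. The max_batch_size of 8 is just an example value; the point is that with max_batch_size > 0 the dims describe a single image, with no leading -1, since Triton now owns the batch dimension:

{
    "max_batch_size": 8,
    "input": [
        {
            "name": "image_tensor",
            "data_type": "TYPE_FP32",
            "dims": [456, 456, 3]
        }
    ],
    "output": [
        {
            "name": "logits",
            "data_type": "TYPE_FP32",
            "dims": [152]
        }
    ]
}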

0 reactions
ghost commented, Sep 11, 2020

Sure, here is my client code for you:

    def infer(
        self,
        model_name: str,
        data: List[np.ndarray],
        batch_size_override=None,
    ):
        """
        Inference on Triton Inference Server 20.08.

        Assumes: numpy as np, timeit, concurrent.futures, functools.partial,
        collections.defaultdict, tritongrpcclient, triton_to_np_dtype from
        tritonclientutils, and a chunks() helper that splits a list into
        batch-sized pieces.
        """
        model = self.models[model_name]

        # Cap the batch size at what the model config allows.
        batch_size = (
            batch_size_override
            if batch_size_override and batch_size_override < model['max_batch_size']
            else model['max_batch_size']
        )

        # Strip the "TYPE_" prefix, e.g. "TYPE_FP32" -> "FP32".
        triton_type = model['input'][0]['data_type'][5:]
        np_input_type = triton_to_np_dtype(triton_type)  # assume 1 input layer
        data = [image.astype(np_input_type) for image in data]

        self.log.debug(f"\nBATCH SIZE: {batch_size}, model: {model_name}\n")

        t = timeit.default_timer()

        # Inference! Chunk the input into batches and execute the batches concurrently.
        with concurrent.futures.ThreadPoolExecutor() as executor:

            def make_call(client, model_name, triton_type, input_layer, batch):
                if len(batch) > 1:
                    batched_image_data = np.stack(batch, axis=0)
                else:
                    batched_image_data = batch[0]  # np.expand_dims(batch[0], axis=0)

                # Declare the shape of the data actually being sent,
                # not the dims from the model config.
                infer_input = tritongrpcclient.InferInput(input_layer, batched_image_data.shape, triton_type)
                infer_input.set_data_from_numpy(batched_image_data)

                return client.infer(model_name, [infer_input])

            func = partial(
                make_call,
                self.client,
                model_name,
                triton_type,
                model['input'][0]['name'],
            )
            answers = executor.map(func, chunks(data, batch_size))

        print(f"Time taken: {timeit.default_timer() - t}")

        # Merge the outputs from each batch into per-layer lists.
        results = defaultdict(list)
        for a in answers:
            for layer in model['output']:
                name = layer["name"]
                results[name].append(a.as_numpy(name))

        return results
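
The chunks() helper isn't shown above; a minimal version consistent with how it is called here (it assumes batch_size > 0) would be:

def chunks(items, size):
    """Yield successive size-sized slices of items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]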

I refactored my client to be more in line with one of your examples, and I'm getting correct inference when the model has max_batch_size=0 and only 3 dims. But I can't get inference for models with a -1 batch dimension, even after setting max_batch_size > 0 while the config still has the -1 in its 4-D dims. I always get back from the server:

tritonclientutils.InferenceServerException: [StatusCode.INVALID_ARGUMENT] unexpected shape for input 'image_tensor' for model 'meta_0'. Expected [-1,-1,456,456,3], got [8,456,456,3]

HA! I did manage to get a batch through, though, when the model config has max_batch_size=0 with `dims: [-1, 456, 456, 3]` and I force the batch size in the client to 8. I guess that's because the model is handling the batching and not Triton, so that makes sense.
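
To summarize how the config seems to map to the shape the server expects (my reading of the errors in this thread, not official documentation; N is the client-side batch size):

# max_batch_size: 0, dims: [-1, 456, 456, 3]  -> server expects [N, 456, 456, 3]
#     (no Triton batching; the model's own batch dim lives inside dims)
# max_batch_size: 8, dims: [-1, 456, 456, 3]  -> server expects [N, -1, 456, 456, 3]
#     (Triton prepends its own batch dim on top of the -1, hence the 5-D error above)
# max_batch_size: 8, dims: [456, 456, 3]      -> server expects [N, 456, 456, 3]
#     (Triton owns the batch dim; dims describe a single image)
#
# Either way, the InferInput shape should be the shape of the data actually sent
# (batched_image_data.shape in the client above), not the dims copied from the config.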


