
How to send FP16 NCHW data to the NeuralNetwork node

I have a network with this input: Dimensions [3, 224, 224, 1], DataType.FP16, StorageOrder.NCHW.

I want to send data to the input from the host.

One method is:

nn_data = dai.NNData()
nn_data.setData(nn_input_frame)
nn_in.send(nn_data)

But it looks like setData accepts uint8 only.
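One possible workaround for the setData route: since setData takes a flat uint8 buffer, the FP16 values can be reinterpreted as raw bytes with a numpy view before sending. A minimal sketch, assuming nn_input_frame is a planar (CHW) float16 array and nn_in is the XLinkIn input queue from the pipeline setup that is not shown in the question:

import numpy as np
import depthai as dai

# Hypothetical host-side input: planar (C, H, W) float16 data matching
# the FP16 NCHW input described above.
nn_input_frame = np.zeros((3, 224, 224), dtype=np.float16)

nn_data = dai.NNData()
# View the FP16 values as raw bytes so they fit the uint8 buffer that
# setData expects; no numeric conversion happens here.
nn_data.setData(nn_input_frame.view(np.uint8).flatten())
# nn_in.send(nn_data)  # nn_in comes from the pipeline setup (not shown)

Whether the device then interprets the buffer as FP16 depends on the blob's declared input precision, so this only sketches the byte-packing side.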

Another method is:

nn_data = dai.ImgFrame()
nn_data.setTimestamp(time.monotonic())
nn_data.setType(dai.RawImgFrame.Type.BGRF16F16F16i)
nn_data.setWidth(224)
nn_data.setHeight(224)
nn_data.setFrame(nn_input_frame)
nn_in.send(nn_data)

But then the data order is NHWC, not NCHW.

Is there a way to do this?
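For the ImgFrame route, the RawImgFrame.Type enum also defines planar FP16 variants (the 'p' suffix, e.g. BGRF16F16F16p) alongside the interleaved 'i' types, which should keep channel-planar (NCHW-style) ordering. A hedged sketch of the second method using the planar type, assuming setFrame accepts the FP16 planar array unchanged and nn_in is the input queue from the pipeline setup that is not shown:

import time
import numpy as np
import depthai as dai

# Hypothetical host-side frame in planar (C, H, W) float16 order.
planar_frame = np.zeros((3, 224, 224), dtype=np.float16)

nn_data = dai.ImgFrame()
nn_data.setTimestamp(time.monotonic())
# 'p' = planar storage (one plane per channel), unlike the interleaved
# BGRF16F16F16i type used in the question.
nn_data.setType(dai.RawImgFrame.Type.BGRF16F16F16p)
nn_data.setWidth(224)
nn_data.setHeight(224)
nn_data.setFrame(planar_frame)
# nn_in.send(nn_data)  # nn_in from the pipeline setup (not shown)

Whether the NeuralNetwork node accepts this frame type on its input may depend on the depthai version, so treat this as something to try rather than a confirmed answer.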

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 10 (3 by maintainers)

Top GitHub Comments

2 reactions
tarekmuallim commented, Aug 4, 2022

Hi @Erol444, I think this should do the job. Thank you.

1 reaction
tarekmuallim commented, Aug 10, 2022

Hi @Erol444

It is working now. I used a .blob with FP32 precision.

I am passing the input as you suggested: np.array([input_frame], dtype=np.float32).view(np.int8). Now I am getting correct results from my model.

Thank you for your help.
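Putting the accepted workaround together: compile the blob with FP32 input precision, build the planar (NCHW) float32 array on the host, and reinterpret it as raw bytes before setData, as suggested in the thread. A minimal sketch; frame, nn_in and the 224x224 size are assumptions standing in for the pipeline setup that is not shown in the thread:

import numpy as np
import depthai as dai

# Hypothetical host image, already resized to the network input size.
frame = np.zeros((224, 224, 3), dtype=np.uint8)

# HWC -> CHW (planar order).
planar = frame.transpose(2, 0, 1)

nn_data = dai.NNData()
# Wrap in a list to add the batch dimension (NCHW), cast to float32,
# then view the float32 values as raw bytes, exactly as the thread suggests;
# the FP32 blob then reads them back as float32.
nn_data.setData(np.array([planar], dtype=np.float32).view(np.int8))
# nn_in.send(nn_data)  # nn_in from the pipeline setup (not shown)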
