
GRU of PyDirectML throws an error when the parameters of the ONNX example are used.

See original GitHub issue

I wrote a simple example using the GRU operator:

import pydirectml as dml
import numpy as np

steps = 2
batchSize = 3
inputSize = 3
hiddenSize = 5
numDirections = 1

input_bindings = []

def append_input_tensor(builder: dml.GraphBuilder, input_bindings: list, input_tensor: dml.TensorDesc):
    tensor = dml.input_tensor(builder, len(input_bindings), input_tensor)
    input_bindings.append(dml.Binding(tensor, np.zeros(tensor.get_output_desc().sizes)))
    return tensor

# Create a GPU device, and build a model graph.
device = dml.Device(True, True)
builder = dml.GraphBuilder(device)
data_type = dml.TensorDataType.FLOAT32
flags = dml.TensorFlags.OWNED_BY_DML

input_data = [1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
input_array = np.array(input_data, np.float32)
input = dml.input_tensor(builder, 0, dml.TensorDesc(data_type, [1, steps, batchSize, inputSize]))
input_bindings.append(dml.Binding(input, input_array))
weight = append_input_tensor(builder, input_bindings, dml.TensorDesc(data_type, flags, [1, numDirections, 3 * hiddenSize, inputSize]))
recurrentWeight = append_input_tensor(builder, input_bindings, dml.TensorDesc(data_type, flags, [1, numDirections, 3 * hiddenSize, hiddenSize]))

gru = dml.gru(
  input = input,
  weight = weight,
  recurrence = recurrentWeight,
  activation_descs = [dml.FusedActivation(dml.OperatorType.ACTIVATION_SIGMOID), dml.FusedActivation(dml.OperatorType.ACTIVATION_SIGMOID)],
  direction = dml.RecurrentNetworkDirection.FORWARD,
  output_options = dml.OutputOptions.Single
)

# Compile the expression graph into a compiled operator
op = builder.build(dml.ExecutionFlags.NONE, [gru])

# Compute the result
output_data = device.compute(op, input_bindings, [gru])
output_tensor = np.array(output_data[0], np.float32)

print(output_tensor)

But it will throw the following error:

Traceback (most recent call last):
  File ".\Python\samples\gru.py", line 36, in <module>
    output_options = dml.OutputOptions.Single
RuntimeError: m_device->CreateOperator(&opDesc, IID_PPV_ARGS(&op))

Is there a problem with the example I wrote? BTW, is there any sample code for GRU usage?
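For reference, the recurrence the GRU operator computes can be sketched in plain NumPy with the same shapes as the example above. This is only an illustrative reference under assumptions (ONNX-style gate order z/r/h and update rule Ht = (1 - z) * h_cand + z * Ht-1, with random placeholder weights), not pydirectml's or DirectML's actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, R):
    # W: (3*hidden, input), R: (3*hidden, hidden); assumed gate order: z, r, h~
    Wz, Wr, Wh = np.split(W, 3)
    Rz, Rr, Rh = np.split(R, 3)
    z = sigmoid(x @ Wz.T + h_prev @ Rz.T)              # update gate
    r = sigmoid(x @ Wr.T + h_prev @ Rr.T)              # reset gate
    h_cand = np.tanh(x @ Wh.T + (r * h_prev) @ Rh.T)   # candidate state
    # ONNX convention: Ht = (1 - zt) * h~t + zt * Ht-1
    return (1.0 - z) * h_cand + z * h_prev

# Same dimensions as the pydirectml example
steps, batch, input_size, hidden_size = 2, 3, 3, 5
rng = np.random.default_rng(0)
x_seq = rng.standard_normal((steps, batch, input_size)).astype(np.float32)
W = rng.standard_normal((3 * hidden_size, input_size)).astype(np.float32)
R = rng.standard_normal((3 * hidden_size, hidden_size)).astype(np.float32)

h = np.zeros((batch, hidden_size), dtype=np.float32)
for t in range(steps):
    h = gru_step(x_seq[t], h, W, R)

print(h.shape)  # prints (3, 5)
```

The final hidden state has shape (batch, hiddenSize), which matches the single-output case (output_options = Single) requested in the example.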

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments:5 (5 by maintainers)

Top GitHub Comments

1 reaction
miaobin commented, Apr 22, 2022

@miaobin Did the debug layer enlighten?

Yes, with the DirectML debug layer we found two issues. First, there was a bug in how the inputs of GRU were created; I submitted a PR to fix it, and it has been merged. Second, we noted that the DML_TENSOR_FLAGS of all tensor members of the DML_GRU_OPERATOR_DESC structure must be set to DML_TENSOR_FLAG_NONE. I think this issue can be closed. Thanks a lot!
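Applied to the original example, the maintainer's second point suggests dropping the OWNED_BY_DML flag from the GRU tensor descriptions. A hedged, untested sketch of that change (assuming pydirectml exposes a dml.TensorFlags.NONE value mirroring DML_TENSOR_FLAG_NONE; this fragment only replaces the corresponding lines of the example above):

```
# Before (from the example above):
#   flags = dml.TensorFlags.OWNED_BY_DML
# After: GRU tensor descs must use DML_TENSOR_FLAG_NONE
flags = dml.TensorFlags.NONE  # assumed enum name, mirroring DML_TENSOR_FLAG_NONE

weight = append_input_tensor(builder, input_bindings,
    dml.TensorDesc(data_type, flags, [1, numDirections, 3 * hiddenSize, inputSize]))
recurrentWeight = append_input_tensor(builder, input_bindings,
    dml.TensorDesc(data_type, flags, [1, numDirections, 3 * hiddenSize, hiddenSize]))
```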

1 reaction
huningxin commented, Oct 13, 2021

@miaobin, did you happen to try DirectML debug layer and find more details?


