OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did not convert this function. This might indicate you are trying to use an unsupported feature.
I wrote the function below:
```python
import numpy as np
import cirq

def convert_to_circuit(image):
    # values = tf.reshape(image, [-1])
    images = image.numpy()
    values = np.ndarray.flatten(images)
    qubits = cirq.GridQubit.rect(4, 4)
    circuit = cirq.Circuit()
    for i in range(4):
        for j in range(4):
            k = (4 * i) + j
            circuit.append(cirq.H(cirq.GridQubit(i, j)))
            circuit.append(cirq.Rx(rads=values[k]).on(cirq.GridQubit(i, j)))
    return circuit
```
To feed tensorflow_quantum.layers.PQC, I need to convert a batch of images into these circuits and save them in a list. When I run this inside a function decorated with @tf.function, I get an OperatorNotAllowedInGraphError:
```python
@tf.function
def train_step(images):
    noise = tf.random.normal((BATCH_SIZE, noise_dim, 1))
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_in = [convert_to_circuit(x) for x in images]
        real_input = tfq.convert_to_tensor(real_in)
        fake_in = [convert_to_circuit(x) for x in generated_images]
        fake_input = tfq.convert_to_tensor(fake_in)
        real_output = discriminator(real_input, training=True)
        fake_output = discriminator(fake_input, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
```
The error is shown below:
```
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError            Traceback (most recent call last)
<ipython-input-77-d152560ca122> in <module>
----> 1 train(train_dataset, EPOCHS)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1145       except Exception as e:  # pylint:disable=broad-except
   1146         if hasattr(e, "ag_error_metadata"):
-> 1147           raise e.ag_error_metadata.to_exception(e)
   1148         else:
   1149           raise

OperatorNotAllowedInGraphError: in user code:

    File "<ipython-input-75-acefef8060ba>", line 12, in train_step  *
        real_in = tf.numpy_function(convert_to_circuit, images, tf.float32)

    OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
The error also comes with a comment that “This might indicate you are trying to use an unsupported feature”.
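For context, the restriction reproduces without any TFQ code at all. The following minimal sketch is an illustration, not code from the thread: AutoGraph rewrites plain `for` statements over tensors into graph loops, but it does not convert list comprehensions, so the comprehension over `images` calls `tf.Tensor.__iter__` at trace time, which is disallowed in graph mode. (`image.numpy()` would fail at trace time too, since symbolic tensors have no `.numpy()`.)

```python
import tensorflow as tf

@tf.function
def bad_step(images):
    # A list comprehension runs as plain Python during tracing, so this
    # iteration over a symbolic tensor raises OperatorNotAllowedInGraphError.
    return [tf.reduce_sum(x) for x in images]

bad_step(tf.random.normal((8, 4, 4)))  # raises the error quoted above
```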
Other information:
- tensorflow-quantum==0.7.2
- tensorflow==2.8.2
- cirq-core==0.13.1
- cirq-google==0.13.1

As with most things in TFQ, it's probably easiest to use a custom layer (a valuable lesson I learned from @MichaelBroughton). I made up a discriminator circuit and generator, but the workflow should look something like the code below. It encodes the output of the generator using a ControlledPQC, in which the weights of the encoder circuit are controlled by said output. This is structurally combined with the discriminator, whose weights are learned (and managed) by TFQ inside the custom layer. This code produces gradients for me, and is wrappable in a `tf.function` decorator for speed. Hopefully this is a step in the direction of what you are looking for. The losses are copied from https://www.tensorflow.org/tutorials/generative/dcgan, and the other code is mostly Frankenstein-ed from https://github.com/lockwo/quantum_computation/blob/master/TFQ/RL_QVC/atari_qddqn.py and https://github.com/lockwo/quantum_computation/blob/master/TFQ/RL_QVC/policies.py.
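The code block referenced above did not survive in this copy of the thread, so what follows is a reconstruction in the same spirit rather than the original snippet: a minimal sketch assuming a 4x4 qubit grid (to match `convert_to_circuit`), a made-up hardware-efficient discriminator ansatz, and an illustrative layer name `QuantumDiscriminator`. It relies on `tfq.layers.ControlledPQC` feeding its parameter input in alphabetical symbol order, which the zero-padded symbol names below preserve.

```python
import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

class QuantumDiscriminator(tf.keras.layers.Layer):
    """Symbolic encoder + trainable discriminator in one ControlledPQC."""

    def __init__(self, rows=4, cols=4, **kwargs):
        super().__init__(**kwargs)
        qubits = cirq.GridQubit.rect(rows, cols)
        n = len(qubits)

        # Encoder: H then Rx on every qubit, with the Rx angles left
        # symbolic so the per-example input (pixels or generator output)
        # controls them. Zero-padded names keep alphabetical order numeric.
        enc = [sympy.Symbol(f'enc{i:02d}') for i in range(n)]
        encoder = cirq.Circuit(
            [cirq.H(q) for q in qubits] +
            [cirq.rx(enc[i]).on(q) for i, q in enumerate(qubits)])

        # A made-up discriminator ansatz: trainable single-qubit rotations
        # around a chain of CZ entanglers.
        w = [sympy.Symbol(f'w{i:02d}') for i in range(2 * n)]
        disc = cirq.Circuit(
            [cirq.ry(w[i]).on(q) for i, q in enumerate(qubits)] +
            [cirq.CZ(a, b) for a, b in zip(qubits, qubits[1:])] +
            [cirq.rz(w[n + i]).on(q) for i, q in enumerate(qubits)])

        # ControlledPQC expects one parameter column per symbol, sorted by
        # name: here all enc symbols first, then all w symbols.
        self.pqc = tfq.layers.ControlledPQC(
            encoder + disc, operators=cirq.Z(qubits[0]))
        self.disc_weights = self.add_weight(
            name='disc_weights', shape=(1, 2 * n),
            initializer=tf.random_uniform_initializer(0, 2 * np.pi),
            trainable=True)
        self.empty_circuit = tfq.convert_to_tensor([cirq.Circuit()])

    def call(self, angles):
        # angles: [batch, n] float tensor (flattened pixels for real data,
        # generator outputs for fake data), used directly as Rx angles.
        batch = tf.shape(angles)[0]
        circuits = tf.repeat(self.empty_circuit, repeats=batch)
        weights = tf.tile(self.disc_weights, [batch, 1])
        return self.pqc([circuits, tf.concat([angles, weights], axis=1)])
```

With a layer like this, both real and fake inputs to the discriminator are plain float tensors (e.g. `tf.reshape(images, (batch_size, 16))` for real images), so `train_step` never builds per-example `cirq.Circuit` objects, traces cleanly under `tf.function`, and gradients flow through the encoder angles back into the generator.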
I see now. That's not a question with a simple answer; a ton of factors go into running things on real hardware. 16 qubits is definitely possible, but it depends a lot on depth and the number of two-qubit gates: 16 qubits at depth 1 with no two-qubit gates will be easy, 16 qubits with 200 two-qubit gates will not be. There is a decent amount of literature on the limits of hardware (e.g. https://arxiv.org/abs/2202.11045) that might be of interest to you.