
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did not convert this function. This might indicate you are trying to use an unsupported feature.


I wrote the following function:

import numpy as np
import cirq

def convert_to_circuit(image):
  # values = tf.reshape(image, [-1])
  images = image.numpy()
  values = np.ndarray.flatten(images)
  qubits = cirq.GridQubit.rect(4, 4)
  circuit = cirq.Circuit()
  for i in range(4):
    for j in range(4):
      k = (4 * i) + j
      circuit.append(cirq.H(cirq.GridQubit(i, j)))
      circuit.append(cirq.Rx(rads=values[k]).on(cirq.GridQubit(i, j)))
  return circuit

To feed a batch of images into tensorflow_quantum.layers.PQC, I need to convert each image in the batch into one of these circuits and collect them in a list. When I run this inside a function decorated with @tf.function, I get an OperatorNotAllowedInGraphError.

@tf.function
def train_step(images):
    noise = tf.random.normal((BATCH_SIZE, noise_dim, 1))

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        # Iterating over the symbolic batch tensors in these Python list
        # comprehensions is what triggers the error below.
        real_in = [convert_to_circuit(x) for x in images]
        real_input = tfq.convert_to_tensor(real_in)

        fake_in = [convert_to_circuit(x) for x in generated_images]
        fake_input = tfq.convert_to_tensor(fake_in)

        real_output = discriminator(real_input, training=True)
        fake_output = discriminator(fake_input, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

The error is shown below:

---------------------------------------------------------------------------
OperatorNotAllowedInGraphError            Traceback (most recent call last)
<ipython-input-77-d152560ca122> in <module>
----> 1 train(train_dataset, EPOCHS)

2 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1145           except Exception as e:  # pylint:disable=broad-except
   1146             if hasattr(e, "ag_error_metadata"):
-> 1147               raise e.ag_error_metadata.to_exception(e)
   1148             else:
   1149               raise

OperatorNotAllowedInGraphError: in user code:

    File "<ipython-input-75-acefef8060ba>", line 12, in train_step  *
        real_in = tf.numpy_function(convert_to_circuit, images, tf.float32)

    OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.

The error message also carries the hint "This might indicate you are trying to use an unsupported feature".
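For context, here is a minimal repro of the failure (my sketch, not part of the issue): the iteration happens in the list comprehensions over images and generated_images. AutoGraph rewrites plain for statements but, to my knowledge, does not convert comprehensions, so in graph mode the comprehension calls tf.Tensor.__iter__ directly, which is disallowed.

import tensorflow as tf

@tf.function
def bad(images):
    # The comprehension iterates the symbolic batch tensor at trace time,
    # which graph mode forbids.
    return [x * 2 for x in images]

try:
    bad(tf.random.uniform([4, 16]))
except Exception as e:  # OperatorNotAllowedInGraphError
    print(type(e).__name__)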

Other information:

tensorflow-quantum==0.7.2
tensorflow==2.8.2
cirq-core==0.13.1
cirq-google==0.13.1

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 16 (9 by maintainers)

Top GitHub Comments

1 reaction
lockwo commented, Aug 23, 2022

As with most things in TFQ, it’s probably easiest to use a custom layer (a valuable lesson I learned from @/MichaelBroughton). I made up a discriminator circuit and generator, but the workflow should look something like the code below. It encodes the output of the generator using a ControlledPQC, in which the weights of the encoder circuit are controlled by that output. This is structurally combined with the discriminator, whose weights are learned (and managed) by TFQ inside the custom layer. This code produces gradients for me, and it can be wrapped in a tf.function decorator for speed. Hopefully this is a step in the direction of what you are looking for. The losses are copied from https://www.tensorflow.org/tutorials/generative/dcgan and the rest is mostly Frankenstein-ed from https://github.com/lockwo/quantum_computation/blob/master/TFQ/RL_QVC/atari_qddqn.py and https://github.com/lockwo/quantum_computation/blob/master/TFQ/RL_QVC/policies.py

import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

def convert_to_circuit(params):
  qubits = cirq.GridQubit.rect(4, 4)
  circuit = cirq.Circuit()
  for i in range(4):
    for j in range(4):
      k = (4 * i) + j
      circuit.append(cirq.H(cirq.GridQubit(i, j)))
      circuit.append(cirq.Rx(rads=params[k]).on(cirq.GridQubit(i, j)))
  return circuit

def u_ent(qubits, ps):
    c = cirq.Circuit()
    for i in range(len(qubits)):
        c += cirq.rz(ps[i]).on(qubits[i])
    for i in range(len(qubits)):
        c += cirq.ry(ps[i + len(qubits)]).on(qubits[i])
    for i in range(len(qubits) - 1):
        c += cirq.CZ(qubits[i], qubits[i+1])
    c += cirq.CZ(qubits[-1], qubits[0])
    return c

class Hybrid_Dis(tf.keras.layers.Layer):
    def __init__(self) -> None:
        super().__init__()
        ops = [cirq.Z(cirq.GridQubit(0, 0))]
        convert_params = sympy.symbols("q0:16")
        qubits = cirq.GridQubit.rect(4, 4)
        ent_params = sympy.symbols("e0:32")
        # Encoder circuit whose 16 rotation angles come from the layer input,
        # composed with the discriminator ansatz whose 32 angles are trainable.
        convert_circuit = convert_to_circuit(convert_params)
        dis_circuit = u_ent(qubits, ent_params)
        circuit = convert_circuit + dis_circuit
        self.quantum_operation = tfq.layers.ControlledPQC(circuit, ops, differentiator=tfq.differentiators.Adjoint())
        self.quantum_weights = tf.Variable(initial_value=np.random.uniform(0, 2 * np.pi, len(ent_params)), dtype="float32", trainable=True)
        self.circuit_tensor = tfq.convert_to_tensor([cirq.Circuit()])

    def call(self, inputs):
        # ControlledPQC takes [circuits, controller values]: concatenate the
        # per-example inputs with a tiled copy of the trainable weights so
        # each example in the batch gets the full set of controller values.
        circuit_batch_dim = tf.gather(tf.shape(inputs), 0)
        tiled_b = tf.tile(tf.expand_dims(self.quantum_weights, 0), [circuit_batch_dim, 1])
        quantum_inputs = tf.concat([inputs, tiled_b], axis=1)
        tiled_circuits = tf.tile(self.circuit_tensor, [circuit_batch_dim])
        quantum_output = self.quantum_operation([tiled_circuits, quantum_inputs])
        return quantum_output

generator = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(4, 4)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(16)
])

discriminator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    Hybrid_Dis(),
])

dis_opt = tf.keras.optimizers.Adam()
gen_opt = tf.keras.optimizers.Adam()

@tf.function
def grad(noise, real_input):
  with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
    generated_images = generator(noise, training=True)

    real_output = discriminator(real_input, training=True)
    fake_output = discriminator(generated_images, training=True)

    gen_loss = generator_loss(fake_output)
    disc_loss = discriminator_loss(real_output, fake_output)

  grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
  gen_opt.apply_gradients(zip(grads, generator.trainable_variables))
  grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
  dis_opt.apply_gradients(zip(grads, discriminator.trainable_variables))

noise = tf.random.uniform(shape=[10, 4, 4])
real_input = tf.random.uniform(shape=[10, 4, 4])
real_input = tf.reshape(real_input, [10, 16])
grad(noise, real_input)
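The key design choice here, as I read it: because every example runs the same parameterized circuit, no cirq.Circuit objects are built inside the tf.function at all. Only the controller angles vary per example, so the whole step stays in graph mode and gradients flow through the ControlledPQC, avoiding the iteration error entirely.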
0 reactions
lockwo commented, Aug 24, 2022

I see now. That’s not a question with a simple answer; there are a ton of factors that go into running things on real hardware. 16 qubits is definitely possible, but it depends a lot on the depth and the number of two-qubit gates. 16 qubits at depth 1 with no two-qubit gates will be easy; 16 qubits with 200 two-qubit gates will not be. There is a decent amount of literature on the limits of hardware (e.g. https://arxiv.org/abs/2202.11045) that might be of interest to you.
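As a concrete illustration (my sketch, not from the thread): cirq makes it easy to measure the two quantities mentioned above, circuit depth and two-qubit gate count, for any candidate circuit before worrying about hardware.

import cirq

# Hypothetical example circuit: a layer of Hadamards plus a chain of CZs.
qubits = cirq.GridQubit.rect(4, 4)
circuit = cirq.Circuit(cirq.H(q) for q in qubits)
circuit += (cirq.CZ(qubits[i], qubits[i + 1]) for i in range(len(qubits) - 1))

# Depth = number of moments after repacking operations as early as possible.
depth = len(cirq.Circuit(circuit.all_operations()))
two_qubit_gates = sum(1 for op in circuit.all_operations() if len(op.qubits) == 2)
print(f"depth = {depth}, two-qubit gates = {two_qubit_gates}")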
