
Training: Passing tensors from tf to tfq using cirq to train

See original GitHub issue

Hello

Is there a way to pass a tensor in a form that is usable by cirq when it is passed from an upstream layer during training?

I have a tensor from a Dense layer. I want to run it through a custom layer and perform some tensor operations. This layer then outputs/returns a cirq circuit I designed. We can remove the functions in the layer or the layer entirely to make the problem more clear. Is there a way to pass a tensor into a cirq circuit using tfq/tf/cirq so that it can be trained?

I was assuming that, since TensorFlow is graph based, none of the values would be accessible during training (i.e., you cannot call get_weights() or evaluate the network in the middle of training to produce a numpy array/float value to pass into the circuit). Is there any way to go about this using any of the packages inside of tfq/tf/cirq/keras? I have tried using tfq.layers.Expectation() by passing the tensor as the initializer, and I have tried to find a solution using cirq functions as well, but have not had success. I was thinking of also posting this in the cirq issues, but (at least right now) I believe this is more related to tfq than to cirq.

Please let me know if there is anything below that is unclear or needs further explanation. Thanks in advance.


Code


I have tried to input a tensor into cirq and gotten this:

    place = tf.compat.v1.placeholder(tf.float32, shape=(4, 1))
    placeholder = cirq.ry(place)(cirq.GridQubit(0, 0))
    placeholder
    ## -> cirq.ry(np.pi*<tf.Tensor 'truediv_25:0' shape=(4, 1) dtype=float32>).on(cirq.GridQubit(0, 0))

When it is embedded inside of a circuit and called like this it gives an error:

    circuit1 = cirq.Circuit()
    circuit1.append(cirq.ry(place)(cirq.GridQubit(0, 0)))
    circuit1
    ## -> TypeError: int() argument must be a string, a bytes-like object or a number, not 'Tensor'

When it is called inside of the layer using `tfq.convert_to_tensor`, the error is:

TypeError: can't pickle _thread.RLock objects


The functions from the layer:

  • `GetAngles3(weights)`: creates tensors with shape=(4,)
  • `CreateCircuit2(x)`: creates a cirq circuit using `tfq.convert_to_tensor`, following the documentation


Here is the layer itself:

class ASDF2(tf.keras.layers.Layer):
    def __init__(self, outshape=4, qubits=cirq.GridQubit.rect(1, 4)):
        super(ASDF2, self).__init__()
        self.outshape = outshape
        self.qubits = qubits

    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel',
                                    shape=(int(input_shape[1]), self.outshape),
                                    trainable=True)

        super(ASDF2, self).build(input_shape)


    def call(self, inputs, **kwargs):
        try:
            x = GetAngles3(inputs)
            y = CreateCircuit2(x)
            return y
        except:
            # NOTE: the bare except silently swallows any failure in the
            # cirq/tfq path and falls back to casting the input to a string.
            x = tf.dtypes.cast(inputs, tf.dtypes.string)
            return tf.keras.backend.squeeze(x, 0)

Here is the output when the model runs:

    Train on 78 samples, validate on 26 samples
    Epoch 1/25
    ****************inputs
    Tensor("model/dense/Sigmoid:0", shape=(None, 4), dtype=float32)
    ****************GetAngles
    Tensor("model/dense/Sigmoid:0", shape=(None, 4), dtype=float32)
    [<tf.Tensor 'model/asd_f2/Asin:0' shape=(None, 4) dtype=float32>,
     <tf.Tensor 'model/asd_f2/mul:0' shape=(None, 4) dtype=float32>,
     <tf.Tensor 'model/asd_f2/mul_1:0' shape=(None, 4) dtype=float32>,
     <tf.Tensor 'model/asd_f2/mul_2:0' shape=(None, 4) dtype=float32>,
     <tf.Tensor 'model/asd_f2/mul_3:0' shape=(None, 4) dtype=float32>]
    ****************createCircuit
    WARNING:tensorflow:Gradients do not exist for variables ['conv1d/kernel:0', 'conv1d/bias:0', 'dense/kernel:0', 'dense/bias:0', 'asd_f2/kernel:0'] when minimizing the loss.

    (the same inputs/GetAngles/createCircuit trace and gradient warning then repeat verbatim a second time)


Here is a plot of the model, where ASDF2 is the layer I created: [model plot image]

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 17 (1 by maintainers)

Top GitHub Comments

MichaelBroughton commented on Aug 6, 2020 (1 reaction)

OK, it turns out that the error was indeed a good old-fashioned shape mismatch. I added a little snippet in the notebook to show where:

    print('Required number of parameters for exp layer:', (None, len(data_syms + model_syms)))
    print('Provided number of parameters for exp layer:', tf.concat([dense_in, learnable_gamma], axis=1).shape)
    print('Should be equal on 2nd dim.')

Output of the above is (None, 34) and (8, 12) (or something). The 2nd dims need to line up so that every free symbol has exactly one value to be placed inside of it.
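That check can be reproduced with plain TensorFlow, using hypothetical shapes for the dense output and the learnable parameters:

```python
import tensorflow as tf

n_free_symbols = 12                 # stand-in for len(data_syms + model_syms)
dense_in = tf.zeros((8, 4))         # hypothetical upstream Dense output
learnable_gamma = tf.zeros((8, 8))  # hypothetical learnable parameters

combined = tf.concat([dense_in, learnable_gamma], axis=1)

# Every free symbol needs exactly one value per batch entry,
# so the 2nd dim must equal the number of free symbols.
assert combined.shape[1] == n_free_symbols
```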

Does this clear things up now?

MichaelBroughton commented on Aug 2, 2020 (1 reaction)

Glad I was able to help clear things up 😃

could you explain why/if it's OK to pass both the classical and quantum versions of the data as the input

You can pass whatever you want into tensorflow compute graphs as inputs. It’s OK to pass associated classical information to go along with your quantum circuits. I would just be careful to not accidentally do all the learning on a classical only model that just learns to ignore the harder to learn quantum parts.

what exactly is tfq.layers.Expectation doing to the tensor?

I would check out this: https://www.tensorflow.org/quantum/api_docs/python/tfq/layers/Expectation which goes over how tfq.layers.Expectation works and the types of tensor input formats it can accept. tfq.layers.Expectation is a little lower level than a PQC/ControlledPQC layer, so in cases where you need more fine-grained control it can help out a lot.
