
A faster displacement operator avoiding matrix exponentiation


Matrix exponentiation is a costly operation. See [1] Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later.

In quantum optics, the displacement operator is one of the most basic operators. It is used to create coherent states from the vacuum, and it forms one of the two gates for universal control of a cavity (displacement + SNAP gates) [2] Efficient cavity control with SNAP gates.

When writing an optimisation routine that finds the best displacement parameters, in a scheme similar to the paper above [2], it would be nice to compute the operator faster without the matrix exponentiation that qutip currently performs: https://github.com/qutip/qutip/blob/master/qutip/operators.py#L732
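
For reference, the linked implementation boils down to exponentiating the truncated generator. A rough paraphrase (not the verbatim code from operators.py) looks like this:

import numpy as np
import qutip

# Roughly what qutip.displace does today: build the truncated generator and
# exponentiate it directly with Qobj.expm(). Shown only to make the cost explicit.
def displace_via_expm(N, alpha):
    a = qutip.destroy(N)
    return (alpha * a.dag() - np.conj(alpha) * a).expm()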

I have some notes from a colleague who derived an analytical formula for the matrix elements of the displacement operator that avoids matrix exponentiation [3]: Displacement_operator.pdf

A PR to implement this in QuTiP would be great. We could first write a _displace_analytical function that calculates the displacement matrix using the SciPy Laguerre polynomials, and expose it as an option, e.g. displace(N, alpha, offset, method='analytical').
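
For context, the standard closed form for the Fock-basis matrix elements (Cahill & Glauber) is, for m >= n, <m|D(alpha)|n> = sqrt(n!/m!) * alpha^(m-n) * exp(-|alpha|^2/2) * L_n^(m-n)(|alpha|^2); the notes in [3] may use a different but equivalent expression. A minimal sketch of such a helper (function name taken from the proposal above; not an agreed implementation) using scipy.special.eval_genlaguerre could look like:

import numpy as np
from scipy.special import eval_genlaguerre, gammaln

def _displace_analytical(N, alpha):
    # Fill the N x N displacement matrix in the Fock basis from the
    # closed-form matrix elements; log-factorials keep the ratios stable.
    # No matrix exponential is involved.
    x = np.abs(alpha) ** 2
    D = np.empty((N, N), dtype=complex)
    for m in range(N):
        for n in range(N):
            if m >= n:
                D[m, n] = (np.exp(0.5 * (gammaln(n + 1) - gammaln(m + 1)) - x / 2)
                           * alpha ** (m - n) * eval_genlaguerre(n, m - n, x))
            else:
                D[m, n] = (np.exp(0.5 * (gammaln(m + 1) - gammaln(n + 1)) - x / 2)
                           * (-np.conj(alpha)) ** (n - m) * eval_genlaguerre(m, n - m, x))
    return D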

Could it also come in handy for optimal control? @ajgpitch

In the paper above [2], the authors use gradient descent to fine-tune the parameters of a gate sequence containing displacement gates and SNAP gates in order to prepare a target bosonic quantum state.

We wish to do similar things for @araza6's GSoC project.

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 17 (17 by maintainers)

Top GitHub Comments

2 reactions
jakelishman commented, Jun 19, 2020

Ok, that makes sense to me. As long as you’re making its constructor public, it shouldn’t start with an underscore (i.e. just be class Displacer or whatever), but other than that, I can certainly go along with what you’re saying.

1 reaction
jakelishman commented, Jun 18, 2020

I was just thinking about this again and came up with a good speed-up for the truncated Hilbert space. I can’t think of any method to get analytic closed-form solutions for the truncated space, though, so this is just a more efficient numerical method.

First we take the generator of the displacement operator, G, such that exp(G) is the displacement operator we’re looking for. G is anti-Hermitian, so it shares its eigensystem (up to scaling of the eigenvalues) with the Hermitian i G, and consequently it is diagonalised by a unitary formed from its eigenvectors. Now S = i G / abs(alpha) is a tridiagonal Hermitian matrix, and with a similarity transformation we can find a real-symmetric tridiagonal matrix T = P^-1 . S . P for some diagonal unitary P (which is easy to calculate). The reason for scaling out alpha here should become clear at the end.

The main diagonal of T is all zeros; the first sub- and super-diagonals look like [sqrt(1), -sqrt(2), sqrt(3), -sqrt(4), ...], and the diagonal of P looks like [i, e^(-1i arg(alpha)), i e^(-2i arg(alpha)), e^(-3i arg(alpha)), ...].

Now, this real-symmetric tridiagonal form is the basis of Hermitian eigenvalue solvers, and it has direct entry points in LAPACK (e.g. ?stemr) which let us pass only the main diagonal and the first subdiagonal. SciPy provides convenient wrapped access from Python via scipy.linalg.eigh_tridiagonal. This lets us get the full eigensystem of T, which is related to that of G by dividing the eigenvalues by the scaling factor and multiplying the eigenvectors by P to transform them into the correct basis.

We now have a diagonalised matrix G = Q^-1 . D . Q, so exp(G) = Q^-1 . exp(D) . Q, which is now trivial because D is diagonal.
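
To make this step concrete, here is a generic illustration (not the displacement-specific code further down): for any anti-Hermitian G, we can diagonalise the Hermitian i G with a standard eigensolver and exponentiate the (purely imaginary) eigenvalues.

import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
G = A - A.conj().T                           # any anti-Hermitian matrix
evals, Q = np.linalg.eigh(1j * G)            # i G is Hermitian; Q is unitary
expG = Q @ np.diag(np.exp(-1j * evals)) @ Q.conj().T
print(np.max(np.abs(expG - scipy.linalg.expm(G))))   # agrees to machine precision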

Putting all this together allows us to use our knowledge of the problem domain to convert the matrix exponentiation problem into a much simpler real-symmetric tridiagonal eigensystem problem, which gets us a nice big speed-up, and it’s equivalent up to the tolerance of the eigenvalue solver (~1e-14).

Even better for you, a lot of the hard work is done in the eigensystem solver, and I scaled out alpha at the start, so we can do a good chunk without fixing alpha. That means we can pay the computational cost only once at the start, and then get faster calculations from then on.

If I make a totally fair test, and simply replicate the full functionality of qutip.displace (including creating a Qobj at the end), my method is ~4x faster on small matrices (1 <= dim <= 20), and it only goes up from there (I found it’s about 10x faster at dim = 1000, and beyond that qutip.displace is too slow to bother).

If I store the calculation of the eigensystem and output an ndarray instead of converting to csr_matrix (and so don’t produce a Qobj), then I find speed-ups of ~100x for small matrices and ~25x for large ones when getting the operator for a new alpha. The larger the matrix, the more the computational time is dominated by the dense dot product at the end.
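
A rough sketch of how one might reproduce this comparison, assuming the Displacer class from the code block below and a working qutip install (absolute timings are machine-dependent):

import timeit
import qutip

N, alpha = 100, 1.0 + 0.5j
d = Displacer(N)                                   # pay the eigensystem cost once
t_old = timeit.timeit(lambda: qutip.displace(N, alpha), number=100)
t_new = timeit.timeit(lambda: d(alpha), number=100)
print(f"qutip.displace: {t_old:.4f} s   Displacer: {t_new:.4f} s  (100 calls each)")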

Code:

import numpy as np
import scipy.linalg


class Displacer:
    def __init__(self, n):
        # The off-diagonal of the real-symmetric similar matrix T.
        sym = (2*(np.arange(1, n)%2) - 1) * np.sqrt(np.arange(1, n))
        # Solve the eigensystem.
        self.evals, self.evecs = scipy.linalg.eigh_tridiagonal(np.zeros(n), sym)
        self.range = np.arange(n)
        self.t_scale = 1j**(self.range % 2)

    def __call__(self, alpha):
        # Diagonal of the transformation matrix P, and apply to eigenvectors.
        transform = self.t_scale * (alpha / np.abs(alpha))**-self.range
        evecs = transform[:, None] * self.evecs
        # Get the exponentiated diagonal.
        diag = np.exp(1j * np.abs(alpha) * self.evals)
        return np.conj(evecs) @ (diag[:, None] * evecs.T)
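
As a quick usage example, the class above should reproduce the matrix exponential of the truncated generator to the ~1e-14 tolerance mentioned earlier. A minimal check (with arbitrary N and alpha, reusing the imports above) might be:

N, alpha = 20, 1.5 + 0.5j
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # truncated annihilation operator
G = alpha * a.conj().T - np.conj(alpha) * a     # anti-Hermitian generator
err = np.max(np.abs(Displacer(N)(alpha) - scipy.linalg.expm(G)))
print(err)                                      # around the quoted ~1e-14 solver tolerance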

While the analytic closed-form solution of the eigensystem is difficult, you may be able to express the eigenvalues as some function of the roots of a constructed orthogonal polynomial - you can create a recurrence relationship for the determinant of the characteristic equation of the system, and that typically ends up producing orthogonal polynomials. I didn’t pursue this very far because it looked difficult, and eventually you’d still need to calculate the eigenvectors anyway, which I didn’t have many ideas for. Unfortunately my copy of Numerical Methods is still in my office, and I can’t get to it!
