Add adaptive limits for VF2Layout in preset passmanagers
What is the expected enhancement?
In #7213 we added the VF2Layout pass to all the preset passmanagers. When instantiating the pass for each optimization level we had to set limits on it, both on the number of internal VF2 state visits (which is passed directly to retworkx) and on other options that bound the amount of time we spend searching for an ideal solution. For example:
- https://github.com/Qiskit/qiskit-terra/blob/7cab49fb7f223798d31ff295f1b9d0b2f7e15fed/qiskit/transpiler/preset_passmanagers/level1.py
- https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/transpiler/preset_passmanagers/level2.py#L147-L153
- https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/transpiler/preset_passmanagers/level3.py#L150-L156
In each optimization level we set an increasing limit that roughly models the maximum amount of time we want to spend on the pass: 100 ms for level 1, 10 seconds for level 2, and 60 seconds for level 3. As a first approach this makes sense: it was easy to implement and it significantly improves the quality of the transpiler output when a perfect mapping is available. However, the limitation of this approach is that it doesn't take the complexity of the input problem into account. As the sizes of both the interaction graph and the coupling graph increase, the amount of time it takes VF2 to find a potential solution can grow significantly. To accommodate this we should come up with a scaling factor for the hard limits we place on the pass, so that the limits can be scaled to something appropriate for inputs that need more time. This shouldn't be a problem: the other passes typically scale linearly with the number of qubits (and routing passes can scale exponentially), so spending extra time up front to potentially avoid the slower passes later is a decent tradeoff as long as we don't spend too much time.
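To make the current behavior concrete, here is a rough paraphrase of how those limits are wired up today. The keyword arguments mirror the linked level1/level2/level3 files, but exact values and signatures may differ between Terra versions, so treat this as a sketch rather than the canonical source:

```python
# Sketch (not the verbatim Terra source) of the per-level limits the preset
# passmanagers currently hard-code for VF2Layout. The call_limit values are the
# 5e4 / 5e6 / 3e7 counts mentioned later in this thread; the time limits
# approximate the 100 ms / 10 s / 60 s targets described above.
from qiskit.transpiler import CouplingMap
from qiskit.transpiler.passes import VF2Layout

LEVEL_LIMITS = {
    1: {"call_limit": int(5e4), "time_limit": 0.1},
    2: {"call_limit": int(5e6), "time_limit": 10.0},
    3: {"call_limit": int(3e7), "time_limit": 60.0},
}

def build_vf2_layout(coupling_map: CouplingMap, optimization_level: int, seed=None):
    """Instantiate VF2Layout with the fixed limits used for one optimization level."""
    limits = LEVEL_LIMITS[optimization_level]
    return VF2Layout(
        coupling_map,
        seed=seed,
        call_limit=limits["call_limit"],  # bounds internal VF2 state visits (passed to retworkx)
        time_limit=limits["time_limit"],  # bounds wall-clock search time in seconds
    )
```

The enhancement proposed here is to replace these fixed values with limits scaled to the size of the input problem.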
To implement this we'll need to do some benchmarking of retworkx's `vf2_mapping()` function to figure out the scaling properties of how long it takes to find a result. Once we have a good model of that we can come up with a formula for how to scale the limits appropriately for each optimization level and apply that in the preset passmanager based on the input target backend.
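As a strawman for what such a formula could look like once the benchmarking data is in, the sketch below scales a baseline limit by a simple size ratio. The function name, the reference size of 27 qubits, and the linear scaling are all placeholders to be replaced by whatever model the benchmarks actually support:

```python
# Hypothetical adaptive scaling of the hard limits; the linear size ratio and the
# reference size are placeholders, not values derived from benchmarks.
def adaptive_limits(base_call_limit, base_time_limit,
                    num_interaction_qubits, num_device_qubits, reference_qubits=27):
    """Scale a level's baseline (call_limit, time_limit) up for larger inputs."""
    factor = max(1.0, (num_interaction_qubits * num_device_qubits) / reference_qubits**2)
    return int(base_call_limit * factor), base_time_limit * factor

# Example: scale the level-2 baseline (5e6 calls / 10 s) for a 40-qubit circuit
# targeting a 127-qubit backend.
call_limit, time_limit = adaptive_limits(int(5e6), 10.0, 40, 127)
```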
Top GitHub Comments
Well, I was thinking more about the time to find that initial isomorphic mapping. When the sizes of the two graphs are closer, the search space to verify that a mapping is valid is a lot bigger because you have to check more nodes. Yes, there are fewer overall possibilities of valid mappings of the subgraph when the graphs are closer in size, but to find that first mapping you have to do a lot more work. I did a very small benchmark to test this with `vf2_mapping()`, basically something like the sketch below, and graphed the output.
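The exact snippet isn't reproduced here; the following is a rough reconstruction of what a benchmark along those lines could look like. It assumes retworkx's `vf2_mapping()` accepts `subgraph=True` / `induced=False` and returns an iterable of mappings, and the grid/path sizes are illustrative, so the signature should be checked against the installed retworkx version:

```python
# Rough reconstruction of a small vf2_mapping() benchmark (not the original snippet):
# time how long it takes to find the first mapping of a path interaction graph H
# onto a grid coupling graph G as |H| grows.
import time
import retworkx

device = retworkx.generators.grid_graph(rows=5, cols=6)  # 30-node stand-in coupling graph

for size in range(2, 27, 2):
    interaction = retworkx.generators.path_graph(size)  # |H|-node path interaction graph
    start = time.perf_counter()
    mappings = retworkx.vf2_mapping(device, interaction, subgraph=True, induced=False)
    first = next(iter(mappings), None)  # only time the search for the first valid mapping
    elapsed = time.perf_counter() - start
    print(f"|H| = {size:2d}  found = {first is not None}  time = {elapsed:.4f} s")
```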
Here are `call_limit` counts as a function of |H|. I ended up using a combination of `println` in retworkx, Bash redirects, and log files to get the counts. `self._counter` was printed every 1000 counts, which means that the `call_limit` counts are rounded down to the nearest 1000, but since VF2 reaches ~1 million counts for |H| = 26, we do not need to look at every single count. There is only one trial: I tried a small run with 3 trials per |H|, but each trial had exactly the same counts, which makes sense.

The line that goes through the green circles is the best-fit curve, A · 10^(B·|H|), through those points. The other line (slightly above the fit line) is a scaled version of the `A` coefficient. It might be a good idea to round the `A` and `B` coefficients up to a nice value, so the scaling of `A` is my first attempt.

Figure 5: Log plot of `call_limit` counts to the first isomorphic submapping from a path subgraph H onto a grid graph G. The points drawn as `x` were excluded from the fit. The horizontal lines show counts of 5e4, 5e6, and 3e7 from the current hard-coded counts in levels 1, 2, and 3 of the pass manager.
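If that exponential model holds up, the adaptive call limit discussed in the issue description could be read straight off the fit. The coefficients below are placeholders (the fitted A and B values are not quoted in this thread), and the floor and ceiling reuse the current level-1 and level-3 counts:

```python
# Hypothetical use of the fitted model: counts to the first mapping grow roughly as
# A * 10**(B * |H|), so choose a call_limit a bit above the predicted count.
A = 1.0e3  # placeholder coefficient, not the fitted value
B = 0.12   # placeholder exponent, not the fitted value

def suggested_call_limit(num_interaction_qubits, headroom=10.0,
                         floor=int(5e4), ceiling=int(3e7)):
    """call_limit suggestion from the exponential model, clamped between the
    current level-1 floor (5e4) and level-3 ceiling (3e7)."""
    predicted = A * 10 ** (B * num_interaction_qubits)
    return int(min(max(predicted * headroom, floor), ceiling))
```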