Perf improvement for primitive comparison
Is your feature request related to a problem? Please describe.
The compiled code is never fast enough. The code for GenericComparisonWithComparerFast generates specific IL code for primitive data types. This code uses if/then/else branching to return -1, 0 or 1. Branches are costly.
https://github.com/dotnet/fsharp/blob/main/src/fsharp/FSharp.Core/prim-types.fs#L1061
Comparison is used a lot in Set and Map, as well as in all sorting algorithms.
Describe the solution you’d like
It is possible to get the same result without branching. Below are the comparison as it is currently generated for ints and the branch-free version as it could be implemented (they appear as stdcmp and fastcmp in the benchmark code further down).
It uses the fact that clt and cgt actually push 0 or 1 on the stack, a value that can be used directly as an int instead of as a bool.
The logic behind this is simple:
- when x > y, cgt returns 1, clt returns 0 => 1 - 0 = 1
- when x < y, cgt returns 0, clt returns 1 => 0 - 1 = -1
- when x = y, cgt returns 0, clt returns 0 => 0 - 0 = 0
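For illustration, here is a hand-written IL sketch of the two shapes (not the exact output of the F# compiler, just the idea):

// branchy shape (roughly what the current code compiles to): compare, then branch to pick the result
ldarg.0
ldarg.1
clt              // pushes 1 if x < y, otherwise 0
brtrue.s LESS
ldarg.0
ldarg.1
cgt              // pushes 1 if x > y, otherwise 0 (already the desired result)
ret
LESS: ldc.i4.m1  // -1
ret

// branch-free shape: (x > y) - (x < y)
ldarg.0
ldarg.1
cgt              // 1 if x > y, else 0
ldarg.0
ldarg.1
clt              // 1 if x < y, else 0
sub              // gives 1, 0 or -1
ret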
The assembly code generated by the JIT has the same size, but the second version contains no branches: roughly, the branchy version compiles to a compare plus a conditional jump, while the branch-free one compiles to setcc/movzx sequences feeding a subtraction.
Benchmarking such code cannot be done on a single sample: it is subject to constant propagation (this applies to both versions if the functions are called with constant arguments) and to branch prediction, which can hide the cost of branching. It is better to test on a large array of random values:
#nowarn "42"
let inline stdcmp (x: int) (y: int) =
if (# "clt" x y : bool #) then (-1) else (# "cgt" x y : int #)
let inline fastcmp (x: int) (y: int) =
(# "cgt" x y : int #) - (# "clt" x y : int #)
open BenchmarkDotNet
open BenchmarkDotNet.Attributes
[<SimpleJob>]
type Bench() =
let rnd = System.Random(1234)
let values =
[|for _ in 0 .. 10000 -> struct(rnd.Next(), rnd.Next()) |]
[<Benchmark(Baseline = true)>]
member _.Std() =
let mutable sum = 0
for i in 0 .. values.Length - 1 do
let struct(x,y) = values[i]
sum <- sum + stdcmp x y
[<Benchmark()>]
member _.Fast() =
let mutable sum = 0
for i in 0 .. values.Length - 1 do
let struct(x,y) = values[i]
sum <- sum + fastcmp x y
Running.BenchmarkRunner.Run(typeof<Bench>)
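To reproduce the numbers, the project would typically be built and run in Release configuration (for example dotnet run -c Release); BenchmarkDotNet flags non-optimized assemblies, and Debug results would not be meaningful anyway.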
In the benchmark, we sum the results and return the sum; otherwise the result would be discarded and the comparison not computed at all.
The results show that the version without branches takes about 1/3 of the time of the version with branches.
Of course, when the branch is systematically not taken, the branch-free version does extra work while branch prediction is systematically correct. Here is a benchmark with random x, but where y is always x + 2:
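One way to set that up is to change only how values is built in the Bench type above (a sketch, not necessarily the exact code used for the original measurement):

// predictable case: x is random, y is always x + 2, so the comparison result never varies
let values =
    [| for _ in 0 .. 10000 -> let x = rnd.Next() in struct (x, x + 2) |]
// (the vanishingly rare overflow when x is close to Int32.MaxValue is ignored here)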
When the branch is systematically taken, both versions take roughly the same time, since a correctly predicted branch is not costly. The branch even seems to be free, while the subtraction takes an extra instruction.
However, situations where the result is always the same and the same branch is always taken are less likely than variable inputs, and the gain when there is misprediction is higher than the loss when prediction is always correct.
Describe alternatives you’ve considered
Not doing it.
Top GitHub Comments
Could you add a PR for this please?
That example with movs is x86 I believe.