Deserializing streams slower than all the other libraries... [I was wrong: after fixing my issues, SpanJson proved to be the fastest of all!]
First of all, I want to congratulate the author of this library! I added SpanJson (along with Swifter.Json, Spreads.Utf8Json and NetJSON, all claimed to be faster than Utf8Json) to the benchmark published here (I am not the author), and this is what I got:
Method | Mean | Error | StdDev | Median | Min | Max | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
---|---|---|---|---|---|---|---|---|---|---|---|---|
SerializeAndDeserializeInTextJson | 48.38 s | 0.949 s | 1.805 s | 49.11 s | 43.78 s | 49.54 s | 1.00 | 0.00 | 2441000.0000 | 4000.0000 | - | 3.57 GB |
SerializeAndDeserializeInSpanJson | 19.75 s | 0.128 s | 0.114 s | 19.73 s | 19.57 s | 19.94 s | 0.42 | 0.02 | 1137000.0000 | 1000.0000 | - | 1.66 GB |
SerializeAndDeserializeInSwifterJson | 29.01 s | 0.207 s | 0.193 s | 28.93 s | 28.77 s | 29.33 s | 0.62 | 0.04 | 2262000.0000 | 2000.0000 | - | 3.31 GB |
SerializeAndDeserializeInNetJson | 58.97 s | 0.148 s | 0.131 s | 58.96 s | 58.80 s | 59.19 s | 1.27 | 0.07 | 3545000.0000 | 5000.0000 | - | 5.18 GB |
SerializeAndDeserializeInSpreadsUtf8Json | 23.30 s | 0.130 s | 0.121 s | 23.29 s | 23.14 s | 23.53 s | 0.50 | 0.03 | 1239000.0000 | 2000.0000 | - | 1.81 GB |
SerializeAndDeserializeInUtf8Json | 21.75 s | 0.165 s | 0.147 s | 21.78 s | 21.48 s | 21.97 s | 0.48 | 0.02 | 1239000.0000 | 2000.0000 | - | 1.81 GB |
SerializeAndDeserializeInServiceStackText | 61.30 s | 3.445 s | 9.546 s | 57.57 s | 48.70 s | 90.88 s | 1.23 | 0.08 | 2932000.0000 | 4000.0000 | - | 4.28 GB |
SerializeAndDeserializeInJil | 38.53 s | 1.864 s | 5.348 s | 36.82 s | 31.50 s | 54.69 s | 0.91 | 0.15 | 4310000.0000 | 3000.0000 | - | 6.3 GB |
I am not sure whether this test can be considered reliable, but SpanJson outperforms all competitors in both speed and memory allocation! The same holds for deserialization from a string.
However, in other tests SpanJson turned out to be the worst in both speed and memory allocation when deserializing streams, and I need to understand whether I have done something wrong or there is a bug in your library:
Method | Mean | Error | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
---|---|---|---|---|---|---|---|
SystemTextJson_FromStream | 45.34 ms | 0.159 ms | 0.133 ms | 1619.0476 | 333.3333 | 119.0476 | 9.29 MB |
Utf8Json_FromStream | 43.04 ms | 0.253 ms | 0.236 ms | 1826.0870 | 608.6957 | 478.2609 | 11.14 MB |
SwifterJson_FromStream | 56.59 ms | 0.598 ms | 0.559 ms | 2848.4848 | 939.3939 | 454.5455 | 13.63 MB |
SpanJson_FromStream | 59.40 ms | 0.552 ms | 0.517 ms | 2588.2353 | 1147.0588 | 470.5882 | 13.55 MB |
SpreadsUtf8json_FromStream | 42.51 ms | 0.823 ms | 0.881 ms | 1608.6957 | 326.0870 | 130.4348 | 9.25 MB |
In the test above, with all the other libraries I simply deserialize with something like:
```csharp
using var stream = File.OpenRead(JsonFile); // Simulate a web stream from a file
return lib.JsonSerializer.Deserialize<TestClass[]>(stream);
```
With SpanJson I had to use the following code to make it work at all (which produced the poor result above):
```csharp
using var stream = new StreamReader(JsonFile, Encoding.UTF8); // Simulate a web stream from a file
return (TestClass[]) await SpanJson.JsonSerializer.NonGeneric.Utf16.DeserializeAsync(stream, typeof(TestClass[]));
```
I tried to work with UTF-8 directly, but I always get “Error Reading JSON data: ‘ExpectedBeginArray’ at position: ‘0’”, whether I try:
```csharp
await SpanJson.JsonSerializer.Generic.Utf8.DeserializeAsync<TestClass[]>(stream);
```
or:
```csharp
Span<byte> stacall = stackalloc byte[fileLen];
stream.Read(stacall);
SpanJson.JsonSerializer.Generic.Utf8.Deserialize<TestClass[]>(stacall);
SpanJson.JsonSerializer.NonGeneric.Utf8.Deserialize(stacall, typeof(TestClass[])); // same exception
```
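For reference, this is the buffered variant I would try instead of stackalloc: renting from ArrayPool and looping Read, since Stream.Read may return fewer bytes than requested. This is only a sketch and assumes the ReadOnlySpan<byte> overload of Generic.Utf8.Deserialize; I have not verified that it avoids the error:

```csharp
// Sketch only: rent a buffer instead of stackalloc and read until the whole file is in memory.
using var stream = File.OpenRead(JsonFile);
int fileLen = checked((int)stream.Length);
byte[] buffer = System.Buffers.ArrayPool<byte>.Shared.Rent(fileLen);
try
{
    int read = 0;
    while (read < fileLen)
    {
        int n = stream.Read(buffer, read, fileLen - read);
        if (n == 0) break; // end of stream reached early
        read += n;
    }
    return SpanJson.JsonSerializer.Generic.Utf8.Deserialize<TestClass[]>(
        new ReadOnlySpan<byte>(buffer, 0, read));
}
finally
{
    System.Buffers.ArrayPool<byte>.Shared.Return(buffer);
}
```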
To reproduce (the JSON file used for the stream is an array of the following objects):
```csharp
public class TestClass
{
    public TestClass() { }
    public string prop1 { get; set; }
    public string prop2 { get; set; }
    public string prop3 { get; set; }
    public string prop4 { get; set; }
    public string prop5 { get; set; }
    public string prop6 { get; set; }
    public AnotherClass prop7 { get; set; }
    public bool prop8 { get; set; }
    public string prop9 { get; set; }
    public long prop10 { get; set; }
    public DateTimeOffset prop11 { get; set; }
    public DateTimeOffset prop12 { get; set; }
}

public class AnotherClass
{
    public AnotherClass() { }
    public string prop { get; set; }
}
```
```csharp
await using var stream = File.OpenRead("data.json");

// Exception with SpanJson
var res = await SpanJson.JsonSerializer.Generic.Utf8.DeserializeAsync<TestClass[]>(stream);

// Success with Utf8Json and all the other libs
var res2 = Utf8Json.JsonSerializer.Deserialize<TestClass[]>(stream);
```
Even if I simplify the JSON file and TestClass to contain only one string property (prop1), I still get the same error.
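In case it helps diagnose the ‘ExpectedBeginArray’ at position 0, here is a quick check of what the first bytes of the file actually are (a diagnostic sketch only; for example, a UTF-8 BOM would mean position 0 is not ‘[’):

```csharp
// Diagnostic sketch: position 0 should be '[' (0x5B) for a TestClass[] payload.
// Anything before it (e.g. a UTF-8 BOM EF-BB-BF) would explain 'ExpectedBeginArray'.
using var fs = File.OpenRead("data.json");
var head = new byte[4];
int n = fs.Read(head, 0, head.Length);
Console.WriteLine(BitConverter.ToString(head, 0, n));
```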
Many thanks
P.S. The tests above were run on .NET Core 3.1.7 x64 with System.Text.Json 5.0.0-preview.8.20407. I didn't notice any difference between the preview and the version bundled with .NET Core 3.1, which surprised me (maybe because it wasn't running on .NET 5?).
Top GitHub Comments
Hi, thank you for trying to get it running. A few pointers:
On a side note: I took a look at the benchmark repo you linked at the beginning. The benchmarks all have their own hardcoded for loops (a million iterations or so); usually this is handled by the micro-benchmarking library itself, as it is far better qualified to do that. There are only a few rare cases where you should do it yourself.
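For example, assuming the result tables above come from BenchmarkDotNet (the columns look like its output), a minimal setup like this lets the library control warmup and iteration count and also produces the allocation columns; the class and file names here are only illustrative:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds the Gen 0/1/2 and Allocated columns
public class DeserializeBenchmarks
{
    private byte[] _utf8Json;

    [GlobalSetup]
    public void Setup() => _utf8Json = System.IO.File.ReadAllBytes("data.json");

    [Benchmark(Baseline = true)]
    public TestClass[] SystemTextJson_FromBytes() =>
        System.Text.Json.JsonSerializer.Deserialize<TestClass[]>(_utf8Json);

    [Benchmark]
    public TestClass[] SpanJson_FromBytes() =>
        SpanJson.JsonSerializer.Generic.Utf8.Deserialize<TestClass[]>(_utf8Json);
}

public static class Program
{
    // BenchmarkDotNet handles warmup, iteration count and statistics itself.
    public static void Main() => BenchmarkRunner.Run<DeserializeBenchmarks>();
}
```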
I missed your last point about streams. You're right! I hadn't considered the full streaming capabilities of System.Text.Json. In my specific case I always receive the data in chunks. Even when I go the streaming route (implementing IAsyncEnumerable and yielding each JSON object), I can see in Fiddler that the full buffer is still filled with all the data as soon as I get the Stream. I have yet to deal with a real stream like the one you describe, so I believe I will benefit from your library (I only have 4 GB of RAM)!
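For reference, this is the kind of stream-based call I was referring to (just a sketch): System.Text.Json reads from the stream in chunks instead of requiring the whole payload up front:

```csharp
// Sketch: deserialize directly from the stream, without buffering it into a string/byte[] first.
await using var stream = File.OpenRead("data.json");
TestClass[] result = await System.Text.Json.JsonSerializer.DeserializeAsync<TestClass[]>(stream);
```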
Many thanks for the warnings about stackalloc! You saved me and other newbies who could be tempted by looking at those numbers and end up making this big mistake!
Yes, we can close this. The problems are all solved!