Array Converter is slow and can be replaced with simple converters that perform much better
The Array Converter can be replaced with simple converters that perform much better. This is possible because all of the information about the type is known ahead of time, and the order/offset of the array elements never changes.
Example Kline Converter (Spot)
```csharp
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

/// <summary>
/// Binance Kline Converter
/// </summary>
public class BinanceKlineConverter : JsonConverter
{
    private const int EXPECTED_LENGTH_DESERIALIZE = 11; // the trailing "Ignore" field has been trimmed away
    private const int EXPECTED_LENGTH_FULL = 12;

    private const int OPEN_TIME = 0;
    private const int OPEN = 1;
    private const int HIGH = 2;
    private const int LOW = 3;
    private const int CLOSE = 4;
    private const int BASE_VOLUME = 5;
    private const int CLOSE_TIME = 6;
    private const int QUOTE_VOLUME = 7;
    private const int TRADE_COUNT = 8;
    private const int TAKER_BUY_BASE_VOLUME = 9;
    private const int TAKER_BUY_QUOTE_VOLUME = 10;

    private const long TICKS_PER_MILLISECOND = 10_000; // same value as TimeSpan.TicksPerMillisecond

    private static readonly DateTime UnixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    /// <inheritdoc />
    public override object? ReadJson(JsonReader reader, Type objectType, object? existingValue, JsonSerializer serializer)
    {
        JArray jArray = JArray.Load(reader);
        if (jArray.Count == EXPECTED_LENGTH_FULL || jArray.Count == EXPECTED_LENGTH_DESERIALIZE)
        {
            // The compiled `new` expression is cached; each call still returns a fresh instance.
            BinanceKline kline = CachedType<BinanceKline>.Instance();
            kline.OpenTime = TimestampConverter.ReadTimeStamp((long)jArray[OPEN_TIME]) ?? DateTime.MinValue;
            kline.Open = (decimal)jArray[OPEN];
            kline.High = (decimal)jArray[HIGH];
            kline.Low = (decimal)jArray[LOW];
            kline.Close = (decimal)jArray[CLOSE];
            kline.BaseVolume = (decimal)jArray[BASE_VOLUME];
            kline.CloseTime = TimestampConverter.ReadTimeStamp((long)jArray[CLOSE_TIME]) ?? DateTime.MinValue;
            kline.QuoteVolume = (decimal)jArray[QUOTE_VOLUME];
            kline.TradeCount = (int)jArray[TRADE_COUNT];
            kline.TakerBuyBaseVolume = (decimal)jArray[TAKER_BUY_BASE_VOLUME];
            kline.TakerBuyQuoteVolume = (decimal)jArray[TAKER_BUY_QUOTE_VOLUME];
            return kline;
        }
        return new BinanceKline();
    }

    /// <inheritdoc />
    public override void WriteJson(JsonWriter writer, object? value, JsonSerializer serializer)
    {
        if (value == null)
        {
            return;
        }
        if (value is BinanceKline binanceKline)
        {
            writer.WriteStartArray();
            // Write Unix-epoch milliseconds so the output round-trips with ReadJson;
            // raw Ticks would be milliseconds since 0001-01-01 instead.
            writer.WriteValue((binanceKline.OpenTime - UnixEpoch).Ticks / TICKS_PER_MILLISECOND);
            writer.WriteValue(binanceKline.Open);
            writer.WriteValue(binanceKline.High);
            writer.WriteValue(binanceKline.Low);
            writer.WriteValue(binanceKline.Close);
            writer.WriteValue(binanceKline.BaseVolume);
            writer.WriteValue((binanceKline.CloseTime - UnixEpoch).Ticks / TICKS_PER_MILLISECOND);
            writer.WriteValue(binanceKline.QuoteVolume);
            writer.WriteValue(binanceKline.TradeCount);
            writer.WriteValue(binanceKline.TakerBuyBaseVolume);
            writer.WriteValue(binanceKline.TakerBuyQuoteVolume);
            writer.WriteEndArray();
        }
    }

    /// <inheritdoc />
    public override bool CanConvert(Type objectType)
    {
        // Only claim the type this converter actually handles.
        return objectType == typeof(BinanceKline);
    }
}
```
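For reference, a converter like this is registered through `JsonSerializerSettings`. The sketch below assumes the `BinanceKline` model and the converter above are available; the sample payload is shortened Binance-style kline data, not real market output:

```csharp
using System;
using Newtonsoft.Json;

// Hypothetical usage sketch: register the converter and deserialize one kline.
var settings = new JsonSerializerSettings();
settings.Converters.Add(new BinanceKlineConverter());

// A Binance spot kline is a 12-element array; the final "Ignore" field
// may already be trimmed away, leaving 11 elements.
string json = "[1625097600000,\"34000.1\",\"34100.0\",\"33900.5\",\"34050.2\"," +
              "\"12.5\",1625097659999,\"425627.5\",150,\"6.2\",\"211111.1\",\"0\"]";

BinanceKline kline = JsonConvert.DeserializeObject<BinanceKline>(json, settings);
Console.WriteLine(kline.Open); // 34000.1
```

Because the positions are compile-time constants, the converter does a single indexed pass over the array instead of matching fields by reflection.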
The `new` expression is cached, which is faster than calling `new()` directly: by about 30% even for small objects, and that result is fairly consistent across runtimes.
```csharp
using System;
using System.Linq.Expressions;

namespace BinanceAPI.Converters
{
    /// <summary>
    /// Caches the compiled New Expression for a given type T.
    /// </summary>
    /// <typeparam name="T">The type to construct; must have a parameterless constructor.</typeparam>
    public static class CachedType<T> where T : new()
    {
        /// <summary>
        /// Compiled factory delegate for type T; built once per closed generic type.
        /// </summary>
        public static readonly Func<T> Instance =
            Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
    }
}
```
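A quick way to see what `CachedType<T>` does and does not cache: the `Func<T>` delegate is built once per closed generic type, but invoking it still runs `new T()` and returns a fresh instance every call. A small sketch (assuming the `BinanceKline` model from above):

```csharp
using System;

// The static readonly delegate is created once; each invocation yields a new object.
var a = CachedType<BinanceKline>.Instance();
var b = CachedType<BinanceKline>.Instance();

Console.WriteLine(ReferenceEquals(a, b)); // False: two distinct instances
Console.WriteLine(ReferenceEquals(
    CachedType<BinanceKline>.Instance,
    CachedType<BinanceKline>.Instance)); // True: the delegate itself is shared
```

This distinction matters later in the thread: the cached thing is the construction path, not the constructed object.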
Issue Analytics
- Created: 2 months ago
- Comments: 9 (5 by maintainers)
So here's where I'm at, then:

Situation 1: the same instance is returned every time. This is what you want for the converter types, but not for the model or stream types.

Situation 2: a new instance is returned every time (which is what actually happens, since you aren't seeing exceptions in your application). This is what you want for the model and stream types, but not for the converter types.
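The two situations can be made concrete: caching an instance gives Situation 1, caching a factory gives Situation 2. A hypothetical sketch (these class names are illustrative, not from the codebase):

```csharp
using System;
using System.Linq.Expressions;

// Situation 1: one shared instance — fine for stateless converters,
// dangerous for mutable models, since every consumer sees the same object.
public static class CachedInstance<T> where T : new()
{
    public static readonly T Instance = new T();
}

// Situation 2: a cached factory — the delegate is reused, but every
// call produces a fresh instance, which is what mutable models need.
public static class CachedFactory<T> where T : new()
{
    public static readonly Func<T> Create =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}
```

With this split, converters could safely go through `CachedInstance<T>.Instance`, while models and stream types should always go through `CachedFactory<T>.Create()`.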
Also, this is my benchmark of the cached type versus plain `new()`, which shows no improvement, as I suspected.
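The benchmark results themselves are not reproduced here. A comparable harness could be written with BenchmarkDotNet along these lines (a sketch under assumptions, not the commenter's actual code; `BinanceKline` and `CachedType<T>` are the types from above):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ConstructionBenchmark
{
    // Direct construction: the JIT can often inline this completely.
    [Benchmark(Baseline = true)]
    public BinanceKline PlainNew() => new BinanceKline();

    // Compiled expression cached in a static readonly delegate.
    [Benchmark]
    public BinanceKline CachedExpression() => CachedType<BinanceKline>.Instance();

    // Reflection-based construction, typically the slowest of the three.
    [Benchmark]
    public BinanceKline ActivatorCreate() =>
        System.Activator.CreateInstance<BinanceKline>();
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ConstructionBenchmark>();
}
```

A harness like this would make the "no improvement" claim checkable per CPU, since the gap between `new()` and a compiled expression depends heavily on how well the JIT inlines the direct call.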
Interesting results. Activator is even worse than I expected on your hardware, and the difference between the other options is much smaller than I expected. I think your results are more a testament to the i7 than anything else, but I can see how you came to the conclusion that it isn't better. Another way to interpret these results is that caching the type isn't slower for people with older CPUs.