
Array Converter is slow and can be replaced with simple converters that perform much better

See original GitHub issue

The Array Converter can be replaced with simple converters that perform much better.

This is possible because all of the type information is already known ahead of time, and the order/offset of the array fields never changes.

Example Kline Converter (Spot)

// Requires Newtonsoft.Json (JsonConverter, JArray); TimestampConverter and
// BinanceKline come from the existing library code.
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

/// <summary>
/// Binance Kline Converter
/// </summary>
public class BinanceKlineConverter : JsonConverter
{
    private const int EXPECTED_LENGTH_DESERIALIZE = 11; // Ignore has been trimmed away
    private const int EXPECTED_LENGTH_FULL = 12;

    private const int OPEN_TIME = 0;
    private const int OPEN = 1;
    private const int HIGH = 2;
    private const int LOW = 3;
    private const int CLOSE = 4;
    private const int BASE_VOLUME = 5;
    private const int CLOSE_TIME = 6;
    private const int QUOTE_VOLUME = 7;
    private const int TRADE_COUNT = 8;
    private const int TAKER_BUY_BASE_VOLUME = 9;
    private const int TAKER_BUY_QUOTE_VOLUME = 10;
    private const int TICKS_IN_MILLISECOND = 10000; // = TimeSpan.TicksPerMillisecond

    /// <inheritdoc />
    public override object? ReadJson(JsonReader reader, Type objectType, object? existingValue, JsonSerializer serializer)
    {
        JArray jArray = JArray.Load(reader);

        if (jArray.Count == EXPECTED_LENGTH_FULL || jArray.Count == EXPECTED_LENGTH_DESERIALIZE)
        {
            // Invokes the cached factory delegate; each call returns a NEW BinanceKline
            BinanceKline bkCached = CachedType<BinanceKline>.Instance();

            bkCached.OpenTime = TimestampConverter.ReadTimeStamp((long)jArray[OPEN_TIME]) ?? DateTime.MinValue;
            bkCached.Open = (decimal)jArray[OPEN];
            bkCached.High = (decimal)jArray[HIGH];
            bkCached.Low = (decimal)jArray[LOW];
            bkCached.Close = (decimal)jArray[CLOSE];
            bkCached.BaseVolume = (decimal)jArray[BASE_VOLUME];
            bkCached.CloseTime = TimestampConverter.ReadTimeStamp((long)jArray[CLOSE_TIME]) ?? DateTime.MinValue;
            bkCached.QuoteVolume = (decimal)jArray[QUOTE_VOLUME];
            bkCached.TradeCount = (int)jArray[TRADE_COUNT];
            bkCached.TakerBuyBaseVolume = (decimal)jArray[TAKER_BUY_BASE_VOLUME];
            bkCached.TakerBuyQuoteVolume = (decimal)jArray[TAKER_BUY_QUOTE_VOLUME];

            return bkCached;
        }

        return new BinanceKline();
    }

    /// <inheritdoc />
    public override void WriteJson(JsonWriter writer, object? value, JsonSerializer serializer)
    {
        if (value == null)
        {
            return;
        }

        if (value is BinanceKline binanceKline)
        {
            writer.WriteStartArray();

            // Binance timestamps are Unix milliseconds; DateTime.Ticks counts from
            // 0001-01-01, so subtract the Unix epoch before dividing
            writer.WriteValue((binanceKline.OpenTime.Ticks - DateTime.UnixEpoch.Ticks) / TICKS_IN_MILLISECOND);

            writer.WriteValue(binanceKline.Open);

            writer.WriteValue(binanceKline.High);

            writer.WriteValue(binanceKline.Low);

            writer.WriteValue(binanceKline.Close);

            writer.WriteValue(binanceKline.BaseVolume);

            writer.WriteValue((binanceKline.CloseTime.Ticks - DateTime.UnixEpoch.Ticks) / TICKS_IN_MILLISECOND);

            writer.WriteValue(binanceKline.QuoteVolume);

            writer.WriteValue(binanceKline.TradeCount);

            writer.WriteValue(binanceKline.TakerBuyBaseVolume);

            writer.WriteValue(binanceKline.TakerBuyQuoteVolume);

            writer.WriteEndArray();
        }
    }

    /// <inheritdoc />
    public override bool CanConvert(Type objectType)
    {
        // Returning true for every type over-claims; only handle BinanceKline
        return objectType == typeof(BinanceKline);
    }
}
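The core idea of the converter above is positional access: each field is read by fixed array index rather than by property-name lookup. As a self-contained sketch of that idea (the converter above uses Newtonsoft.Json, which is not a stdlib dependency), the same technique can be shown with System.Text.Json; the sample payload values and the `ParseKline` helper here are illustrative, not from the original issue:

```csharp
using System;
using System.Globalization;
using System.Text.Json;

// Sketch only: positional-array parsing with System.Text.Json.
// Field order follows the index constants in the converter above.
public static class PositionalKlineDemo
{
    public static (DateTime OpenTime, decimal Open, decimal Close) ParseKline(string json)
    {
        using JsonDocument doc = JsonDocument.Parse(json);
        JsonElement a = doc.RootElement;

        // Because the array order never changes, fields are read by fixed
        // index instead of by property-name lookup.
        DateTime openTime = DateTimeOffset
            .FromUnixTimeMilliseconds(a[0].GetInt64()).UtcDateTime;
        decimal open  = decimal.Parse(a[1].GetString()!, CultureInfo.InvariantCulture);
        decimal close = decimal.Parse(a[4].GetString()!, CultureInfo.InvariantCulture);
        return (openTime, open, close);
    }

    public static void Main()
    {
        // Trimmed sample payload: [openTime, open, high, low, close, ...]
        var k = ParseKline("[1689984000000,\"29850.1\",\"29900.0\",\"29800.5\",\"29876.2\"]");
        Console.WriteLine($"{k.OpenTime:u} open={k.Open} close={k.Close}");
    }
}
```

This avoids the reflection-driven property matching that a generic array converter has to do on every element.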

The new expression is cached, which is about 30% faster than calling new(), even for small objects, and that result is fairly consistent across runtimes.

using System;
using System.Linq.Expressions;

namespace BinanceAPI.Converters
{
    /// <summary>
    /// Caches the compiled new expression for a given type T
    /// </summary>
    /// <typeparam name="T">Type with a public parameterless constructor</typeparam>
    public static class CachedType<T> where T : new()
    {
        /// <summary>
        /// Compiled factory delegate; each invocation returns a new T
        /// </summary>
        public static readonly Func<T> Instance = Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
    }
}
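To make the behavior of that cached factory concrete, here is a runnable stand-alone sketch: `CachedFactory` mirrors the `CachedType` pattern above, and `Widget` is a hypothetical stand-in for `BinanceKline`. The key point is that the *delegate* is cached, while each invocation still produces a new object:

```csharp
using System;
using System.Linq.Expressions;

// Sketch of the cached-factory pattern above, self-contained so it can run
// without the BinanceAPI types. `Widget` is a stand-in for BinanceKline.
public static class CachedFactory<T> where T : new()
{
    // Compiled once per closed generic type; every call returns a NEW instance.
    public static readonly Func<T> Instance =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}

public class Widget { public int Value = 42; }

public static class Demo
{
    public static void Main()
    {
        Widget a = CachedFactory<Widget>.Instance();
        Widget b = CachedFactory<Widget>.Instance();

        // Distinct instances each call -- the delegate is cached, not the object.
        Console.WriteLine(ReferenceEquals(a, b)); // False
        Console.WriteLine(a.Value);               // 42
    }
}
```

The expensive step (building and JIT-compiling the expression tree) happens once per closed generic type, in the static field initializer.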

Issue Analytics

  • State: closed
  • Created 2 months ago
  • Comments:9 (5 by maintainers)

Top GitHub Comments

2 reactions
JKorf commented, Jul 24, 2023

So here’s where I am at, then:

Situation 1: The same instance is returned every time. This is what you want for converter types, but not what you want for the model or stream types.

Situation 2: A new instance is returned every time (which is what actually happens, since you’re not seeing exceptions in your application). This is what you want for the model and stream types, but not for the converter types.
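The hazard behind Situation 1 can be sketched in a few lines. The types here are hypothetical stand-ins, not from the library: reusing one mutable model instance across "deserializations" makes every earlier result alias the latest data, which is exactly why models need fresh instances while stateless converters can safely share one:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical model type standing in for a stream/model object.
public class Kline { public decimal Close; }

public static class SharedInstanceHazard
{
    // Situation 1: a single cached model instance, mutated on every parse.
    private static readonly Kline Shared = new Kline();

    public static Kline ParseShared(decimal close) { Shared.Close = close; return Shared; }

    // Situation 2: a fresh instance per parse.
    public static Kline ParseFresh(decimal close) => new Kline { Close = close };

    public static void Main()
    {
        var shared = new List<Kline> { ParseShared(1m), ParseShared(2m) };
        Console.WriteLine(shared[0].Close); // 2 -- the first result was overwritten

        var fresh = new List<Kline> { ParseFresh(1m), ParseFresh(2m) };
        Console.WriteLine(fresh[0].Close);  // 1 -- each result is independent
    }
}
```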

Also, here is my benchmark of the cached type versus plain new(), which shows no improvement, as I suspected:

BenchmarkDotNet v0.13.6, Windows 10 (10.0.19045.3208/22H2/2022Update)
Intel Core i7-6600U CPU 2.60GHz (Skylake), 1 CPU, 4 logical and 2 physical cores
.NET SDK 7.0.202
  [Host]     : .NET 6.0.15 (6.0.1523.11507), X64 RyuJIT AVX2 [AttachedDebugger]
  DefaultJob : .NET 6.0.15 (6.0.1523.11507), X64 RyuJIT AVX2


|          Method |     N |      Mean |     Error |    StdDev |    Median |
|---------------- |------ |----------:|----------:|----------:|----------:|
|         UseCtor |  1000 |  4.515 ns | 0.1685 ns | 0.1576 ns |  4.480 ns |
|    UseActivator |  1000 | 14.728 ns | 0.3051 ns | 0.2705 ns | 14.712 ns |
| UseCompiledExpr |  1000 |  4.163 ns | 0.1684 ns | 0.4408 ns |  3.979 ns |
|         UseCtor | 10000 |  4.243 ns | 0.1626 ns | 0.3501 ns |  4.187 ns |
|    UseActivator | 10000 | 14.824 ns | 0.3592 ns | 0.4137 ns | 14.761 ns |
| UseCompiledExpr | 10000 |  4.745 ns | 0.1717 ns | 0.3097 ns |  4.634 ns |
0 reactions
NoParamedic commented, Jul 26, 2023

Interesting results, Activator is even worse than I expected on your hardware, and the difference between the other choices is much smaller than I expected.

I think your results are more a testament to the i7 than anything, but I can see how you came to the conclusion that it isn’t better.

Another way to interpret these results is that caching the type isn’t any slower for people with older CPUs.

Read more comments on GitHub >
