
Large queues allocated


In ContinuationProcessor<T>, you allocate two arrays of size 500,000. This uses about 30 MB of memory for each type of IAwaitInstruction used (WaitForFrames, WaitForSeconds, etc.), regardless of how few Continuations you actually use.
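As a rough back-of-the-envelope check (assuming ~32 bytes per Continuation<T> slot; the exact struct size depends on T and the runtime, so this is an estimate, not a measurement):

```csharp
using System;

class MemoryEstimate
{
    static void Main()
    {
        // Two arrays of 500,000 slots each; ~32 bytes per slot is an
        // assumption, not the library's measured struct size.
        const long slots = 500_000;
        const int arrays = 2;
        const int bytesPerSlot = 32;

        long totalBytes = slots * arrays * bytesPerSlot;
        Console.WriteLine(totalBytes);                     // prints 32000000
        Console.WriteLine(totalBytes / (1024.0 * 1024.0)); // ~30.5 (MB)
    }
}
```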

You could instead start the queue size small (like 16) and dynamically resize the arrays. For example:

public void Add(T cont)
{
	if (futureCount == futureQueue.Length)
	{
		// Grow both queues geometrically (1.5x) so Add stays amortized O(1).
		int newLength = futureQueue.Length * 3 / 2;
		Array.Resize(ref futureQueue, newLength);
		Array.Resize(ref currentQueue, newLength);
	}

	// rest of method...
}
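To illustrate the amortized behaviour of the 3/2 growth above, here is a minimal, self-contained sketch (GrowableQueue is a hypothetical name, not the library's type):

```csharp
using System;

// Hypothetical minimal container demonstrating the 1.5x growth strategy.
class GrowableQueue<T>
{
    T[] items = new T[16]; // start small instead of 500,000
    int count;

    public int Capacity => items.Length;

    public void Add(T item)
    {
        if (count == items.Length)
        {
            // Grow by 3/2 so repeated Adds cost amortized O(1).
            int newLength = items.Length * 3 / 2;
            Array.Resize(ref items, newLength);
        }
        items[count++] = item;
    }
}

class Demo
{
    static void Main()
    {
        var q = new GrowableQueue<int>();
        for (int i = 0; i < 100; i++) q.Add(i);
        Console.WriteLine(q.Capacity); // prints 121
    }
}
```

Starting at 16 and growing by 1.5x, 100 adds trigger only five resizes (16 → 24 → 36 → 54 → 81 → 121), versus a single up-front 500,000-slot allocation per instruction type.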

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 14 (8 by maintainers)

Top GitHub Comments

1 reaction
muckSponge commented on Aug 18, 2019

A priority queue is only applicable if you can assign each item a priority. That works for WaitForFrames or WaitForSeconds, but won’t work with WaitWhile.

I was thinking something like…

interface IPriorityAwaitInstruction<T> where T : IComparable<T>
{
	T Priority { get; }
}

… for instructions that can be assigned a priority (with an accompanying priority queue processor), while keeping IAwaitInstruction for the others, but that’s getting into over-engineered territory. I originally thought returning an int would be fine, but that won’t play too nicely with something like WaitForSeconds.
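For what it's worth, a time-based instruction could implement that interface along these lines. This is a sketch only; this WaitForSeconds and its fields are illustrative, not the library's actual type:

```csharp
using System;

// Proposed interface from the discussion above.
interface IPriorityAwaitInstruction<T> where T : IComparable<T>
{
    T Priority { get; }
}

// Illustrative time-based instruction: its finish time doubles as its
// priority, so a min-priority queue processor only needs to inspect
// the head each frame.
struct WaitForSeconds : IPriorityAwaitInstruction<float>
{
    readonly float finishTime;

    public WaitForSeconds(float now, float duration) => finishTime = now + duration;

    public float Priority => finishTime;
}

class Demo
{
    static void Main()
    {
        var a = new WaitForSeconds(0f, 1.5f);
        var b = new WaitForSeconds(0f, 0.5f);
        Console.WriteLine(a.Priority > b.Priority); // prints True
    }
}
```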

I’m going to close this issue because we’ve fixed it but thanks for your feedback and discussion on the other points 😃

1 reaction
muckSponge commented on Aug 15, 2019

It throws an exception for me too. I’ve added a slightly more complex resize check in Add to avoid this: it considers how many awaiters are left in the current queue. In the worst case, all of them will need to be re-added, so it resizes to ensure there will be enough capacity if that happens. I’m not sure whether to resize to 1.5x the largest potential queue size or 1.5x the current one; I’ve done the latter, but now I’m leaning toward the former (it’s probably better to resize less frequently if it can be helped).
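A sketch of that worst-case-aware check might look like the following (the class and field names here, such as currentIndex and currentCount, are assumptions about the internals, not the library's exact code):

```csharp
using System;

// Hypothetical sketch of a processor whose Add reserves capacity for the
// worst case: every awaiter still pending in the current queue being
// re-added to the future queue this frame.
class ContinuationProcessorSketch<T>
{
    T[] currentQueue = new T[16];
    T[] futureQueue = new T[16];
    int currentIndex, currentCount, futureCount;

    public int Capacity => futureQueue.Length;

    public void Add(T cont)
    {
        // Awaiters not yet processed this frame may all be re-added,
        // so include them in the capacity requirement.
        int pendingFromCurrent = currentCount - currentIndex;
        int required = futureCount + pendingFromCurrent + 1;

        if (required > futureQueue.Length)
        {
            // Grow to at least 1.5x so resizes stay infrequent.
            int newLength = Math.Max(required, futureQueue.Length * 3 / 2);
            Array.Resize(ref futureQueue, newLength);
            Array.Resize(ref currentQueue, newLength);
        }

        futureQueue[futureCount++] = cont;
    }
}

class Demo
{
    static void Main()
    {
        var p = new ContinuationProcessorSketch<int>();
        for (int i = 0; i < 20; i++) p.Add(i);
        Console.WriteLine(p.Capacity); // prints 24
    }
}
```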

I went to all this trouble instead of calling Add inside Process because, by their nature, these awaiters are likely to be evaluated over many frames. You’ll typically have more awaiters active than awaiters being added in any one frame, so I wanted to keep the hot path in Process as lean as possible. This meant doing a bit more work in Add, which I hope is a good trade-off. I haven’t benchmarked it, though; I’ve found it really difficult to get reliable figures in the past.

I was thinking, for the threading issue where a background thread could call Add and manipulate futureCount, that instead of locking I could check whether the caller is on Unity’s SynchronizationContext and, if not, just await the UnitySynchronizationContext. A simple sync context comparison should be faster than a lock that’s rarely needed, and it means I don’t need to lock in Process.
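That dispatch idea could be sketched like this (a plain SynchronizationContext substitutes for Unity's here, and Post stands in for awaiting the Unity context; both are simplifications):

```csharp
using System;
using System.Threading;

class Demo
{
    // In Unity this would be captured on the main thread at startup.
    static SynchronizationContext unitySyncContext;

    static void Add(Action addInternal)
    {
        // A reference comparison is cheap compared to taking a lock.
        if (SynchronizationContext.Current == unitySyncContext)
            addInternal(); // already on the "main" thread: run inline
        else
            unitySyncContext.Post(_ => addInternal(), null); // marshal over
    }

    static void Main()
    {
        // Install a context so Current is non-null, standing in for Unity's.
        unitySyncContext = new SynchronizationContext();
        SynchronizationContext.SetSynchronizationContext(unitySyncContext);

        bool ran = false;
        Add(() => ran = true); // inline path, since we're on the right context
        Console.WriteLine(ran); // prints True
    }
}
```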


