Large queues allocated
In `ContinuationProcessor<T>`, you allocate two arrays of size 500,000. This uses about 30 MB of memory for each type of `IAwaitInstruction` used (`WaitForFrames`, `WaitForSeconds`, etc.), regardless of how few Continuations you actually use.
You could instead start the queue size small (like 16) and dynamically resize the arrays. For example:
```csharp
public void Add(T cont)
{
    if (futureCount == futureQueue.Length)
    {
        int newLength = futureQueue.Length * 3 / 2;
        Array.Resize(ref futureQueue, newLength);
        Array.Resize(ref currentQueue, newLength);
    }

    // rest of method...
}
```
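
To make the suggestion concrete, here is a minimal sketch of a double-buffered processor that starts at a small capacity and only grows when needed. The field names follow the snippet above, but the `Process` body and the `IsCompleted()` shape of `IAwaitInstruction` are assumptions for illustration, not the library's actual code.

```csharp
using System;

// Assumed shape of the awaiter interface for this sketch only;
// the real IAwaitInstruction may differ.
public interface IAwaitInstruction
{
    bool IsCompleted();
}

// Minimal sketch of a double-buffered processor that starts small and grows on
// demand, instead of reserving 500,000 slots per awaiter type up front.
public class ContinuationProcessorSketch<T> where T : IAwaitInstruction
{
    const int InitialCapacity = 16;

    T[] currentQueue = new T[InitialCapacity]; // awaiters evaluated this frame
    T[] futureQueue  = new T[InitialCapacity]; // awaiters queued for the next frame
    int currentCount;
    int futureCount;

    public void Add(T cont)
    {
        if (futureCount == futureQueue.Length)
        {
            int newLength = futureQueue.Length * 3 / 2;
            Array.Resize(ref futureQueue, newLength);
            Array.Resize(ref currentQueue, newLength);
        }

        futureQueue[futureCount++] = cont;
    }

    // Called once per frame: finished awaiters are dropped, unfinished ones are
    // re-added so they get evaluated again next frame.
    public void Process()
    {
        // Swap buffers so anything added during processing lands in futureQueue.
        (currentQueue, futureQueue) = (futureQueue, currentQueue);
        (currentCount, futureCount) = (futureCount, 0);

        for (int i = 0; i < currentCount; ++i)
        {
            if (!currentQueue[i].IsCompleted())
                Add(currentQueue[i]);
        }

        currentCount = 0;
    }
}
```

With this layout, memory stays at a handful of elements per awaiter type until a workload actually queues large numbers of continuations.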
Issue Analytics
- State: Closed
- Created 4 years ago
- Comments: 14 (8 by maintainers)

I was thinking something like…

… for instructions which could be assigned a priority (and an accompanying priority queue processor), and keeping `IAwaitInstruction` for others, but that's getting into over-engineered territory. I originally thought returning an int would be fine, but that's not going to play too nicely with something like `WaitForSeconds`.

I'm going to close this issue because we've fixed it, but thanks for your feedback and discussion on the other points 😃

It throws an exception for me too. I've added a slightly more complex resize check in `Add` to avoid this. Basically, it considers how many awaiters are left in the current queue. Worst case, all of them will need to be re-added, so it resizes to ensure that if that happens there will be enough capacity for them (see the sketch below). I'm not sure whether to resize to 1.5x the largest potential queue size or 1.5x the current size; I've done the latter, but now I'm leaning toward the former (probably better to resize less frequently if it can be helped).
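
A rough sketch of the kind of check being described, assuming a hypothetical `currentRemaining` field that tracks how many awaiters `Process` has yet to evaluate (this is not the actual library code):

```csharp
public void Add(T cont)
{
    // Worst case, every awaiter still pending in the current queue gets
    // re-added this frame, so ensure capacity for all of them plus this one.
    int worstCase = futureCount + currentRemaining + 1;

    if (worstCase > futureQueue.Length)
    {
        // Grow by 1.5x the current length; the alternative discussed above is
        // 1.5x the worst-case size, which would resize less often.
        int newLength = Math.Max(worstCase, futureQueue.Length * 3 / 2);
        Array.Resize(ref futureQueue, newLength);
        Array.Resize(ref currentQueue, newLength);
    }

    futureQueue[futureCount++] = cont;
}
```
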
I went to all this trouble instead of calling `Add` inside `Process` because, by their nature, these awaiters are likely to be evaluated over many frames. You'll typically have more awaiters active than awaiters being added in any one frame, so I wanted to keep the hot path in `Process` as lean as possible. This meant doing a bit more work in `Add`, which I hope is a good trade-off. I haven't benchmarked it, though; I've found it really difficult to get reliable figures in the past.
I was thinking, for the threading issue where a background thread could call `Add` and manipulate `futureCount`, that instead of locking I could add a check to determine whether the call came from Unity's `SynchronizationContext` and, if not, just await the Unity `SynchronizationContext`. A simple sync context comparison should be faster than a lock that's rarely needed, and it means I don't need to lock in `Process`.
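
A minimal sketch of that context check, using only standard .NET APIs; the `MainThreadGate` name and the way the Unity context is captured are illustrative assumptions, not the library's API:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Sketch only; assumes UnityContext is captured once on the Unity main thread
// (e.g. from a startup hook) before any background thread calls in.
public static class MainThreadGate
{
    public static SynchronizationContext UnityContext;

    // Completes immediately when already on the main thread; otherwise posts a
    // continuation to the Unity context and completes once it has run there.
    public static Task EnsureMainThreadAsync()
    {
        if (SynchronizationContext.Current == UnityContext)
            return Task.CompletedTask;

        var tcs = new TaskCompletionSource<bool>();
        UnityContext.Post(_ => tcs.SetResult(true), null);
        return tcs.Task;
    }
}
```

A thread-safe wrapper around `Add` could then `await MainThreadGate.EnsureMainThreadAsync()` before touching `futureQueue`, so the cheap reference comparison replaces a lock and `Process` stays lock-free.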