Optimize `runBlocking` and `EventLoop` implementations
See original GitHub issue.

- `EventLoopBase` implementation class should allocate both `queue` and `delayed` lazily; moreover, a queue shall not be allocated as long as there is at most one queued task (an optimization similar to `JobSupport`).
- `DispatchedContinuation` shall extend `QueuedTask`, so that the common case of `EventLoop` usage does not involve creation of objects at all.
- Consider providing a reusable version of the `EventLoop` implementation, so that performance-sensitive code can create a single instance that is reused for multiple sequential invocations of `runBlocking`.
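The first bullet (allocate the queue lazily, keep a single task inline) can be sketched as a small three-state holder. This is an illustrative sketch only, not the kotlinx.coroutines source; the names `LazyTaskQueue`, `enqueue`, and `poll` are assumptions for the example:

```kotlin
import java.util.ArrayDeque

// Sketch: null = empty, a single Runnable = exactly one queued task,
// ArrayDeque = two or more tasks. The deque is only allocated once a
// second task arrives, so the common single-task case allocates nothing.
class LazyTaskQueue {
    private var state: Any? = null

    fun enqueue(task: Runnable) {
        when (val s = state) {
            null -> state = task // common case: store the task inline
            is ArrayDeque<*> -> {
                @Suppress("UNCHECKED_CAST")
                (s as ArrayDeque<Runnable>).add(task)
            }
            else -> state = ArrayDeque<Runnable>().apply {
                add(s as Runnable) // second task: allocate the queue now
                add(task)
            }
        }
    }

    fun poll(): Runnable? = when (val s = state) {
        null -> null
        is ArrayDeque<*> -> {
            @Suppress("UNCHECKED_CAST")
            val q = s as ArrayDeque<Runnable>
            val task = q.poll()
            if (q.isEmpty()) state = null // drop the deque when drained
            task
        }
        else -> {
            state = null
            s as Runnable
        }
    }
}
```

The single `Any?` field encoding several logical states is the same trick the issue attributes to `JobSupport`, which packs its lifecycle states into one atomically updated field.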
Issue Analytics
- State:
- Created 6 years ago
- Comments: 15 (12 by maintainers)
Top Results From Across the Web
Understanding Kotlin coroutines - LogRocket Blog
Dive deeper into Kotlin coroutines — suspendable computations similar to threads that simplify asynchronous programming in Kotlin.
Doc: Is it correct that the thread running runBlocking blocks?
Runs a new coroutine and blocks the current thread until its completion. This function should not be used from a coroutine. It is...

Change log for kotlinx.coroutines
runBlocking and EventLoop implementations optimized (see #190). Version 0.20. Migrated to Kotlin 1.2.0. Channels: Sequence-like filter, ...

How runBlocking May Surprise You - ProAndroidDev
This happens because runOnUiThread uses optimization by checking the current thread. If the current thread (UI thread in this case) is the same...

How to Prevent Reactive Java Applications from Stalling
Traditional Java applications run blocking code and a common approach ... need to worry about the event loop freezing up (reactor meltdown).
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
No. Saving small objects to `ThreadLocal` is usually counter-productive. Access to a `ThreadLocal` is typically more expensive than allocation/collection of small objects. `ThreadLocal` pooling is usually beneficial only for very large objects.

@tr8dr in general `sendBlocking` and `offer` have the same performance if `send` doesn't suspend (aka there is enough buffer capacity), because internally it has the same check (`if (!offer()) { runBlocking { ... } }`). For cases when channel capacity is saturated, you should measure performance on your specific workload, because numbers out of thin air ("on my artificial benchmark sendBlocking is two times slower") are mostly useless.
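The fast-path check described in that comment can be illustrated against a plain `java.util.concurrent` queue rather than a coroutine `Channel`. This is a hedged sketch of the shape of the optimization, not the kotlinx.coroutines implementation; `sendBlockingLike` is a name invented for the example:

```kotlin
import java.util.concurrent.ArrayBlockingQueue

// Try the non-blocking offer first; only take the expensive blocking path
// when the buffer is already full. While capacity remains, both "paths"
// cost the same: a single successful offer.
fun <T> sendBlockingLike(queue: ArrayBlockingQueue<T>, element: T) {
    if (!queue.offer(element)) { // fast path: succeeds while there is room
        queue.put(element)       // slow path: blocks until space frees up
                                 // (stands in for runBlocking { send(...) })
    }
}
```

This mirrors why `sendBlocking` and `offer` perform identically below capacity: the blocking machinery is never entered until the first `offer` fails.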