Sliceable dispatchers: Provide alternative to newSingle/FixedThreadPoolContext via a shared pool of threads
Background
`newFixedThreadPoolContext` is actively used in coroutines code as a concurrency-limiting mechanism. For example, to limit the number of concurrent requests to the database to 10, one typically defines:

```kotlin
val DB = newFixedThreadPoolContext(10, "DB")
```

and then wraps all DB invocations into `withContext(DB) { ... }` blocks.
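For illustration, a minimal self-contained version of that pattern might look like the sketch below. `queryUser` is a hypothetical stand-in for a real blocking DB call:

```kotlin
import kotlinx.coroutines.*

// A fixed pool of 10 threads; at most 10 DB operations run concurrently.
@OptIn(DelicateCoroutinesApi::class)
val DB = newFixedThreadPoolContext(10, "DB")

// Hypothetical stand-in for a blocking JDBC call.
suspend fun queryUser(id: Int): String = withContext(DB) {
    "user-$id"
}

fun main() = runBlocking {
    val users = (1..20).map { id -> async { queryUser(id) } }.awaitAll()
    println(users.size) // 20
    DB.close() // must be closed explicitly, or the 10 threads leak
}
```

Note that each `withContext(DB)` call here really does hop to one of the pool's threads, which is exactly the cost this proposal aims to remove.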
This approach has the following problems:

- The `withContext(DB)` invocation performs an actual switch to a different thread, which is extremely expensive.
- The result of `newFixedThreadPoolContext` references the underlying threads and must be explicitly closed when no longer used. This is quite error-prone, as programmers may use `newFixedThreadPoolContext` in their code without realizing it, thus leaking threads.
Solution
The plan is to reimplement `newFixedThreadPoolContext` from scratch so that it does not create any threads. Instead, there will be one shared pool of threads that creates new threads strictly when they are needed. Thus, `newFixedThreadPoolContext` does not create its own threads, but acts only as a semaphore that limits the number of concurrent operations running in this context.
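The semaphore idea can be sketched as follows. `ConcurrencyLimiter` and `limit` are illustrative names, not the proposed API; the real design would build the limit into the dispatcher itself rather than wrap blocks:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Hypothetical sketch: a "context" that owns no threads and only caps how
// many coroutines run concurrently on the shared default pool.
class ConcurrencyLimiter(permits: Int) {
    private val semaphore = Semaphore(permits)

    // Suspends (without blocking a thread) until a permit is free, then
    // runs the block on whatever dispatcher is already current.
    suspend fun <T> limit(block: suspend () -> T): T =
        semaphore.withPermit { block() }
}

fun main() = runBlocking {
    val db = ConcurrencyLimiter(10) // at most 10 concurrent "DB" operations
    val results = (1..100).map { i ->
        async { db.limit { i } } // no thread switch, only a permit check
    }.awaitAll()
    println(results.size) // 100
}
```

In later kotlinx.coroutines releases this idea eventually shipped in a different form, as `CoroutineDispatcher.limitedParallelism(n)`.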
Moreover, `DefaultContext`, which is currently equal to `CommonPool` (backed by `ForkJoinPool.commonPool`), is going to be redefined in this way:

```kotlin
val DefaultContext = newFixedThreadPoolContext(defaultParallelism, "DefaultContext")
```
The current plan is to set `defaultParallelism` to `nCPUs + 1` as a compromise value that ensures utilization of the underlying hardware even if one coroutine accidentally blocks, and helps us avoid issue #198.
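As a sketch of that choice (the name `defaultParallelism` here is illustrative, not a public API):

```kotlin
// Illustrative only: nCPUs + 1 keeps all cores busy even if one coroutine
// accidentally blocks its thread.
val defaultParallelism: Int = Runtime.getRuntime().availableProcessors() + 1

fun main() {
    println(defaultParallelism)
}
```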
Now, with this redefinition of `DefaultContext`, the code that defines its own `DB` context continues to work as before (limiting the number of concurrent DB operations). However, both issues identified above are solved:

- The `withContext(DB)` invocation does not actually perform a thread context switch anymore. It only switches the coroutine context, and separately keeps track of and limits the number of coroutines running concurrently in the `DB` context.
- There is no need to close `newFixedThreadPoolContext` anymore; since it is not backed by any physical threads, there is no risk of leaking threads.
This change also affects `newSingleThreadContext`, as its implementation is:

```kotlin
fun newSingleThreadContext(name: String) = newFixedThreadPoolContext(1, name)
```
This might break some code (feedback is welcome!), as there could have been code in the wild that assumed that everything working in `newSingleThreadContext` was indeed happening in a single instance of `Thread` and used a `ThreadLocal` to store something, for example. The workaround for such code is to use `Executors.newSingleThreadExecutor().asCoroutineDispatcher()`.
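A sketch of that workaround (the names `singleThread` and `requestId` are illustrative):

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// A dedicated executor guarantees every resumption runs on the same physical
// thread, so ThreadLocal state survives across suspension points.
val singleThread = Executors.newSingleThreadExecutor().asCoroutineDispatcher()
val requestId = ThreadLocal.withInitial { "none" }

fun main() = runBlocking {
    withContext(singleThread) {
        requestId.set("req-42")
        delay(10) // suspend and resume; still the same thread
        println(requestId.get()) // req-42
    }
    singleThread.close() // a dedicated thread must still be closed explicitly
}
```

The trade-off is deliberate: this dispatcher keeps the old single-thread guarantee, but also keeps the old costs (a real thread switch on every `withContext`, plus explicit lifecycle management).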
This issue is related to the discussion of the `IO` dispatcher in #79. It is inefficient to use `Executors.newCachedThreadPool().asCoroutineDispatcher()` due to the thread context switches. The plan, as part of this issue, is to define the following constant:

```kotlin
val IO: CoroutineContext = ...
```

(The name is to be discussed in #79.) Coroutines working in this context share the same thread pool as `DefaultContext`, so there is no cost of a thread switch when doing `withContext(IO) { ... }`, but there is no inherent limit on the number of such concurrently executing operations.
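This is essentially how `Dispatchers.IO` ended up working in later kotlinx.coroutines releases: it shares worker threads with `Dispatchers.Default`, so switching between them can often reuse the current thread. A small illustration (thread reuse is an optimization, not a guarantee):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking(Dispatchers.Default) {
    val cpuThread = Thread.currentThread().name
    val ioThread = withContext(Dispatchers.IO) {
        // Blocking I/O (file/network reads) would go here; the number of
        // concurrent IO workers is elastic rather than tied to nCPUs.
        Thread.currentThread().name
    }
    println("Default ran on $cpuThread, IO ran on $ioThread")
}
```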
Note that we also avoid issue #216 with this rewrite.
Open questions

- Shall we rename `newFixedThreadPoolContext` and `newSingleThreadContext` after this rewrite, or leave their names as is? Can we name them better?
- Should we leave `newSingleThreadContext` defined as before (with all the context-switch cost) to avoid potentially breaking existing code? This would work especially well if `newFixedThreadPoolContext` is somehow renamed (with the old name deprecated), but `newSingleThreadContext` retains the old name.
UPDATE: Due to backward-compatibility requirements the actual design will likely be different. Stay tuned.
Issue Analytics

- State:
- Created: 6 years ago
- Reactions: 58
- Comments: 22 (12 by maintainers)