Allow callbackFlow to specify capacity
I just wrote the following flow to observe changes in a preference.
import android.content.SharedPreferences
import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.callbackFlow

class BoolPref(
    private val sharedPrefs: SharedPreferences,
    private val key: String,
    private val defaultValue: Boolean
) {
    fun observe(): Flow<Boolean> = callbackFlow {
        @Suppress("ObjectLiteralToLambda")
        val listener = object : SharedPreferences.OnSharedPreferenceChangeListener {
            override fun onSharedPreferenceChanged(prefs: SharedPreferences, key: String) {
                if (this@BoolPref.key == key) {
                    // I want to guarantee that this `.offer(..)` call emits
                    offer(prefs.getBoolean(key, defaultValue))
                }
            }
        }
        send(sharedPrefs.getBoolean(key, defaultValue))
        sharedPrefs.registerOnSharedPreferenceChangeListener(listener)
        awaitClose {
            sharedPrefs.unregisterOnSharedPreferenceChangeListener(listener)
        }
    }
}
I would like to specify a channel capacity of Channel.UNLIMITED to guarantee that .offer(..) will succeed, but the current default is Channel.BUFFERED, without the option to specify the limit. Another option would be to use Channel.UNLIMITED as the default, as it’s “safer”.
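A minimal sketch of the workaround available today, assuming the listener registration from the snippet above: the buffer operator fuses with the channel that callbackFlow creates, so the underlying channel gets the requested capacity (observeUnlimited is just an illustrative name).

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.buffer
import kotlinx.coroutines.flow.callbackFlow

// buffer(Channel.UNLIMITED) fuses with the channel created by callbackFlow,
// so offer(..) only fails once the channel has been closed.
fun observeUnlimited(): Flow<Boolean> = callbackFlow<Boolean> {
    // ...register the SharedPreferences listener and offer(..) values,
    // exactly as in the snippet above...
    awaitClose { /* ...unregister the listener... */ }
}.buffer(Channel.UNLIMITED)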
@ZakTaccardi There is no difference between callbackFlow and channelFlow. They are the same. They just have different names to tailor their docs to the specific use case and to enable the code that uses them to convey your intention. E.g., when I read your code I would immediately see what you are planning to do just from the name of the flow builder you are using.

In this specific case, you probably want conflated. If someone misses a value change of a preference, they likely just want to get the latest value, not all of the value changes that occurred since the last one.
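To make the conflation suggestion concrete, here is a hedged sketch (observeConflated is an illustrative name; the listener body is assumed to be the one from the issue). conflate() also fuses with the channel created by callbackFlow, so only the latest value is kept and offer(..) succeeds while the channel is open.

import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.callbackFlow
import kotlinx.coroutines.flow.conflate

// The conflated channel keeps only the most recent value; a slow
// collector simply observes the latest preference state.
fun observeConflated(): Flow<Boolean> = callbackFlow<Boolean> {
    // ...register the listener and offer(..) values as in the issue...
    awaitClose { /* ...unregister the listener... */ }
}.conflate()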
But in general, unlimited buffers are how you get cascading failures rather than localized ones (or potentially none at all). Unlimited queues are very hard to recover from when the producer is faster than the consumer because there’s neither backpressure being applied nor points where you can shed load (because how would you even know?).
Especially in the design of Flow, backpressure is handled naturally and arguably transparently inside the system such that producers can’t outrun consumers. At the points where you bridge in and out of the suspension world, there are still signals, like the return value of offer or runBlocking and actual caller-blocking, which you can use to try to slow (or stop) your producer. Too-small or too-large buffers are things you can tweak over time based on actual usage patterns, but every unlimited buffer is just a time bomb for a crash and/or outage.
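As an illustration of using offer’s result as that signal (emitOrDrop is a hypothetical helper, not part of the issue or the library; newer kotlinx.coroutines versions express the same idea with trySend, which returns a ChannelResult):

import android.content.SharedPreferences
import android.util.Log
import kotlinx.coroutines.channels.ProducerScope

// Hypothetical helper: offer the current preference value and shed load
// (with a log line) when the buffer is full, instead of queueing without bound.
fun ProducerScope<Boolean>.emitOrDrop(
    prefs: SharedPreferences,
    key: String,
    defaultValue: Boolean
) {
    val delivered = offer(prefs.getBoolean(key, defaultValue))
    if (!delivered) {
        Log.w("BoolPref", "Buffer full, dropped update for key=$key")
    }
}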