
consume: the unreliable way to cancel upstream sources for channel operators


Background

Currently, all built-in operations on ReceiveChannel are implemented using basically the same code pattern. Take a look at the implementation of filter, for example:

fun <E> ReceiveChannel<E>.filter(context: CoroutineContext = Unconfined, predicate: suspend (E) -> Boolean): ReceiveChannel<E> =
    produce(context) {
        consumeEach {
            if (predicate(it)) send(it)
        }
    }

Under the hood, consumeEach uses consume to make sure that every item from the upstream channel is consumed even when the filter coroutine crashes or is cancelled:

suspend inline fun <E> ReceiveChannel<E>.consumeEach(action: (E) -> Unit) =
    consume {
        for (element in this) action(element)
    }

where the definition of consume is likewise simple:

inline fun <E, R> ReceiveChannel<E>.consume(block: ReceiveChannel<E>.() -> R): R =
    try {
        block()
    } finally {
        cancel()
    }
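The guarantee consume provides is just the plain try/finally resource pattern. A stdlib-only sketch of it, with a hypothetical FakeChannel standing in for ReceiveChannel:

```kotlin
// Stdlib-only sketch of the consume pattern: run a block against a
// resource and guarantee cancellation even when the block throws.
// FakeChannel is a hypothetical stand-in for ReceiveChannel.
class FakeChannel {
    var cancelled = false
        private set
    fun cancel() { cancelled = true }
}

inline fun <R> FakeChannel.consume(block: FakeChannel.() -> R): R =
    try {
        block()
    } finally {
        cancel()  // always runs, whether the block succeeds or fails
    }

fun crashDemo(): Boolean {
    val ch = FakeChannel()
    try {
        ch.consume { error("consumer crashed") }
    } catch (e: IllegalStateException) {
        // expected: the consumer body failed
    }
    return ch.cancelled
}

fun main() {
    println("cancelled = ${crashDemo()}")  // cancelled = true
}
```

The key property: the finally clause runs on every exit path of the block — but only if the block is entered at all, which is exactly the loophole the rest of the issue is about.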

This design ensures that when you have a chain of operators applied to some source channel, as in val res = src.filter { ... }, closing the resulting res channel forwards the cancellation upstream and closes the source src channel too, making sure that any resources used by the source are promptly released.

Problem

It all works great until we have an unhappy combination of factors. Consider the following code:

fun main(args: Array<String>) = runBlocking<Unit> {
    // Create the source channel producing numbers 0..9
    val src = produce { repeat(10) { send(it) } }
    // Create the resulting channel by filtering even numbers from the source
    val res = src.filter { it % 2 == 0 }
    // Immediately cancel the resulting channel
    res.cancel()
    // Check if the source was cancelled
    println("source was cancelled = ${src.isClosedForReceive}")
}

Run the code and see that it behaves as expected, printing:

source was cancelled = true

Now, replace the definition of the res channel with a new one, passing the coroutineContext parameter to the filter invocation so that the filtering coroutine runs in the context of the runBlocking dispatcher:

    val res = src.filter(coroutineContext) { it % 2 == 0 }

Running this code again produces:

source was cancelled = false

What is going on here? The problem is in the combination of three factors: produce(context) { consume { ... } }, our runBlocking dispatcher, and the fact that the resulting channel is immediately cancelled.

When produce is invoked, it does not immediately start running the coroutine, but schedules its execution using the specified context. In the code above we’ve passed the runBlocking context, so the coroutine gets scheduled onto the main thread for execution instead of being executed right away. Now, if we cancel the produce coroutine while it is still scheduled for execution, it does not run at all, so it never executes consume and never cancels the source channel.
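The same race can be reproduced with nothing but a plain executor standing in for the runBlocking dispatcher: cleanup placed inside a task’s own body never runs if the task is cancelled while still sitting in the queue. A stdlib-only sketch:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicBoolean

// Sketch of the race: a single-threaded executor plays the role of the
// runBlocking dispatcher. The "cancel upstream" step lives inside the
// task body, so cancelling the task before it starts skips it entirely.
fun cancelledBeforeStart(): Boolean {
    val pool = Executors.newSingleThreadExecutor()
    val cleanedUp = AtomicBoolean(false)
    pool.submit { Thread.sleep(100) }        // keep the only worker busy
    val future = pool.submit {
        try {
            /* the consume { ... } body would go here */
        } finally {
            cleanedUp.set(true)              // our "cancel upstream" step
        }
    }
    future.cancel(false)                     // cancel while still queued
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    return cleanedUp.get()                   // false: finally never ran
}

fun main() {
    println("cleaned up = ${cancelledBeforeStart()}")  // cleaned up = false
}
```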

It works without problems when we don’t specify the context explicitly because, by default, the Unconfined context is used, which starts executing the coroutine’s code immediately and keeps running until the first suspension point.

Solution 1: CoroutineStart.ATOMIC

We can use produce(context, start = CoroutineStart.ATOMIC) { ... } in the implementation of all the operators. It makes sure that the coroutine is not cancellable until it starts executing. This change provides immediate relief for the problem, but it has a slight unintended side effect: the “non-cancellable” period extends until the first suspension point in the coroutine, which might be too long. It also feels extremely fragile, since it is quite easy to forget the start parameter.
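The effect of ATOMIC start can be mimicked in the executor sketch above: if the task always begins executing and only then observes cancellation cooperatively, its finally block is guaranteed to run. This mirrors the idea of CoroutineStart.ATOMIC, not its actual mechanics:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicBoolean

// Stdlib-only sketch of the ATOMIC idea: the task is always run, and
// cancellation is only observed once the body has started, so the
// finally block executes on every path.
fun atomicStart(): Boolean {
    val pool = Executors.newSingleThreadExecutor()
    val cancelled = AtomicBoolean(false)
    val cleanedUp = AtomicBoolean(false)
    pool.submit {
        try {
            if (cancelled.get()) return@submit  // cancellation seen after start
            /* normal processing */
        } finally {
            cleanedUp.set(true)                 // runs on every exit path
        }
    }
    cancelled.set(true)   // cooperative cancel, not Future.cancel
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    return cleanedUp.get()
}

fun main() {
    println("cleaned up = ${atomicStart()}")  // cleaned up = true
}
```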

Solution 2: Abolish consume, provide onCompletion

We can add an additional optional parameter onCompletion: (Throwable?) -> Unit to produce and other similar coroutine builders, and completely change the implementation pattern for filter and other operations like this:

fun <E> ReceiveChannel<E>.filter(context: CoroutineContext = Unconfined, predicate: suspend (E) -> Boolean): ReceiveChannel<E> =
    produce(context, onCompletion = { cancel(it) }) {
        for (element in this) {
            if (predicate(element)) send(element)
        }
    }
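What makes this reliable is that the handler is attached to the coroutine’s lifecycle rather than buried inside its body, so it fires even when the body is cancelled before it ever starts. A hypothetical stdlib-only sketch of that idea (MiniProducer is an illustration, not the proposed API):

```kotlin
// Hypothetical sketch: a completion handler that belongs to the task's
// lifecycle, so cancellation before start still triggers it.
class MiniProducer(
    private val body: () -> Unit,
    private val onCompletion: (Throwable?) -> Unit
) {
    private var completed = false
    private var cancelled = false

    private fun complete(cause: Throwable?) {
        if (completed) return          // fire the handler exactly once
        completed = true
        onCompletion(cause)
    }

    fun cancel() {
        cancelled = true
        complete(null)                 // fires even though the body never ran
    }

    fun start() {
        if (cancelled) return          // body skipped, handler already ran
        try {
            body()
            complete(null)
        } catch (t: Throwable) {
            complete(t)
        }
    }
}

fun cancelBeforeStartDemo(): Boolean {
    var upstreamCancelled = false
    val p = MiniProducer(body = { }, onCompletion = { upstreamCancelled = true })
    p.cancel()   // like res.cancel() before the coroutine is dispatched
    p.start()
    return upstreamCancelled
}

fun main() {
    println("upstream cancelled = ${cancelBeforeStartDemo()}")  // true
}
```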

The advantage of this pattern is that it scales better to operators on multiple channels and makes them less error-prone to write. For example, the current implementation of zip looks like this:

fun <E, R, V> ReceiveChannel<E>.zip(other: ReceiveChannel<R>, context: CoroutineContext = Unconfined, transform: (a: E, b: R) -> V): ReceiveChannel<V> =
    produce(context) {
        other.consume {
            val otherIterator = other.iterator()
            this@zip.consumeEach { element1 ->
                if (!otherIterator.hasNext()) return@consumeEach
                val element2 = otherIterator.next()
                send(transform(element1, element2))
            }
        }
    }

It is quite deeply nested, and relies on the magic of consume and consumeEach to close the source channels. With onCompletion we can write instead:

fun <E, R, V> ReceiveChannel<E>.zip(other: ReceiveChannel<R>, context: CoroutineContext = Unconfined, transform: (a: E, b: R) -> V): ReceiveChannel<V> =
    produce(context, onCompletion = { 
        cancel(it)
        other.cancel(it)
    }) {
        val otherIterator = other.iterator()
        for (element1 in this) {
            if (!otherIterator.hasNext()) return@produce
            val element2 = otherIterator.next()
            send(transform(element1, element2))
        }
    }
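Stripped of the channel machinery, the loop in this zip body is plain two-iterator logic that stops at the shorter input. The same logic over lists, as a stdlib-only sketch:

```kotlin
// The zip body above, extracted as plain iterator logic over lists:
// iterate the first input, pull from the second, stop when either ends.
fun <E, R, V> zipLists(a: List<E>, b: List<R>, transform: (E, R) -> V): List<V> {
    val out = mutableListOf<V>()
    val otherIterator = b.iterator()
    for (element1 in a) {
        if (!otherIterator.hasNext()) break
        out += transform(element1, otherIterator.next())
    }
    return out
}

fun main() {
    println(zipLists(listOf(1, 2, 3), listOf("a", "b")) { x, y -> "$x$y" })  // [1a, 2b]
}
```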

Here the fact that the source channels are closed on completion is explicit in the code instead of being hidden, and multiple source channels do not produce additional nesting of consume blocks.

However, that leaves an open question of what to do with consume and consumeEach. One approach is to deprecate consume for its error-proneness. However, we should keep the finally { cancel() } behavior in consumeEach, because consumeEach is a terminal operator that should always make sure the original channel is fully consumed.

Discussion

Making sure that upstream channels are properly closed is a bane and an inherent complexity of “hot” channels, especially when they are backed by actual resources like network streams. It is not a problem with “cold” channels (see #254), which, by definition, do not allocate any resources until subscribed to. However, simply switching to cold channels everywhere does not provide any relief if the domain problem at hand actually calls for a hot channel. Cold channels that are backed by hot channels suffer from the same problem: their underlying hot channels still have to be closed somehow when they are no longer needed.

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Reactions: 1
  • Comments: 10 (9 by maintainers)

Top GitHub Comments

2 reactions
pull-vert commented, Mar 14, 2018

Hope this new onCompletion parameter (a kind of callback function) will not lead to an “async callback style” use of coroutines, with subsequent statements placed in the onCompletion lambda.

1 reaction
elizarov commented, Mar 13, 2018

So, I’ve played further with different names, and so far I like the following best:

produce(context, onCompletion = consumes()) { ... }

For operations over multiple source channels (like zip), it is going to look like this:

produce(context, onCompletion = consumesAll(this, other)) { ... }

The reason for this particular naming is that in the future we’d like to fail fast when multiple consumers try to work with the same channel (see #167), so consumes() would do double duty: marking the source channel as being consumed and providing a completion handler that cancels the channel on completion of the consumer.
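One way to read the proposed consumesAll(...) is simply as a factory for a completion handler that cancels every marked source. A hypothetical stdlib-only sketch of that shape (Src and consumesAll here are illustrations, not the actual API):

```kotlin
// Hypothetical sketch: consumesAll returns a completion handler that
// cancels every source channel it was given, regardless of the cause.
class Src {
    var cancelled = false
        private set
    fun cancel() { cancelled = true }
}

fun consumesAll(vararg sources: Src): (Throwable?) -> Unit = { _ ->
    for (s in sources) s.cancel()
}

fun main() {
    val a = Src()
    val b = Src()
    val onCompletion = consumesAll(a, b)
    onCompletion(null)  // completion of the consumer cancels both sources
    println("${a.cancelled} ${b.cancelled}")  // true true
}
```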
