
[API Proposal] introducing END

See original GitHub issue

Besides usage with channels (see #254), END can also be used with store actions. This would allow us to break the while(true) loop inside watchers. This, combined with #78 (support for attached forks), would allow us to write universal Saga code.

END is a special action. Suppose there is a Saga waiting on a take:

function* saga() {
  const action = yield take(SOME_ACTION)
  //...
}

If END is dispatched, then the take will be resolved no matter what SOME_ACTION is. Similarly, if the store/channel has already ENDed and a Saga yields a take(SOME_ACTION), it'll also be resolved immediately with END (see the Semantics section for why).

I was planning to implement this in the real-world example, but due to lack of time I'll give a simpler example here:

function* clientOnlySaga() {
  let action = yield take(CLIENT_ONLY_ACTION)
  while(action !== END) {
    yield fork(myClientTask)
    action = yield take(CLIENT_ONLY_ACTION)
  }
}

function* universalSaga() {
  let action = yield take(UNIVERSAL_ACTION)
  while(action !== END) {
    yield fork(myUniversalTask)
    action = yield take(UNIVERSAL_ACTION)
  }
}

function* rootSaga() {
  yield [
    fork(clientOnlySaga),
    fork(universalSaga)
  ]
}

// store/middleware setup
import { createStore, applyMiddleware } from 'redux'
import createSagaMiddleware from 'redux-saga'

const sagaMiddleware = createSagaMiddleware()
const store = createStore(
  rootReducer,
  applyMiddleware(sagaMiddleware)
)
const rootTask = sagaMiddleware.run(rootSaga)

If we run the code in the client, then it'll behave just like with while(true), because no END action is dispatched on the client. On the server, however, using for example React Router:

match({routes, location: req.url}, (error, redirectLocation, renderProps) => {
  if (error) { ... }
  else if (redirectLocation) { ... }
  else if (renderProps && renderProps.components) {
    const rootTask = sagaMiddleware.run(rootSaga)

    // this will cause the universal Saga tasks to trigger
    renderToString(
      <Root store={store} renderProps={renderProps} type="server"/>
    )
    // notify Sagas that there will be no more dispatches
    // this will break the while loop of the watchers
    store.dispatch(END)
    rootTask.done.then(() => {
      res.status(200).send(
        Layout(
          renderToString(
            <Root store={store} renderProps={renderProps} type="server"/>
          ),
          JSON.stringify(store.getState())
        )
      )
    }).catch(...)
  } else {
    res.status(404).send('Not found')
  }
})

Above, dispatching END will cause the while loops to break and the related Sagas to terminate their main body. With support for attached forks, a parent which has terminated its own body will wait for all forked children (attached by default) to terminate before returning. So the root Saga will terminate after all fired tasks terminate.
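To make that ordering concrete, here is a minimal sketch. It assumes the take-resolves-with-END semantics proposed here, the attached-fork behavior of #78, the store/middleware setup above, and a hypothetical slow loadData task; delay and task.done are used as in the 0.x-era snippets above:

import { delay, END } from 'redux-saga'
import { take, fork, call } from 'redux-saga/effects'

// hypothetical slow task, e.g. a data load
function* loadData() {
  yield call(delay, 1000)
  console.log('loadData finished')
}

function* watcher() {
  let action = yield take('LOAD')
  while (action !== END) {
    yield fork(loadData)          // attached fork
    action = yield take('LOAD')
  }
  // the main body ends here; with attached forks, the task itself
  // only completes once every forked loadData has completed
}

const rootTask = sagaMiddleware.run(watcher)
store.dispatch({ type: 'LOAD' })  // spawns a ~1s child task
store.dispatch(END)               // breaks the while loop right away
rootTask.done.then(() => {
  // resolves only after the pending loadData task finishes (~1s later)
})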

There is one drawback though: we need to render twice, the 1st time to trigger the necessary actions and fire the load tasks, and the 2nd to send the final result to the client. But I don't think it's a big deal, because the time spent on rendering would be insignificant here compared to the latency of the network requests. And more importantly, we can run the same code on the client and the server.

Semantics of END

The motivation for the above behavior arises from the need to define precise semantics for END, especially how it should compose within race and parallel effects.

For some time, I was confused because I looked at END from Rx's point of view, i.e. as the end of a stream. But actually there is no notion of stream in redux-saga; there is only the notion of Futures (e.g. Promises): take(action), call(func), join(task) … can all be viewed like normal function calls which return Futures (like await in async functions). So the issue becomes: how do we translate the END of a stream into the Future/Promise model?

IMO the answer is the Never-happening-Future. For example, suppose we have a kind of nextEvent method which returns the next event occurrence on a stream of events. What happens if we call nextEvent on a stream that has already terminated?

function myFunc() {
  const promise = nextEvent(stream)
  // ...
}

Since the stream is terminated, the promise should never resolve because there are no more future events, so myFunc won't make any progress.
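As a rough sketch of that idea in plain JavaScript (the stream object and nextEvent below are purely illustrative, not a redux-saga API):

// purely illustrative: a tiny event source with an "ended" flag
function createStream() {
  const listeners = []
  let ended = false
  return {
    emit(event) { listeners.splice(0).forEach(fn => fn(event)) },
    end() { ended = true },
    hasEnded() { return ended },
    subscribeOnce(fn) { listeners.push(fn) },
  }
}

// resolves with the next event, or never resolves if the stream already ended
function nextEvent(stream) {
  if (stream.hasEnded()) {
    return new Promise(() => {}) // the Never-happening-Future
  }
  return new Promise(resolve => stream.subscribeOnce(resolve))
}

async function myFunc(stream) {
  const event = await nextEvent(stream)
  // if the stream had already ended, this line is never reached
  console.log('next event:', event)
}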

Once we define it this way, the meaning of combining END with race and parallel becomes more obvious. We have a simple and precise algebra: END behaves like a sort of Zero for Futures.

// a race with Never always yields the other Future
Never `race` Future = Future

// a parallel containing a Never always yields Never
Never `parallel` Future = Never

This is how it’s actually implemented in the proposal.
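The same algebra can be observed with plain promises, using a promise that never settles as the analogue of END (just an analogy, not the actual implementation):

const never = new Promise(() => {})        // the Never-happening-Future
const future = Promise.resolve('result')

// race: the never-settling promise always loses to the other Future
Promise.race([never, future]).then(v => console.log(v))               // logs 'result'

// parallel: a single never-settling member keeps the whole thing pending
Promise.all([never, future]).then(() => console.log('never logged'))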

The doubt I'm having though is whether we should expose END explicitly to the developer or handle it automatically by terminating the Saga. In the above example we used explicit handling of END. With automatic handling we could write:

function* saga() {
  while(true) {
    yield take(action)
    yield fork(task)
  }
}

function* parentSaga() {
  yield call(saga)
}

Above there is no explicit END value; if the take resolves with END, then we can choose to terminate the Saga automatically and resolve its return value with END. The END would then propagate to the parent, so it'll also be terminated. The only way to escape from END would be inside forked tasks and within race effects.

Automatic handling spares us from dealing manually with END results. OTOH, manual handling of END gives more flexibility (for example, starting a second stream after the 1st one terminates), as sketched below.
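For instance, under the manual-handling option (the alternative this proposal leaves open, not necessarily the behavior that ended up shipping), a Saga could drain one channel and then switch to another. The channel names and the processItem worker below are hypothetical:

import { END } from 'redux-saga'
import { take, call } from 'redux-saga/effects'

// drains firstChannel until it ENDs, then moves on to secondChannel
function* consumeBoth(firstChannel, secondChannel) {
  while (true) {
    const item = yield take(firstChannel)
    if (item === END) break        // manual handling: we decide what END means here
    yield call(processItem, item)  // processItem is a hypothetical worker
  }
  // the Saga keeps running and starts consuming the second source
  while (true) {
    const item = yield take(secondChannel)
    if (item === END) return
    yield call(processItem, item)
  }
}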

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Reactions: 12
  • Comments: 43 (14 by maintainers)

Top GitHub Comments

7 reactions
Andarist commented, Nov 3, 2016

You can import it like this: import { END } from 'redux-saga'

although it should probably be better documented
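For example, on the server it typically ends up looking something like this (reusing the store and rootSaga from the original post; task.done is the 0.x API used throughout this thread, renamed to task.toPromise() in 1.x):

import { END } from 'redux-saga'

const rootTask = sagaMiddleware.run(rootSaga)
// ...trigger whatever dispatches are needed (e.g. via renderToString)...
store.dispatch(END)   // no more store actions will follow
rootTask.done.then(() => {
  // all watchers have wound down
})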

6 reactions
bmealhouse commented, Aug 21, 2017

@keithnorm @rosendi - I did find a solution to this problem using channels, however, I’m not sure it is the recommended approach. It would be nice to get some feedback from the community, @yelouafi, and @Andarist.

Here is a simple weather example. To kick things off, FETCH_LOCATIONS_REQUEST needs to be dispatched, which will put locations-saga.js and weather-saga.js in a running state. When END is dispatched it will terminate all sagas that aren’t in the middle of doing work. Since locations-saga.js and weather-saga.js are still running, they will not be terminated until their work is complete.


main.js

import {END} from 'redux-saga'
import store from './store'

store.dispatch({
  type: 'FETCH_LOCATIONS_REQUEST'
})

store.dispatch(END)

root-saga.js

import {all, fork} from 'redux-saga/effects'
import locationsSaga from './locations-saga'
import weatherSaga from './weather-saga'

function* rootSaga() {
  yield all([
    fork(locationsSaga),
    fork(weatherSaga)
  ])
}

export default rootSaga

locations-saga.js

import {channel} from 'redux-saga'
import {call, fork, put, take} from 'redux-saga/effects'
import {weatherChannelHandler} from './weather-saga'
import * as api from './utils/api'

function* fetchLocations() {
  const locations = yield call(api.fetchLocations)

  yield put({
    type: 'FETCH_LOCATIONS_SUCCESS',
    locations
  })

  return locations
}

function* locationsSaga() {
  const weatherChannel = yield call(channel)
  yield fork(weatherChannelHandler, weatherChannel)

  while(true) {
    yield take('FETCH_LOCATIONS_REQUEST')
    const locations = yield call(fetchLocations)
    yield put(weatherChannel, locations)
  }
}

export default locationsSaga

weather-saga.js

import {call, take, takeEvery} from 'redux-saga/effects'

function* fetchWeather(action) {
  // fetch weather data from api...
  // action.locations available here
}

export function* weatherChannelHandler(channel) {
  const action = yield take(channel)
  yield call(fetchWeather, action) // blocking
}

function* weatherSaga() {
  // listen for other actions...
  yield takeEvery('SELECT_LOCATION', fetchWeather)
}

export default weatherSaga
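If one also wants to know when everything has wound down after END in this setup, the store module (sketched below; the root-reducer path is an assumption) could expose the root task so main.js can wait on it after dispatching END, via rootTask.toPromise() in redux-saga 1.x or rootTask.done in the 0.x versions current at the time:

store.js (sketch)

import {createStore, applyMiddleware} from 'redux'
import createSagaMiddleware from 'redux-saga'
import rootReducer from './root-reducer'   // assumed reducer module
import rootSaga from './root-saga'

const sagaMiddleware = createSagaMiddleware()
const store = createStore(rootReducer, applyMiddleware(sagaMiddleware))

// exported so callers can wait for all sagas to complete after dispatching END
export const rootTask = sagaMiddleware.run(rootSaga)
export default store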

