Feature request: Job upsert
Hi all,
I am attempting to use BullMQ to debounce an event stream on the trailing edge using delayed jobs. The debounce window can vary depending on the use case (from several seconds to months).
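To make the intended semantics concrete, here is a minimal, self-contained sketch (plain JavaScript, not BullMQ code) of trailing-edge debouncing over a recorded event stream: within a window, only the last event per key survives, carrying the most recent payload.

```javascript
// Hypothetical sketch, not BullMQ code: trailing-edge debounce over a
// list of timestamped events. An event "fires" only if no later event
// with the same key arrives within the window; bursts collapse to their
// last event, which keeps the freshest payload.
function trailingDebounce(events, windowMs) {
  const fired = [];
  events.forEach((event, i) => {
    // A newer same-key event inside the window supersedes this one.
    const superseded = events
      .slice(i + 1)
      .some((later) => later.key === event.key && later.ts - event.ts <= windowMs);
    if (!superseded) {
      fired.push({ key: event.key, payload: event.payload, at: event.ts + windowMs });
    }
  });
  return fired;
}
```

With a 100 ms window, two events 50 ms apart collapse into one, while an event 350 ms later starts a new burst.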
My first shot at this was to use job.name as a key to group jobs that correspond to the same window. Here is some pseudo code for a job scheduler:
// get all delayed jobs with the same name
// should never return length > 1 😄
const [delayedJob] = await queue
  .getDelayed()
  .then((jobs = []) => jobs.filter((j) => j?.name === jobName))

// remove the stale job so only the trailing event survives
if (delayedJob) {
  await delayedJob.remove()
}

// re-add with the latest data, restarting the debounce window
await queue.add(jobName, jobData, {
  delay: jobDelay,
})
Obviously this approach does not make the read+remove+add sequence atomic, and my test cases reliably reproduce race conditions under high stream throughput (i.e., more than one delayed job with the same name ends up in the queue). I want to preserve the trailing edge, since it carries the most recent jobData state, so it is critical that the last event is the one that remains queued.
A less fragile approach might be read+update (or read+add), but there is currently no way to update a job's delay. There is an open issue in bull for this: https://github.com/OptimalBits/bull/issues/1733
I assume this feature will require lua scripting, but I have zero experience there and very minimal experience with Redis data structures in general.
If there is an alternative approach or advice on tackling the lua script I would be happy to hear/discuss it.
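For discussion purposes, the semantics such a Lua script would need to provide can be sketched in memory, where a single synchronous method stands in for the atomic Redis script (class and method names here are hypothetical, not BullMQ API):

```javascript
// In-memory stand-in for an atomic "upsert delayed job" operation.
// In Redis, the body of upsertDelayed would run as one Lua script, so
// no other client could observe the state between remove and add.
class DelayedQueue {
  constructor() {
    this.jobs = new Map(); // jobName -> { data, runAt }
  }

  // Atomically replace any pending job of the same name, resetting the
  // debounce window and keeping only the newest data (trailing edge).
  upsertDelayed(jobName, data, delayMs, now = Date.now()) {
    this.jobs.set(jobName, { data, runAt: now + delayMs });
  }

  // Pop all jobs whose delay has elapsed.
  drainDue(now = Date.now()) {
    const due = [];
    for (const [name, job] of this.jobs) {
      if (job.runAt <= now) {
        due.push({ name, data: job.data });
        this.jobs.delete(name);
      }
    }
    return due;
  }
}
```

Because `upsertDelayed` is a single operation, the check-remove-add interleaving that breaks the getDelayed/filter approach cannot occur; porting this to Redis would mean expressing the same replace-in-one-step logic against the delayed set in Lua.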
Issue Analytics
- Created: 3 years ago
- Reactions: 1
- Comments: 5 (2 by maintainers)

@jamesholcomb ok, so you mean that if the job has already started, it is because the debounce window has "expired", and adding now would enqueue a new job, while the one that has started continues normally until completion?
Yes.