v1.0 API
A live (updated in place) proposal for the v1.0 API:
The signature of the main memoizee function will remain the same: memoizee(fn[, options])
Supported options:
contextMode, possible values:
- 'function' (default): target of memoization is a regular function
- 'method': target of memoization is a method
- 'weak': target of memoization is a regular function which takes an object as its first argument, where we don’t want to lock those objects from gc
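The 'weak' mode described above can be sketched in a few lines. This is only an illustration of the semantics (the helper name memoizeWeak is hypothetical, not part of the memoizee API), covering the length === 1 case: results are keyed by the object argument in a WeakMap, so the cache never prevents that object from being garbage collected.

```javascript
// Hypothetical sketch of 'weak' contextMode semantics (not the memoizee API).
// Results are keyed by the first (object) argument in a WeakMap, so cached
// entries never block that object from being garbage collected.
function memoizeWeak(fn) {
  const cache = new WeakMap();
  return (obj) => {
    if (!cache.has(obj)) cache.set(obj, fn(obj));
    return cache.get(obj);
  };
}

// Usage: repeated calls with the same object hit the cache
let calls = 0;
const size = memoizeWeak((obj) => {
  calls += 1;
  return Object.keys(obj).length;
});
const target = { a: 1, b: 2 };
size(target); // computes
size(target); // cached
```

Once target becomes unreachable, its cache entry is collectable too, which is exactly the property this mode is meant to guarantee.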
resolutionMode, possible values:
- 'sync' (default for non-native async functions): target of memoization is a synchronous function
- 'callback': target of memoization is a Node.js style, asynchronous, callback-taking function
- 'async' (forced for native async functions): target of memoization is an asynchronous function that returns a promise. ES2017 async functions will be detected automatically (setting this option to any other value will have no effect).
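A key property of promise-based memoization, which the 'async' mode above implies, is that the promise itself is cached, so concurrent calls for the same arguments share one in-flight request. A minimal sketch (the helper name memoizeAsync is hypothetical, not the memoizee API):

```javascript
// Hypothetical sketch of 'async' resolutionMode semantics (not the memoizee
// API): the promise itself is cached, so concurrent calls with the same
// argument share a single invocation of the underlying function.
function memoizeAsync(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, Promise.resolve(fn(arg)));
    return cache.get(arg);
  };
}
```

Because the pending promise is stored immediately, a second call made before the first resolves gets the very same promise back rather than triggering a duplicate request.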
serialize
- null (default): cache ids are resolved against object/value instances directly, therefore the cache cannot be persisted in a physical layer. Still, O(1) time complexity will be ensured by the cache id resolution algorithm (this is not the case right now, in the equivalent object mode)
- true: cache ids are resolved against serialized values (e.g. two different plain objects of exactly the same structure will map to the same cache id). This mode will allow the cache to be persisted in a persistent layer between process runs. The default serialization function will be a smarter version of JSON.stringify
- serialize(value) <function>: a custom value serializer. Whether it is persistence friendly will be up to the developer
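The serialize: true behavior above can be sketched with plain JSON.stringify (the proposal promises a smarter serializer; note that plain JSON.stringify is sensitive to key order, which is one reason a smarter version is needed). The helper name memoizeSerialized is hypothetical:

```javascript
// Hypothetical sketch of serialize-based cache ids (not the memoizee API):
// ids are derived from a serialization of the arguments, so two structurally
// equal plain objects map to the same cache entry.
function memoizeSerialized(fn, serialize = JSON.stringify) {
  const cache = new Map();
  return (...args) => {
    const id = serialize(args); // serialized id, could be persisted
    if (!cache.has(id)) cache.set(id, fn(...args));
    return cache.get(id);
  };
}
```

Because the ids are plain strings, the underlying Map contents could in principle be written out and reloaded between process runs, which is the persistence property this mode targets.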
length
Will work nearly exactly the same as in the current version. One difference: dynamic length intention will have to be indicated with -1, not false
normalizers
Arguments normalizers; this is what is currently represented by resolvers. Otherwise it will work exactly the same
ttl (previously maxAge)
Will represent the same feature as in the current version, with the following changes and improvements:
- Expressed in seconds, not milliseconds (but not necessarily an integer, so an invalidation time of e.g. 0.5s is supported)
- If combined with a resolutionMode of 'async' or 'callback', it will come with prefetch functionality, which can be customized via the following options passed with the ttl option (e.g. ttl: { value: 3600, prefetchSpan: 0.5 }):
  - prefetchSpan (default: 0.3). Assuming S represents the last time the result was requested and cached, and E represents the time when the currently cached value becomes invalidated (is reached by the TTL setting): if an invocation occurs in the period between E - (E - S) * prefetchSpan and E, the cached result is returned, but behind the scenes it is refreshed with an updated value.
  - recoverySpan (default: 0.3). If an invocation occurs in the period between E and E + (E - S) * recoverySpan, and the request for an updated result value fails, then the previously cached (stale) result is returned.
Additionally:
- Implementation should no longer be based on setTimeout calls that invalidate values, as that is an inefficient approach. Cached values should instead be invalidated at the moment of a subsequent invocation where we discover the cached value is stale (we should not worry about an eventually large number of stale values being stored; that can be tuned with the other max option).
- If a re-fetch results in an error, we should provide an option to return the stale value (as requested in #132)
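The window arithmetic behind prefetchSpan and recoverySpan can be made concrete with a small classifier. This is only a sketch of the proposed semantics (the function name ttlPhase is hypothetical), using the S and E variables defined above:

```javascript
// Hypothetical sketch of the proposed TTL windows (not the memoizee API).
// Given S (when the result was cached), E (when it expires) and the spans,
// classify an invocation time t:
//   "fresh"    -> return cached value
//   "prefetch" -> return cached value, refresh it in the background
//   "recovery" -> refetch; on failure, fall back to the stale value
//   "stale"    -> cached value is discarded, refetch
function ttlPhase(t, S, E, prefetchSpan = 0.3, recoverySpan = 0.3) {
  const span = E - S;
  if (t < E - span * prefetchSpan) return "fresh";
  if (t < E) return "prefetch";
  if (t < E + span * recoverySpan) return "recovery";
  return "stale";
}

// e.g. ttl: { value: 3600, prefetchSpan: 0.5 } with S = 0, E = 3600:
// any invocation after t = 1800 triggers a background refresh.
```

Note how both windows scale with E - S: a result that was cached long before its expiry gets a proportionally wider prefetch and recovery window.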
max
Will work the same way as now. Still, the performance of lru-queue will have to be revised; we should not drag behind lru-cache.
Additionally, in the async case, the setting should take effect at invocation time, and not at resolution time as it does currently (see #131)
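For reference, the LRU behavior that max implies can be sketched with a plain Map, whose stable insertion order makes eviction of the least recently used entry straightforward. This is an illustrative sketch (the helper name memoizeLru is hypothetical), not the lru-queue implementation whose performance the proposal wants revised:

```javascript
// Hypothetical LRU sketch (not the lru-queue implementation). A Map keeps
// insertion order, so re-inserting an entry on access keeps the most
// recently used entries at the end; eviction removes the first key.
function memoizeLru(fn, max) {
  const cache = new Map();
  return (arg) => {
    if (cache.has(arg)) {
      const value = cache.get(arg);
      cache.delete(arg); // re-insert to mark as most recently used
      cache.set(arg, value);
      return value;
    }
    const value = fn(arg);
    cache.set(arg, value);
    // Evict the least recently used entry (first key in insertion order)
    if (cache.size > max) cache.delete(cache.keys().next().value);
    return value;
  };
}
```

A real implementation would avoid the delete/re-insert churn (e.g. with an intrusive doubly linked list), which is exactly where the lru-queue vs. lru-cache performance question lives.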
refCounter
Will work the same way as it does now.
Memoize configuration objects
Each memoized function will expose a memoization object, which will provide access to events and methods that allow accessing and operating on the cache manually
It will be either an instance of Memoizee (exposed on the memoizedFn.memoizee property) or an instance of MemoizeeFactory (exposed on the memoizedFn.memoizeeFactory property).
Memoizee
Its instance will be exposed on memoizedFn.memoizee when memoization is configured with the 'function' contextMode (the default).
Methods
- getId(...args): Resolve the cache id for the given args
- has(id): Whether we have a cached value for the given cache id
- get(id): Get the value for the given cache id
- set(id, result): Cache a value for the given cache id
- delete(id): Delete the value for the given cache id
- clear(): Clear the cache
- forEach(cb): Iterate over all cached values (alternatively, some other means of access to the full cache object may be provided)
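The method surface above can be sketched as a small class. The method names come from the proposal; everything else here (the class name MemoizeeSketch, the JSON.stringify-based id resolution, the invoke helper) is hypothetical and only illustrative:

```javascript
// Minimal sketch of the proposed Memoizee cache-object surface. Method names
// follow the proposal; the implementation details are purely illustrative.
class MemoizeeSketch {
  constructor(fn, getId = (args) => JSON.stringify(args)) {
    this.fn = fn;
    this.getIdImpl = getId;
    this.cache = new Map();
  }
  getId(...args) { return this.getIdImpl(args); } // resolve cache id for args
  has(id) { return this.cache.has(id); }
  get(id) { return this.cache.get(id); }
  set(id, result) { this.cache.set(id, result); }
  delete(id) { this.cache.delete(id); }
  clear() { this.cache.clear(); }
  forEach(cb) { this.cache.forEach(cb); }
  // Not part of the listed surface: shows how the memoized wrapper would
  // route an invocation through getId/has/get/set.
  invoke(...args) {
    const id = this.getId(...args);
    if (!this.has(id)) this.set(id, this.fn(...args));
    return this.get(id);
  }
}
```

Exposing getId separately is what makes manual cache operations practical: a caller can resolve the id once, then has/get/set/delete against it directly.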
Events
- hit(id, args): On any memoized function invocation
- set(id, args, result): When a result value is cached
- purge(id, result): When a value is removed from the cache (users of the dispose option will now have to rely on this event)
MemoizeeFactory
Its instance will be exposed on memoizedFn.memoizeeFactory when memoization is configured with 'weak' or 'method' contextMode.
It will produce distinct Memoizee instances; e.g. in the case of 'method', a different Memoizee instance will be created for each different context. The same applies to the 'weak' contextMode when length > 1. In the case of 'weak' with length === 1, there will either be another dedicated class, or the MemoizeeFactory instance will not produce any Memoizee instances (it will just handle context objects).
Methods
In the methods below, value can be: a memoized method (in the case of 'method'), a Memoizee instance (in the case of 'weak' with length > 1), or a cached value (in the case of 'weak' with length === 1)
- has(context): Whether we have a value initialized for the given context
- get(context): Get the value for the given context (creating the value if not yet created)
- delete(context): Delete the value for the given context
- clear(): Clear the cache (as we do not store already handled contexts, this clears the cache for an already visited context only when that context is processed again)
There will be no means to iterate over all contexts for which values have been resolved, as we will not keep handles to processed contexts in the factory (this is to avoid blocking them from gc).
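A WeakMap makes the factory behavior above concrete: it holds per-context values without keeping enumerable handles on the contexts, which is why iteration (and an eager clear()) cannot be offered. This sketch is hypothetical (the name createFactory and the shape of createValue are not the memoizee API):

```javascript
// Hypothetical MemoizeeFactory sketch (not the memoizee API). Per-context
// values live in a WeakMap, so the factory never blocks contexts from gc;
// by the same token WeakMap is not iterable, hence no forEach here.
function createFactory(createValue) {
  const perContext = new WeakMap();
  return {
    has: (context) => perContext.has(context),
    get(context) {
      // Lazily initialize the per-context value on first access
      if (!perContext.has(context)) perContext.set(context, createValue(context));
      return perContext.get(context);
    },
    delete: (context) => perContext.delete(context),
  };
}
```

Depending on contextMode, createValue would produce a bound memoized method, a nested Memoizee instance, or (for 'weak' with length === 1) the cached result itself.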
Events
- set(context, value): Initialization of the value for the given context
- purge(context, result): When the value for a context is cleared (not invoked for clear())
This is just a rough proposal; it is important that performance is at least maintained and at best improved (where possible). Therefore some deviations from the above are possible.
It might be good to also consider:
- Expose a few simple, straightforward cases as distinct (light) modules, which could be reused in the main factory module. They could also be required directly in modules that need simple memoization techniques. E.g. a case to address: resolutionMode: 'sync', length: 0.
- Provide a primitive version of the serializer, to support the fastest possible memoization, based on ids resolved purely via stringification of arguments
- Provide a @memoizee decorator that is ES draft compliant and can be used to memoize class methods
Issue Analytics
- Created: 7 years ago
- Reactions: 8
- Comments: 10 (4 by maintainers)
Comments

> maxAge has always been confusing to me. ttl would be vastly more intuitive (or timeToLive)

> @medikoo a quick update on the subject: we’ll soon have a speedup of 18x on integer keys and between 2.5x and 5x on string keys. It’ll be a drop-in replacement for the current lru-queue implementation. I’ll propose a pull request soon. I’ll test https://www.npmjs.com/package/hashtable first, as I suspect it’ll bring even more speedup for string keys.