
A live (updated in place) proposal for the v1.0 API:


The signature of the main memoizee function will remain the same: memoizee(fn[, options])

Supported options:

contextMode, possible values:

  • 'function' (default): the target of memoization is a regular function
  • 'method': the target of memoization is a method
  • 'weak': the target of memoization is a regular function that takes an object as its first argument, where we don’t want to block those objects from being garbage collected
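
A sketch of how the three modes might be configured, assuming the proposed v1.0 signature (the functions memoized here are just illustrative):

```js
const memoizee = require("memoizee");
const fs = require("fs");

// 'function' (default): one cache, keyed on arguments.
const stat = memoizee(path => fs.statSync(path));

// 'method': each instance (context) gets its own cache.
class Repo {
  constructor(dir) { this.dir = dir; }
}
Repo.prototype.files = memoizee(function () {
  return fs.readdirSync(this.dir);
}, { contextMode: "method" });

// 'weak': keyed on the first (object) argument, without preventing
// that object from being garbage collected.
const describe = memoizee(obj => Object.keys(obj), { contextMode: "weak" });
```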

resolutionMode, possible values:

  • 'sync' (default for non-native async functions): the target of memoization is a synchronous function
  • 'callback': the target of memoization is a Node.js-style asynchronous function that takes a callback
  • 'async' (forced for native async functions): the target of memoization is an asynchronous function that returns a promise. ES2017 async functions will be detected automatically (setting this option to any other value will have no effect)
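
The three modes in use, again as a hedged sketch against the proposed option names:

```js
const memoizee = require("memoizee");
const fs = require("fs");

// 'sync' is the default for plain functions.
const square = memoizee(x => x * x);

// 'callback': the last argument is a Node.js-style callback; the result
// would be cached once the callback resolves successfully.
const readCached = memoizee(fs.readFile, { resolutionMode: "callback" });
readCached("/etc/hosts", "utf8", (err, content) => { /* ... */ });

// 'async' is detected automatically for native async functions.
const fetchJson = memoizee(async url => (await fetch(url)).json());
```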

serialize

  • null (default): cache ids are resolved against object/value instances directly, so the cache cannot be persisted to a physical layer. O(1) time complexity will still be ensured by the cache id resolution algorithm (this is not the case right now in the equivalent object mode)
  • true: cache ids are resolved against serialized values (e.g. two different plain objects of exactly the same structure will map to the same cache id). This mode will allow the cache to be persisted between process runs. The default serialization function will be a smarter version of JSON.stringify
  • <function> serialize(value): a custom value serializer. Whether it is persistence friendly will be up to the developer
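
For illustration, under the proposed serialize option (runQuery and fetchRecord are hypothetical stand-ins):

```js
const memoizee = require("memoizee");

// serialize: true — structurally equal arguments share a cache id.
const lookup = memoizee(query => runQuery(query), { serialize: true });
lookup({ table: "users", id: 1 });
lookup({ table: "users", id: 1 }); // cache hit: identical serialized id

// Custom serializer — persistence friendliness is the developer's concern.
const byId = memoizee(id => fetchRecord(id), {
  serialize: value => String(value)
});
```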

length

Will work nearly exactly the same as in the current version. One difference: a dynamic length will have to be indicated with -1 rather than false
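
A sketch of both settings under the proposed convention:

```js
const memoizee = require("memoizee");

// Dynamic argument length: proposed to be signalled with -1 (not false).
const sum = memoizee(function () {
  return Array.prototype.reduce.call(arguments, (a, b) => a + b, 0);
}, { length: -1 });

// Fixed length works as today: only the first argument forms the cache id.
const head = memoizee((list, fallback) => (list.length ? list[0] : fallback), {
  length: 1
});
```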

normalizers

Argument normalizers: this is what is represented now by resolvers; otherwise it will work exactly the same
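
Assuming the option keeps the per-argument shape that resolvers have today (search is a hypothetical function):

```js
const memoizee = require("memoizee");

// Each normalizer maps its argument to the value used for cache id resolution.
const find = memoizee((id, opts) => search(id, opts), {
  normalizers: [String, opts => JSON.stringify(opts)]
});
find(42, { deep: true });
find("42", { deep: true }); // same cache id once both ids normalize to "42"
```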

ttl (previously maxAge)

Will represent the same feature as in the current version, with the following changes and improvements:

  • Expressed in seconds, not milliseconds (but not necessarily an integer, so an invalidation time of e.g. 0.5s is supported)
  • If combined with a resolutionMode of 'async' or 'callback', it will come with prefetch functionality, which can be customized via the following options passed with the ttl option (e.g. ttl: { value: 3600, prefetchSpan: 0.5 }); a worked example follows this list:
    • prefetchSpan (default: 0.3). Assuming the following variables:

      • S represents the last time the result was requested and cached
      • E represents the time at which the currently cached value becomes invalidated (is reached by the TTL setting)

      Then, if an invocation occurs in the period between E - (E - S) * prefetchSpan and E, the cached result is returned, but behind the scenes it is refreshed with an updated value.

    • recoverySpan (default: 0.3): if an invocation occurs in the period between E and E + (E - S) * recoverySpan, and the request for an updated result value fails, then the previously cached (stale) result is returned.
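
A worked example under the defaults (fetchRates is a hypothetical async function):

```js
const memoizee = require("memoizee");

const getRates = memoizee(fetchRates, {  // fetchRates: hypothetical stand-in
  resolutionMode: "async",   // prefetch applies to 'async' or 'callback' only
  ttl: { value: 3600 }       // seconds; prefetchSpan/recoverySpan default to 0.3
});

// With S = 0 (result cached) and E = 3600 (value goes stale), E - S = 3600:
//     0 .. 2520 : plain cache hits
//  2520 .. 3600 : cached result returned, refreshed in the background
//                 (window is E - (E - S) * 0.3 = 2520 up to E)
//  3600 .. 4680 : if the refresh fails, the stale value is still returned
//                 (window is E up to E + (E - S) * 0.3 = 4680)
```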

Additionally:

  • The implementation should no longer be based on setTimeout calls that invalidate the values, as that is an inefficient approach. Cached values should be invalidated at the moment of a subsequent invocation at which we discover the cached value is stale (a sketch follows below). We should not worry about an eventually large number of stale values being stored; that can eventually be tuned with the other max option.
  • If on re-fetch we run into an error, we should provide an option to return a stale value (as requested in #132)
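
A minimal sketch of the invalidate-on-access idea (all names here are illustrative, not part of the proposal):

```js
// Store an expiry timestamp per entry and check it lazily on the next
// invocation, instead of scheduling a setTimeout per cached value.
function getFresh(cache, id, ttlSeconds, compute) {
  const entry = cache.get(id);
  const now = Date.now() / 1000;
  if (entry && now < entry.expiresAt) return entry.value; // still fresh
  const value = compute(); // stale or missing: recompute and re-cache
  cache.set(id, { value, expiresAt: now + ttlSeconds });
  return value;
}

// Usage: getFresh(new Map(), "id1", 0.5, () => expensiveCall());
```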

max

Will work the same way as now. Still, the performance of lru-queue will have to be revised; we should not lag behind lru-cache.

Additionally, in the async case, the setting should take effect at invocation, and not at resolution as it does currently (see #131)
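
For completeness, a sketch of the option as it is expected to keep working (compile is a hypothetical stand-in):

```js
const memoizee = require("memoizee");

// Keep at most 100 results; least recently used entries are evicted first.
const render = memoizee(template => compile(template), { max: 100 });
```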

refCounter

Will work the same way as it does now.

Memoize configuration objects

Each memoized function will expose a memoization object, which will provide access to events and methods that allow the cache to be accessed and operated on manually

It will be either an instance of Memoizee (exposed on the memoizedFn.memoizee property) or an instance of MemoizeeFactory (exposed on the memoizedFn.memoizeeFactory property).

Memoizee

Its instance will be exposed on memoizedFn.memoizee when memoization is configured with the 'function' contextMode (the default).

Methods

  • getId(...args) - Resolve the cache id for the given args
  • has(id) - Whether we have a cached value for the given cache id
  • get(id) - Get the value for the given cache id
  • set(id, result) - Cache a value for the given cache id
  • delete(id) - Delete the value for the given cache id
  • clear() - Clear the cache
  • forEach(cb) - Iterate over all cached values (alternatively, some other means of access to the full cache object can be provided)
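
A sketch of manual cache access through this object, assuming the method names above land as proposed:

```js
const memoizee = require("memoizee");
const fs = require("fs");

const stat = memoizee(path => fs.statSync(path));
stat("/tmp");

const id = stat.memoizee.getId("/tmp");
if (stat.memoizee.has(id)) {
  console.log(stat.memoizee.get(id)); // inspect the cached value
  stat.memoizee.delete(id);           // drop just this entry
}
const precomputedStat = fs.statSync("/tmp");
stat.memoizee.set(id, precomputedStat); // seed the cache manually
stat.memoizee.forEach(value => console.log(value));
stat.memoizee.clear();
```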

Events

  • hit(id, args) - On any memoized function invocation
  • set(id, args, result) - When a result value is cached
  • purge(id, result) - When a value is removed from the cache (users of the dispose option will now have to rely on this event)
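
Continuing the previous sketch, listening for these events might look as follows (the proposal does not spell out the subscription API; an EventEmitter-style on() is assumed here):

```js
let hits = 0;
stat.memoizee.on("hit", (id, args) => { hits += 1; });
stat.memoizee.on("set", (id, args, result) => console.log("cached", id));
stat.memoizee.on("purge", (id, result) => {
  // dispose-style cleanup moves here, per the note above
  console.log("evicted", id);
});
```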

MemoizeeFactory

Its instance will be exposed on memoizedFn.memoizeeFactory when memoization is configured with the 'weak' or 'method' contextMode.

It will produce separate Memoizee instances, e.g. in the case of 'method', a different Memoizee instance will be created for each different context. The same goes for the 'weak' contextMode when length > 1. In the case of 'weak' with length === 1, there will either be another dedicated class, or the MemoizeeFactory instance will not produce any Memoizee instances (it will just be handling context objects).

Methods

In the methods below, value can be: a memoized method (in the case of 'method'), a Memoizee instance (in the case of 'weak' with length > 1), or a cached value (in the case of 'weak' with length === 1)

  • has(context) - Whether we have a value initialized for the given context
  • get(context) - Get the value for the given context (creating it if not yet created)
  • delete(context) - Delete the value for the given context
  • clear() - Clear the cache (as we do not store handles to already handled contexts, it clears the cache for an already visited context only when that context is processed again)

There will be no means to iterate over all contexts for which values have been resolved, as we will not keep handles to processed contexts in the factory (this is to avoid blocking them from being garbage collected).
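
Per-context access with the 'method' contextMode might then look like this sketch:

```js
const memoizee = require("memoizee");
const fs = require("fs");

class Repo {
  constructor(dir) { this.dir = dir; }
}
Repo.prototype.files = memoizee(function () {
  return fs.readdirSync(this.dir);
}, { contextMode: "method" });

const repo = new Repo("/tmp");
repo.files(); // initializes the per-context value (a memoized method)

const factory = Repo.prototype.files.memoizeeFactory;
if (factory.has(repo)) factory.delete(repo); // drop this instance's cache only
```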

Events

  • set(context, value) - Initialization of the value for the given context
  • purge(context, result) - When the value for a context is cleared (not invoked for clear())

This is just a rough proposal; it is important that performance is at least maintained and at best improved (where possible), so some deviations from the above are possible.

It might be good to also consider:

  • Expose a few simple, straightforward cases as distinct (light) modules, which could be reused in the main factory module. They could still be required directly in modules that need simple memoization techniques, e.g. cases to address:
    • resolutionMode: 'sync', length: 0.
  • Provide a primitive version of the serializer, to support the fastest possible memoization, based on ids resolved purely via stringification of arguments
  • Provide a @memoizee decorator that is ES draft compliant and can be used to memoize class methods (see the sketch below)
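
A hypothetical sketch of such a decorator (the exact semantics depend on which ES decorators draft is targeted; everything here is illustrative and would require transpilation today):

```js
const fs = require("fs");

class Repo {
  constructor(dir) { this.dir = dir; }

  // Hypothetical decorator form of { contextMode: 'method' } memoization.
  @memoizee({ contextMode: "method" })
  files() {
    return fs.readdirSync(this.dir);
  }
}
```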


Top GitHub Comments

Rush commented, May 19, 2017 (2 reactions):

> maxAge (or ttl, to be decided)

maxAge has always been confusing to me. ttl would be vastly more intuitive (or timeToLive)

fazouane-marouane commented, Sep 25, 2018 (1 reaction):

@medikoo a quick update on the subject: we’ll soon have a speedup of 18x on integer keys and between 2.5x and 5x on string keys. It’ll be a drop-in replacement for the current lru-queue implementation. I’ll propose a pull request soon. I’ll test https://www.npmjs.com/package/hashtable first, as I suspect it’ll bring even more speedup for string keys.
