
optimization: with an in-memory cache decorator


Description

A small “in memory” cache system as a decorator (not talking about lru_cache and its cousins…), but a customizable cache with a TTL…
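
For context, functools.lru_cache does not cover this use case by itself: it evicts entries by count (maxsize), never by age, so without a TTL a stale response could be served indefinitely, and applied to methods it also keeps the instance alive through the cached self reference. A minimal illustration of the gap:

from functools import lru_cache

@lru_cache(maxsize=128)
def get_settings() -> dict:
    # Entries are evicted only when maxsize is exceeded; there is no
    # notion of age, so a stale result can be returned forever. A TTL,
    # as proposed here, is the missing knob.
    return {}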

Basic example

Inside the class:

import time
from typing import Any, Callable, Dict, List, Optional

class Index:
    def __init__(self) -> None:
        ...
        self.cache_activated: bool = True
        self.cache_args_values: Dict[str, Any] = {}
        self.cache_ttl: int = 1

    def clean_cache(self) -> None:
        """
        A cleaner for the actual cache.
        """
        self.cache_args_values = {}

    def cache(func: Callable[..., Any]) -> Callable[..., Any]:
        """
        A custom cache decorator to save the output of a method depending
        on its arguments, with a TTL.

        Note: `cache` takes the decorated function as its only argument and
        is applied inside the class body (the original `@self.cache` cannot
        work there, since `self` does not exist yet at class definition time).
        """
        def _wrapped(self: "Index", *args: Any, **kwargs: Any) -> Any:
            if not self.cache_activated:
                return func(self, *args, **kwargs)

            # Key the entry on the call arguments (self excluded).
            cache_key: str = str(args) + str(kwargs)
            entry = self.cache_args_values.get(cache_key)
            if entry is not None and time.time() - entry['at'] <= self.cache_ttl:
                return entry['as']

            self.cache_args_values[cache_key] = {
                'at': time.time(),
                'as': func(self, *args, **kwargs),
            }
            return self.cache_args_values[cache_key]['as']

        return _wrapped

    @cache
    def search(self, query: str, opt_params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        ...

    @cache
    def get_filterable_attributes(self) -> List[str]:
        ...

    @cache
    def get_stop_words(self) -> List[str]:
        ...

    @cache
    def get_document(self, document_id: str) -> Dict[str, Any]:
        ...

    @cache
    def get_settings(self) -> Dict[str, Any]:
        ...

This approach could be useful if we want to prevent Meilisearch from getting hit with the same parameters repeatedly within a short delay (I faced that issue, by the way)…
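
To make the intended behavior concrete, here is a minimal usage sketch, assuming the Index class from the basic example above (with search actually performing a request). Calls with the same arguments inside the TTL window are answered from the cache; once the TTL elapses, Meilisearch is hit again:

import time

idx = Index()

idx.search("shoes")              # miss: hits Meilisearch, stores the result
idx.search("shoes")              # hit: same args within cache_ttl, served from cache

time.sleep(idx.cache_ttl + 0.1)  # let the entry expire
idx.search("shoes")              # miss again: TTL elapsed, Meilisearch is hit

idx.cache_activated = False      # bypass the cache entirely
idx.search("shoes")

idx.clean_cache()                # drop everything cached so far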


Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 11 (2 by maintainers)

Top GitHub Comments

2 reactions
sanders41 commented, Jan 8, 2022

Seems reasonable to me.

I’m just a contributor, not a maintainer, so it won’t be my call; just giving my thoughts as a contributor 😄

1 reaction
Sanix-Darker commented, Jan 18, 2022

Thanks for this benchmark @sanders41, and for the update with @staticmethod; the code I provided can be tweaked and optimized even more 😃 so that the cache only gets activated depending on the size of the output data from the functions. For some specific incoming args, where the outputs are small, we can keep hitting Meilisearch on subsequent calls, and activate the cache only when the returned size is bigger than an allowed limit (that’s why I prefer a self-implementation instead of lru_cache). Yeah, it could sound a little weird, but I still think it could be “nice” to have the cache here, and we can implement it well.
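
That size-gated idea could look roughly like the sketch below. This is only an illustration of the comment, not code from the issue: the cache_min_size threshold and the use of sys.getsizeof as a size proxy are assumptions.

import sys
import time
from typing import Any, Callable

def size_gated_cache(func: Callable[..., Any]) -> Callable[..., Any]:
    """
    Variant of the cache above that only stores "big" results.

    Assumes the instance carries cache_args_values and cache_ttl as in the
    basic example, plus a hypothetical cache_min_size threshold in bytes.
    """
    def _wrapped(self: Any, *args: Any, **kwargs: Any) -> Any:
        cache_key = str(args) + str(kwargs)
        entry = self.cache_args_values.get(cache_key)
        if entry is not None and time.time() - entry['at'] <= self.cache_ttl:
            return entry['as']

        result = func(self, *args, **kwargs)
        # Small payloads stay uncached: re-hitting Meilisearch for them is
        # cheap, and memory is kept for the large, expensive responses.
        if sys.getsizeof(result) >= self.cache_min_size:
            self.cache_args_values[cache_key] = {'at': time.time(), 'as': result}
        return result

    return _wrapped

Note that sys.getsizeof only measures the top-level object, so a real implementation would likely need a deeper size estimate (serializing the response, for example) before comparing against the threshold.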


Top Results From Across the Web

Extremely FAST Caching Repository With Decorator Pattern in ...
How To Use The Specification Design Pattern With EF Core 6 · Intro to In-Memory Caching in C# · ⚡ Formal Concept...

Caching in Python Using the LRU Cache Strategy
Caching is an optimization technique that you can use in your applications to keep recent or often-used data in memory locations that are...

Simple Cache Decorator for Webapps that self-invalidates
Most of your optimizations should be done by optimizing a caching layer. In fact, if a cache doesn't help you optimize further then...

Cheap Optimization with Memoization in Python - GDCorner
Memoization is a technique to cache the result of a function or program for a given ... providing a whole range of functions...

Optimize performance with st.cache - Streamlit Docs
When you mark a function with the @st.cache decorator, it tells Streamlit that ... Think of the cache as an in-memory key-value store,...
