
Split out mutable event content from event cache into new caches that are keyed by room ID


In the hopes of fixing https://github.com/matrix-org/synapse/issues/11521 and paving the way towards an immutable external event cache (https://github.com/matrix-org/synapse/issues/2123), a new architecture for the event cache is proposed.

The current state

Currently there are a couple of separate data structures related to caching event contents in memory in Synapse (see the code for fetching an event from the cache/DB here):

  • EventsWorkerStore._get_event_cache - An instance of AsyncLruCache implemented as a map of EventID -> (EventBase, redacted EventBase | None).
    • This cache is populated after fetching an event from the database.
    • Entries in this cache are invalidated when an event is deleted from the database (in most cases, c.f. #11521), redacted or marked as rejected.
    • Entries in this cache are evicted when the size limit of the LruCache is reached.
  • EventsWorkerStore._event_ref - A WeakValueDictionary which serves as a single point of reference for EventBase’s in memory, ensuring that we don’t end up with multiple, unnecessary copies of a single EventBase in memory.
    • This data structure is populated after fetching an event from the database.
    • Because this is a WeakValueDictionary, entries in this cache are invalidated when all other references to the EventBase in an entry are gone.
    • Entries in this cache are invalidated when an event is deleted from the database (in most cases, c.f. #11521), redacted or marked as rejected.
    • Entries in this cache are not invalidated when an entry is evicted from EventsWorkerStore._get_event_cache, as something else may still be processing the event, even if it’s been removed from that cache.

What’s the problem?

See https://github.com/matrix-org/synapse/issues/11521; because each of these caches is keyed by EventID alone, it becomes tricky to invalidate them when all you have is a RoomID (i.e. when purging a room completely). We could query all known events for a room from the database, but that may return millions of events. Ideally we’d have some map of RoomID -> EventID which only covers the events that are actually currently held in memory. We could then use that to invalidate all of these caches.

Additionally, as _get_event_cache contains mutable EventCacheEntry’s (comprised of EventBase, redacted EventBase | None), entries must be invalidated when an event is either redacted or marked as rejected. Rejection and redaction state can differ per homeserver, so removing this component from the cache entries opens up avenues for multiple homeservers sharing the same, immutable event cache.

Proposal

After speaking with @erikjohnston, we (mostly Erik 😃) came up with the following idea:

  • EventsWorkerStore._get_event_cache would simply become a map of EventID -> EventBase.
  • We add a separate cache which is a nested map of RoomID -> EventID -> {rejected_status: bool, redacted_event_content: Optional[EventBase]}.
    • Entries are added to this map when an event is pulled from the database. We know the RoomID at this point.
    • Entries are not invalidated from this map when an entry in EventsWorkerStore._get_event_cache is invalidated due to hitting the cache size limit.
    • This does mean that we’ll need to know the RoomID when querying for rejected/redacted status though… But we can get that from the event cache?

The beauty of this is that we no longer need to invalidate the _get_event_cache at all (unless the size limit is hit)! Even in the room purge use case! How? Here are some examples of using this system:

Fetch EventID A which is not in-memory

  1. Some calling function asks for EventID A.
  2. This does not exist in _get_event_cache (nor the other caches), so we query the database. The event and related metadata are fetched from the DB (event_json, redactions, rejections) and both the _get_event_cache and the event metadata cache are populated.
  3. Return information from the database.

Fetch EventID A which is in-memory

  1. Some calling function asks for EventID A.
  2. This already exists in _get_event_cache, and presumably the metadata cache. We take the RoomID from the EventBase in the _get_event_cache and query the event metadata cache.
  3. Return information from both caches.

Fetch EventID A which is in-memory but the event has been purged

  1. Some calling function asks for EventID A.
  2. This already exists in _get_event_cache, and presumably the metadata cache. We take the RoomID from the EventBase in the cache and query the event metadata cache - but uh oh, there’s no matching entry in the metadata cache! The event must have been purged.
  3. We invalidate the entry in the event cache as well and return None.
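The three scenarios above can be sketched as a single lookup path. This is a hypothetical sketch of the scheme, not Synapse’s implementation: `get_event`, `event_cache`, `metadata_cache` and the stand-in `database` dict are all invented names, and real Synapse events and DB access look quite different.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class EventMetadata:
    rejected_status: bool
    redacted_event_content: Optional[dict]


# Hypothetical in-memory caches and a stand-in database table.
event_cache: Dict[str, dict] = {}  # EventID -> event content
metadata_cache: Dict[str, Dict[str, EventMetadata]] = {}  # RoomID -> EventID -> meta
database: Dict[str, dict] = {}  # EventID -> row


def get_event(event_id: str) -> Optional[dict]:
    ev = event_cache.get(event_id)
    if ev is None:
        # Scenario 1: not in memory. Fetch from the DB and populate
        # both caches (the RoomID is known at this point).
        row = database.get(event_id)
        if row is None:
            return None
        ev = row["content"]
        event_cache[event_id] = ev
        metadata_cache.setdefault(row["room_id"], {})[event_id] = EventMetadata(
            rejected_status=row["rejected"], redacted_event_content=None
        )

    # Scenario 2: in memory. The RoomID comes from the cached event
    # itself, so the metadata cache can be consulted directly.
    meta = metadata_cache.get(ev["room_id"], {}).get(event_id)
    if meta is None:
        # Scenario 3: content is cached but its metadata is gone, which
        # means the room was purged. Lazily invalidate and report a miss.
        del event_cache[event_id]
        return None
    if meta.rejected_status:
        return None
    return meta.redacted_event_content or ev
```

Note how the purge check costs nothing extra on the happy path: it is the same metadata lookup that would be needed anyway for rejection/redaction status.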

Thus when purging a room, we only need to purge entries in the metadata cache (which we can easily do by RoomID thanks to the metadata cache’s structure). Entries in the _get_event_cache and _event_ref will be invalidated lazily as they are fetched.
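The purge path itself then becomes trivial. A sketch, again using a hypothetical `metadata_cache` nested map as proposed above (`purge_room` is an invented name):

```python
from typing import Dict

# RoomID -> EventID -> metadata (the proposed nested structure).
metadata_cache: Dict[str, Dict[str, dict]] = {
    "!room1:example.org": {"$a": {}, "$b": {}},
    "!room2:example.org": {"$c": {}},
}


def purge_room(room_id: str) -> int:
    """Drop every metadata entry for a room in a single dict pop.

    Event content still held in the content caches is invalidated
    lazily, the next time a lookup finds no matching metadata.
    Returns the number of metadata entries dropped.
    """
    return len(metadata_cache.pop(room_id, {}))
```

No per-event work and no DB query over potentially millions of events is needed, which is exactly the problem described above.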

I’m curious for thoughts on whether this sounds reasonable from other members of the Synapse team + cc @Fizzadar.

Issue Analytics

  • State: open
  • Created: a year ago
  • Reactions: 1
  • Comments: 9 (6 by maintainers)

Top GitHub Comments

2 reactions
anoadragon453 commented, Oct 14, 2022

If you think of EMS’ use case, where you have many homeservers together in a kubernetes cluster, it’d be nice if all of those homeservers could share a single external event store. And on top of that an external event cache. That way you don’t duplicate that information across every one of your hundreds of servers.

It should be feasible as long as the data you store is consistent across all homeservers. Event content is; whether the event has been rejected is not necessarily. The latter is currently stored in the event caches (and now the events table in the database), so moving that to a single store that’s shared amongst homeservers is trickier. You also need to think about access controls for events - but if you trust all of the homeservers in your cluster you may be able to get away with just doing that homeserver-side.

0 reactions
MadLittleMods commented, Oct 13, 2022

What does “multiple homeservers” mean in this context (and other references like “multi-homeserver”, “shared with other homeservers”)? How can I share a cache with multiple homeservers? Multiple workers?
