Row node re-use and max clone count
Just wanted to ask if there are any objections to these ways of doing the rendering. I think Mikado opened the door on the first one, unless I’m misunderstanding the code.
The first idea is to not diff any rows at all, but to loop over the current rows already in the table in the DOM and update only the dynamic data (without creating new row nodes). If the list gets bigger, create new rows and fill them with data; if it gets smaller, discard the surplus rows. This is quite easy to do when you know the “path” to the dynamic parts of the row template. It’s more like overwriting than diffing.
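Here’s a minimal sketch of that overwrite strategy, assuming a two-column (id, label) row; `overwriteRows`, `createRow`, and `updateRow` are hypothetical names for illustration, not taken from any of the frameworks discussed:

```js
// Hypothetical sketch: re-use the <tr> nodes already in the DOM and only
// overwrite their dynamic holes. `createRow` builds a new row from a data
// item; `updateRow` writes an item into an existing row.
function overwriteRows(tbody, data, createRow, updateRow) {
  const rows = tbody.children;
  const shared = Math.min(rows.length, data.length);

  // Overwrite: existing row nodes are kept, only their data changes.
  for (let i = 0; i < shared; i++) {
    updateRow(rows[i], data[i]);
  }
  // List got bigger: create and append the missing rows.
  for (let i = shared; i < data.length; i++) {
    tbody.appendChild(createRow(data[i]));
  }
  // List got smaller: discard the surplus rows from the end.
  while (tbody.children.length > data.length) {
    tbody.removeChild(tbody.lastChild);
  }
}

// An updateRow can hard-code the "path" to each dynamic part instead of
// diffing anything, e.g. the id in the first cell and the label inside
// an <a> in the second cell:
function updateRow(row, item) {
  row.children[0].textContent = item.id;
  row.children[1].firstChild.textContent = item.label;
}

function createRow(item) {
  const row = document.createElement('tr');
  row.innerHTML = '<td></td><td><a href="#"></a></td>';
  updateRow(row, item);
  return row;
}
```

Since `updateRow` writes straight to known positions, nothing gets compared; a call like `overwriteRows(tbody, newData, createRow, updateRow)` just overwrites whatever is there.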
With the first concept in mind, are there any limitations on how many rows one can clone (with `cloneNode`)? I’m not fully sure this would improve performance, but what if one keeps 10 or 100 rows in memory and, instead of cloning them 1 by 1, clones 10 or 100 at a time to get to the end count of 1000 or 10000? Again, with the dynamic paths known, the template holes can be filled afterwards.
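A rough sketch of that batching idea, cloning a DocumentFragment that holds a block of template rows at once; `BATCH`, `rowTemplate`, and the function names here are illustrative assumptions:

```js
// Rough sketch of batch cloning: keep a DocumentFragment holding BATCH
// template rows and clone the whole fragment instead of one row at a time.
const BATCH = 100;

function buildBatch(rowTemplate) {
  const frag = document.createDocumentFragment();
  for (let i = 0; i < BATCH; i++) {
    frag.appendChild(rowTemplate.cloneNode(true));
  }
  return frag;
}

function appendEmptyRows(tbody, rowTemplate, total) {
  const batch = buildBatch(rowTemplate);
  let remaining = total;

  // Appending a clone of the fragment inserts all BATCH rows in one call;
  // the fragment itself is untouched and reusable on the next iteration.
  while (remaining >= BATCH) {
    tbody.appendChild(batch.cloneNode(true));
    remaining -= BATCH;
  }
  // Final partial batch for totals that aren't a multiple of BATCH.
  for (let i = 0; i < remaining; i++) {
    tbody.appendChild(rowTemplate.cloneNode(true));
  }
}
```

The cloned rows come out with empty holes, so a second pass over `tbody.children` would fill them in via the known dynamic paths, just like in the first sketch.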
Top GitHub Comments
If more frameworks go that way we might need filterable flags for whether a framework is data-driven and whether it’s reactive to changes… Mikado is neither. It’s as explicit as the vanillajs version. (From a developer’s perspective I’d prefer a reactive, data-driven framework for almost all non-trivial applications, but for the rest something like Mikado might be nicer than vanillajs.)
🤦‍♂️ my bad, those are indeed different structures I was comparing.
makes sense, yea it’s probably not worth the added complexity then.
it’s fun experimenting; it seems kind of counterintuitive too when we see how much gain the one-row clone gives.
here is the on-the-fly cache experiment I created: https://github.com/luwes/sinuous/commit/7156ede3fbec759f86bf8ab8e144c2b05e6d5407