[varLib] varLib.models.VariationModel assumes all locations are full-range
VariationModel seems to be based on the idea that all “master” locations apply across the full range of the axes:
def _computeMasterSupports(self, axisPoints):
...
# Compute min/max across each axis, use it as total range.
but it suggests that one day this will change and it will become possible to specify locations that only apply to certain areas of the design space:
# TODO Take this as input from outside?
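For context, each master's inferred region is evaluated with a per-axis "tent" function. The sketch below is a simplified toy version of `fontTools.varLib.models.supportScalar` (the real function also handles degenerate triples); it shows why an empty support applies everywhere:

```python
def support_scalar(location, support):
    """Toy version of fontTools.varLib.models.supportScalar.

    `support` maps each axis to a (minimum, peak, maximum) triple in
    normalized coordinates; axes absent from `support` contribute 1.0,
    which is what makes an empty support apply across the full range.
    """
    scalar = 1.0
    for axis, (lower, peak, upper) in support.items():
        v = location.get(axis, 0.0)
        if v == peak:
            continue  # full contribution on this axis
        if v <= lower or v >= upper:
            return 0.0  # outside the region entirely
        if v < peak:
            scalar *= (v - lower) / (peak - lower)
        else:
            scalar *= (upper - v) / (upper - peak)
    return scalar

# The default master's empty support is 1.0 everywhere; an on-axis
# support ramps linearly between its minimum, peak, and maximum.
support_scalar({"wght": 0.75}, {})                        # → 1.0
support_scalar({"wght": 0.25}, {"wght": (0.0, 0.5, 1.0)}) # → 0.5
```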
It would be helpful, particularly in the case of variable layout, if locations were able to specify their box min/max points explicitly.
This would require some upheaval: a new interface design for VariationModel.__init__(locations), at least.
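One possible shape for such an interface, sketched below, is purely hypothetical and not an existing fontTools API: accept an optional per-master box alongside each peak location, falling back to full-range behavior where no box is given. The helper name and signature are invented for illustration.

```python
from typing import Dict, List, Optional, Tuple

def make_explicit_supports(
    peaks: List[Dict[str, float]],
    boxes: Optional[List[Dict[str, Tuple[float, float]]]] = None,
) -> List[Dict[str, Tuple[float, float, float]]]:
    """Hypothetical helper (not part of fontTools): build explicit
    (minimum, peak, maximum) support triples from peak locations plus
    optional per-axis (min, max) boxes."""
    supports = []
    for i, peak in enumerate(peaks):
        box = boxes[i] if boxes is not None else {}
        support = {}
        for axis, p in peak.items():
            # No box given: crude stand-in for the inferred full range.
            lower, upper = box.get(axis, (min(p, 0.0), max(p, 0.0)))
            support[axis] = (lower, p, upper)
        supports.append(support)
    return supports

# A master whose influence is restricted to the upper half of the axis:
make_explicit_supports(
    [{}, {"wght": 1.0}],
    [{}, {"wght": (0.5, 1.0)}],
)
# → [{}, {"wght": (0.5, 1.0, 1.0)}]
```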
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 24 (12 by maintainers)
Top GitHub Comments
VarLib used to do what you suggest. I changed it after Erik pointed out issues with my implementation:
https://www.youtube.com/watch?v=3RRoIYeJ3YQ
The immediate reason it turned into what we have now is that it was easier (though not necessary) to fix the code to do what Erik expected (and I agree with) by just making it behave as it does currently. So that is something we can improve on. Here’s the commit, with an extensive commit log: https://github.com/fonttools/fonttools/commit/42bef176a33dc7321eb70f5d49c90317e59ad0e5
However, there’s a legitimate reason why the current output might be desirable: it produces smaller (and many more zero) deltas, which saves bytes. Take a common example where, e.g., Regular and Black masters are designed, then a Bold is interpolated, imported as a new master, and slightly modified. In the scheme you propose, the Bold deltas will all be encoded separately. Whereas if the support for the Black master reaches all the way to Regular, as the current code does, then the Bold master will only encode data for the actual manual modifications that were made to it, and nothing for the interpolated-then-imported-unmodified parts.
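That filesize argument can be made concrete with a toy one-axis model. All numbers below are invented for illustration; the support triples follow the wide-support scheme the current code produces, with Black resolved before the later-imported Bold, and the tent scalar is a compact toy stand-in for `fontTools.varLib.models.supportScalar`:

```python
def support_scalar(location, support):
    # Compact per-axis tent function (toy version of
    # fontTools.varLib.models.supportScalar).
    scalar = 1.0
    for axis, (lower, peak, upper) in support.items():
        v = location.get(axis, 0.0)
        if v == peak:
            continue
        if v <= lower or v >= upper:
            return 0.0
        scalar *= (v - lower) / (peak - lower) if v < peak else (upper - v) / (upper - peak)
    return scalar

# Regular, Black, then a Bold that was interpolated (150) and nudged to 151.
locations = [{}, {"wght": 1.0}, {"wght": 0.5}]
supports = [{}, {"wght": (0.0, 1.0, 1.0)}, {"wght": (0.0, 0.5, 1.0)}]
values = [100.0, 200.0, 151.0]

# Resolve deltas in order: each master keeps only what the earlier
# masters fail to predict at its own location.
deltas = []
for loc, value in zip(locations, values):
    predicted = sum(d * support_scalar(loc, s) for d, s in zip(deltas, supports))
    deltas.append(value - predicted)

print(deltas)  # [100.0, 100.0, 1.0] — Bold stores only the manual tweak
```

Because Black's wide support already predicts 150 at the Bold location, Bold's delta is just the one-unit nudge; with a narrow Bold-only region, the interpolated 50 units would land in Bold's delta as well.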
Obviously there’s a filesize vs runtime tradeoff. Ideally we should be finding the sweet-spot of that tradeoff given the actual master configuration. But I don’t have good ideas about how to do that currently. I’m still thinking about it.
And yes, the error accounting is sloppy and errors add up. It could be done better, accounting for accumulated rounding errors from previous deltas when computing the deltas for each master. That way, the total error at any interpolation point would still be limited to 0.5.
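A sketch of that compensation, as I read it (toy code with invented fractional values; a compact tent scalar is re-declared for self-containment): round each delta only after subtracting the already-rounded earlier contributions, so that at least at every master location the reconstruction error stays within the 0.5 of a single rounding step.

```python
def support_scalar(location, support):
    # Compact per-axis tent function (toy version of
    # fontTools.varLib.models.supportScalar).
    scalar = 1.0
    for axis, (lower, peak, upper) in support.items():
        v = location.get(axis, 0.0)
        if v == peak:
            continue
        if v <= lower or v >= upper:
            return 0.0
        scalar *= (v - lower) / (peak - lower) if v < peak else (upper - v) / (upper - peak)
    return scalar

locations = [{}, {"wght": 1.0}, {"wght": 0.5}]
supports = [{}, {"wght": (0.0, 1.0, 1.0)}, {"wght": (0.0, 0.5, 1.0)}]
values = [100.2, 200.7, 151.4]  # invented fractional master values

# Compensated rounding: predict from the deltas *as already rounded*,
# then round only the residual.
deltas = []
for loc, value in zip(locations, values):
    predicted = sum(d * support_scalar(loc, s) for d, s in zip(deltas, supports))
    deltas.append(round(value - predicted))

# Reconstruct each master from the rounded deltas; the error at each
# master location stays within the 0.5 of the final rounding step.
for loc, value in zip(locations, values):
    got = sum(d * support_scalar(loc, s) for d, s in zip(deltas, supports))
    assert abs(got - value) <= 0.5
```

Rounding each exact delta independently, by contrast, lets the per-master errors stack up at interpolated locations.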
OK, I’m convinced. The tooling can easily make sharp-cornered regions by adding or subtracting an epsilon from the previous master location. I have enough to do what I want in terms of variable layout.
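The epsilon trick in numbers (a toy sketch: EPS is chosen as the F2Dot14 grid step, which is only an assumption about a sensible epsilon, and the tent scalar is a compact toy version of `fontTools.varLib.models.supportScalar`):

```python
def support_scalar(location, support):
    # Compact per-axis tent function (toy version of
    # fontTools.varLib.models.supportScalar).
    scalar = 1.0
    for axis, (lower, peak, upper) in support.items():
        v = location.get(axis, 0.0)
        if v == peak:
            continue
        if v <= lower or v >= upper:
            return 0.0
        scalar *= (v - lower) / (peak - lower) if v < peak else (upper - v) / (upper - peak)
    return scalar

EPS = 1.0 / 16384  # assumed epsilon: one step of the F2Dot14 grid

# Shifting the peak by epsilon past the previous master location turns
# the tent's rising edge into a near-vertical wall at wght = 0.5.
sharp = {"wght": (0.5, 0.5 + EPS, 1.0)}

support_scalar({"wght": 0.5}, sharp)        # 0.0 — just outside the region
support_scalar({"wght": 0.5 + EPS}, sharp)  # 1.0 — full weight one step later
```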