Figure out the best way to handle our git-lfs build going forward
🧰 Task
Our git-lfs setup is already causing headaches due to GitHub's bandwidth limitations on LFS. We need to consider sustainable options going forward.
- Based on our burn rate, 10x the bandwidth should cover us easily and would cost $600/year ($50/month in GitHub's LFS data packs, at $5 per 50 GB pack), which is not that terrible. So we (i.e. CZI, thank you!!!) could just fork it out.
- We could urgently add more dynamically generated screenshots and remove a large number of objects from LFS, which would decrease our burn rate while also ensuring our documentation stays up to date.
- The burn rate should be very small if we can correctly implement caching. The build-docs workflow has some breadcrumbs on how to implement that here; i.e., in theory it's already implemented, but clearly it's not working correctly. (A sketch of the usual caching pattern follows this list.)
- It looks like some non-docs builds are running into the LFS quotas, which should never happen. I don't currently understand this (the checkout action shouldn't have LFS enabled without adding the `with: lfs: true` parameter), but it's definitely happening; see e.g. this line in a failed build.
- We can drop LFS and go back to "GHIFS", i.e. having images served by GitHub links. @neuromusic has strongly opposed this approach because it's not officially supported by GitHub, though. It's a fair objection, but the LFS approach presents its own headaches. (One could say that GitHub already doesn't really support LFS!)
- On that note, it also makes no sense that the same bandwidth consumed via a git submodule stored on GitHub would be completely fine. We could try contacting GitHub to ask whether they can update their LFS bandwidth policy to something more reasonable.
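Regarding the caching point above, here is a minimal sketch of the commonly used LFS-caching pattern for GitHub Actions (step names and action versions are illustrative, not necessarily what build-docs actually uses):

```yaml
- uses: actions/checkout@v3
  with:
    lfs: false  # check out pointer files only; no LFS bandwidth used yet

- name: List LFS object IDs to key the cache on
  run: git lfs ls-files --long | cut -d ' ' -f1 | sort > .lfs-assets-id

- name: Restore cached LFS objects
  uses: actions/cache@v3
  with:
    path: .git/lfs/objects
    key: lfs-${{ hashFiles('.lfs-assets-id') }}

- name: Fetch only the objects missing from the cache
  run: git lfs pull
```

When the cache key hits, `git lfs pull` finds the objects already present in `.git/lfs/objects` and only has to smudge the pointer files, so no bandwidth quota is consumed.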
Ah, looks like it's to test
pytest --pyargs napari
while not having any napari in the local directory. We could/should fix this by putting our code in a `src/` directory, but in the meantime, perhaps we can come up with a more LFS-compatible solution? For example: checkout, install, then delete (sketched below). Come to think of it, going back to point 3 above, maybe caching is working in build-docs (which would explain why build-docs didn't fail), and all our bandwidth was consumed by this pip install line???
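A minimal sketch of that checkout/install/delete workaround, assuming a typical test job (step names and action versions are illustrative):

```yaml
- uses: actions/checkout@v3  # LFS is off by default, so this should use no LFS bandwidth

- name: Install napari from the checkout
  run: pip install .

- name: Delete the source tree so pytest imports the installed package
  run: rm -rf napari

- run: pytest --pyargs napari
```

With the source tree gone, `pytest --pyargs napari` resolves `napari` from site-packages rather than from the working directory.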
Indeed, no build-docs or build-and-deploy docs actions have failed:
https://github.com/napari/napari/actions/workflows/build_docs.yml
So #4114 is not the right way forward, and we need to fix this ubuntu-latest action.