Changes suggested to fastai integration
See original GitHub issue

A cluster of changes suggested to the fastai integration after the merge of #678:
- Instead of moving the `fastai.Learner` out of the directory, save it in the current directory (see comment in #678). This is an issue with how fastai exports Learners.
- Consider using `packaging` and `packaging.markers` when checking whether the local fastai and fastcore versions are supported (see comment in #678).
- Right now we require that a `pyproject.toml` file exists inside the repo of a fastai model. We might not want to be that strict: if there is no `pyproject.toml`, or no versions are specified there, we could show a warning instead of failing entirely.
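The second point above can be sketched with the `packaging` library, which implements PEP 440 version parsing and specifier matching. This is a minimal illustration, not the integration's actual code; the function name and constraint strings are made up for the example:

```python
# Sketch: validating an installed fastai/fastcore version against a
# PEP 440 constraint using the `packaging` library.
from packaging.specifiers import SpecifierSet
from packaging.version import Version


def version_supported(installed: str, constraint: str) -> bool:
    """Return True if `installed` satisfies the PEP 440 `constraint`.

    Hypothetical helper; the real integration would read `installed`
    from the imported package and `constraint` from pyproject.toml.
    """
    return Version(installed) in SpecifierSet(constraint)


print(version_supported("2.7.12", ">=2.4"))  # True
print(version_supported("1.0.61", ">=2.4"))  # False
```

Using `SpecifierSet` rather than string comparison handles pre-releases and multi-clause constraints like `>=2.4,<3` correctly.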
Issue Analytics
- State:
- Created a year ago
- Comments: 6 (6 by maintainers)
Top Results From Across the Web
Update on fastai2 progress and next steps - fastai dev
We have tried to avoid making significant changes to the fastai2 library during this time, since we wanted to ensure that all the...
Read more >

Checking Out the New fastai / timm Integration - WandB
A long time ask from the fastai community has been the integration of timm (PyTorch Image Models) into the fastai vision learner. Why?...
Read more >

fastai/CHANGELOG.md at master - GitHub
It should not require any code changes except for people doing sophisticated tensor subclassing work, but nonetheless we recommend testing carefully. Therefore, ...
Read more >

J.J. Allaire (RStudio) and Jeremy Howard (fast.ai): "2-way AMA"
In this wide-ranging discussion, Jeremy and J.J. share stories about their journeys, motivations, and methods in working on scientific ...
Read more >

fastai - neptune.ai documentation
With the Neptune–fastai integration, the following metadata is logged ... You can change or extend the default behavior of NeptuneCallback() by passing the ...
Read more >
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The third point means that instead of raising an error when loading a model with no `pyproject.toml` specified, we could just load the model but show a clear warning to users. Pretty much making these checks less strict.
I suggested to Omar that we do this particular point in a follow-up PR, since I think it’s better to get this out and gather feedback from users who will be using this from head (not from a release), so we can make other changes based on that feedback.
I agree the first point in particular should be fixed in the existing PR. The second seems like a minor internal implementation detail that should not change user behavior, so no strong opinions.
Would you mind letting us know why it’s closed as won’t fix?