[readability] Consolidate prune_heads logic into PreTrainedModel.
See original GitHub issue

Many models have identical implementations of `prune_heads`; it would be nice to store that implementation as a single method on `PreTrainedModel` and reduce the redundancy.
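For illustration, here is a minimal sketch of the consolidation being proposed, assuming each model exposes its per-layer attention modules with their own `prune_heads` method. The class and attribute names (`PruneableAttention`, `BasePretrainedModel`, `layers`) are hypothetical placeholders, not the transformers API:

```python
from typing import Dict, List


class PruneableAttention:
    """Stand-in for a per-layer multi-head self-attention module (hypothetical)."""

    def __init__(self, num_heads: int):
        self.num_heads = num_heads
        self.pruned_heads: set = set()

    def prune_heads(self, heads: List[int]) -> None:
        # A real module would also slice its query/key/value/output
        # projection weights; here we only track the bookkeeping.
        new_heads = set(heads) - self.pruned_heads
        self.num_heads -= len(new_heads)
        self.pruned_heads |= new_heads


class BasePretrainedModel:
    """Hypothetical base class owning the shared prune_heads loop."""

    def __init__(self, layers: List[PruneableAttention]):
        self.layers = layers

    def prune_heads(self, heads_to_prune: Dict[int, List[int]]) -> None:
        # One shared implementation instead of an identical copy in every
        # model: map {layer_index: [head_indices]} onto the per-layer
        # attention modules.
        for layer_idx, heads in heads_to_prune.items():
            self.layers[layer_idx].prune_heads(heads)


if __name__ == "__main__":
    model = BasePretrainedModel([PruneableAttention(num_heads=12) for _ in range(4)])
    model.prune_heads({0: [0, 1], 2: [5]})
    assert model.layers[0].num_heads == 10
    assert model.layers[2].num_heads == 11
```

For reference, `PreTrainedModel.prune_heads` in transformers follows this pattern today: it records the pruned heads in `config.pruned_heads` and delegates the per-layer work to each model's `_prune_heads`, which matches the consolidation this issue asks for.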
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 1
- Comments: 10 (2 by maintainers)
Top GitHub Comments
Go for it!
I think this is done. Happy to find new bugs if anyone is on the hunt!