Tensor dtype
Hi,

Tensors in the saved state_dict have dtype float32. Is there a reason for that? I was able to cut the size of pytorch_model.bin roughly in half by converting the tensors to float16 without losing any accuracy. I'm using distilbert-base-nli-mean-tokens.
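For reference, a minimal sketch of the conversion described above, assuming a standard PyTorch checkpoint (the file names are illustrative):

```python
import torch

# Load the saved checkpoint on CPU.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# Cast floating-point tensors to float16; leave integer buffers untouched.
half_state_dict = {
    name: t.half() if t.is_floating_point() else t
    for name, t in state_dict.items()
}

# Re-save; the resulting file should be roughly half the original size.
torch.save(half_state_dict, "pytorch_model_fp16.bin")
```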
@wboleksii Sounds good, will give it a try.

It seems like you can just call .half() on a SentenceTransformer and it will use FP16, giving you a nice speedup and memory savings. The resulting embeddings are very close to those of the full FP32 model. The returned embeddings still have dtype float32, though, so they must be converted back internally. I would prefer if the original float16 outputs were returned instead.
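For illustration, a sketch of that usage (the model name is the one from the issue; the dtype check reflects the behavior described above):

```python
from sentence_transformers import SentenceTransformer

# Load the model and cast its parameters to FP16 in place.
model = SentenceTransformer("distilbert-base-nli-mean-tokens")
model.half()

# encode() returns a NumPy array by default.
embeddings = model.encode(["This is an example sentence."])

# As noted above, the returned embeddings are still float32,
# even though the model itself now holds FP16 weights.
print(embeddings.dtype)
```

Note that FP16 inference is typically run on a GPU; some CPU ops may not support half precision, depending on the PyTorch version.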