Loading TAPAS model into pipeline from a directory gives different results
Hi, I am using the following versions of these packages: transformers = 4.3.2, pytorch = 1.6.0.
I am using the following code to download and save a pretrained model:
```python
from transformers import TapasConfig, TapasTokenizer, TapasForQuestionAnswering
import torch
import sys

config = TapasConfig.from_pretrained('google/tapas-base-finetuned-wtq', from_pt=True)
model = TapasForQuestionAnswering.from_pretrained('google/tapas-base', config=config)
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq", from_pt=True)

outdir = sys.argv[1]
model.save_pretrained(outdir)
tokenizer.save_pretrained(outdir)
config.save_pretrained(outdir)
```
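As a side note (not part of the original report), here is a minimal sketch for sanity-checking what ends up in the output directory, assuming the same `outdir` and imports as above:

```python
import os

# List the files that save_pretrained wrote; we expect config.json,
# pytorch_model.bin and the tokenizer files (vocab.txt, tokenizer_config.json, ...).
print(sorted(os.listdir(outdir)))

# Reload the saved model from the directory to confirm the weights round-trip.
reloaded = TapasForQuestionAnswering.from_pretrained(outdir)
print(sum(p.numel() for p in reloaded.parameters()), "parameters reloaded")
```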
When I then feed the saved model directory into `pipeline`, I don't get any results for the table illustrated in the documentation. If I let `pipeline` download the model on the fly, I do get results. Here is the code that feeds the model directory into `pipeline`:
```python
import sys
import pandas as pd
import numpy as np
from transformers import pipeline

nlp = pipeline(task="table-question-answering", framework="pt", model="tapas_model_dir")
# nlp = pipeline(task="table-question-answering")

data = {
    "actors": ["brad pitt", "leonardo di caprio", "george clooney"],
    "age": ["56", "45", "59"],
    "number of movies": ["87", "53", "69"],
    "date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}
table = pd.DataFrame.from_dict(data)
print(np.shape(table))

result = nlp(query=["How many movies has Brad Pitt acted in", "What is Leonardo di caprio's age"], table=table)
print(result)
```
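For comparison, the commented-out on-the-fly variant looks roughly like this; a minimal sketch assuming the default WTQ checkpoint `google/tapas-base-finetuned-wtq` and network access to the Hub:

```python
# Hypothetical comparison run: same table and queries, but with the checkpoint
# downloaded from the Hub instead of loaded from the local directory.
nlp_hub = pipeline(
    task="table-question-answering",
    framework="pt",
    model="google/tapas-base-finetuned-wtq",
)
print(nlp_hub(query=["How many movies has Brad Pitt acted in",
                     "What is Leonardo di caprio's age"], table=table))
```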
Top GitHub Comments
The tokenizer needs to be specified for every pipeline task. You should always specify the checkpoint for the tokenizer as well as for the model.

The size is 442.79 MB! @LysandreJik, what is the size of your pytorch_model.bin in outdir?
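For what it's worth, a minimal sketch of that suggestion applied to the reporter's setup (assuming `tapas_model_dir` also contains the files written by `tokenizer.save_pretrained`):

```python
from transformers import pipeline

# Pass the tokenizer checkpoint explicitly rather than relying on the pipeline
# to infer it from the model directory.
nlp = pipeline(
    task="table-question-answering",
    framework="pt",
    model="tapas_model_dir",
    tokenizer="tapas_model_dir",
)
```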