Loading tapas model into pipeline from directory gives different result

See original GitHub issue

Hi, I am using the following package versions: transformers 4.3.2, pytorch 1.6.0.

I am using the following code to download and save a pretrained model:

import sys
import torch
from transformers import TapasConfig, TapasTokenizer, TapasForQuestionAnswering

config = TapasConfig.from_pretrained('google/tapas-base-finetuned-wtq', from_pt=True)
model = TapasForQuestionAnswering.from_pretrained('google/tapas-base', config=config)
tokenizer = TapasTokenizer.from_pretrained('google/tapas-base-finetuned-wtq', from_pt=True)

outdir = sys.argv[1]

model.save_pretrained(outdir)
tokenizer.save_pretrained(outdir)
config.save_pretrained(outdir)
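One thing worth noting in the snippet above: the config and tokenizer come from the finetuned google/tapas-base-finetuned-wtq checkpoint, but the model weights come from the plain google/tapas-base checkpoint, so the saved directory does not contain the same weights that pipeline downloads on the fly. A minimal sketch of exporting everything from the same finetuned checkpoint, assuming that was the intent (the output directory is taken from the command line as in the original):

```python
import sys


def save_finetuned_tapas(outdir, checkpoint="google/tapas-base-finetuned-wtq"):
    """Save model, tokenizer, and config from the *same* finetuned checkpoint,
    so the exported directory matches what the pipeline would download itself."""
    # Imports are local so the function can be defined without transformers installed.
    from transformers import TapasConfig, TapasTokenizer, TapasForQuestionAnswering

    config = TapasConfig.from_pretrained(checkpoint)
    model = TapasForQuestionAnswering.from_pretrained(checkpoint, config=config)
    tokenizer = TapasTokenizer.from_pretrained(checkpoint)

    model.save_pretrained(outdir)
    tokenizer.save_pretrained(outdir)
    config.save_pretrained(outdir)


if __name__ == "__main__" and len(sys.argv) > 1:
    save_finetuned_tapas(sys.argv[1])  # downloads the checkpoint on first run
```

This is a sketch, not the confirmed fix from the thread; it just removes the base-vs-finetuned mismatch visible in the original code.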

When I then feed the model directory into pipeline, I don't get any result for the table illustrated in the documentation. If I let pipeline download the model on the fly, I do get results. Here is the code that feeds the model directory into pipeline:

import sys
import numpy as np
import pandas as pd
from transformers import pipeline

nlp = pipeline(task="table-question-answering", framework="pt", model="tapas_model_dir")
# nlp = pipeline(task="table-question-answering")

data = {
    "actors": ["brad pitt", "leonardo di caprio", "george clooney"],
    "age": ["56", "45", "59"],
    "number of movies": ["87", "53", "69"],
    "date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}

table = pd.DataFrame.from_dict(data)
print(np.shape(table))

result = nlp(
    query=["How many movies has Brad Pitt acted in", "What is Leonardo di caprio's age"],
    table=table,
)
print(result)
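As a side note, TapasTokenizer expects every cell of the table to be a string (which is why the ages and movie counts above are quoted). A pandas-only sketch of building the same table and checking it before handing it to the pipeline, with no model download needed:

```python
import pandas as pd

data = {
    "actors": ["brad pitt", "leonardo di caprio", "george clooney"],
    "age": ["56", "45", "59"],
    "number of movies": ["87", "53", "69"],
    "date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}

table = pd.DataFrame.from_dict(data)

# Three rows, four columns; DataFrame.shape does the job of np.shape() above.
print(table.shape)  # (3, 4)

# Coerce defensively in case any column was built from numbers.
table = table.astype(str)
assert all(isinstance(v, str) for v in table.to_numpy().ravel())
```

The astype(str) call is a defensive habit rather than something the issue itself calls for; the original dict already uses strings throughout.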

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

1 reaction
LysandreJik commented, Mar 4, 2021

The tokenizer needs to be specified for every pipeline task. You should always specify the checkpoint for the tokenizer as well as for the model. The size is 442.79 MB!
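A sketch of what that advice looks like in code: pass the saved directory explicitly for both the model and the tokenizer when constructing the pipeline (the directory name here is just the example path from the issue):

```python
def build_table_qa_pipeline(model_dir):
    """Construct the table-QA pipeline with the checkpoint given explicitly
    for both the model and the tokenizer, per the comment above."""
    # Local import so the sketch can be defined without transformers installed.
    from transformers import pipeline

    return pipeline(
        task="table-question-answering",
        framework="pt",
        model=model_dir,
        tokenizer=model_dir,
    )


# Usage (downloads nothing if model_dir already holds the saved files):
# nlp = build_table_qa_pipeline("tapas_model_dir")
```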

0 reactions
mchari commented, Mar 4, 2021

@LysandreJik, what is the size of your pytorch_model.bin in outdir?

Read more comments on GitHub >
