
Cannot reproduce article numbers


🐛 Bug

https://github.com/huggingface/transformers/tree/master/examples/summarization/bertabs

Information

The BertAbs ROUGE-1/2 F1 evaluation numbers I am getting are much lower than those reported in the paper: roughly half.

Model I am using (Bert, XLNet …): Bertabs

Language I am using the model on (English, Chinese …):

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The tasks I am working on is:

  • an official GLUE/SQUaD task: (give the name)
  • my own task or dataset: (give details below) Summarization

To reproduce

Steps to reproduce the behavior:

Expected behavior

Environment info

  • transformers version:
  • Platform:
  • Python version:
  • PyTorch version (GPU?):
  • Tensorflow version (GPU?):
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6

Top GitHub Comments

1 reaction
vladgets commented, May 14, 2020

These are the numbers I am getting when running the example; in the article (and here: https://github.com/nlpyang/PreSumm) they report much higher numbers:

****** ROUGE SCORES ******

** ROUGE 1 F1 >> 0.195 Precision >> 0.209 Recall >> 0.186

** ROUGE 2 F1 >> 0.094 Precision >> 0.104 Recall >> 0.089

** ROUGE L F1 >> 0.221 Precision >> 0.233 Recall >> 0.212
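For context on what these numbers mean, ROUGE-1 F1 is the harmonic mean of unigram precision and recall between a candidate summary and a reference. The sketch below is a minimal illustration only, not the ROUGE-1.5.5/pyrouge pipeline that PreSumm uses for its reported scores (that pipeline adds stemming and sentence-level ROUGE-L handling, which shifts scores by a point or two, not by a factor of two):

```python
from collections import Counter

def rouge1(candidate: str, reference: str):
    """Minimal ROUGE-1 sketch: clipped unigram overlap between
    a candidate summary and a single reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge1("the cat sat on the mat", "the cat is on the mat")
print(f"P={p:.3f} R={r:.3f} F1={f:.3f}")
```

Note also the scale: papers in this line of work typically report ROUGE ×100 (e.g. a reported ROUGE-1 in the low 40s on CNN/DailyMail corresponds to about 0.42 on the 0–1 scale shown in the log above), so the 0.195 here is low even after accounting for the scaling convention.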

0 reactions
stale[bot] commented, Aug 18, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

