
About the experiment of MWPBert on math23k


I got ‘value accu=40.0’ and found that the model uses ‘bert-base-uncased’ as the encoder by default. Could the reason be that I was not using a Chinese BERT for math23k?
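For context (not from the thread): bert-base-uncased ships an English vocabulary, so a Chinese math23k problem gives the encoder very little to work with. A minimal sketch with HuggingFace transformers, assuming it is installed and the checkpoints can be downloaded, makes the difference visible:

```python
from transformers import AutoTokenizer

# A made-up math23k-style problem, used only for illustration.
text = "小明有3个苹果，又买了5个，一共有几个苹果？"

# bert-base-uncased was pre-trained on English: Chinese characters outside
# its vocabulary become [UNK], and even the covered ones were rarely seen.
en_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(en_tok.tokenize(text))

# bert-base-chinese splits the text character by character and keeps it all.
zh_tok = AutoTokenizer.from_pretrained("bert-base-chinese")
print(zh_tok.tokenize(text))
```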

Here is the command I ran:

python run_mwptoolkit.py --model=MWPBert --dataset=math23k --task_type=single_equation --equation_fix=prefix --test_step=5 --gpu_id=0


I tried to change ‘config[“pretrained_model”]’ to ‘bert-base-chinese’, but got some errors saying it doesn’t match the model… Is there any built-in way to change it?
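For what it’s worth, the maintainer’s later reply shows the run script accepting a --pretrained_model flag, so (assuming the flag was already present in the installed version) the encoder could be swapped on the command line rather than by editing the config:

python run_mwptoolkit.py --model=MWPBert --dataset=math23k --task_type=single_equation --equation_fix=prefix --pretrained_model=bert-base-chinese --test_step=5 --gpu_id=0

Whether bert-base-chinese itself is compatible here is exactly what’s in question; the hfl/chinese-bert-wwm-ext id used in the reply below, together with --vocab_level=char, is the combination confirmed to work in this thread.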

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

1 reaction
LYH-YF commented, May 10, 2022

There may be something wrong with my code from the v0.0.6 update; I will check it. I’m sorry about that.

0 reactions
LYH-YF commented, Aug 14, 2022

I got value accuracy 82.5, which is the latest result of MWPBert on math23k.

Here is the command:

python run_mwptoolkit.py --model=MWPBert --dataset=math23k --equation_fix=prefix --task_type=single_equation --pretrained_model=hfl/chinese-bert-wwm-ext --test_step=5 --gpu_id=0 --train_batch_size=32 --epoch_nums=85 --learning_rate=3e-4 --encoding_learning_rate=3e-5 --vocab_level=char

I have published the result in the results table.
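Unrelated to the toolkit itself, one cheap sanity check before committing to a long training run is to confirm the encoder id resolves and has the expected Chinese vocabulary; a sketch using HuggingFace transformers directly:

```python
from transformers import AutoModel, AutoTokenizer

name = "hfl/chinese-bert-wwm-ext"
tok = AutoTokenizer.from_pretrained(name)  # fetches the tokenizer from the Hub
model = AutoModel.from_pretrained(name)    # fetches the encoder weights

# Chinese BERT checkpoints use a character-level vocabulary (21128 entries),
# which lines up with passing --vocab_level=char to the toolkit.
print(model.config.vocab_size)
print(tok.tokenize("一共有几个苹果"))
```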
