
Unable to reproduce OGBN-MAG results

See original GitHub issue

Hi HGT authors,

I am not able to reproduce your OGB leaderboard results. I followed your instructions and ran your latest code (commit 9c2182f) 10 times, getting an average test accuracy of 0.4883 with a std of 0.0053.

The test accuracies of the 10 runs are: 0.4852, 0.479, 0.4935, 0.4906, 0.496, 0.4911, 0.4912, 0.4861, 0.4889, 0.4817
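For what it's worth, the reported mean and std can be checked directly from the per-run numbers with a few lines of Python (the 0.0053 corresponds to the sample standard deviation, i.e. ddof=1):

```python
import statistics

# Per-run test accuracies quoted above.
accs = [0.4852, 0.479, 0.4935, 0.4906, 0.496,
        0.4911, 0.4912, 0.4861, 0.4889, 0.4817]

mean = statistics.mean(accs)   # arithmetic mean over the 10 runs
std = statistics.stdev(accs)   # sample standard deviation (ddof=1)

print(round(mean, 4), round(std, 4))  # → 0.4883 0.0053
```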

The ogb version I used is 1.2.1, and I made sure evaluation used variance_reduce for better performance. The commands I used to run your code are the following:

python3 preprocess_ogbn_mag.py --output_dir OGB_MAG.pk
for ((run=0;run<10;run=run+1))
do
        dir_name=model_save_${run}
        python3 train_ogbn_mag.py --n_hid 512 --n_layer 4 --n_heads 8 \
                --data_dir ./OGB_MAG.pk --model_dir $dir_name \
                --prev_norm --last_norm --use_RTE --conv_name hgt
        python3 eval_ogbn_mag.py --n_hid 512 --n_layer 4 --n_heads 8 \
                --data_dir ./OGB_MAG.pk --model_dir ${dir_name} \
                --prev_norm --last_norm --use_RTE --conv_name hgt
done

Could you let me know if there is anything I missed?

Thanks! @acbull

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 18

Top GitHub Comments

1 reaction
tsy19025 commented on Dec 31, 2020

Hi, I have the same problem. Following your advice, I ran the commands below 10 times; the test accuracies with variance-reduce (VR) evaluation are: 0.491 0.485 0.486 0.482 0.488 0.487 0.486 0.487 0.488 0.485

python3 train_ogbn_mag.py --n_hid 512 --n_layer 4 --n_heads 8 --n_epoch 200 \
        --data_dir ./OGB_MAG.pk --model_dir $dir_name \
        --prev_norm --last_norm --use_RTE --conv_name hgt --sample_width 600 --sample_depth 6
python3 eval_ogbn_mag.py --n_hid 512 --n_layer 4 --n_heads 8 \
        --data_dir ./OGB_MAG.pk --model_dir $dir_name \
        --prev_norm --last_norm --use_RTE --conv_name hgt --sample_width 600 --sample_depth 6

And the training log of the best is here: https://drive.google.com/file/d/10lUs1AXJOKTlvQVZHJHBedwQSlN3lF0d/view?usp=sharing

Is there anything I can do to reproduce your result? Thanks.

0 reactions
acbull commented on Jan 26, 2021

Hi all:

Sorry for the trouble.

After you pointed this out, I ran the following commands on my side (with the script I mentioned in the issue) ten times; the average result is 0.4927 (mean) with 0.0061 (std).

python3 train_ogbn_mag.py --n_hid 512 --n_layer 4 --n_heads 8 \
                --data_dir ./OGB_MAG.pk --model_dir $dir_name \
                --prev_norm --last_norm --use_RTE --conv_name hgt --sample_width 520 --sample_depth 6

python3 eval_ogbn_mag.py --n_hid 512 --n_layer 4 --n_heads 8 \
                --data_dir ./OGB_MAG.pk --model_dir $dir_name \
                --prev_norm --last_norm --use_RTE --conv_name hgt --sample_width 520 --sample_depth 6

I changed the eval script so the performance should be more stable.
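As a rough illustration of why averaging over multiple stochastic evaluation passes stabilizes the numbers (a minimal sketch, not the actual eval_ogbn_mag.py code; `noisy_score` is a hypothetical stand-in for one evaluation pass over a freshly sampled subgraph):

```python
import random
import statistics

def noisy_score(rng):
    # Hypothetical single evaluation pass: predictions vary between passes
    # because each pass samples a different subgraph.
    return 0.5 + rng.gauss(0, 0.05)

rng = random.Random(0)

# Single-pass estimates vs. variance-reduced estimates that average
# 8 independent passes before scoring.
single = [noisy_score(rng) for _ in range(1000)]
averaged = [statistics.mean(noisy_score(rng) for _ in range(8))
            for _ in range(1000)]

# Averaging k independent passes shrinks the std by roughly sqrt(k).
print(statistics.stdev(single), statistics.stdev(averaged))
```

The same idea applies to reported test accuracy: a more stable per-run estimate narrows the spread across the 10 runs.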

I’ve submitted the corrected result to the OGB leaderboard. Please let me know if you still have any problems reproducing it.
