After training with the default args, the result isn't good
Excuse me, after training with the default arguments, I get the recall and NDCG scores, but the results aren't as good as those reported in the paper. Here are my results after 500 epochs:
recall@50, ndcg@50
0.0912251655629139 0.04411650302936786
recall@100, ndcg@100
0.28807947019867547 0.08591003366756622
recall@200, ndcg@200
0.43559602649006623 0.10980291110041669
Why are my results so much lower than those reported in the paper?
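For reference, recall@k and NDCG@k are usually computed per user from the ranked recommendation list and the held-out relevant items, then averaged over users. The sketch below is a generic implementation of these standard metrics (binary relevance), not the repository's actual evaluation code:

```python
import math

def recall_ndcg_at_k(ranked_items, ground_truth, k):
    """Recall@k and NDCG@k for a single user.

    ranked_items: item ids sorted by predicted score, best first.
    ground_truth: set of held-out relevant item ids.
    """
    top_k = ranked_items[:k]
    hits = [1.0 if item in ground_truth else 0.0 for item in top_k]

    # Recall: fraction of relevant items that appear in the top k.
    recall = sum(hits) / len(ground_truth)

    # DCG with binary relevance; the ideal DCG puts all relevant
    # items at the top of the list.
    dcg = sum(h / math.log2(i + 2) for i, h in enumerate(hits))
    ideal_hits = min(len(ground_truth), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg

# Example: 2 of the 3 relevant items {4, 9, 2} appear in the top 5.
recall, ndcg = recall_ndcg_at_k([10, 4, 7, 1, 9], {4, 9, 2}, k=5)
```

If your numbers are far below the paper's, the metric itself is rarely the culprit; differences in hyperparameters (see the maintainer's reply below) or in the train/test split are more likely.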
Issue Analytics
- Created 3 years ago
- Comments: 5 (3 by maintainers)
Top GitHub Comments
That's not how it works: when optimizing, we drop the constant term (positive, but carrying no gradient) from the loss — you can check the paper — so our loss will be negative. It still converges once it has decreased to a certain level, and it converges quickly (compared with sampling-based methods).
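The point about the negative loss can be illustrated with a toy example (this is a hypothetical loss, not the paper's): dropping a positive, gradient-free constant from the objective leaves every gradient unchanged, but the reported value can dip below zero.

```python
# Hypothetical quadratic loss used only for illustration.
def full_loss(w):
    C = 5.0  # positive constant with no gradient w.r.t. w
    return C + (w - 2.0) ** 2 - 6.0 * w

def truncated_loss(w):
    # Same loss with the constant C dropped, as in the optimized objective.
    return (w - 2.0) ** 2 - 6.0 * w

def grad(f, w, eps=1e-6):
    # Central finite difference; exact for quadratics.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w = 1.0
# grad(full_loss, w) == grad(truncated_loss, w): identical optimization
# trajectory, yet truncated_loss(w) is negative while full_loss(w) is not.
```

So a negative training loss is expected here and is not by itself a sign that training went wrong.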
With the settings above, you should get results similar to ours (dropout is an important parameter; our code has been updated to the latest version).
recall@50, ndcg@50
0.31026490066225165 0.09599689344574573
recall@100, ndcg@100
0.45397350993377483 0.11925239149771916
recall@200, ndcg@200
0.6041390728476821 0.14029391165062766