
[🐛BUG] GRU4Rec Defaults to Negative Sampling


Describe the bug
GRU4Rec defaults to negative sampling (one negative sample) even with the CE loss function. However, during the training loop the negative samples are fed into the model exactly like the positive samples, and the labels field of the Interaction is never used.

To Reproduce
Steps to reproduce the behavior: run the GRU4Rec model with the ml-100k dataset, as sketched below.
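
A minimal reproduction sketch, assuming RecBole’s quick-start entry point (defaults may vary by version):

    # Minimal reproduction sketch, assuming RecBole's quick-start API.
    from recbole.quick_start import run_recbole

    # With the default config, GRU4Rec uses the CE loss yet still draws one
    # negative sample per positive interaction during training.
    run_recbole(model='GRU4Rec', dataset='ml-100k')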

Expected behavior
No negative samples should be fed into GRU4Rec when the CrossEntropy loss function is used.
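
For context, here is a minimal sketch of why sampled negatives are redundant under a full-softmax CE loss. The tensor names below are illustrative stand-ins, not RecBole’s actual code:

    import torch
    import torch.nn.functional as F

    # Toy shapes standing in for GRU4Rec internals (hypothetical names).
    batch, hidden, n_items = 4, 8, 100
    seq_output = torch.randn(batch, hidden)          # session representation from the GRU
    item_emb = torch.randn(n_items, hidden)          # full item embedding table
    pos_items = torch.randint(0, n_items, (batch,))  # positive (target) item ids

    logits = seq_output @ item_emb.t()               # scores over the FULL item set
    loss = F.cross_entropy(logits, pos_items)        # every non-target item already acts
                                                     # as a negative; no sampling is needed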

Please correct me if I have misunderstood anything. Thank you!

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 11 (7 by maintainers)

Top GitHub Comments

4 reactions
2017pxy commented, Mar 31, 2021

Hi @jianshen92, here are my replies to your questions.

Firstly, about the result of GRU4Rec:

In that case, the validation results for training_neg_sample_num = 0 and training_neg_sample_num = 1 should be identical. However, running the default settings with ml-100k for 1 epoch gives an MRR of 0.0113, while setting training_neg_sample_num to 0 gives an MRR of 0.011.

In RecBole, we use PyTorch’s random number generator for both negative sampling and data shuffling in the training phase. Since both draw from the same random stream, the numbers consumed by data shuffling change when you skip negative sampling. In other words, the shuffle order differs between training_neg_sample_num = 1 and training_neg_sample_num = 0, which is why the results differ slightly. In my opinion this kind of difference is acceptable: if you only change the random seed in the .yaml file and run the same model, the results will also differ.
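
A tiny illustration of this effect in plain PyTorch (not RecBole code): drawing extra random numbers for sampling advances the global RNG state, so a later shuffle comes out differently even under the same seed:

    import torch

    torch.manual_seed(2020)
    shuffle_a = torch.randperm(10)            # shuffle with no sampling beforehand

    torch.manual_seed(2020)
    _ = torch.randint(0, 1000, (5,))          # stand-in for drawing negative samples
    shuffle_b = torch.randperm(10)            # same seed, but the RNG state has advanced

    print(torch.equal(shuffle_a, shuffle_b))  # almost certainly False: the order differs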

About the swap and flip of scores:

Anyway, I have another question. I couldn’t figure out exactly why you need to do the swapping at https://github.com/RUCAIBox/RecBole/blob/master/recbole/trainer/trainer.py#L348, and the flipping at https://github.com/RUCAIBox/RecBole/blob/master/recbole/evaluator/evaluators.py#L68?

It’s a trick to accelerate evaluation in RecBole. To make evaluation faster, we designed an evaluation strategy that always puts the positive items’ scores at the head of the score matrix. For more details about our evaluation design, you can see here.
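
A rough sketch of the idea (simplified; not the exact code at the linked lines): once each row’s positive score is swapped into column 0, the positive item’s rank falls out of a single descending sort per row:

    import torch

    batch, n_items = 3, 6
    scores = torch.randn(batch, n_items)
    pos_idx = torch.tensor([4, 1, 3])          # positive item index per row

    # Swap each row's positive score into column 0.
    rows = torch.arange(batch)
    pos_scores = scores[rows, pos_idx].clone()
    scores[rows, pos_idx] = scores[:, 0]
    scores[:, 0] = pos_scores

    # After a descending sort, the positive item's rank is just the position
    # where original column 0 ends up.
    _, sorted_idx = scores.sort(dim=1, descending=True)
    ranks = (sorted_idx == 0).nonzero(as_tuple=True)[1] + 1  # 1-based ranks
    print(ranks)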

Finally, about your request:

Can we just get the score matrix from full_sort_predict and run torch.topk to get the top 10 items?

We have already implemented an API called case study, which may be helpful to you. You can read the case study docs and the example code for more information about this API.
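
If I read the docs right, usage looks roughly like the sketch below (recbole.utils.case_study; treat the exact function signatures, the checkpoint path, and the user token as assumptions and check the linked docs for your version):

    from recbole.quick_start import load_data_and_model
    from recbole.utils.case_study import full_sort_topk

    # Hypothetical checkpoint path produced by a previous training run.
    config, model, dataset, train_data, valid_data, test_data = load_data_and_model(
        model_file='saved/GRU4Rec-ml-100k.pth')

    # Map an external user token to RecBole's internal id, then fetch top-10 items.
    uid_series = dataset.token2id(dataset.uid_field, ['196'])
    topk_score, topk_iid = full_sort_topk(
        uid_series, model, test_data, k=10, device=config['device'])
    print(dataset.id2token(dataset.iid_field, topk_iid.cpu()))  # top-10 item tokens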

1 reaction
jianshen92 commented, Apr 7, 2021

Thanks @rowedenny for the detailed explanation; it was something I wanted to point out, but I wasn’t being clear enough!

