LightFM always produces same precision@6 result metric


I’m running parameter optimization on a LightFM implicit factorization model using “warp” loss. When I run np.mean(precision_at_k(…)) on the test data, it is virtually always the same result, 0.316850870848, out to 12 digits. Is this expected for some reason? If not, any ideas how to figure out what might be wrong?

Here is sample output showing the param combinations and the output:

2017-01-20 21:10:02,363 INFO __main__: Starting training iteration 1, params:
{
    "alpha": 0.0041233076328981919,
    "epochs": 45,
    "learning_rate": 0.50174314850490254,
    "no_components": 184
}
2017-01-20 21:10:02,363 INFO lightfm_uma: Training model...
2017-01-20 21:25:20,518 INFO lightfm_uma: Finished precision@6 = 0.316850870848
2017-01-20 21:25:21,453 INFO __main__: Starting training iteration 2, params:
{
    "alpha": 0.0064873564172718886,
    "epochs": 63,
    "learning_rate": 0.50406151543722921,
    "no_components": 180
}
2017-01-20 21:25:21,453 INFO lightfm_uma: Training model...
2017-01-20 21:44:36,565 INFO lightfm_uma: Finished precision@6 = 0.316850870848
2017-01-20 21:44:37,495 INFO __main__: Starting training iteration 3, params:
{
    "alpha": 0.020473212242717205,
    "epochs": 62,
    "learning_rate": 0.74135691825946459,
    "no_components": 156
}
2017-01-20 21:44:37,496 INFO lightfm_uma: Training model...
2017-01-20 22:00:45,661 INFO lightfm_uma: Finished precision@6 = 0.316850870848
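For context, here is a minimal sketch of the kind of search loop described above. It is not the poster's actual code: the train/test interaction matrices and the parameter sampler are placeholders, and the mapping of the logged "alpha" onto LightFM's item_alpha/user_alpha arguments is an assumption.

import numpy as np
from lightfm import LightFM
from lightfm.evaluation import precision_at_k

def evaluate(params, train, test):
    # Hypothetical helper: train one model for a sampled parameter set and
    # return its mean precision@6 on the held-out interactions.
    model = LightFM(
        loss="warp",
        no_components=params["no_components"],
        learning_rate=params["learning_rate"],
        item_alpha=params["alpha"],  # assumed mapping of the logged "alpha"
        user_alpha=params["alpha"],
    )
    model.fit(train, epochs=params["epochs"], num_threads=4)
    # precision_at_k returns one score per user; average it for a single metric.
    return np.mean(precision_at_k(model, test, k=6))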

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 28

Top GitHub Comments

1 reaction
maciejkula commented, Jan 22, 2017

All SGD algorithms are sensitive to hyperparameters. NaNs are most often the result of excessive learning rates. Arguably I should warn when this happens; it can be easy to miss.

Can I suggest you pull the branch I linked above and use it for your optimization? It will have the advantage of giving you terrible precision scores when your algorithm has actually diverged or regularized itself to zero.
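Both failure modes can also be caught right after fitting, since LightFM exposes the learned latent factors as the user_embeddings and item_embeddings attributes. A minimal sketch (the check_model_health helper is hypothetical, not part of the library):

import numpy as np

def check_model_health(model):
    # Inspect the learned latent factors for the two failure modes above:
    # NaNs (divergence) and all-zero factors (over-regularization).
    for name in ("user_embeddings", "item_embeddings"):
        factors = getattr(model, name)
        if np.isnan(factors).any():
            raise ValueError(name + " contains NaNs; the learning rate is likely too high")
        if np.allclose(factors, 0.0):
            raise ValueError(name + " is all zeros; regularization is likely too strong")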

0 reactions
maciejkula commented, Jan 29, 2017

I think this is fixed in 1.12 — thanks for your help!


Top Results From Across the Web

LightFM 1.16 documentation
When multiplied together, these representations produce scores for every item for a given user; ... The same idea applies to the user_alpha parameter....

Interpreting results from lightFM - Stack Overflow
I built a recommendation model on a user-item transactional dataset where each transaction is represented by 1. model = LightFM(learning_rate=0.05, loss='warp').
