Documentation: give more info on the input/output
Hello,
This library seems interesting; however, I am having a hard time actually using it. I am not quite sure what the input and output are supposed to be, or what is needed to get it working.
For instance, I currently have user and item representations, both of which are matrices of features. When I run TensorRec, even with a large number of epochs (e.g. 2000) and after training it on interactions between -1 and 1, I get predictions like -2500 and 850. In addition, for any given user the predictions for the items always seem to be in the same order, even though they have different values (e.g. for 3 items and 3 users the predictions could be `[[10, 15, 9], [20, 25, 19], [-10, -5, -19]]`), which seems unlikely with a big dataset of users/items/interactions.
I have tried looking at the code, but TBH reading the source to figure out how a library is supposed to be used is not an easy first approach.
I think it would be nice to clarify those points:
- What is the expected input: should features only be booleans (1/0), or can they be arbitrary numbers?
- What is the expected output: is it expected behavior to get values like 1000000?
- How the application works in general: how can the representation “learn” from its mistakes, for instance? My problem right now seems to be that the representations of my users are far too similar (when I use `predict_user_representation` and reduce `n_components` to 3, I get the same representation for all users even though their features are very different), but I don’t really have an idea of why, or of how things actually evolve through the epochs.
Issue Analytics
- Created: 5 years ago
- Comments: 7 (3 by maintainers)
Top GitHub Comments
Great suggestion – I added the mixture of tastes and attention systems after reading this paper: https://arxiv.org/abs/1711.08379
I’ll add better documentation for it. It probably also merits a blog post outlining the thinking.
Hey @mijamo – this is great feedback, thank you for taking the time!
In general, I think you’re absolutely right about the documentation. I’m assigning this to myself to add more.
At a glance, getting very large positive/negative predictions is expected behavior for certain loss functions. Which loss function are you using? Only RMSE loss functions will try to reproduce the input interaction values, so if your loss isn’t RMSE then the predictions are unbounded.
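To make that point concrete, here is a minimal sketch of swapping the loss graph at construction time. The exact class name used below (`RMSELossGraph`) and the keyword arguments are assumptions about the API; check `tensorrec.loss_graphs` and the `TensorRec`/`fit` signatures in your installed version.

```python
# Minimal sketch (assumed API details -- verify against your tensorrec version).
import numpy as np
import scipy.sparse as sp
import tensorrec

# Toy inputs: sparse user/item feature matrices and interactions in [-1, 1].
user_features = sp.csr_matrix(np.random.rand(10, 4))
item_features = sp.csr_matrix(np.random.rand(20, 6))
interactions = sp.csr_matrix(np.random.choice([-1, 0, 1], size=(10, 20)))

# With an RMSE-style loss graph, predictions try to reproduce the interaction
# values; with ranking losses, predictions are unbounded scores.
model = tensorrec.TensorRec(
    n_components=3,
    loss_graph=tensorrec.loss_graphs.RMSELossGraph(),  # assumed class name
)
model.fit(interactions, user_features, item_features, epochs=300)
predictions = model.predict(user_features=user_features, item_features=item_features)
```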
Having the items in a similar order for many users tends to happen when the dataset has some items which are far more popular than others. A good way to correct for this is through selection of an appropriate loss function: I’d recommend using `BalancedWMRBLossGraph`.

Regarding your other questions, I will elaborate on them in the documentation. To answer quickly for you here:

- Features can be any value.
- The expected output depends on the loss function, but it is unbounded. If you’re using a learning-to-rank loss function, such as WMRB, the system is optimizing for output ranks, not output prediction values.
- The system learns in general (using the default prediction graphs and a learning-to-rank loss) by using the dot product of the user_representation and the item_representation as a prediction value. These predictions are then compared against the interactions, a loss is calculated, and that loss is propagated back through the representation graphs.
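To illustrate that last point, here is a small sketch in plain NumPy (not TensorRec’s actual graph code) of the idea described above: predictions come from the dot product of the learned user and item representations, and a learning-to-rank loss cares about each user’s ordering of those scores rather than their magnitude.

```python
import numpy as np

n_components = 3
user_repr = np.random.randn(10, n_components)   # stands in for the learned user_representation
item_repr = np.random.randn(20, n_components)   # stands in for the learned item_representation

# One unbounded score per (user, item) pair -- this is the prediction value.
predictions = user_repr @ item_repr.T            # shape (10, 20)

# A ranking loss such as WMRB is driven by the per-user ordering of items,
# not by the raw score values (which can be arbitrarily large).
item_order = np.argsort(-predictions, axis=1)    # best-ranked item first, per user
```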