Active Learning sampling quality
After using the dedupe library for a while in the context of video-content reconciliation, we encountered some situations where the Active Learning sampling is very poor. This makes it difficult to build a good training set for the classifier, and as a consequence the reconciliation results are poor as well.
For instance, we ran some tests trying to reconcile two well-known public data providers (IMDb and TMDB). These contain reciprocal references that can be used as ground truth, plus good metadata, and we could build the dataset knowing that all entries could be reconciled in a many-to-one fashion (set 1 is contained in set 2, so 100% recall is theoretically possible).
We tried to reconcile episodes using a few fields (episode title, series title, season number, episode number, series year). The Active Learning samples were quite balanced between positive and negative examples, so it was quite effortless to collect 10 positive and 10 negative pairs. The final results were quite satisfying as well: recall 78%, precision 98%. Moreover, by scrolling through the results, we noticed that the model had learned to ignore the episode title field, which was not consistent between the datasets.
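For reference, the field setup for this first test looked roughly like the sketch below. This assumes dedupe's dict-based field specification; the field names are illustrative, not the actual column names in our datasets.

```python
# Hypothetical field definitions for the first test (names are illustrative).
# 'String' applies fuzzy text comparison; 'Exact' requires strict equality,
# which is a reasonable choice for season/episode numbers and the year.
fields = [
    {"field": "episode_title", "type": "String"},
    {"field": "series_title", "type": "String"},
    {"field": "season_number", "type": "Exact"},
    {"field": "episode_number", "type": "Exact"},
    {"field": "series_year", "type": "Exact"},
]

print(len(fields))  # 5 fields in the first test, 4 after dropping episode_title
```

The second test simply dropped the `episode_title` entry and kept everything else unchanged.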
Afterwards we decided to perform a second test by removing the episode title field, but keeping everything else as in the previous test (same dataset, same configuration). This time the Active Learning samples were quite poor: almost all pairs were non-matches (it took more than 200 pairs to obtain 8 positives). The final reconciliation in this case was also poor: recall 15% and precision 91%.
I would like to ask, then, whether it is possible to mitigate this kind of issue:
- Is it important to balance the active-learning pairs? In the second test we fed 200 negative vs. 8 positive pairs. Can this be the cause of the low recall?
- How explainable is the model? How would you suggest investigating bad reconciliation results in general?
- Do you have any idea what the possible causes of bad sampling are in this specific test case?
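On the balance question, one workaround we could imagine (a sketch of a generic technique, not something dedupe provides out of the box as far as we know) is to downsample the labeled negatives before training, so the classifier never sees 200 negatives against 8 positives:

```python
import random

def balance_labels(positives, negatives, max_ratio=2, seed=0):
    """Downsample negatives to at most `max_ratio` negatives per positive.

    Hypothetical helper, not part of the dedupe API; `positives` and
    `negatives` would be the labeled record pairs collected during
    active learning.
    """
    rng = random.Random(seed)
    keep = min(len(negatives), max_ratio * len(positives))
    return positives, rng.sample(negatives, keep)

# With the label counts from our second test: 8 positives, 200 negatives.
pos, neg = balance_labels(list(range(8)), list(range(200)))
print(len(pos), len(neg))  # prints: 8 16
```

Whether this actually helps recall (versus simply labeling more pairs) is exactly what we are unsure about.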
Thank you for your great work,
Antonio
Issue Analytics
- Created 2 years ago
- Comments: 6 (4 by maintainers)
Top GitHub Comments
Can you try the better-sampling branch and let me know if that gives you better results?
Closing for now due to lack of feedback.