inference with random dataset
Hi,
Using the latest version of DLRM, I am trying to run inference on random data. The command I am using is:
python dlrm_s_pytorch.py --arch-embedding-size=65000-65000-65000-65000-65000-65000-65000-65000 --arch-sparse-feature-size=64 --arch-mlp-bot=1440-720-64 --arch-mlp-top=40-20-10-1 --data-generation=random --mini-batch-size=128 --num-batches=10 --num-indices-per-lookup=32 --num-indices-per-lookup-fixed=True --inference-only
This used to work with earlier versions of DLRM, but now I get this error when using --inference-only:

File "dlrm_s_pytorch.py", line 1452, in run
    ), "currently only dataset loader provides testset"
AssertionError: currently only dataset loader provides testset
Training works fine, but inference does not. Can you please help me figure out why I am getting this error and how to solve it?
Thanks
Issue Analytics
- Created 2 years ago
- Comments: 5 (3 by maintainers)
Top GitHub Comments
I believe I know the reason for this discrepancy. In the past, the inference run used samples from the training set as inputs; later we changed it to use samples from the test set. This may have inadvertently changed the behavior of the random data. We can take this as an action item to fix in the future.
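To make the failure mode concrete, here is a minimal, hypothetical sketch (not the actual DLRM source) of the loader-selection logic described above: the newer code asserted that only the "dataset" generator provides a test set, which breaks --data-generation=random combined with --inference-only. The function name `select_inference_loader` and the fallback behavior are assumptions for illustration; the real fix is in the linked commit.

```python
def select_inference_loader(data_generation, train_loader, test_loader=None):
    """Pick the batches to use for an --inference-only run (illustrative only).

    The failing version effectively did:
        assert data_generation == "dataset", \
            "currently only dataset loader provides testset"
    which raised the AssertionError reported above for random data.
    """
    if data_generation == "dataset" and test_loader is not None:
        # The "dataset" generator provides a real test set; use it.
        return test_loader
    # Assumed fix: random data has no test set, so fall back to the
    # randomly generated training batches for inference.
    return train_loader


# Usage sketch: random data falls back to the training batches.
random_batches = [("dense", "sparse", "label")]  # stand-in for real batches
loader = select_inference_loader("random", random_batches)
```

The design point is simply that a synthetic-data run should never hit the test-set assertion, since its "training" batches are equally valid inference inputs.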
@amirstar has made a commit (https://github.com/facebookresearch/dlrm/commit/9acb4e1e9bb78995f32a08e76f1299db6a5d6834) that should allow you to run inference with random data.
Please give it a try and let us know if it works for you.