CI XLA failing test: test_ema_final_weight_distrib_single_device_xla
See original GitHub issue.

tests/ignite/handlers/test_ema_handler.py::test_ema_final_weight_distrib_single_device_xla
_test_ema_final_weight(get_dummy_model(), device=device, ddp=True)
...
actual = tensor([[4.0625, 4.0625]], device='xla:0', dtype=torch.float64)
expected = tensor([[4.0625, 4.0625]], device='xla:0', dtype=torch.float64)
> Operator __and__ is only supported for integer or boolean type tensors, got: f64[1,2]{1,0}
cc @sandylaker
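For context on the error above: bitwise `&` (`__and__`) is only defined for integer and boolean tensors in PyTorch, so applying it to `float64` tensors fails on CPU as well, and the XLA backend reports the analogous error shown in the traceback. A minimal sketch, independent of the EMA handler (the tensor values are illustrative, not taken from the test internals):

```python
import torch

a = torch.tensor([[4.0625, 4.0625]], dtype=torch.float64)
b = torch.tensor([[4.0625, 4.0625]], dtype=torch.float64)

# Bitwise __and__ is only implemented for integer/boolean dtypes,
# so this raises a RuntimeError on CPU and a similar error under XLA.
try:
    _ = a & b
except RuntimeError as e:
    print(e)

# Comparing first produces boolean tensors, for which & is valid.
mask = (a == b) & a.isfinite()
print(mask)
```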
@vfdev-5 Thanks for the cc. I will create a PR to fix it as soon as possible.
@vfdev-5 So a floating-point tensor will be cast to `torch.float64` anyway, according to `torch.testing`. But I think the problem might be caused by `torch_xla`. Please have a look at my comments in PR #2182.
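As a hedged sketch of a possible workaround (not the actual change in PR #2182): if the failure comes from how the comparison is lowered on the XLA device, the test could move the tensors to CPU before handing them to `torch.testing`. The helper name below is hypothetical:

```python
import torch


def assert_weights_close(actual: torch.Tensor, expected: torch.Tensor) -> None:
    # Hypothetical helper, not from the ignite test suite: compare on CPU so the
    # comparison runs through regular CPU kernels instead of XLA-lowered ops
    # (the traceback above shows __and__ receiving f64 tensors under XLA).
    torch.testing.assert_close(actual.detach().cpu(), expected.detach().cpu())


# Usage with plain CPU tensors; in the failing test, `actual` would live on
# an `xla:0` device before the .cpu() call.
actual = torch.tensor([[4.0625, 4.0625]], dtype=torch.float64)
expected = torch.tensor([[4.0625, 4.0625]], dtype=torch.float64)
assert_weights_close(actual, expected)
```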