Some minor issues and loss=nan problem
Hi there! Sorry for the late reply; I have been reading your code and running experiments recently.
I ran into a few small problems, as follows:
1. When running the test code, the following command raises an error:
python eval.py \
--dataset_name blender_ray_patch_1image_rot3d \
--root_dir ./synthetic_SinNeRF/lego/ \
--N_importance 64 --img_wh 400 400 --model nerf \
--ckpt_path ./ckpts/lego_s6_4ft/last.ckpt \
--timestamp test
When --split takes its default value test, an error is reported at the line below (note the frame variable):
https://github.com/VITA-Group/SinNeRF/blob/6f101f924fe9ba7793df5a9bbc52b2c82423e251/datasets/blender_ray_patch_1image_rot3d.py#L540
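One possible workaround (assuming this dataset class also implements a val split, which I have not verified) is to pass the split explicitly:
python eval.py \
--dataset_name blender_ray_patch_1image_rot3d \
--root_dir ./synthetic_SinNeRF/lego/ \
--N_importance 64 --img_wh 400 400 --model nerf \
--ckpt_path ./ckpts/lego_s6_4ft/last.ckpt \
--split val \
--timestamp test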
2. Is the DTU file you uploaded missing a part, or is the code wrong?
https://github.com/VITA-Group/SinNeRF/blob/6f101f924fe9ba7793df5a9bbc52b2c82423e251/datasets/dtu_proj.py#L433-L434
3. loss=nan problem
I am running the latest code on an RTX 3090 (24 GB), with the environment created from environment.yaml, but it still runs out of memory (OOM), so I adjusted patch_size, precision, --sH, and --sW according to the README.
I set precision=16 and left --sH and --sW unchanged (both 6).
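For context, precision=16 means mixed-precision training, which substantially reduces activation memory. A minimal sketch of what that setting corresponds to, assuming the training loop is a standard PyTorch Lightning Trainer (the ckpts/last.ckpt layout suggests Lightning, but I have not checked the SinNeRF code against this):

from pytorch_lightning import Trainer

# Mixed-precision (fp16) training on a single GPU: reduces memory use
# compared with full fp32, at some risk of numerical overflow/underflow.
trainer = Trainer(gpus=1, precision=16)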
I found that loss=nan appears when patch_size is too small, e.g. patch_size=8, patch_size=16, or even patch_size=32.
It works (no loss=nan) when patch_size=50, but that is not a good number, is it?
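For reference, a minimal, generic sketch of how one might pinpoint where the nan first appears (plain PyTorch, nothing SinNeRF-specific; the loss names in the comments are hypothetical):

import torch

def assert_finite(name, tensor):
    # Fail fast and name the tensor as soon as it stops being finite.
    if not torch.isfinite(tensor).all():
        raise RuntimeError(f"{name} contains nan/inf")

# Inside the training step one could call, for example:
#   assert_finite("rgb_loss", rgb_loss)
#   assert_finite("depth_loss", depth_loss)
# Alternatively, torch.autograd.set_detect_anomaly(True) reports the
# backward-pass op that first produced the nan.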
I would be grateful if you could provide advice on how to deal with this. Thank you!
@HannahHaensen For custom images, if the depth map contains a lot of zeros, there is a chance of nan as well. You can consider masking out the zero-depth regions.
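A minimal sketch of that masking, assuming a per-pixel depth supervision term (the names pred_depth and gt_depth are hypothetical, not SinNeRF's actual variables):

import torch

def masked_depth_loss(pred_depth, gt_depth, eps=1e-6):
    # L1 depth loss computed only where the ground-truth depth is valid (> 0),
    # so zero-depth (missing) pixels cannot drive the loss to nan.
    valid = gt_depth > eps
    if valid.sum() == 0:
        return pred_depth.new_zeros(())  # all pixels invalid: contribute no loss
    return torch.abs(pred_depth[valid] - gt_depth[valid]).mean()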
@HannahHaensen Hi, I set --patch_size 56 --sW 8 --sH 8 and left the other settings unchanged. Since then, I have not continued to study this paper in depth, so I can't offer much help.