Customized Run is not working.
Great paper, and thank you for making the source code public. The quick-start code works for me in a conda environment, but it would be nice if the code could also support a plain Python virtual environment.
I am running into an error when trying a customized run on a random internet video. The video is 1920x1080, and I don't provide the camera model. The command I used is:
python main.py --video_file ./data/videos/1943413.mp4 --path ./results/pexels_video1 --make_video --model_type "midas2"
The error I got is:
Traceback (most recent call last):
  File "main.py", line 13, in <module>
    dp.process(params)
  File "/home/owen/Dev/consistent_depth/process.py", line 117, in process
    return self.pipeline(params)
  File "/home/owen/Dev/consistent_depth/process.py", line 88, in pipeline
    ft.fine_tune(writer=self.writer)
  File "/home/owen/Dev/consistent_depth/depth_fine_tuning.py", line 257, in fine_tune
    validate(0, 0)
  File "/home/owen/Dev/consistent_depth/depth_fine_tuning.py", line 248, in validate
    criterion, val_data_loader, suffix(epoch, niters)
  File "/home/owen/Dev/consistent_depth/depth_fine_tuning.py", line 323, in eval_and_save
    for _, data in zip(range(N), data_loader):
  File "/home/owen/anaconda3/envs/consistent_depth/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/owen/anaconda3/envs/consistent_depth/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/owen/anaconda3/envs/consistent_depth/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/owen/anaconda3/envs/consistent_depth/lib/python3.6/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.

Original Traceback (most recent call last):
  File "/home/owen/anaconda3/envs/consistent_depth/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/owen/anaconda3/envs/consistent_depth/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/owen/anaconda3/envs/consistent_depth/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/owen/Dev/consistent_depth/loaders/video_dataset.py", line 172, in __getitem__
    intrinsics = torch.stack([self.intrinsics[k] for k in pair], dim=0)
  File "/home/owen/Dev/consistent_depth/loaders/video_dataset.py", line 172, in <listcomp>
    intrinsics = torch.stack([self.intrinsics[k] for k in pair], dim=0)
IndexError: index 64 is out of bounds for dimension 0 with size 13
I suspect that the camera parameters derived from COLMAP are not compatible with the dataloader.
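A minimal sketch of what the traceback suggests is happening (this is a hypothetical simplification of the indexing in `loaders/video_dataset.py`, using plain lists instead of tensors): intrinsics appear to exist only for the frames COLMAP registered (13 here), while the frame pairs still index into the full range of extracted frames.

```python
# Hypothetical simplification of video_dataset.py::__getitem__.
# Assumed numbers: 13 registered frames, a pair referencing frame 64.
num_registered = 13
intrinsics = [[0.0] * 4 for _ in range(num_registered)]  # one entry per REGISTERED frame

pair = (64, 65)  # pair indices span ALL extracted frames
try:
    stacked = [intrinsics[k] for k in pair]
except IndexError:
    print("pair index exceeds number of registered intrinsics")
```

If this reading is right, the fix is on the reconstruction side (get COLMAP to register all frames), not in the dataloader itself.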
Issue Analytics
- Created: 3 years ago
- Reactions: 1
- Comments: 14
Top GitHub Comments
Hi, a missing cameras.bin also implies that COLMAP failed to create a sparse model for your scene. It could be that the sequence is too challenging: not enough baseline between the cameras, the scene is too dynamic, or not enough features can be detected in your sequence. I'd encourage you to download the COLMAP GUI and inspect your COLMAP reconstruction. It would be much easier for COLMAP to register the cameras if the camera intrinsics were pre-calibrated.
Hi, it seems that COLMAP failed to register camera poses on your video; only 15 frames were successfully registered. It could be that the sequence is too challenging: not enough baseline between the cameras, the scene is too dynamic, or not enough features can be detected in your sequence. I'd encourage you to download the COLMAP GUI and inspect your COLMAP reconstruction.
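If you'd rather not open the GUI, one quick way to count how many frames COLMAP registered is to parse a text-exported sparse model. This is a sketch under assumptions: your model has been converted to text format (e.g. with COLMAP's model converter), and the path to `images.txt` is hypothetical. In COLMAP's text format, each registered image occupies two non-comment lines (a pose line and a 2D-points line).

```python
def count_registered_images(images_txt_path):
    """Count registered images in a COLMAP text-format images.txt.

    Each registered image takes two non-comment lines:
    the pose line and the 2D-points line.
    """
    with open(images_txt_path) as f:
        lines = [ln for ln in f if ln.strip() and not ln.startswith("#")]
    return len(lines) // 2


# Hypothetical path to a text-exported sparse model:
# print(count_registered_images("results/pexels_video1/colmap_dense/sparse/images.txt"))
```

If the count is far below the number of extracted frames, the reconstruction failed for most of the sequence, which matches the IndexError above.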