No such file or directory: u'data/SdfSamples/ShapeNetV2/04256520/...npz'
When I run the data pre-processing code:
$ python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip
It generates the following log:
...
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c955e564c9a73650f78bdf37d618e97e/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c955e564c9a73650f78bdf37d618e97e.npz
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c97af2aa2f9f02be9ecd5a75a29f0715/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c97af2aa2f9f02be9ecd5a75a29f0715.npz
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c9c0132c09ca16e8599dcc439b161a52/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c9c0132c09ca16e8599dcc439b161a52.npz
...
It seems that the data are generated and written to data/SdfSamples/ShapeNetV2/04256520/<model_name>.npz
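A quick way to verify that the .npz files were actually written (my own sketch, using the output path shown in the log above):

import glob
import os

# Output directory for the sofa class, as reported by preprocess_data.py above.
out_dir = os.path.join("data", "SdfSamples", "ShapeNetV2", "04256520")
npz_files = glob.glob(os.path.join(out_dir, "*.npz"))
print("{} .npz files found in {}".format(len(npz_files), out_dir))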
However, when I run the training code:
$ python train_deep_sdf.py -e examples/sofas
It complains that no data were found:
...
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cba1446e98640f603ffc853fc4b95a17.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cbccbd019a3029c661bfbba8a5defb02.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cbd547bfb6b7d8e54b50faf1a96496ef.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc20bb3596fd3c2e677ea8589de8c796.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc4a8ecc0f3b4ca1dc0efee4b442070.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc4f3aff596b544e599dcc439b161a52.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc5f1f064a1ba342cbdb36da0ec8fda6.npz'
DeepSdf - INFO - There are 1628 scenes
DeepSdf - INFO - starting from epoch 1
DeepSdf - INFO - epoch 1...
Traceback (most recent call last):
File "train_deep_sdf.py", line 558, in <module>
main_function(args.experiment_directory, args.continue_from, int(args.batch_split))
File "train_deep_sdf.py", line 436, in main_function
for sdf_data, indices in sdf_loader:
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 582, in __next__
return self._process_next_batch(batch)
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
IOError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "~/project/deepSDF/deep_sdf/data.py", line 151, in __getitem__
return unpack_sdf_samples(filename, self.subsample), idx
File "~/project/deepSDF/deep_sdf/data.py", line 67, in unpack_sdf_samples
npz = np.load(filename)
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 422, in load
fid = open(os_fspath(file), "rb")
IOError: [Errno 2] No such file or directory: u'data/SdfSamples/ShapeNetV2/04256520/949054060a3db173d9d07e89322d9cab.npz'
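From the warnings and the traceback, training apparently expects one data/SdfSamples/ShapeNetV2/04256520/<instance>.npz file per entry in the split. Here is a small sketch to list which split entries have no .npz on disk (assuming the {dataset: {class_id: [instance_id, ...]}} layout used by the example split files):

import json
import os

split_path = "examples/splits/sv2_sofas_train.json"  # same split passed to preprocessing
data_dir = "data"

with open(split_path) as f:
    split = json.load(f)

missing = []
for dataset, classes in split.items():
    for class_id, instances in classes.items():
        for instance in instances:
            npz = os.path.join(data_dir, "SdfSamples", dataset, class_id, instance + ".npz")
            if not os.path.isfile(npz):
                missing.append(npz)

print("{} missing .npz files".format(len(missing)))
for path in missing[:10]:
    print(path)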
When I check the source folder, the model file is there:
$ ls ~/<...>/ShapeNetCore.v2/02691156/ff12c3a1d388b03044eedf822e07b7e4/models/
total 5.3M
-rw-rw-r-- 1 217 Jul 11 2016 model_normalized.json
-rw-rw-r-- 1 1.3K Jul 11 2016 model_normalized.mtl
-rw-rw-r-- 1 5.2M Jul 11 2016 model_normalized.obj
-rw-rw-r-- 1 24K Jul 12 2016 model_normalized.solid.binvox
-rw-rw-r-- 1 25K Jul 12 2016 model_normalized.surface.binvox
However, when I checked the output folder, I found that it is empty:
$ ls data/SdfSamples/ShapeNetV2/04256520
total 0
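One way to narrow this down could be to re-run the preprocessing on a single instance and watch for errors from the underlying SDF-sampling step. A sketch (the debug split file name is my own choice; the instance id is taken from the preprocessing log above, and the JSON layout is assumed to match the example splits):

import json

single = {"ShapeNetV2": {"04256520": ["c955e564c9a73650f78bdf37d618e97e"]}}
with open("examples/splits/debug_single.json", "w") as f:
    json.dump(single, f)

Then re-run the same command as before with the one-instance split:
$ python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/debug_single.json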
Does anyone know what is causing this?
Thanks for your help!
Top GitHub Comments
@tschmidt23, hi, I've tried the tips above. I compiled the latest Pangolin but still have some problems in this and other issues.
I'd appreciate it if you could provide some help, thank you.
@B1ueber2y – Pangolin does not support X11 forwarding, so that definitely could have been the issue.
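If the preprocessing has to run on a machine without a display (the preprocessing step renders meshes through Pangolin, which is why the X11 point above matters), one workaround that is not from this thread is to run it under a virtual framebuffer, assuming Xvfb is installed and software OpenGL is sufficient:
$ xvfb-run -a python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json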