How to reconstruct a 3D scene from just one RGB image with a pretrained model?
Hello @themasterlink, thanks for your great work on Indoor SingleViewReconstruction. Sorry, I'm new to this deep-learning task. I'm confused about how to reconstruct a 3D scene from just one RGB image with a pre-trained model. The input image may look like the following:
After reading the READMEs of all the subfolders and the discussion in #3, I am still confused about how to go from a JPG to a reconstruction. I tried to write a SingleJPGReconstruction.py with the following steps:
- Generate normal_img from test.jpg with UNetNormalGen: I am blocked on the h5py input and settings_file.yml. The inputs to generate_encoded_outputs.py are an h5py file and a settings_file.yml, which seem to be generated by BlenderProc. I have no idea how to produce this input from a single image without BlenderProc. I tried the command
python generate_predicted_normals.py --model_path model/model.ckpt --path ../data
in SingleViewReconstruction/SingleViewReconstruction.
- Combine normal_image and color_image into one hdf5 file. #3 says the normal_mean_image and color_mean_image should be subtracted from the data. The color_normal_mean.hdf5 is used here:
SingleViewReconstruction/SingleViewReconstruction/generate_tf_records.py, line 123 (f1475ca): normal_o -= normal_mean_img
and here:
SingleViewReconstruction/SingleViewReconstruction/src/DataSetLoader.py, line 95 (f1475ca): color_img -= self.mean_img
- Try predict_datapoint.py. I ran
python predict_datapoint.py data/color_normal_mean.hdf5 --output OUTPUT --use_pretrained_weights
but it failed because there was no train*.tf_record.
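For step 1, this is how I imagine the h5py input could be built from a single photo without BlenderProc; it is only a sketch, and the dataset key "colors" and the 512x512 resolution are my guesses, not taken from the repo (the real layout should be in settings_file.yml):

```python
# Hedged sketch: pack a single RGB photo into a minimal .hdf5 file, standing in
# for the BlenderProc output that generate_predicted_normals.py expects.
# The dataset key "colors" and the 512x512 size are assumptions, not taken
# from the repo -- check settings_file.yml for the real values.
import h5py
import numpy as np
from PIL import Image

def jpg_to_hdf5(jpg_path: str, h5_path: str, size: int = 512) -> None:
    # load the photo, force 3-channel RGB, and resize to the assumed network input
    img = Image.open(jpg_path).convert("RGB").resize((size, size))
    with h5py.File(h5_path, "w") as f:
        f.create_dataset("colors", data=np.asarray(img, dtype=np.uint8))

# demo on a dummy image (replace with the real test.jpg)
Image.fromarray(np.zeros((32, 32, 3), dtype=np.uint8)).save("test.jpg")
jpg_to_hdf5("test.jpg", "test.hdf5", size=512)
```

If this layout is wrong, listing the keys of a real BlenderProc output file with h5py would show what else is required.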
This plan, of course, didn't work. Could you give me some guidance on how to make it work? Thank you sincerely.
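Edit: for completeness, here is how I currently understand the mean subtraction in step 2, as a sketch. The dataset keys "color" and "normal" inside color_normal_mean.hdf5 are my assumptions; the real key names would need to be checked by listing the file's contents:

```python
# Hedged sketch of the mean subtraction used in generate_tf_records.py
# (line 123) and DataSetLoader.py (line 95). The keys "color" and "normal"
# inside color_normal_mean.hdf5 are assumptions -- inspect the file to confirm.
import h5py
import numpy as np

def subtract_means(color_img: np.ndarray, normal_img: np.ndarray,
                   mean_path: str = "color_normal_mean.hdf5"):
    with h5py.File(mean_path, "r") as f:
        color_mean = np.array(f["color"])    # assumed key name
        normal_mean = np.array(f["normal"])  # assumed key name
    # same operation as the two referenced lines: img -= mean_img
    return (color_img.astype(np.float32) - color_mean,
            normal_img.astype(np.float32) - normal_mean)
```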
Issue Analytics
- State:
- Created 3 years ago
- Comments: 6 (2 by maintainers)
Top GitHub Comments
great job
Thank you very much.