Error when preprocessing data
See the original GitHub issue.
I followed the instructions on how to set up the environment, and when I ran the preprocessing script I got many lines containing the following two errors:
OpenGL Error 500: GL_INVALID_ENUM: An unacceptable value is specified for an enumerated argument.
In: /usr/local/include/pangolin/gl/gl.hpp, line 205
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Unfortunately, nothing is generated in the output folder. I am using the latest versions of all the dependencies, and I am running the script on a cloud VM in headless mode. What could be the problem?
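Since Pangolin needs a working OpenGL context and the VM is headless, a likely cause is that context creation fails, after which every draw call reports GL_INVALID_ENUM and the std::bad_alloc follows from the failed setup. Below is a minimal sketch of a workaround, assuming xvfb-run is installed and that preprocess_data.py is the preprocessing entry point; the script name and its arguments are placeholders, not the exact DeepSDF invocation.

import os
import subprocess

preprocessing_args = ["--data_dir", "data"]  # hypothetical arguments; substitute your real ones
cmd = ["python", "preprocess_data.py"] + preprocessing_args

if not os.environ.get("DISPLAY"):
    # No X server on the headless VM: run under a virtual framebuffer so
    # Pangolin can create an OpenGL context.
    cmd = ["xvfb-run", "-a", "-s", "-screen 0 1024x768x24"] + cmd

subprocess.run(cmd, check=True)

Newer Pangolin versions also offer a headless EGL window backend, which avoids needing an X server entirely, but whether it is available depends on how Pangolin was built.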
Issue Analytics
- State:
- Created 4 years ago
- Comments: 22 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I have also run into the same problem:
OpenGL Error 500: GL_INVALID_ENUM: An unacceptable value is specified for an enumerated argument. In: /usr/local/include/pangolin/gl/gl.hpp, line 205
However, I was still able to get all of the processed npz files. But when I tried to train the network, it failed with the error below:
Traceback (most recent call last):
  File "/home/cuili/DeepSDF/train_deep_sdf.py", line 591, in <module>
    main_function(args.experiment_directory, args.continue_from, int(args.batch_split))
  File "/home/cuili/DeepSDF/train_deep_sdf.py", line 511, in main_function
    chunk_loss.backward()
  File "/home/cuili/.local/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/cuili/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: leaf variable has been moved into the graph interior
How can I fix it?
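This RuntimeError usually means a tensor that autograd treated as a leaf (for example a latent code created with requires_grad=True) was later pulled into the interior of the graph, typically through an in-place write of values that themselves require grad. A minimal sketch of the usual remedy, assuming the per-scene latent codes are what is involved here (the names below are illustrative, not the actual DeepSDF code): keep the optimizable codes as true leaf parameters, e.g. in an nn.Embedding, and build the per-batch tensor with an out-of-place lookup.

import torch

latent_size, num_scenes = 256, 100

# Keep optimizable codes as real leaf parameters instead of writing graph
# results into a pre-allocated tensor in place.
lat_vecs = torch.nn.Embedding(num_scenes, latent_size)
torch.nn.init.normal_(lat_vecs.weight, mean=0.0, std=0.01)

optimizer = torch.optim.Adam(lat_vecs.parameters(), lr=1e-3)

indices = torch.tensor([3, 17, 42])   # hypothetical scene indices for one batch
batch_vecs = lat_vecs(indices)        # out-of-place lookup keeps the leaves intact

# Stand-in for the SDF loss the decoder would produce in the real training loop.
loss = batch_vecs.pow(2).sum()
loss.backward()                       # backward succeeds; no graph-interior error
optimizer.step()

If the codes must stay as a plain tensor, assembling the batch with torch.cat or torch.index_select instead of slice assignment likewise avoids moving the leaves into the graph; it is also worth checking that your PyTorch version matches the one the repository expects, since this error tends to be version-dependent.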
I have this same error.
The npz files are still produced, but the preprocessing runs unacceptably slowly, roughly 2 minutes per mesh per thread.