"Error launching CUDA compiler: 256" on OpenMM context creation
Keep having this error on the cluster when running with nodes=2:ppn=4:gpus=4:shared. The exception is raised after the solvent phase has completed, during resuming of the complex phase. I'm looking into this.
Traceback (most recent call last):
  File "/cbio/jclab/home/andrrizzi/miniconda/bin/yank", line 9, in <module>
    load_entry_point('yank==0.9.0', 'console_scripts', 'yank')()
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/cli.py", line 105, in main
    dispatched = getattr(commands, command).dispatch(args)
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/commands/script.py", line 34, in dispatch
    yaml_builder.build_experiment()
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/yamlbuild.py", line 1203, in build_experiment
    self._run_experiment(combination, output_dir)
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/yamlbuild.py", line 1778, in _run_experiment
    yank.run()
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/yank.py", line 451, in run
    simulation.run(niterations_to_run=niterations_to_run)
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/repex.py", line 827, in run
    self._initialize_resume()
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/repex.py", line 1011, in _initialize_resume
    self.platform = self._determine_fastest_platform(representative_system)
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/yank-0.9.0-py2.7-linux-x86_64.egg/yank/repex.py", line 901, in _determine_fastest_platform
    context = openmm.Context(system, integrator)
  File "/cbio/jclab/home/andrrizzi/miniconda/lib/python2.7/site-packages/simtk/openmm/openmm.py", line 15103, in __init__
    this = _openmm.new_Context(*args)
Exception: Error launching CUDA compiler: 256
<built-in>:0:0: fatal error: when writing output to : Bad file descriptor
compilation terminated.
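The traceback shows the failure happens inside plain openmm.Context creation during platform auto-detection, before any YANK-specific work. A minimal sketch (my own, not from the issue) can reproduce that step in isolation: build a trivial Context on every registered platform. The CUDA platform triggers a runtime nvcc compilation, so this separates "Error launching CUDA compiler" from the rest of the YANK stack. The import fallback below is an assumption to cover both the old simtk layout (as in the traceback) and newer OpenMM packaging.

```python
# Hedged reproduction sketch: try to create an openmm.Context on each
# available platform, mirroring repex.py's _determine_fastest_platform.
try:
    from simtk import openmm  # OpenMM <= 7.5 layout, as in the traceback
except ImportError:
    try:
        import openmm  # newer OpenMM >= 7.6 layout (assumed fallback)
    except ImportError:
        openmm = None


def probe_platforms():
    """Create a trivial Context on every registered OpenMM platform."""
    if openmm is None:
        print("OpenMM is not installed; nothing to probe")
        return
    system = openmm.System()
    system.addParticle(1.0)  # one dummy particle is enough for a Context
    for i in range(openmm.Platform.getNumPlatforms()):
        platform = openmm.Platform.getPlatform(i)
        # An integrator can only be bound to one Context, so make a fresh one.
        integrator = openmm.VerletIntegrator(0.001)
        try:
            openmm.Context(system, integrator, platform)
            print("%s: OK" % platform.getName())
        except Exception as exc:
            print("%s: FAILED (%s)" % (platform.getName(), exc))


if __name__ == "__main__":
    probe_platforms()
```

If only the CUDA line fails here while Reference/CPU succeed, the problem is in the nvcc launch environment (the "Bad file descriptor" suggests the compiler's output stream was closed underneath it), not in the simulation setup.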
Issue Analytics
- State:
- Created 8 years ago
- Comments: 19 (19 by maintainers)
Top Results From Across the Web
OpenMM bug in NVIDIA CUDA part - Folding Forum
Hi all, for a few days, I am encountering a problem with running Folding@Home on Linux (OpenSUSE Leap 15.2). My system consists of...
6.1. Data types used by CUDA driver
This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration...
8. Compiling OpenMM from Source Code
Compiling OpenMM from Source Code¶. This chapter describes the procedure for building and installing OpenMM from source code. In most cases, it is...
Package List — Spack 0.20.0.dev0 documentation
Versions: develop; Build Dependencies: cuda, cmake, ninja, fftw, parallel-netcdf ... The AOCC compiler system offers a high level of advanced optimizations, ...
Top GitHub Comments
Some additional testing notes:
This might actually be an MPI version problem. We are using mpich2-1.4.1p1 (not sure which package installs it). However, mpich2 got to version 1.5-ish, and then in November 2012 the versioning changed back to just mpich, starting at 3.0. There is now a 3.2 on conda-forge, and it appears not to have this problem on the simple test. I will test with YANK itself first to see if that really is the problem.

This should be fixed as of #686; we can re-open if it crops up again.
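To check which MPI is actually on the path before swapping it out, a quick sketch like the following may help (the conda-forge package name comes from the comment above; the version pin is an assumption, adjust for your cluster):

```shell
# Report the MPI implementation and version currently on PATH.
# mpiexec --version works for both MPICH-family and Open MPI builds.
mpiexec --version 2>/dev/null | head -n 1 || echo "no mpiexec on PATH"

# If this reports the old mpich2 1.4.x, the mpich 3.x packages on
# conda-forge were reported not to show this problem:
# conda install -c conda-forge "mpich>=3.2"
```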