On newer versions of openmmtools, the GPU cannot be used with mixed precision because the mixed-precision availability check always fails
This function always fails due to this part:
if platform.getName() in ['CUDA', 'OpenCL']:
    from simtk import openmm
    properties = {'Precision': precision}
    system = openmm.System()
    integrator = openmm.VerletIntegrator(0.001)
    try:
        context = openmm.Context(system, integrator, properties)
        del context, integrator
        return True
    except Exception as e:
        return False
It always raises:
TypeError: Wrong number or type of arguments for overloaded function 'new_Context'.
Possible C/C++ prototypes are:
OpenMM::Context::Context(OpenMM::System const &,OpenMM::Integrator &)
OpenMM::Context::Context(OpenMM::System const &,OpenMM::Integrator &,OpenMM::Platform &)
OpenMM::Context::Context(OpenMM::System const &,OpenMM::Integrator &,OpenMM::Platform &,std::map< std::string,std::string,std::less< std::string >,std::allocator< std::pair< std::string const,std::string > > > const &)
OpenMM::Context::Context(OpenMM::Context const &)
Try it:
precision = "mixed"
from simtk import openmm
properties = { 'Precision' : precision }
system = openmm.System()
integrator = openmm.VerletIntegrator(0.001)
context = openmm.Context(system, integrator, properties)
Thus, when configuring the platform, any check of whether the CUDA or OpenCL platform supports mixed precision will always fail: the properties map is passed as the third positional argument, where the overloaded constructor expects a Platform object.
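A working version of the check has to pass the Platform object as the third argument, with the properties map fourth. The sketch below illustrates that call order; the function name `platform_supports_mixed_precision` is mine (not the openmmtools API), and it returns None when OpenMM is not installed so the example stays runnable anywhere:

```python
def platform_supports_mixed_precision(platform_name, precision="mixed"):
    """Return True/False if the named platform accepts the given precision,
    or None when OpenMM is not importable. A sketch, not the upstream fix."""
    try:
        from simtk import openmm
    except ImportError:
        return None
    if platform_name not in ("CUDA", "OpenCL"):
        # Only CUDA and OpenCL take a 'Precision' property.
        return True
    try:
        platform = openmm.Platform.getPlatformByName(platform_name)
    except Exception:
        # Platform not compiled in or no suitable device available.
        return False
    properties = {"Precision": precision}
    system = openmm.System()
    integrator = openmm.VerletIntegrator(0.001)
    try:
        # Platform is the third argument; the properties map comes fourth.
        context = openmm.Context(system, integrator, platform, properties)
        del context, integrator
        return True
    except Exception:
        return False
```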
Issue Analytics
- State:
- Created 2 years ago
- Comments: 6 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Thanks for the fantastic bug report and minimal example to reproduce!
The syntax for line 551 should be changed from

context = openmm.Context(system, integrator, properties)

to

context = openmm.Context(system, integrator, platform, properties)
I’ll open a PR.
Thanks @jchodera and @peastman for the very quick fix indeed!