50% performance drop going from 0.8 to 0.9 with the VVVR integrator on GPUs
See original GitHub issue

I recently updated my openmmtools installation from 0.8.3. Afterwards I saw a large (roughly 50%) performance drop and narrowed it down to the upgrade from 0.8.3 to 0.9.0. When I run the following script on my machine, the average speed is 106 ns/day on 0.8.3 but only 48.3 ns/day on 0.9.0, and upgrading to 0.10 does not fix it. The only difference between the two runs is the openmmtools version, which I switched via conda. Switching from the CUDA platform to OpenCL also does not fix the problem.
from sys import stdout

from simtk.openmm import app
from simtk import unit
import simtk.openmm as mm
import openmmtools as omt

# DHFR in explicit solvent with PME and constrained bonds to hydrogen.
system = omt.testsystems.DHFRExplicit(
    nonbondedMethod=app.PME,
    nonbondedCutoff=1.1*unit.nanometers,
    constraints=app.HBonds,
    rigidWater=True,
    ewaldErrorTolerance=0.0005)

# VVVR (velocity Verlet with velocity randomization) Langevin integrator.
integrator = omt.integrators.VVVRIntegrator(
    298*unit.kelvin,
    1.0/unit.picoseconds,
    2.0*unit.femtoseconds)
integrator.setConstraintTolerance(0.00001)

# Run at constant pressure via a Monte Carlo barostat.
system.system.addForce(
    mm.MonteCarloBarostat(
        1*unit.atmospheres,
        298*unit.kelvin,
        25))

platform = mm.Platform.getPlatformByName('CUDA')
properties = {'CudaPrecision': 'mixed', 'CudaDeviceIndex': '0'}
simulation = app.Simulation(system.topology, system.system, integrator,
                            platform, properties)
simulation.context.setPositions(system.positions)
simulation.context.setVelocitiesToTemperature(298*unit.kelvin)
simulation.reporters.append(app.StateDataReporter(
    stdout, 5000, step=True, potentialEnergy=True, temperature=True,
    progress=True, remainingTime=True, speed=True, totalSteps=2500000,
    separator='\t'))

print 'Running Production...'
simulation.step(2500000)
print 'Done!'
This is all running on Python 2.7 with OpenMM 7.1.1 on an NVIDIA GeForce GTX TITAN X.
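As a control (not part of the original report), a minimal sketch to check whether the slowdown is specific to openmmtools' VVVRIntegrator rather than the rest of the setup: swap in OpenMM's built-in Langevin integrator with the same parameters and compare ns/day under both openmmtools versions.

# Hypothetical control run: OpenMM's built-in Langevin integrator.
# If this runs at full speed under both openmmtools versions, the
# regression is isolated to the openmmtools integrator itself.
integrator = mm.LangevinIntegrator(
    298*unit.kelvin, 1.0/unit.picoseconds, 2.0*unit.femtoseconds)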
Great! I wonder if there may be further optimizations @peastman could make in OpenMM that would speed up heat measurement. Will post in the OpenMM issue tracker after we get a simple test case constructed. We should definitely disable measure_heat=True as the default option, though.

Glad to hear! The redundant velocity-constraint solves shouldn't affect the statistics of the paths sampled; they are there to make the heat/shadow_work bookkeeping more convenient (by ensuring that, at the end of every substep, we are still on the constraint manifold). @jchodera: Perhaps measure_heat=False should be the default? And when no bookkeeping is done, we can remove the redundant velocity-constraint solves?
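For reference, a minimal sketch of what the proposed change looks like from user code, assuming (this is an assumption about the openmmtools API of that era, not confirmed in the thread) that VVVRIntegrator forwards the measure_heat keyword to the base LangevinIntegrator and that accumulated heat is stored in a bookkeeping global named 'heat':

# Sketch: disable heat bookkeeping, which should skip the redundant
# velocity-constraint solves discussed above (assumes measure_heat is
# forwarded to the base LangevinIntegrator).
integrator = omt.integrators.VVVRIntegrator(
    298*unit.kelvin, 1.0/unit.picoseconds, 2.0*unit.femtoseconds,
    measure_heat=False)

# Sketch: if measure_heat=True were used instead, the accumulated heat
# could be read back through the standard CustomIntegrator API; the
# global variable name 'heat' is an assumption about openmmtools'
# internal bookkeeping.
# heat = integrator.getGlobalVariableByName('heat')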