segmentation fault in OpenMM 5.0

Fabian Paul
Posts: 6
Joined: Wed Apr 11, 2012 10:41 am

segmentation fault in OpenMM 5.0

Post by Fabian Paul » Tue Feb 26, 2013 8:32 am

Dear Peter,

I found something that looks like a bug in OpenMM 5.0. It was produced on Intel 64-bit systems with gcc 4.6.3 or 4.3.4, using either a GeForce GTX 580 or a Tesla S2050 GPU with the CUDA platform.

In the first example, I get a segmentation fault when I try to minimize the energy after
creating two instances of the Simulation class. The error was reproduced with two completely
different PDB files and should not depend on the atom positions.

Code:

from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *

def sim(i):
    # Build a solvated alanine system and return a Simulation on CUDA device i
    collision_rate = 1.0 / picoseconds
    timestep = 2.0 * femtoseconds
    forcefield = ForceField('amber99sb.xml', 'tip3p.xml')
    pdb = PDBFile('Alanine_solvated.pdb')
    system = forcefield.createSystem(pdb.topology, nonbondedCutoff=1*nanometer, nonbondedMethod=PME)
    integrator = LangevinIntegrator(300.0*kelvin, collision_rate, timestep)
    platform = Platform.getPlatformByName('CUDA')
    properties = {'CudaDeviceIndex': str(i), 'CudaPrecision': 'mixed'}
    simulation = Simulation(pdb.topology, system, integrator, platform, properties)
    simulation.context.setPositions(pdb.positions)
    return simulation

sim0 = sim(0)
sim1 = sim(0)  # or sim(1)

sim0.minimizeEnergy()  # crashes here

In the second example, I get a segmentation fault after calling reinitialize and subsequently calling step.
This is just a minimal example. In my original project I call reinitialize after altering some
nonbonded force parameters. (I assume that this is necessary when I use the CUDA platform?)

Code:

from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *

collision_rate = 1.0 / picoseconds
timestep = 2.0 * femtoseconds
forcefield = ForceField('amber99sb.xml', 'tip3p.xml')
pdb = PDBFile('Alanine_solvated.pdb')
system = forcefield.createSystem(pdb.topology, nonbondedCutoff=1*nanometer, nonbondedMethod=PME)
integrator = LangevinIntegrator(300.0*kelvin, collision_rate, timestep)
platform = Platform.getPlatformByName('CUDA')
properties = {'CudaDeviceIndex': '0', 'CudaPrecision': 'mixed'}
simulation = Simulation(pdb.topology, system, integrator, platform, properties)
simulation.context.setPositions(pdb.positions)

simulation.context.reinitialize()
simulation.step(10)  # crashes here

Best wishes,
Fabian

Peter Eastman
Posts: 2593
Joined: Thu Aug 09, 2007 1:25 pm

Re: segmentation fault in OpenMM 5.0

Post by Peter Eastman » Tue Feb 26, 2013 3:26 pm

Thanks! I was able to reproduce both problems.

The first one is in the routine for applying constraints, and happens when you use two different Contexts from the same thread. It's easy to fix, and I'll make sure it gets fixed in our next patch.

The second one is actually trying to throw an exception, but something is going wrong in translating the C++ exception to a Python exception, leading to the segfault. I'm not sure why. Anyway, the actual error it's trying to alert you to is that you've never set the positions on your Context. Well, you do call setPositions(), but then you immediately call reinitialize(), which throws them out. Move the setPositions() call to after reinitialize() and it works correctly.
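
In other words, a minimal sketch of the fix, based on your second script (same files and settings as above):

Code:

from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *

forcefield = ForceField('amber99sb.xml', 'tip3p.xml')
pdb = PDBFile('Alanine_solvated.pdb')
system = forcefield.createSystem(pdb.topology, nonbondedCutoff=1*nanometer, nonbondedMethod=PME)
integrator = LangevinIntegrator(300.0*kelvin, 1.0/picoseconds, 2.0*femtoseconds)
platform = Platform.getPlatformByName('CUDA')
properties = {'CudaDeviceIndex': '0', 'CudaPrecision': 'mixed'}
simulation = Simulation(pdb.topology, system, integrator, platform, properties)

simulation.context.reinitialize()
# Set the positions after reinitialize(), since reinitialize() discards all state in the Context
simulation.context.setPositions(pdb.positions)
simulation.step(10)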

Peter

Fabian Paul
Posts: 6
Joined: Wed Apr 11, 2012 10:41 am

Re: segmentation fault in OpenMM 5.0

Post by Fabian Paul » Wed Feb 27, 2013 3:27 am

Thanks for the swift reply. Setting the atom positions again in the second example solved the problem.
Concerning the first problem: how do I get the patch once it's fixed? Will it be published on simtk?
Meanwhile I was able to find a workaround. The program does not crash if the minimizeEnergy call is moved into the sim function, so that each context gets energy-minimized immediately after it is created (sketched below).
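
Concretely, the workaround looks roughly like this (same setup as the first script, with the minimizeEnergy call moved inside sim):

Code:

from simtk.openmm.app import *
from simtk.openmm import *
from simtk.unit import *

def sim(i):
    forcefield = ForceField('amber99sb.xml', 'tip3p.xml')
    pdb = PDBFile('Alanine_solvated.pdb')
    system = forcefield.createSystem(pdb.topology, nonbondedCutoff=1*nanometer, nonbondedMethod=PME)
    integrator = LangevinIntegrator(300.0*kelvin, 1.0/picoseconds, 2.0*femtoseconds)
    platform = Platform.getPlatformByName('CUDA')
    properties = {'CudaDeviceIndex': str(i), 'CudaPrecision': 'mixed'}
    simulation = Simulation(pdb.topology, system, integrator, platform, properties)
    simulation.context.setPositions(pdb.positions)
    # Minimizing here, before the next Context is created, avoids the crash
    simulation.minimizeEnergy()
    return simulation

sim0 = sim(0)
sim1 = sim(0)  # or sim(1)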

Fabian

Peter Eastman
Posts: 2593
Joined: Thu Aug 09, 2007 1:25 pm

Re: segmentation fault in OpenMM 5.0

Post by Peter Eastman » Wed Feb 27, 2013 11:06 am

Hi Fabian,

We'll be releasing a 5.0.1 update hopefully within the next few days, and it will include this fix.

Peter
