Simulations with >100k particles.

The functionality of OpenMM will (eventually) include everything that one would need to run modern molecular simulations.
Spela Ivekovic
Posts: 26
Joined: Thu Mar 17, 2011 4:27 am

Re: Simulations with >100k particles.

Post by Spela Ivekovic » Tue Jul 09, 2013 10:19 am

Hi Peter,

I have run the Python script successfully and it reports 792450 atoms in its original setting (boxSize = 20). I tested both the CUDA platform and the OpenCL platform.

Here's the complete output:

Code:

Adding solvent
Building system
792450 atoms
Creating context
Computing
initial energy: -9582238.80569 kJ/mol
final energy: -12133964.7558 kJ/mol
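
For reference, here is a minimal sketch of the kind of Python script that produces output in this form. It's a hypothetical reconstruction, not the actual script from earlier in the thread: the force field, integrator settings, and the minimization step are all guesses.

Code:

# Hypothetical reconstruction (not the actual script): build a cubic box of
# TIP3P water, create a System, and report the energy before and after a
# local minimization. Force field, integrator, and minimizer are guesses.
from simtk.openmm.app import ForceField, Modeller, Topology, PME
from simtk.openmm import (Context, LangevinIntegrator, LocalEnergyMinimizer,
                          Platform, Vec3)
from simtk.unit import kelvin, picosecond, femtoseconds, nanometer

boxSize = 20  # box edge in nm; 20 gives roughly 790k atoms

print("Adding solvent")
forcefield = ForceField('tip3p.xml')
modeller = Modeller(Topology(), []*nanometer)  # start from an empty model
modeller.addSolvent(forcefield, boxSize=Vec3(boxSize, boxSize, boxSize)*nanometer)

print("Building system")
system = forcefield.createSystem(modeller.topology, nonbondedMethod=PME)
print(system.getNumParticles(), "atoms")

print("Creating context")
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 2*femtoseconds)
platform = Platform.getPlatformByName('CUDA')  # or 'OpenCL'
context = Context(system, integrator, platform)
context.setPositions(modeller.positions)

print("Computing")
print("initial energy:", context.getState(getEnergy=True).getPotentialEnergy())
LocalEnergyMinimizer.minimize(context)
print("final energy:", context.getState(getEnergy=True).getPotentialEnergy())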

792,450 is quite a bit more than I can get when I run the HelloWaterBox example directly as a C++ executable. In the C++ case, the simulation crashes if the box is increased beyond 45x45x45 water molecules (= 91,125 water molecules = 273,375 atoms).

Why would there be such a discrepancy?

Spela

Peter Eastman
Posts: 2573
Joined: Thu Aug 09, 2007 1:25 pm

Re: Simulations with >100k particles.

Post by Peter Eastman » Wed Jul 10, 2013 1:04 pm

Hi Spela,

It looks like HelloWaterBox needs to be cleaned up. The real problem is that it's creating water molecules that are very tightly packed together in a very high energy configuration, and the simulation is blowing up. Line 244 reads

Code:

const double WaterSizeInNm = 0.3107; // edge of cube containing one water, in nanometers

If I increase the spacing very slightly, to 0.3207, it has no problem simulating over a million atoms.
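
For context, the example lays the waters out on a uniform cubic grid, so this constant directly sets how tightly they are packed. A sketch of the idea in Python (HelloWaterBox itself is C++, and this helper is purely illustrative):

Code:

# Illustration only: one water per cell of an n x n x n grid, spaced by
# WaterSizeInNm. Smaller spacing means tighter packing and a higher-energy
# starting configuration.
WaterSizeInNm = 0.3207  # 0.3107 in the original example

def grid_positions(n):
    """Oxygen positions (in nm) for an n x n x n box of waters."""
    return [(i * WaterSizeInNm, j * WaterSizeInNm, k * WaterSizeInNm)
            for i in range(n)
            for j in range(n)
            for k in range(n)]

print(len(grid_positions(45)), "waters")  # 91125, i.e. 273,375 atoms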

Peter

Spela Ivekovic
Posts: 26
Joined: Thu Mar 17, 2011 4:27 am

Re: Simulations with >100k particles.

Post by Spela Ivekovic » Thu Jul 11, 2013 5:42 am

Hi Peter,

Peter Eastman wrote:
The real problem is that it's creating water molecules that are very tightly packed together in a very high energy configuration, and the simulation is blowing up.

Yes, this sorted it out. Phew :)

I guess the issue is numerical round-off. When I find the time, I'll see if I get the same problem with the double-precision version of OpenMM. I expect the problems we have been experiencing in our other simulations also have to do with the spacing of the atoms. We are using OpenMM to speed up a CPU-based MD library that runs in double precision, and I suspect that is where most of the issues arise, since we pass the initial simulation settings generated in that library to OpenMM. It's good to know where to look...
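
For reference, this is roughly how one selects precision per platform in the Python API (property names as in OpenMM 5.x; check your version's docs):

Code:

# Sketch assuming a System and Integrator already exist; 'CudaPrecision'
# ('OpenCLPrecision' for the OpenCL platform) takes 'single', 'mixed',
# or 'double'.
from simtk.openmm import Platform

platform = Platform.getPlatformByName('CUDA')
properties = {'CudaPrecision': 'double'}
# context = Context(system, integrator, platform, properties)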

BTW, just for the record, with the correction you suggested, I managed to run a 70x70x70 water box in the HelloWaterBox example on a 2GB Quadro 4000. Not bad! :)

Thank you for your help!

Spela

Peter Eastman
Posts: 2573
Joined: Thu Aug 09, 2007 1:25 pm

Re: Simulations with >100k particles.

Post by Peter Eastman » Thu Jul 11, 2013 10:41 am

Hi Spela,

I don't think numerical roundoff is the issue. The spacing used in the original version is actually an accurate value for liquid water, and would be entirely reasonable if it were starting from an equilibrated configuration. The problem is that it just arranges the water molecules in a uniform grid, which is nothing like liquid water and is an incredibly high energy configuration. If you could run it long enough, it would eventually equilibrate to something reasonable, but there's a really good chance that before that happens, some atom will get shoved right on top of another atom and the whole simulation will blow up. And the bigger your box is, the more likely that is to happen (since it just has to happen once anywhere in the box).
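
To make that concrete, 0.3107 nm is just the cube edge you get for one water molecule at liquid density (a quick back-of-the-envelope check with standard constants):

Code:

# Cube edge of the volume occupied by one water molecule at liquid density.
M = 18.015        # g/mol, molar mass of water
rho = 0.997e-21   # g/nm^3 (i.e. 0.997 g/cm^3)
NA = 6.022e23     # 1/mol, Avogadro's number
volume_per_water = M / (rho * NA)       # ~0.0300 nm^3
print(volume_per_water ** (1.0 / 3.0))  # ~0.3107 nm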

Presumably if you decreased the time step and increased the friction coefficient of the thermostat, that would also solve the problem. I haven't tested it though.
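
Something along these lines in the Python API, for anyone who wants to try it (the particular values here are guesses, per the caveat above):

Code:

# Untested: a smaller step and a larger friction coefficient couple the
# system more strongly to the heat bath, which should damp the initial
# blow-up. 300 K, 10/ps, and 0.5 fs are illustrative values only.
from simtk.openmm import LangevinIntegrator
from simtk.unit import kelvin, picosecond, femtoseconds

integrator = LangevinIntegrator(300*kelvin, 10/picosecond, 0.5*femtoseconds)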

Peter

Maxim Imakaev
Posts: 87
Joined: Sun Oct 24, 2010 2:03 pm

Re: Simulations with >100k particles.

Post by Maxim Imakaev » Thu Aug 01, 2013 3:22 am

Hi Peter,

I just wanted to ask whether you've fixed the "long chain" type of error for large numbers of particles (probably a stack overflow in the connected-component search)?
We want to run some very long polymer simulations around the end of the summer.
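
For reference, the generic way to avoid that kind of overflow is to make the traversal iterative with an explicit stack; a sketch of the idea (not OpenMM's actual code):

Code:

# Illustration only: find molecules as connected components of the bond
# graph using an explicit stack, so a very long chain can't overflow the
# call stack the way a recursive search can.
def connected_components(num_atoms, bonds):
    neighbors = [[] for _ in range(num_atoms)]
    for a, b in bonds:
        neighbors[a].append(b)
        neighbors[b].append(a)
    component = [-1] * num_atoms
    count = 0
    for start in range(num_atoms):
        if component[start] != -1:
            continue
        component[start] = count
        stack = [start]
        while stack:
            for n in neighbors[stack.pop()]:
                if component[n] == -1:
                    component[n] = count
                    stack.append(n)
        count += 1
    return component

# A 500,000-atom linear chain is one component, but deep enough to crash
# a recursive traversal.
chain = [(i, i + 1) for i in range(499999)]
print(max(connected_components(500000, chain)) + 1)  # 1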

Thanks,
Max

Peter Eastman
Posts: 2573
Joined: Thu Aug 09, 2007 1:25 pm

Re: Simulations with >100k particles.

Post by Peter Eastman » Thu Aug 01, 2013 10:00 am

No, I haven't. Thanks for reminding me. I'll see if I can do that in the next few days.

Peter

Peter Eastman
Posts: 2573
Joined: Thu Aug 09, 2007 1:25 pm

Re: Simulations with >100k particles.

Post by Peter Eastman » Fri Aug 02, 2013 11:54 am

I've just merged the fix into the main repository.

Peter

Maxim Imakaev
Posts: 87
Joined: Sun Oct 24, 2010 2:03 pm

Re: Simulations with >100k particles.

Post by Maxim Imakaev » Mon Aug 05, 2013 4:00 am

Thank you, Peter!
It works now with a million particles, which is more than enough for us.

Out of curiosity, I pushed it further and found that CUDA breaks down between 1.25 and 1.5 million particles due to a segfault in nonbondedForce.
OpenCL seems to run up to 3M, but the simulation explodes beyond 2M; that may be because of my setup, so I'll experiment with it if I need to.

Thanks,
Max
