Dear users,
First of all, I'm amazed by the rigorous design of OpenMM!
My question is as follows: I have a relatively large system (~200,000 atoms) which I'm currently simulating with NAMD and VMD.
We're running NAMD on a supercomputing cluster with > 100,000 CPU cores!
The appealing C++ API of OpenMM seems perfectly suited for us, because we need to integrate MD code into our simulation software for PDEs, which does not seem as easy to do with NAMD.
My question is: since NAMD is parallelized across > 100,000 CPU cores with MPI (mpirun), can I somehow achieve an MPI-parallel version with OpenMM? I'm not sure how to do that exactly; maybe you can point me to some resources.
All the best,
Stephan
OpenMM and MPI
- Yutong Zhao
- Posts: 6
- Joined: Wed Apr 18, 2012 11:40 am
Re: OpenMM and MPI
Hi Stephan,
OpenMM does not parallelize across multiple nodes in a cluster environment. It's designed for single node use with powerful GPUs and CPUs.
- Stephan Grein
- Posts: 4
- Joined: Fri Dec 13, 2013 9:13 am
Re: OpenMM and MPI
I see, thanks for your quick reply.
I was reading the paper "OpenMM: A Hardware-Independent Framework for Molecular Simulations" by Peter Eastman and Vijay Pande. One paragraph suggested to me that I could decompose my domain (molecule) with MPI:

> At the architecture's lowest level are the actual computational kernel implementations. These can be written in any language and can use any technology appropriate for the target hardware. For example, they might use a technology such as the Compute Unified Device Architecture (CUDA) or Open Computing Language (OpenCL) to implement GPU calculations, Posix threads (Pthreads) or Open Multi-Processing (OpenMP) to implement parallel CPU calculations, message-passing interface (MPI) to distribute work across a cluster's nodes, and so on.

So there is no possibility within OpenMM to do a domain decomposition across nodes? Knowing this, I'm thinking of doing the domain decomposition on my own and collecting results from the nodes in the cluster with MPI. Do you think this is too complicated?
I wanted to use OpenMM because it's clean and open, but if this is not feasible I may need to stick with another simulator, e.g. GROMACS? If I remember correctly, though, they are not as clean and open (especially with respect to the C++ API) as OpenMM.
All the best,
Stephan
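For reference, the platform layer that quoted paragraph describes is exposed directly in the API. A minimal sketch, assuming OpenMM's Python application layer with the 2013-era simtk namespace (the C++ API offers the equivalent OpenMM::Platform::getPlatformByName):

```python
# Enumerate the compute platforms (kernel back ends) compiled into this OpenMM build,
# then select one explicitly. Each Platform wraps one kernel technology
# (Reference, CPU, CUDA, OpenCL, ...); none of them spans multiple nodes.
from simtk import openmm

for i in range(openmm.Platform.getNumPlatforms()):
    print(openmm.Platform.getPlatform(i).getName())

# Raises an exception if the CUDA platform was not built or no GPU is available.
platform = openmm.Platform.getPlatformByName('CUDA')
```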
- Peter Eastman
- Posts: 2588
- Joined: Thu Aug 09, 2007 1:25 pm
Re: OpenMM and MPI
> Knowing this, I'm thinking of doing the domain decomposition on my own and collecting results from the nodes in the cluster with MPI. Do you think this is too complicated?

In principle an MPI-based implementation of the API could be written. We just haven't written one! So right now, OpenMM will only use a single node per simulation.
Can you give some information on what you want to do? There are many ways of breaking up a simulation, and what is or isn't practical really depends on the problem you're trying to solve. For example, in many cases you're just interested in sampling, and it doesn't matter whether that sampling comes from one long simulation or a lot of shorter simulations. But it's hard to say more without knowing a bit about your work.
Peter
- Stephan Grein
- Posts: 4
- Joined: Fri Dec 13, 2013 9:13 am
Re: OpenMM and MPI
Hey peastman,
thanks for your advice.
In principle I need to simulate a relatively large system of ~100k atoms (with explicit water - maybe I can reduce that) with different ion concentrations, i.e. I have a protein dimer, a water box around it, and ions (e.g. potassium/calcium) solvated in the water box at different concentrations.
Part 1)
I need to simulate 0.1 ms, or even better 1 ms, of protein dynamics, which I currently do in parallel with NAMD on a Linux CPU cluster.
(Is this feasible on a GPU card? We have some Tesla cards.)
Part 2)
Do the same as in Part 1) but with different starting ion concentrations, e.g. do 10 runs of Part 1) with 10 different potassium/calcium concentrations - this could be done in parallel on e.g. 10 GPU compute nodes? (See the sketch below.)
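To make Part 2 concrete, here is a rough sketch of pinning one independent run to one GPU, assuming the simtk.openmm Python layer and placeholder file and force field names; the device-selection property has been called "CudaDevice", "CudaDeviceIndex", or "DeviceIndex" depending on the OpenMM version, so check your release:

```python
# One process per GPU: each process is started with a different device index
# (and, in practice, a different pre-solvated input file per ion concentration).
import sys
from simtk.openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from simtk.openmm import LangevinIntegrator, Platform
from simtk import unit

gpu_index = sys.argv[1]                      # e.g. "0", "1", ..., one process per card

pdb = PDBFile('system.pdb')                  # placeholder: pre-built solvated box
forcefield = ForceField('amber99sb.xml', 'tip3p.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1*unit.nanometer, constraints=HBonds)
integrator = LangevinIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                0.002*unit.picoseconds)
platform = Platform.getPlatformByName('CUDA')
properties = {'CudaDeviceIndex': gpu_index}  # pin this process to one GPU

simulation = Simulation(pdb.topology, system, integrator, platform, properties)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(500000)                      # ~1 ns at a 2 fs time step
```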
Best,
Stephan
- Peter Eastman
- Posts: 2588
- Joined: Thu Aug 09, 2007 1:25 pm
Re: OpenMM and MPI
Hi Stephan,
Have you looked into MSMAccelerator? It sounds like it might be exactly what you want.
Peter
- Stephan Grein
- Posts: 4
- Joined: Fri Dec 13, 2013 9:13 am
Re: OpenMM and MPI
Hey peastman,
I will look into that. Basically I need to run e.g. 10 MD simulations with OpenMM in parallel, and afterwards collect the results - roughly as in the sketch below.
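A rough sketch of that workflow, assuming mpi4py and the simtk.openmm Python layer: each MPI rank runs one independent simulation and a per-rank result is gathered on rank 0. The file names, force field files, step count, and the choice of potential energy as the "result" are placeholders:

```python
# Run with e.g.: mpirun -np 10 python run_replicas.py
# Each rank loads its own pre-solvated input (one file per ion concentration),
# runs an independent simulation, and reports a summary value back to rank 0.
from mpi4py import MPI
from simtk.openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from simtk.openmm import LangevinIntegrator, Platform
from simtk import unit

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

pdb = PDBFile('system_%02d.pdb' % rank)      # placeholder: one box per concentration
forcefield = ForceField('amber99sb.xml', 'tip3p.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1*unit.nanometer, constraints=HBonds)
integrator = LangevinIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                0.002*unit.picoseconds)
simulation = Simulation(pdb.topology, system, integrator,
                        Platform.getPlatformByName('CUDA'))
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(500000)

# Collect one number per rank (here: final potential energy) on rank 0.
state = simulation.context.getState(getEnergy=True)
energy = state.getPotentialEnergy().value_in_unit(unit.kilojoule_per_mole)
all_energies = comm.gather(energy, root=0)
if rank == 0:
    print(all_energies)
```

Launched this way, each of the 10 ranks would sit on its own GPU node, matching the layout described in Part 2 above.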
Best,
Stephan