OpenMM memory requirements on CPU/GPU

Siddharth Srinivasan
Posts: 223
Joined: Thu Feb 12, 2009 6:49 pm

OpenMM memory requirements on CPU/GPU

Post by Siddharth Srinivasan » Mon Dec 13, 2010 10:21 am

Is there a way to check how the memory requirements of OpenMM (CUDA platform) scale with system size, on both the CPU and the GPU? I ask because my cluster scheduler requires a maximum memory estimate (CPU only), and I have found some fairly large inconsistencies in GPU memory usage for the same system. In some cases the job runs on a GPU with 2 GB of memory; in other cases the same system (though in a different configuration) fails to run on that GPU with the error
{{{
BornSum: cudaMalloc in CUDAStream::Allocate failed out of memory
}}}
(By the way, I don't run implicit solvent simulations, so I'm not sure why it fails in BornSum.)

I guess I'm asking whether any profiling has been done that would help me better predict the memory usage of my jobs.
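
For reference, this is roughly how one could measure the peak host-side memory of a job to compare against the scheduler limit; a minimal sketch using getrusage from <sys/resource.h> (note that on Linux ru_maxrss is reported in kilobytes):
{{{
// Minimal sketch: print the peak resident set size (host memory) of the
// current process, e.g. at the end of a run, to compare against the
// scheduler's per-job memory limit.
#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rusage usage;
    if (getrusage(RUSAGE_SELF, &usage) != 0) {
        std::perror("getrusage");
        return 1;
    }
    // On Linux, ru_maxrss is the peak resident set size in kilobytes.
    std::printf("Peak RSS: %ld kB\n", usage.ru_maxrss);
    return 0;
}
}}}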

Peter Eastman
Posts: 2593
Joined: Thu Aug 09, 2007 1:25 pm

RE: OpenMM memory requirements on CPU/GPU

Post by Peter Eastman » Mon Dec 13, 2010 4:27 pm

I don't know of any program for checking how much GPU memory a process is using. It would be a useful thing to have, if it existed.

In any case, the CUDA platform's GPU memory usage is deterministic and depends only on the System definition, not on the current conformation. Is there something else running on the same computer that's also using the GPU? That could explain variation in whether there's enough memory or not.
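
That said, from inside the process itself you can at least query the CUDA runtime; a minimal sketch using only the standard cudaMemGetInfo call (nothing OpenMM-specific is assumed here), printed before and after the large allocations such as Context creation, would look roughly like this:
{{{
// Minimal sketch: report free/total device memory via the CUDA runtime.
// Calling this before and after Context creation shows how much device
// memory the System actually consumed.
#include <cuda_runtime.h>
#include <cstdio>

static void printDeviceMemory(const char* label) {
    size_t freeBytes = 0, totalBytes = 0;
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return;
    }
    std::printf("%s: %zu MB free of %zu MB total\n",
                label, freeBytes / (1024 * 1024), totalBytes / (1024 * 1024));
}

int main() {
    printDeviceMemory("before Context creation");
    // ... build the System, Integrator and Context here ...
    printDeviceMemory("after Context creation");
    return 0;
}
}}}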

Peter

Siddharth Srinivasan
Posts: 223
Joined: Thu Feb 12, 2009 6:49 pm

RE: OpenMM memory requirements on CPU/GPU

Post by Siddharth Srinivasan » Mon Dec 13, 2010 5:13 pm

> I don't know of any program for checking how much GPU memory a process is using. It would be a useful thing to have, if it existed.
I know of some tools that can do this, such as VampirTrace for CUDA and, more recently, the Allinea debugger/profiler; they look very impressive for CUDA code from what I have seen so far. Their CPU versions, at least, have been useful to us before, and I looked over their CUDA products at Supercomputing 2010, though I haven't used them yet.

> Is there something else running on the same computer that's also using the GPU? That could explain variation in whether there's enough memory or not.
That should not be possible; we run the GPUs in exclusive compute mode, which allows only one process per GPU. I'll check anyway, though. I was also wondering whether the creation of neighbor lists for domain decomposition, if used, could affect the memory usage, since that would depend on the conformation.

Szilard Pall
Posts: 3
Joined: Thu May 14, 2009 5:12 am

RE: OpenMM memory requirements on CPU/GPU

Post by Szilard Pall » Thu Jan 27, 2011 5:24 pm

The nvidia-smi tool (NVIDIA System Management Interface), distributed with the drivers, reports memory usage. Moreover, starting with the 270.x drivers it no longer reports memory usage as a percentage, but gives the actual amount of memory used.
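
For example, a detailed per-GPU report that includes a memory section can be printed with the query option (the exact fields shown vary between driver versions):
{{{
nvidia-smi -q
}}}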
