Is OpenMM 32-bit? To use it with GROMACS and CUDA, should I install a 32-bit version of GROMACS?
I tested mdrun_openmm and got an error saying
libfftw3f.so.3 was not found, even though libfftw3f.so.3
does exist under /usr/lib64, which is the 64-bit version.
Is 32-bit floating point good enough for MD simulation with GROMACS?
Is it possible to build an MPI version of mdrun that also uses OpenMM?
Is 32-bit floating point good enough?
- Peter Eastman
- Posts: 2573
- Joined: Thu Aug 09, 2007 1:25 pm
RE: Is 32-bit floating point good enough?
I think there are a few different questions mixed together there.
The precompiled binaries we distribute are all 32-bit. That refers to the size of pointers, not to the size of floating-point numbers. Unless you want to access very large amounts of memory, 32-bit binaries are preferable. The copy of fftw under /usr/lib64 is a 64-bit binary, so that won't work.
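In practice that means building a 32-bit, single-precision FFTW alongside the system one. A sketch of such a build (the version number and install prefix are placeholders; the configure flags are FFTW's standard ones):

```shell
# Build a 32-bit single-precision FFTW from source
# (version and prefix are assumptions; adjust to your system).
tar xzf fftw-3.2.2.tar.gz && cd fftw-3.2.2
./configure --enable-float --enable-shared \
            --prefix="$HOME/fftw32" CFLAGS="-m32" LDFLAGS="-m32"
make && make install

# Point the dynamic linker at it before running mdrun_openmm:
export LD_LIBRARY_PATH="$HOME/fftw32/lib:$LD_LIBRARY_PATH"
```

`--enable-float` builds the single-precision library (libfftw3f), and `-m32` asks the compiler for 32-bit code so it matches the precompiled binaries.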
Independently of that, Gromacs can be compiled to use either single or double precision floating point. Single precision is the default.
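For the autotools-based GROMACS builds of this era, that choice is made at configure time. A sketch, assuming a source build:

```shell
# GROMACS precision is chosen when you configure the source build:
./configure                  # single precision (the default) -> mdrun
./configure --enable-double  # double precision -> binaries get a _d suffix (mdrun_d)
```

Both variants can be installed side by side, since the double-precision binaries carry the _d suffix.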
How Gromacs is compiled makes no difference when using OpenMM, though, because OpenMM will do all the calculations, not Gromacs. The question, then, is what precision OpenMM uses, and that depends on which platform it is using. Both the CUDA and Brook platforms use single precision, since double precision is only available on very recent GPUs, and is much slower than single precision.
Peter