In our discussions with new OpenMM users, a frequently asked question is what hardware they should purchase for running OpenMM simulations. OpenMM supports both CUDA and OpenCL, so it can run accelerated computations on a wide range of hardware, including both NVIDIA and AMD GPUs. To help guide users in choosing and setting up their hardware to maximize performance, we've provided the information below.
Although OpenMM can parallelize a single simulation across multiple GPUs, it is usually much more efficient to run a separate simulation on each GPU. The following recommendations assume that is how you will use it. Also, while OpenMM can run entirely on a CPU, the performance with a good GPU is many times faster, so that is almost always recommended.
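One common way to run an independent simulation on each GPU is to start a separate process per device and restrict each process to a single GPU via the `CUDA_VISIBLE_DEVICES` environment variable (NVIDIA hardware). A minimal sketch, in which `run_simulation.py` is a hypothetical single-GPU OpenMM script of your own:

```python
# Launch one independent worker process per GPU. Each worker sees
# only its own device, because CUDA_VISIBLE_DEVICES hides the rest.
import os
import subprocess
import sys

def launch_on_gpu(gpu_index, command):
    """Start `command` in a subprocess that can see only GPU `gpu_index`."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_index))
    return subprocess.Popen(command, env=env)

if __name__ == '__main__':
    # Assumption: two GPUs in this machine; `run_simulation.py` is a
    # hypothetical script that runs one OpenMM simulation.
    procs = [launch_on_gpu(i, [sys.executable, 'run_simulation.py'])
             for i in range(2)]
    for p in procs:
        p.wait()
```

Alternatively, OpenMM's CUDA and OpenCL platforms accept a DeviceIndex platform property, so each process can select its device explicitly instead of relying on the environment variable.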
CPU: You should have at least one CPU core for each GPU. Performance is generally independent of the exact CPU that you have. Most modern ones will be sufficient for whatever simulations you run.
Memory: At least 16 GB of system memory.
GPU: For most users, NVIDIA's GTX 780 Ti is the fastest GPU currently available. The one exception is users who plan to run simulations entirely in double precision mode. The GTX 780 Ti (like all consumer-grade cards from NVIDIA) has very poor double-precision performance, so for these users an NVIDIA Titan Black is a better choice. For users who want a less expensive option, the GTX 780 also provides quite good performance. Among the "professional" grade cards, the Tesla K40 has the best performance, although it is still slower than the less expensive cards mentioned above. It is claimed to be more reliable, so when purchasing GPUs for use in a cluster where high reliability is critical, it may be a better choice. Finally, you should consider how much memory the GPU includes, since this limits the size of the systems you can simulate. For typical systems (fewer than about 100,000 atoms) even the GTX 780 has more than enough memory. If you plan to simulate extremely large systems, a more expensive GPU with more memory may be required.
GPUs per node: Since we assume you will be running an independent simulation on each GPU, this is usually irrelevant.
Power supply: Make sure it is sufficient to power all your GPUs under full load.
Cooling: A critical but often overlooked aspect of your system. GPUs generate a great deal of heat under load, so make sure the cooling system can keep up.
System Bus: Communication between the CPU and GPU can become a bottleneck in some situations, so a fast system bus is important. We recommend a computer that supports PCIe 3.0 or later.