(POSSIBLE BUG): GBn and GBn2 exhibit unexpected behavior unlike HCT, OBC1, OBC2 in OpenMM and igb=8 in Amber 18
Posted: Thu Sep 12, 2019 7:07 am
Hello all.
I am facing the following problem and I don't know if I am missing something really obvious or if it is an actual bug.
When simulating dsDNA with implicit solvent models, GBn and GBn2 exhibit unexpected behavior, unlike HCT, OBC1, and OBC2 in OpenMM, or igb=8 in Amber 18.
Running implicit solvent simulations of dsDNA using GBn or GBn2 rapidly disrupts the interactions between the two strands and even clusters the phosphate groups together. This behavior cannot be reproduced with HCT, OBC1, or OBC2, nor by running the simulation in Amber 18 with igb=8 (the flag equivalent to GBn2).
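For reference, the kind of setup I am describing looks roughly like the sketch below (a minimal OpenMM 7.4 script; file names, temperature, salt concentration, and run length are placeholders, the actual scripts are in the linked archive):

[code]
from simtk.openmm import app
from simtk import unit
import simtk.openmm as mm

# Load the Amber topology and coordinates for the dsDNA system
# (file names are placeholders for the files in the archive)
prmtop = app.AmberPrmtopFile('dsdna.prmtop')
inpcrd = app.AmberInpcrdFile('dsdna.inpcrd')

# The only change between runs is the implicitSolvent argument:
# app.HCT, app.OBC1, app.OBC2, app.GBn, or app.GBn2
system = prmtop.createSystem(nonbondedMethod=app.NoCutoff,
                             constraints=app.HBonds,
                             implicitSolvent=app.GBn2,
                             implicitSolventSaltConc=0.15*unit.molar)

integrator = mm.LangevinIntegrator(300*unit.kelvin,
                                   1.0/unit.picosecond,
                                   2.0*unit.femtoseconds)
simulation = app.Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)

simulation.minimizeEnergy()
simulation.step(500000)  # 1 ns with a 2 fs time step
[/code]

With GBn or GBn2 selected, the strands separate and the phosphates cluster within a short time; with HCT, OBC1, or OBC2 the duplex stays intact.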
This behavior was first detected on a run using the "linux-ppc64le/openmm-7.4.0-py37_cuda101" dev build with POWER9 processors and a Tesla V100 GPU, but it has also been reproduced on an old 48-core cluster running the standard x64 release of openmm-7.4.0 (only the first 40 ps were computed given the limited performance, but that was enough to reproduce the behavior).
In contrast, the same system simulated with the pmemd.cuda_SPFP.MPI executable from an Amber 18 ppc64le build, on a Tesla V100 and with igb=8, did not show this problem.
A tar.gz with the results and scripts used in these tests can be downloaded here:
https://drive.google.com/open?id=1rdtb1 ... mJSLf-L1r-