OpenMM 5.0 - segmentation fault

The functionality of OpenMM will (eventually) include everything that one would need to run modern molecular simulation.
Socrates Dantas
Posts: 2
Joined: Fri Oct 17, 2008 4:29 pm

OpenMM 5.0 - segmentation fault

Post by Socrates Dantas » Fri Feb 15, 2013 5:16 am

Dear All,

After downloading and installing OpenMM 5.0, testInstallation.py runs fine.
However, after "make all" every compilation was successful, but running
"HelloArgon" or any other test gives:
"Segmentation Fault"
I am using g++ and gfortran 4.6.3 on a 64-bit machine.

Could someone help me with that issue?

Best Regards,

Socrates Dantas

Peter Eastman
Posts: 2583
Joined: Thu Aug 09, 2007 1:25 pm

Re: OpenMM 5.0 - segmentation fault

Post by Peter Eastman » Fri Feb 15, 2013 11:47 am

Hi Socrates,

What operating system and hardware are you using?

Try running one of the programs in gdb. So instead of typing

./HelloArgon

type

gdb ./HelloArgon
> run

Then when it reaches the segfault, type

> bt

That will give a stack trace showing where the error happened. You might also be able to get a more informative stack trace by compiling in debug mode (in CMake, change CMAKE_BUILD_TYPE from "Release" to "Debug"). But even a release mode library may still give enough information to be useful.
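
For reference, switching to a debug build might look roughly like this, assuming you built OpenMM from source in a CMake build directory (the paths below are placeholders):

Code:

	cd /path/to/openmm/build         # your existing CMake build directory
	cmake -DCMAKE_BUILD_TYPE=Debug . # reconfigure the cached build in debug mode
	make && make install             # rebuild and reinstall the debug libraries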

Peter

Socrates Dantas
Posts: 2
Joined: Fri Oct 17, 2008 4:29 pm

Re: OpenMM 5.0 - segmentation fault

Post by Socrates Dantas » Wed Mar 06, 2013 6:17 am

Dear Peter,

Thank you for your reply.
After following your suggestions I got:
dantas@nefertare:~/Downloads/OpenMM5.0-Linux64/examples$ gdb ./HelloArgon
GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://bugs.launchpad.net/gdb-linaro/>...
Reading symbols from /home/dantas/Downloads/OpenMM5.0-Linux64/examples/HelloArgon...done.
(gdb) run
Starting program: /home/dantas/Downloads/OpenMM5.0-Linux64/examples/HelloArgon
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Cannot find new threads: generic error
(gdb) bt
Target is executing.
(gdb)

Any other suggestions?

Best Regards,

Sócrates

Peter Eastman
Posts: 2583
Joined: Thu Aug 09, 2007 1:25 pm

Re: OpenMM 5.0 - segmentation fault

Post by Peter Eastman » Wed Mar 06, 2013 10:57 am

Try typing "c" (for "continue"). I think it hasn't reached the crash yet.
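
The whole session would then look something like this (illustrative only; your output will differ):

Code:

	(gdb) run
	Starting program: ./HelloArgon
	(gdb) c
	Continuing.
	Program received signal SIGSEGV, Segmentation fault.
	(gdb) bt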

Peter

Spela Ivekovic
Posts: 26
Joined: Thu Mar 17, 2011 4:27 am

Re: OpenMM 5.0 - segmentation fault

Post by Spela Ivekovic » Wed Apr 10, 2013 6:15 am

I've experienced the same problem on Ubuntu 12.04 LTS.

It seems to work fine if you preload the pthread library:
LD_PRELOAD=/lib/x86_64-linux-gnu/libpthread.so.0 gdb --args ./HelloArgon

Or, alternatively, run the example directly:
LD_PRELOAD=/lib/x86_64-linux-gnu/libpthread.so.0 ./HelloArgon

Spela

Spela Ivekovic
Posts: 26
Joined: Thu Mar 17, 2011 4:27 am

Re: OpenMM 5.0 - segmentation fault

Post by Spela Ivekovic » Fri Apr 12, 2013 6:54 am

I managed to resolve the segmentation fault described above completely by including the "plugins" directory in LD_LIBRARY_PATH.

In my case, the segmentation fault mentioned by Socrates above was masking a different problem. It turned out that libOpenMMRPMDOpenCL.so and libOpenMMRPMDCUDA.so were not loading correctly because their respective dependencies on libOpenMMOpenCL.so and libOpenMMCUDA.so were not resolved. OpenMM ignores the loading exception (in Platform.cpp), and once the plugin loader reached libOpenMMCUDA.so, it crashed with a segfault.

Even though all the plugin libraries were in the same directory, the dependencies were not resolved, e.g.:

Code:

ldd libOpenMMRPMDCUDA.so 
	linux-vdso.so.1 =>  (0x00007fffdfbff000)
	libOpenMM.so => /home/spelai/openmm5.0.1/lib/libOpenMM.so (0x00007f08aa3f0000)
	libcuda.so.1 => /usr/lib/libcuda.so.1 (0x00007f08a97ec000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f08a95cf000)
	libOpenMMCUDA.so => not found
	libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f08a92ce000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f08a8fd4000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f08a8dbe000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f08a8a00000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f08a87fc000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f08a85e5000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f08a83dc000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f08aa977000)
This was fixed by adding the plugins directory (where libOpenMMCUDA.so resides) to the library path:

export LD_LIBRARY_PATH=/home/user/openmm/lib/plugins:$LD_LIBRARY_PATH

After that, the dependency listing read like this:

Code:

ldd libOpenMMRPMDCUDA.so 
	linux-vdso.so.1 =>  (0x00007fff35dff000)
	libOpenMM.so => /home/spelai/openmm5.0.1/lib/libOpenMM.so (0x00007f2dd2bc2000)
	libcuda.so.1 => /usr/lib/libcuda.so.1 (0x00007f2dd1fbe000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2dd1da1000)
	libOpenMMCUDA.so => /home/spelai/openmm5.0.1/lib/plugins/libOpenMMCUDA.so (0x00007f2dd1a6a000)
	libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f2dd1769000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2dd146f000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f2dd1259000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2dd0e9b000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2dd0c97000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2dd0a80000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f2dd0877000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f2dd3149000)
	libcufft.so.5.0 => /usr/local/cuda-5.0/lib64/libcufft.so.5.0 (0x00007f2dce8c1000)
	libcudart.so.5.0 => /usr/local/cuda-5.0/lib64/libcudart.so.5.0 (0x00007f2dce666000)
and the HelloArgon example ran without a problem on the CUDA platform (no preloading of other libraries required).
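
For anyone debugging something similar: a quick way to spot unresolved plugin dependencies is to run ldd over every plugin and grep for "not found". The install path below is an assumption based on my setup; adjust it to your own.

Code:

	# Hypothetical install location; adjust as needed.
	for lib in /home/user/openmm/lib/plugins/*.so; do
	    echo "== $lib"
	    ldd "$lib" | grep "not found"
	done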

I should mention that testInstallation.py ran without a problem and detected all 3 platforms (Reference, CUDA, OpenCL) from the very beginning, even when HelloArgon was crashing because of the unresolved dependencies.

Spela
