Request: sample mdps and starting structures
- Daniel Silva
- Posts: 5
- Joined: Mon Feb 09, 2009 5:13 pm
Request: sample mdps and starting structures
Hi, I am having several problems and doubts about running MD simulations on the current release of Gromacs-OpenMM (preview2). I think it would be very useful/clarifying if you could give us a sample starting structure together with a sample mdp (ideally with some kind of temperature coupling/bath) and a list of steps to follow, for example: pdb2gmx with a chosen force field, create the box, preprocess with grompp, and finally run it with mdrun_openmm. Such a "test set" would also help us check whether our installation is working properly (call it "no segmentation faults when running it").
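Something along these lines is the kind of step list I have in mind; the file names and mdp values below are just placeholders I made up, not anything official:
******
# 1. Generate topology and coordinates with a chosen force field
pdb2gmx -f protein.pdb -o conf.gro -p topol.top
# 2. Put the structure in a box
editconf -f conf.gro -o boxed.gro -c -d 1.0
# 3. A minimal mdp, just as a placeholder (the real sample should show the
#    settings the OpenMM build actually supports, including the thermostat)
cat > md.mdp <<EOF
integrator = md
dt         = 0.002
nsteps     = 5000
nstxout    = 500
tcoupl     = berendsen
tc-grps    = System
tau_t      = 0.1
ref_t      = 300
EOF
# 4. Preprocess and run with the OpenMM-enabled mdrun
grompp -f md.mdp -c boxed.gro -p topol.top -o topol.tpr
mdrun_openmm -s topol.tpr -deffnm test
******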
Thanks in advance.
Daniel Silva
RE: Request: sample mdps and starting structures
Hi Daniel,
We are working on samples (with instructions) so that people can test their installations and hope to have those up soon. In the meantime, if you could provide more details about the error messages you're getting, platform you're running on, etc., perhaps we can help diagnose your problem.
Best,
Joy
- Daniel Silva
- Posts: 5
- Joined: Mon Feb 09, 2009 5:13 pm
RE: Request: sample mdps and starting structures
Thanks for your quick response.
OK, my problem is as follows:
** Platform: Linux, 32-bit, GROMACS 4.0.3, Ubuntu 8.10, gcc 4.3.2, kernel 2.6.27-11-generic.
** CUDA is working (several SDK tests run fine) on an NVIDIA 8800GT.
** Library and bin paths are already "correctly" defined (at least CUDA works with them), including the addition of gromacs/lib/openmm/.
I tried two different approaches:
1. I set up a simulation with the gromos53a6 force field; when running mdrun_openmm I always get a segmentation fault with no further information (the error output is pasted at the end of this post). It does not matter whether I use mdp parameters from the Zephyr branch or write my own mdp; of course the simulation runs with the "stock" mdrun.
2. Using a tpr that actually works with Zephyr, I tried to run mdrun_openmm and got the same segmentation fault.
Thanks.
Daniel Silva
G R O M A C S (-:
Great Red Oystrich Makes All Chemists Sane
VERSION 4.0.1 (-:
Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
/usr/local/gromacs/bin/mdrun_openmm (-:
Option Filename Type Description
------------------------------------------------------------
-s villin-rpcessed.tpr Input Run input file: tpr tpb tpa
-o asd.trr Output Full precision trajectory: trr trj cpt
-x traj.xtc Output, Opt. Compressed trajectory (portable xdr format)
-cpi state.cpt Input, Opt. Checkpoint file
-cpo state.cpt Output, Opt. Checkpoint file
-c confout.gro Output Structure file: gro g96 pdb
-e ener.edr Output Energy file: edr ene
-g md.log Output Log file
-dgdl dgdl.xvg Output, Opt. xvgr/xmgr file
-field field.xvg Output, Opt. xvgr/xmgr file
-table table.xvg Input, Opt. xvgr/xmgr file
-tablep tablep.xvg Input, Opt. xvgr/xmgr file
-tableb table.xvg Input, Opt. xvgr/xmgr file
-rerun rerun.xtc Input, Opt. Trajectory: xtc trr trj gro g96 pdb cpt
-tpi tpi.xvg Output, Opt. xvgr/xmgr file
-tpid tpidist.xvg Output, Opt. xvgr/xmgr file
-ei sam.edi Input, Opt. ED sampling input
-eo sam.edo Output, Opt. ED sampling output
-j wham.gct Input, Opt. General coupling stuff
-jo bam.gct Output, Opt. General coupling stuff
-ffout gct.xvg Output, Opt. xvgr/xmgr file
-devout deviatie.xvg Output, Opt. xvgr/xmgr file
-runav runaver.xvg Output, Opt. xvgr/xmgr file
-px pullx.xvg Output, Opt. xvgr/xmgr file
-pf pullf.xvg Output, Opt. xvgr/xmgr file
-mtx nm.mtx Output, Opt. Hessian matrix
-dn dipole.ndx Output, Opt. Index file
Option Type Value Description
------------------------------------------------------
-[no]h bool no Print help info and quit
-nice int 19 Set the nicelevel
-deffnm string Set the default filename for all file options
-[no]xvgr bool yes Add specific codes (legends etc.) in the output
xvg files for the xmgrace program
-[no]pd bool no Use particle decompostion
-dd vector 0 0 0 Domain decomposition grid, 0 is optimize
-npme int -1 Number of separate nodes to be used for PME, -1
is guess
-ddorder enum interleave DD node order: interleave, pp_pme or cartesian
-[no]ddcheck bool yes Check for all bonded interactions with DD
-rdd real 0 The maximum distance for bonded interactions with
DD (nm), 0 is determine from initial coordinates
-rcon real 0 Maximum distance for P-LINCS (nm), 0 is estimate
-dlb enum auto Dynamic load balancing (with DD): auto, no or yes
-dds real 0.8 Minimum allowed dlb scaling of the DD cell size
-[no]sum bool yes Sum the energies at every step
-[no]v bool no Be loud and noisy
-[no]compact bool yes Write a compact log file
-[no]seppot bool no Write separate V and dVdl terms for each
interaction type and node to the log file(s)
-pforce real -1 Print all forces larger than this (kJ/mol nm)
-[no]reprod bool no Try to avoid optimizations that affect binary
reproducibility
-cpt real 15 Checkpoint interval (minutes)
-[no]append bool no Append to previous output files when restarting
from checkpoint
-maxh real -1 Terminate after 0.99 times this time (hours)
-multi int 0 Do multiple simulations in parallel
-replex int 0 Attempt replica exchange every # steps
-reseed int -1 Seed for replica exchange, -1 is generate a seed
-[no]glas bool no Do glass simulation with special long range
corrections
-[no]ionize bool no Do a simulation including the effect of an X-Ray
bombardment on your system
Back Off! I just backed up md.log to ./#md.log.10#
Reading file villin-rpcessed.tpr, VERSION 4.0.3 (single precision)
Note: tpx file_version 58, software version 59
***"this note about version is not show when tpr comes from zephir preprocessing"****
Segmentation fault
****************
- Daniel Silva
- Posts: 5
- Joined: Mon Feb 09, 2009 5:13 pm
RE: Request: sample mdps and starting structures
Hi again.
(Sorry for misspelling Zephyr in my previous posts.)
Well, I am making some progress on this. I decided to merge the Gromacs-OpenMM sources into the GROMACS 4.0.3 ones and rebuild the whole thing against the OpenMM headers. After some trial and error I got the entire source compiled, and the resulting mdrun links against libOpenMM.so, so I think I reached the goal of building mdrun_openmm myself. Results:
* This executable "also" does not let me run the Zephyr precompiled tpr, because it appears that the tpr version number is incremented by 1 in Zephyr (maybe tomorrow I will check where the version constant lives so I can test this "home-made chimera" mdrun against Zephyr's precompiled tprs; see the sketch further down in this post).
* On the other hand, I can now see specific errors about why mdrun_openmm does not run with my own files:
******
OpenMM Platform: Reference
starting mdrun 'VILLIN HEADPEICE'
5000 steps, 10.0 ps.
ImplicitSolventParameters::isNotReady
atomic radii appear not to be set correctly -- radii should be in nanometers
average radius=1e-06 min radius=1e-06 at atom index=0
Error:
CpuImplicitSolvent::computeImplicitSolventForces implicitSolventParameters are not set for force calculations!
*********
So I think I made a mistake in the choice of force field. Well, at least now I can see errors instead of segmentation faults.
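(For the tpr version issue above, this is where I plan to look first; I am assuming the usual 4.0.x source layout, and tpxio.c is just my guess for the file:)
******
# find the tpx version constant in the GROMACS source tree
grep -n "tpx_version" src/gmxlib/tpxio.c
# or search the whole source if it lives somewhere else
grep -rn "tpx_version" src/
******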
Any suggestions/comments are welcome.
Daniel Silva
- Daniel Silva
- Posts: 5
- Joined: Mon Feb 09, 2009 5:13 pm
RE: Request: sample mdps and starting structures
Last minute update:
OK, the last problem I reported with the home-compiled mdrun_openmm was my mistake: the real problem was that I forgot to include the CUDA library path in LD_LIBRARY_PATH. With that correction the home-compiled executable ran flawlessly. However, the precompiled mdrun_openmm that you provide on the page still does not work in exactly the same environment, so I think that build is not very portable.
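For reference, the environment setup that works for me now looks roughly like this (the CUDA path is simply where my toolkit is installed; adjust as needed):
******
# CUDA toolkit libraries (the part I had forgotten)
export LD_LIBRARY_PATH=/usr/local/cuda/lib:$LD_LIBRARY_PATH
# OpenMM libraries shipped with Gromacs-OpenMM
export LD_LIBRARY_PATH=/usr/local/gromacs/lib/openmm:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin:/usr/local/gromacs/bin:$PATH
******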
REQUEST: ** To avoid confusing future readers looking for help, maybe you could edit this thread and remove the last part of my previous post, where I say that the "radius problem" depends on the choice of force field, and also this comment. **
I do not know whether the following comparison can be made directly (I am afraid it cannot, because stock mdrun does not use the implicit solvent), but the result for this test run (villin) is pretty impressive: with stock mdrun I got (for a very short 100 ps run) an estimate of 125 ns per day, while with the CUDA-OpenMM version I got 650 ns per day (GeForce 8800GT).
Please comment/advise.
Daniel Silva
- Christopher Bruns
- Posts: 32
- Joined: Thu Apr 07, 2005 1:10 pm
RE: Request: sample mdps and starting structures
Thank you so much, Daniel, for investigating this and bringing it to our attention.
Could you please try the Linux grompp and mdrun_openmm executables from these locations:
https://simtk.org/websvn/wsvn/zephyr/tr ... nux/grompp
https://simtk.org/websvn/wsvn/zephyr/tr ... run_openmm
and let us know if those work for you?
Thanks in advance.
--Chris Bruns
- Lori Paniak
- Posts: 4
- Joined: Sat Feb 07, 2009 7:21 am
RE: Request: sample mdps and starting structures
I dislike editorials, but...
The project might be helped by opening up svn to public downloads. Given the variety of systems people use (i.e., most are not 32-bit Windows), it will be difficult to supply pre-compiled binaries that work without hiccups. The best way to overcome this complication is to let people compile OpenMM on their own systems.
At the end of the day, production systems using this code are overwhelmingly going to be 64-bit Linux systems. Development should take that into account.
Finally, the project is called OpenMM. The term "open" with respect to software refers to "open source".
An excellent model for openness, in the same GPGPU/MD space, is the HOOMD project:
http://www.ameslab.gov/hoomd/index.html
- Christopher Bruns
- Posts: 32
- Joined: Thu Apr 07, 2005 1:10 pm
RE: Request: sample mdps and starting structures
Thank you for your thoughtful comment. It is true that the Subversion repository is not open for public downloads, but the source code is available from the downloads page of the OpenMM project.
https://simtk.org/project/xml/downloads ... oup_id=161
Perhaps you can recommend a way to make it more obvious that the source is right there among the binary downloads. Or is it your position that a source download package presented as a zip file is not "open source" enough?
- Lori Paniak
- Posts: 4
- Joined: Sat Feb 07, 2009 7:21 am
RE: Request: sample mdps and starting structures
I sincerely appreciate that the source code is available on the download page, but it does not appear to have been updated in three weeks.
I am certain your team has made significant improvements to the codebase in that time, and testing the tarballed code may well be irrelevant to your current development.
Continuously updating the source code links on the download page is possibly not the best use of your team's time, especially when the equivalent code is available in real time from svn.
- Daniel Silva
- Posts: 5
- Joined: Mon Feb 09, 2009 5:13 pm
RE: Request: sample mdps and starting structures
Hi again,
Sorry for the delay. The new mdrun and grompp that you provided worked with a standard GROMACS installation on my system. Now I have some problems with strange output in the logs, which I believe is force-field related: the MD only runs for short times and the trajectories appear to be messed up; the mdrun -v option makes no difference (that is, I cannot see the progress of the dynamics). One important question: which force fields are compatible with the implicit solvent method? (A quick type-comparison sketch I am using while digging into this is at the end of this post.)
Daniel
******
There are: 2360 Atoms
Successfully loaded plugin /usr/local/gromacs/lib/openmm/libOpenMM.so
Successfully loaded plugin /usr/local/gromacs/lib/openmm/libOpenMMCuda.so
agb parameter file line=<> is being skipped.
no type found for atom=<N> type=<NL>
no type found for atom=<H1> type=<H>
no type found for atom=<H2> type=<H>
no type found for atom=<H3> type=<H>
no type found for atom=<CA> type=<CH1>
no type found for atom=<CB> type=<CH3>
no type found for atom=<C> type=<C>
no type found for atom=<O> type=<O>
no type found for atom=<N> type=<N>
no type found for atom=<H> type=<H>
no type found for atom=<CA> type=<CH1>
no type found for atom=<CB> type=<CH2>
no type found for atom=<CG> type=<CH1>
no type found for atom=<CD1> type=<CH3>
no type found for atom=<CD2> type=<CH3>
no type found for atom=<C> type=<C>
no type found for atom=<O> type=<O>
no type found for atom=<N> type=<N>
no type found for atom=<CA> type=<CH1>
no type found for atom=<CB> type=<CH2R>
no type found for atom=<CG> type=<CH2R>
no type found for atom=<CD> type=<CH2R>
no type found for atom=<C> type=<C>
no type found for atom=<O> type=<O>
no type found for atom=<N> type=<N>
no type found for atom=<H> type=<H>
no type found for atom=<CA> type=<CH1>
no type found for atom=<CB> type=<CH2>
no type found for atom=<CG> type=<CH2>
no type found for atom=<CD> type=<C>
no type found for atom=<OE1> type=<O>
no type found for atom=<NE2> type=<NT>
no type found for atom=<HE21> type=<H>
no type found for atom=<HE22> type=<H>
no type found for atom=<C> type=<C>
no type found for atom=<O> type=<O>
no type found for atom=<N> type=<N>
no type found for atom=<H> type=<H>
no type found for atom=<CA> type=<CH1>
no type found for atom=<CB> type=<CH1>
no type found for atom=<OG1> type=<OA>
no type found for atom=<HG1> type=<H>
no type found for atom=<CG2> type=<CH3>
no type found for atom=<C> type=<C>
no type found for atom=<O> type=<O>
no type found for atom=<N> type=<N>
no type found for atom=<H> type=<H>
no type found for atom=<CA> type=<CH1>
no type found for atom=<CB> type=<CH1>
no type found for atom=<CG1> type=<CH3>
no type found for atom=<CG2> type=<CH3>
no type found for atom=<C> type=<C>
no type found for atom=<O> type=<O>
no type found for atom=<N> type=<N>
no type found for atom=<H> type=<H>
no type found for atom=<CA> type=<CH1>
no type found for atom=<CB> type=<CH2>
**** and continues******
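In case it helps, this is roughly how I am comparing the atom types in my topology against the ones the agb/GB parameter file knows about (the topology file name is just an example from my setup, and the agb file location is a guess on my part; point it at wherever your installation keeps it):
******
# atom types used in the [ atoms ] sections of the topology
awk '/\[ atoms \]/{f=1; next} /^\[/{f=0} f && !/^;/ && NF>5 {print $2}' topol.top | sort -u > top_types.txt
# atom types listed in the agb parameter file (first column; path is a placeholder)
AGB_FILE=/path/to/agb_parameter_file
awk '!/^;/ && NF>0 {print $1}' "$AGB_FILE" | sort -u > agb_types.txt
# types present in the topology that the parameter file does not list
comm -23 top_types.txt agb_types.txt
******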