Decision and Simulation Modeling in Systematic Reviews
Posted: Mon Oct 07, 2013 11:33 pm
This is a public response to the manuscript Decision and Simulation Modeling in Systematic Reviews - Methods Research Reports, by Karen Kuntz, ScD; Francois Sainfort, PhD; Mary Butler, PhD, MBA; Brent Taylor, PhD; Shalini Kulasingam, PhD; Sean Gregory, MBA; Eric Mann, BA; Joseph M Anderson, BA; and Robert L Kane, MD.
The manuscript is openly accessible on the web through the following link: http://www.ncbi.nlm.nih.gov/books/NBK127482/
This work is an excellent systematic review of the state of disease modeling and a valuable reference for disease modelers. The authors are to be congratulated on this manuscript.
Below you will find some comments and questions, in the hope that they will improve and expand the already broad view the manuscript provides.
First, the historical introduction to decision making is fascinating and recommended reading.
Regarding Table 3, Best Practices: the authors should refer to the simple rules of credible practice that are now being assembled by the CPMS committee:
http://wiki.simtk.org/cpms/Ten_Simple_R ... e_Practice
Hopefully revisions to this manuscript will include these best practices.
In the sensitivity analysis section, the manuscript states that sensitivity analysis becomes impractical for more than three parameters. I wish to disagree: parameter analysis can be performed for more than three parameters by combining information and showing differences as colors in a matrix format. Using high-performance computing (HPC), it is possible to evaluate many more parameter combinations and check many more assumptions, and modern pivot-table techniques can extend this further.
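To illustrate the idea, here is a minimal sketch of a four-parameter sensitivity sweep whose results are pivoted into a two-dimensional matrix suitable for rendering as a color map. The outcome function, parameter names, and grid values are all invented for illustration; they do not come from the manuscript.

```python
from itertools import product
from statistics import mean

# Hypothetical outcome model depending on four uncertain parameters
# (all names and values are illustrative, not from any real model).
def outcome(progression_rate, treatment_effect, cost_scale, discount):
    return treatment_effect / (progression_rate * (1 + discount)) - 0.1 * cost_scale

# Grids for four parameters at once -- one more than the practical
# limit of three discussed in the manuscript.
grids = {
    "progression_rate": [0.1, 0.2, 0.3],
    "treatment_effect": [0.5, 1.0, 1.5],
    "cost_scale": [1.0, 2.0],
    "discount": [0.0, 0.03],
}

# Full factorial sweep: every combination of the four parameters.
results = {combo: outcome(*combo) for combo in product(*grids.values())}

# Pivot to a 2-D matrix over the first two parameters, averaging over
# the other two -- each cell of this matrix can then be shown as a color.
matrix = [
    [
        mean(v for k, v in results.items() if k[0] == p and k[1] == t)
        for t in grids["treatment_effect"]
    ]
    for p in grids["progression_rate"]
]
```

On HPC hardware, the same full factorial sweep simply runs with much finer grids and many more parameters, and the pivot step collapses the results back to something a reader can inspect.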
I had issues with Table 8, which includes duplicate entries; is there a reason for that?
Regarding the last paragraph in the section Attitudes Toward Modeling and Appropriateness of Modeling in Systematic Reviews: it seems from the manuscript text that it was unclear how to incorporate the outputs of models in a manner similar to meta-analysis. Allow me to suggest that once the source code of multiple models is available, it is possible to compare their outputs for the same inputs. The Reference Model for disease progression is an example of such use. Here are some links describing this work:
http://youtu.be/7qxPSgINaD8
http://web.cs.uh.edu/~cosine/?q=node/140
http://sites.google.com/site/jacobbarha ... _09_23.pdf
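The comparison idea above can be sketched very simply: run every model on identical inputs and record the spread between their outputs. The two "models" and the population below are toy stand-ins, not the actual published models or The Reference Model itself.

```python
# Two toy formulations of the same quantity (purely illustrative).
def model_a(age, risk_factor):
    return 0.01 * age * (1 + risk_factor)

def model_b(age, risk_factor):
    return 0.008 * age + 0.3 * risk_factor

# Identical inputs fed to every model: (age, risk_factor) pairs.
population = [(50, 0.2), (60, 0.5), (70, 0.8)]

# For each input, collect all model outputs and the disagreement
# (spread) between them -- the raw material for cross-model analysis.
comparison = []
for age, rf in population:
    outputs = {"model_a": model_a(age, rf), "model_b": model_b(age, rf)}
    spread = max(outputs.values()) - min(outputs.values())
    comparison.append((age, rf, outputs, spread))
```

With real models exposed as code behind a common interface, the same loop scales to any number of models, and the spread column immediately shows where they agree and where they diverge.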
Systematic review of models is a key element in expanding such work and accumulating knowledge. It is also key to identifying model similarities: since models are used to express our understanding of observed phenomena, identifying key similarities helps us focus and contributes to our accumulated knowledge. Moreover, a systematic review that includes models is essential for building the next generation of models.
If such a systematic review can include pointers to model source code, it will be instrumental in bringing these models together in a repository. The authors are referred to a recent discussion of model sharing on the Multi Scale Modeling Consortium wiki:
http://www.imagwiki.nibib.nih.gov/media ... king_Group
This discussion is highly relevant, since Table 17 recommends a modeling database and a model repository is mentioned in several places in the manuscript.
The training needs section mentions that the skills required of modelers are not well defined. Since modeling is still a relatively new paradigm, this fluctuation and uncertainty regarding skills is understandable. Nevertheless, computing and mathematical skills should be part of the requirements. Moreover, there are already programs that offer certification in modeling and simulation. It is very important that the manuscript raises this situation, and hopefully future work by the authors will explore the points mentioned above.
In the suggested framework chapter, several ways to combine systematic reviews and models are discussed. Although implied in the title Value of Information, one interaction between systematic reviews and models was not discussed: it is possible to determine which observed phenomena we explain well by cross-validating models against observed data to find out which pieces of data fit together. In this way, models can be used to cluster and filter data and to better understand observed phenomena. The Reference Model mentioned above is an example. This kind of interaction is especially important in the biological domain, where relatively high uncertainty is associated with the data; see Figure 1 in:
http://www.tbiomed.com/content/8/1/35
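The cross-validation idea can be made concrete with a small sketch: score each model against each dataset, then keep the datasets each model fits within a tolerance, effectively clustering the data by which model explains it. The models, datasets, and error threshold below are all invented for illustration.

```python
# Toy datasets of (input, observed) pairs -- invented, not real cohorts.
datasets = {
    "cohort_1": [(1, 2.1), (2, 3.9), (3, 6.2)],
    "cohort_2": [(1, 5.0), (2, 5.1), (3, 4.8)],
}

# Two toy candidate models of the same observed quantity.
models = {
    "linear": lambda x: 2.0 * x,
    "flat": lambda x: 5.0,
}

def mean_abs_error(model, data):
    # Average absolute deviation of model predictions from observations.
    return sum(abs(model(x) - y) for x, y in data) / len(data)

# Cluster datasets by which model explains them within a tolerance of 0.5
# -- the "pieces of data that fit together" from the discussion above.
fits = {
    name: [d for d, data in datasets.items()
           if mean_abs_error(model, data) < 0.5]
    for name, model in models.items()
}
```

Here the linear model captures one cohort and the flat model the other, so the fit table simultaneously validates the models and partitions the data.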
The manuscript addresses four gaps in the best-practice literature. Allow me to suggest another component: use of new technology. My observation is that the disease modeling world is overly conservative and slow to adopt new technology. Does the systematic review support this observation across all the review papers mentioned in Table 20?
See the following links for further details:
http://www.linkedin.com/groups/Technolo ... na_4158822
http://www.linkedin.com/groups/Public-R ... mp_4158822
Finally, in the same spirit, did the authors consider using Natural Language Processing (NLP) as part of the systematic review process? Then again, this is beyond the scope of the manuscript.
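Even a trivial NLP step can assist the screening phase of a systematic review, for example flagging abstracts that mention modeling-related terms for closer reading. The abstracts and keyword list below are invented; real screening would use proper NLP tooling rather than a bare regular expression.

```python
import re

# Invented abstracts standing in for a retrieved literature set.
abstracts = {
    "paper_1": "A Markov model of disease progression in a screened cohort.",
    "paper_2": "A randomized trial of drug X versus placebo in adults.",
}

# Naive keyword screen: flag any abstract mentioning modeling terms.
pattern = re.compile(r"\b(model|simulation|markov)\b", re.IGNORECASE)

flagged = [pid for pid, text in abstracts.items() if pattern.search(text)]
```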
In summary, this paper is important reading for modelers in healthcare.
I hope others will support this review and continue this thread by providing further feedback.