
Decision and Simulation Modeling in Systematic Reviews

Posted: Mon Oct 07, 2013 11:33 pm
by jbarhak
This is a public response to the manuscript Decision and Simulation Modeling in Systematic Reviews - Methods Research Reports, by Karen Kuntz, ScD, Francois Sainfort, PhD, Mary Butler, PhD, MBA, Brent Taylor, PhD, Shalini Kulasingam, PhD, Sean Gregory, MBA, Eric Mann, BA, Joseph M Anderson, BA, and Robert L Kane, MD.

The manuscript is openly accessible on the web through the following link: http://www.ncbi.nlm.nih.gov/books/NBK127482/

This work is a great systematic review of the state of disease modeling and a valuable reference for disease modelers. The authors are congratulated for this manuscript.

Below you will find some comments and questions in the hope that these will improve and expand the already wide view the manuscript provides.

First, the historical introduction to decision making is fascinating and recommended reading.

Table 3 Best Practices - the authors should refer to the simple rules of credible practice that are now being assembled by the CPMS committee:
http://wiki.simtk.org/cpms/Ten_Simple_R ... e_Practice
Hopefully revisions to this manuscript will include these best practices.

In the sensitivity analysis section the manuscript mentions that the analysis becomes impractical for more than three parameters. I wish to disagree and claim that parameter analysis can be done for more than three parameters by combining information and showing differences as colors in a matrix format. Using HPC it is possible to evaluate many more parameter combinations and check many more assumptions, and modern pivot table techniques allow expanding this even further. A minimal sketch of such a color matrix appears below.
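To illustrate the idea, here is a minimal sketch of displaying sensitivity over more than three parameters as colored matrices - two parameters per panel, with a third parameter sliced across panels. The model function and parameter names are hypothetical stand-ins, not taken from the manuscript:

[code]
# A minimal sketch of multi-parameter sensitivity analysis shown as
# colored matrices. model_outcome is a hypothetical stand-in; swap in
# a real disease model.
import numpy as np
import matplotlib.pyplot as plt

def model_outcome(p1, p2, p3):
    """Hypothetical model: returns a scalar outcome for three parameters."""
    return p1 * np.exp(-p2) + np.sin(p3)

p1_vals = np.linspace(0.5, 1.5, 20)
p2_vals = np.linspace(0.0, 2.0, 20)
p3_vals = [0.1, 0.5, 1.0]  # third parameter sliced across panels

fig, axes = plt.subplots(1, len(p3_vals), figsize=(12, 4), sharey=True)
for ax, p3 in zip(axes, p3_vals):
    # Build a matrix of outcomes over the p1 x p2 grid for this p3 slice
    grid = np.array([[model_outcome(p1, p2, p3) for p2 in p2_vals]
                     for p1 in p1_vals])
    im = ax.imshow(grid, origin="lower", aspect="auto",
                   extent=[p2_vals[0], p2_vals[-1], p1_vals[0], p1_vals[-1]])
    ax.set_title("p3 = %.1f" % p3)
    ax.set_xlabel("p2")
axes[0].set_ylabel("p1")
fig.colorbar(im, ax=axes, label="outcome")
plt.show()
[/code]

With HPC, each grid cell can be an independent simulation run, so the number of panels and parameters is limited mostly by compute budget rather than by what a reader can absorb from a table.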

I had issues with Table 8, which includes duplicate entries; is there a reason for that?


Regarding the last paragraph in the section Attitudes Toward Modeling and Appropriateness of Modeling in Systematic Reviews: it seems from the manuscript text that it is unclear how to incorporate the outputs of models in a way similar to how meta-analysis works. Allow me to suggest that once the source code of multiple models is available, it is possible to compare their outputs for the same inputs. The Reference Model for disease progression is an example of such use. Here are some links describing this work, followed by a minimal sketch of such a comparison:

http://youtu.be/7qxPSgINaD8

http://web.cs.uh.edu/~cosine/?q=node/140

http://sites.google.com/site/jacobbarha ... _09_23.pdf
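To make the suggestion concrete, here is a minimal sketch of running several models on the same inputs and comparing their outputs. The model equations, parameters, and cohort values are hypothetical stand-ins, not The Reference Model itself:

[code]
# A minimal sketch of comparing outputs of several models on the same
# inputs. The two model functions below are hypothetical stand-ins for
# published models whose source code is available.
import numpy as np

def model_a(age, bmi):
    """Hypothetical model A: 10-year event risk."""
    return 1.0 / (1.0 + np.exp(-(0.04 * age + 0.02 * bmi - 4.0)))

def model_b(age, bmi):
    """Hypothetical model B for the same outcome."""
    return 1.0 / (1.0 + np.exp(-(0.05 * age + 0.01 * bmi - 4.2)))

models = {"Model A": model_a, "Model B": model_b}
cohort = [(50, 25.0), (60, 30.0), (70, 28.0)]  # shared inputs: (age, BMI)

for name, model in models.items():
    risks = [model(age, bmi) for age, bmi in cohort]
    print(name, ["%.3f" % r for r in risks])
# Differences between the rows reveal where the models disagree,
# and against observed data, which model fits which population best.
[/code]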

Systematic review of models is a key element in expanding such work and helps accumulate knowledge.


Systematic review is a key element in identifying model similarities. Since models are used to explain our understanding of observed phenomena, identifying key similarities helps us focus and contributes to our accumulated knowledge. Moreover, a systematic review that includes models is essential to building the next generation of models.

If such a systematic review can include pointers to model source code, it will be instrumental in bringing these models together in a repository. The authors are referred to a recent discussion regarding model sharing in the Multi Scale Modeling Consortium wiki:

http://www.imagwiki.nibib.nih.gov/media ... king_Group

This discussion is highly relevant since Table 17 recommends a modeling database, and a model repository is mentioned in several places in the manuscript.

Under the training needs section it was mentioned that the skills required from modelers are not well defined. Modeling is still a relatively new paradigm, so the fluctuation and uncertainty with regard to skills is understandable. Nevertheless, computing and mathematical skills should be part of the requirements. Moreover, there are already programs that certify modeling and simulation skills. It is very important that the manuscript raises this issue, and hopefully future work by the authors will explore the points I mentioned above.


In the suggested framework chapter several ways to combine systematic reviews and models are discussed. And although it is implied in the title Value of Information, one interaction between systematic reviews and models was not discussed: it is possible to determine which observed phenomena we explain well by cross-validating models against observed data to find out which pieces of data fit together. This way models can be used to cluster and filter data and to better understand observed phenomena. The Reference Model mentioned above is an example; a minimal sketch of this cross-validation idea follows the link below. This kind of interaction is especially important in the biological environment, where there is relatively high uncertainty associated with the data; see Figure 1 in:
http://www.tbiomed.com/content/8/1/35
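Here is a minimal sketch of the cross-validation idea: score several models against several datasets and look at which pairs fit. The models and the study data below are made-up illustrations, not taken from any of the cited work:

[code]
# A minimal sketch of cross-validating models against datasets to see
# which pieces of data fit together. Models and data are illustrative.
import numpy as np

def model_a(x):
    return 2.0 * x + 1.0

def model_b(x):
    return x ** 2

models = {"Model A": model_a, "Model B": model_b}

# Each dataset is (inputs, observed outputs); the values are made up.
datasets = {
    "Study 1": (np.array([1.0, 2.0, 3.0]), np.array([3.1, 4.9, 7.2])),
    "Study 2": (np.array([1.0, 2.0, 3.0]), np.array([1.1, 3.8, 9.1])),
}

# Fitness matrix: mean squared error of each model against each dataset.
for m_name, model in models.items():
    for d_name, (x, y) in datasets.items():
        mse = float(np.mean((model(x) - y) ** 2))
        print("%s vs %s: MSE = %.3f" % (m_name, d_name, mse))
# Low-error cells show which datasets cluster around which model,
# i.e. which observed phenomena the models explain well.
[/code]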


The manuscript addresses 4 gaps in the best practice literature. Allow me to suggest another component - the use of new technology. My observation is that the disease modeling world is overly conservative and slow to adopt new technology. Does the systematic review that was performed support this observation across all the review papers mentioned in Table 20?
See the following links for further details:

http://www.linkedin.com/groups/Technolo ... na_4158822

http://www.linkedin.com/groups/Public-R ... mp_4158822


Finally, with the same idea in mind, did the authors consider using Natural Language Processing (NLP) as part of the systematic review process? Yet again, this is beyond the scope of the manuscript.

In summary, this paper is important reading for modelers in healthcare.

I hope others will support this review and continue this thread by providing further feedback.

Re: Decision and Simulation Modeling in Systematic Reviews

Posted: Thu Oct 10, 2013 4:51 pm
by johnmrice3
I have distributed this to several of the M&S academic degree granting institutions and have had interest expressed in responding to it. This is a great way to give M&S professionals a view of things that are potentially good uses for simulation in medical domains, along with insight into the issues of concern. Very constructive input could come from outside the medical culture if it is open to them.

Re: Decision and Simulation Modeling in Systematic Reviews

Posted: Sat Oct 12, 2013 3:03 pm
by johnmrice3
Just saw this Lockheed Martin (marketing) piece that seemed to be related to the topic.

I am a bit out of my domain of expertise but...

Let me see if I have this right. The humans who do Systematic Reviews of literature to try to make conceptual models they call protocols, based on crude historical data scraps about what has happened to groups of patients whose treatment cases happened to end up in the 'scientific literature' under different states and/or conditions in the past, are now wondering if they can use data from models/simulation in their decision making.

While they are doing that, Lockheed is developing computer programs that use models (guessing they must be neural networks) that learn to predict the results of alternative treatments based on millions of tiny bits of data from thousands of people, and use that to create protocols called models for deciding exactly what treatment to use for a specific patient in near real time. Sort of.

http://www.lockheedmartin.com/us/news/f ... lness.html

Re: Decision and Simulation Modeling in Systematic Reviews

Posted: Sat Oct 12, 2013 4:16 pm
by jbarhak
Hi John,

You are getting this correctly. There is a move towards big data in several communities. The more data we have, the better we can extract useful knowledge about observed phenomena.

Think about it: there is little you can deduce from a single person, even if we follow that person for several years as a doctor follows a patient. Yet a group of patients provides a much better understanding of diseases, aging, and other biological phenomena. The point of view of a medical doctor in a single clinic is still narrow. However, a state, a country, or an international organization has a wider point of view that can increase understanding of phenomena.

Marty Kohn from IBM spoke about another way to process medical data at the 2012 IMAG-MSM meeting last year - he was talking about Natural Language Processing (NLP) with reference to Watson. Here are links about it:
http://www.imagwiki.nibib.nih.gov/media ... 201209.pdf
http://www.imagwiki.nibib.nih.gov/media ... OATLv3.pdf
http://www.imagwiki.nibib.nih.gov/media ... or_Keynote
http://www.fiercehealthit.com/press-rel ... e=internal

The larger the scale, the more data there is from which information can be deduced. And you can also incorporate the time scale: it is possible to include older observations in the mix of data if their age is properly accounted for - after all, some things do change with time. A minimal sketch of one such weighting scheme appears below.
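For example, here is a minimal sketch of down-weighting observations by age using exponential decay. The half-life value and the data are my own assumptions, chosen only for illustration:

[code]
# A minimal sketch of accounting for the age of observations by
# exponentially down-weighting older data. Values are made up.
import numpy as np

values = np.array([10.0, 12.0, 11.0, 15.0])       # hypothetical observations
ages_in_years = np.array([20.0, 10.0, 5.0, 0.0])  # age of each observation

half_life = 10.0  # assumed: an observation loses half its weight per decade
weights = 0.5 ** (ages_in_years / half_life)

weighted_mean = np.sum(weights * values) / np.sum(weights)
print("weights:", np.round(weights, 3))
print("age-weighted mean: %.2f" % weighted_mean)  # recent data dominates
[/code]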

And there is so much data available today that can be harnessed to better understand clinical phenomena - and this amount is constantly growing. This is why Big Data is becoming a field of interest.

In the medical field this will continue to grow quickly. Electronic medical devices that record data will provide so much more information about so many people that many more phenomena will be understood through big data analysis.

Nevertheless, note that there will always be some level of uncertainty - simply because phenomena are dictated by many parameters, and some phenomena are just rare. I doubt that we can ever provide exact predictions for all scenarios. I believe Tony will agree with this statement since his diagram shows this exact tension between engineering and biology. Yet the more data we have, the more we can narrow down the uncertainty levels we see today.

As an engineer, I would claim that we will never have an absolutely precise tool, yet with sufficient data we may get predictions within an acceptable tolerance.

We are just at the beginning of this process of analyzing these huge amounts of data - we should prepare for the changes. Hopefully the products of this committee will be able to ease the transition.