Ten simple rules of credible practice

The Committee on Credible Practice of Modeling & Simulation in Healthcare aims to establish a task-oriented collaborative platform to outline good practice for simulation-based medicine.
C. Anthony Hunt
Posts: 23
Joined: Sun Apr 21, 2013 2:18 pm

Re: Ten simple rules of credible practice

Post by C. Anthony Hunt » Tue Aug 20, 2013 5:12 pm

Hunt’s entry #6.
Please consider the following for addition to the candidate rules list:
Take steps to preclude conflating (reifying) your model (or simulations) with its biological referent.

Explanation:

During a recent lecture at Google's main campus, Rasmus Grønfeldt Winther (http://www.rgwinther.com) talked about the pernicious reification of models (abstractions), particularly computational and mathematical models. Material dealing with or describing the problem can be accessed on his website. The model reification problem is touched on in two of his papers (referenced below). The following text paraphrases his observations with a focus on computational models; his own focus extends broadly to scientific abstractions and theories.

We know that a map is not the territory it represents. By analogy, a model is not the wet-lab aspect it refers to. Consequently, the model should not be conflated, intentionally or not (as in a scientific paper or seminar), with its referent. As with maps, biomedical models can be used as well as abused. Models should not be treated as if they "capture" something real about the complex referent biology.

Winther points out that pernicious reifications are easy to find. Consider statistical abstractions such as the main effects of an Analysis of Variance (ANOVA), or the principal components of a PCA, which are useful for some purposes. They have been reified as main causes in, or true dimensions of, the world itself. In pernicious reification, a scientist or group develops a model and then comes to treat it as if it were an actual part of the wet-lab or clinical world. The same statement can apply to a model parameter. Or they conflate the model with the aspect of the wet-lab biology that the model is intended to represent. In a nutshell, pernicious reification is a consequence of (a) universalizing models beyond their conditions of proper application, (b) essentializing a single model from a rich family of models or (c) ontologizing by conflating model and world. Each model, no matter how complicated (no matter how grand and unified), is a partial representation; each representation is useful, if at all, only for particular purposes.
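As a toy illustration of the point about statistical abstractions, the following minimal Python sketch (invented data; NumPy and scikit-learn assumed available) shows that the "principal components" of a PCA shift when the same system is merely resampled, a reminder that they are abstractions of a particular sample, not true dimensions of the world.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Correlated toy data standing in for some measured biological variables.
data = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))

def leading_component(sample):
    """Return the first principal component (an abstraction of this sample)."""
    return PCA(n_components=1).fit(sample).components_[0]

for i in range(3):
    # Bootstrap resample: same "system", slightly different sample.
    resample = data[rng.integers(0, len(data), size=len(data))]
    print(f"resample {i}: first PC = {np.round(leading_component(resample), 2)}")
# The leading component shifts (and can flip sign) across resamples; treating any
# one of them as a real causal axis of the biology is the reification warned against.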

Reifying behaviors result from basic, all-too-human proclivities associated with our cognitive tools (e.g., biased reasoning) and with the social structures suffusing our scientific, philosophical, and everyday reasoning. The danger is that reified models give rise to distortion and illusion, which sooner or later leads to loss of credibility or prevents it from being achieved. To build credibility it is crucially important that models always be considered fallible and limited. Once we conflate the model with the wet-lab biology, it takes inordinate effort for the grasping mind to open up again to alternative models and alternative explanations.

Winther references:
Winther, R. G. Part-whole Science. http://link.springer.com/article/10.100 ... 009-9647-0
Winther, R. G. Character Analysis in Cladistics: Abstraction, Reification, and the Search for Objectivity. http://link.springer.com/article/10.100 ... 008-9064-7

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Ten simple rules of credible practice

Post by Jacob Barhak » Wed Aug 21, 2013 3:26 pm

Tony raised several points in his last two posts that I wish to comment on:

First, the rules and guidelines this committee publishes should cover the scope of modeling and simulation in healthcare defined in the committee document. They should not initially be targeted at a specific journal or society; derivatives for a specific target can come later.

Tony also mentions that societies have their own guidelines, modeling traditions, and sense of credibility. I feel that some of these traditions require scrutiny and updating: model credibility should be based on testing rather than on the use of traditionally accepted techniques. The fact that everyone uses method X does not mean method X is fit for this specific use. Tony has a good point here.

Finally, about the new rule Tony suggested regarding conflation (reification): I would ask Tony to give a very simple example using simpler language. I see the importance of keeping the referent and the model separate - you cannot use the phenomenon as a known truth when explaining that same phenomenon. However, it is OK to make an assumption about truthfulness and use a model to see whether this assumed truth fits with other information you consider truthful - this is a way to verify an assumption, much as mathematics uses proof by contradiction. The new rule as defined now may seem to prevent this use of modeling and should therefore be explained better. A few do-and-don't examples may help define the scope of the rule, and simpler language may make it accessible to a wider audience.
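To make that pattern concrete, here is a minimal, hypothetical Python sketch (the model, numbers, and tolerance are all invented for illustration): provisionally accept an assumed value, run a simple model under it, and check whether the consequence is consistent with other data you trust.

import numpy as np

def simulate_concentration(dose, clearance, times):
    """One-compartment exponential elimination model (illustrative only)."""
    return dose * np.exp(-clearance * times)

assumed_clearance = 0.35                       # the "truth" provisionally accepted (1/h)
times = np.array([1.0, 2.0, 4.0, 8.0])         # hours
observed = np.array([7.2, 5.1, 2.6, 0.7])      # independent data held to be trustworthy

predicted = simulate_concentration(dose=10.0, clearance=assumed_clearance, times=times)
discrepancy = float(np.max(np.abs(predicted - observed)))

if discrepancy > 0.5:                          # tolerance chosen purely for illustration
    print("Assumption is inconsistent with trusted data; reject or revise it.")
else:
    print("Assumption survives this check (which is not the same as proving it true).")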

This is a very interesting discussion and I hope the meeting tomorrow may contribute additional points of view.

Joy Ku
Posts: 81
Joined: Tue Oct 02, 2007 5:22 pm

Re: Ten simple rules of credible practice

Post by Joy Ku » Fri Sep 20, 2013 3:10 pm

I agree with Tony that many of the rules could be lumped together into more general rules. Given that, my thoughts on the top “three” rules are:
  1. Explicitly list your limitations: I like Tony’s refinement of this in terms of identifying specific scenarios where the model fails and providing explanations. I also agree with Tony that there is a general group of guidelines which would include this rule as well as the rules of “defining the use context for which the model is intended,” “providing error bars” and “providing examples of use.” I don’t know if I would call it “falsification trumps validation” but in my mind, the gist of the category is to clearly identify and document when the model is applicable and when it is not. I think the rules I just mentioned previously are specific ways to adhere to the more general rule. (So I’ve managed to lump many rules into one here, to try to subvert the request for only the top three)
  2. Use multiple implementations to check and balance each other
  3. Make sure your results are reproducible (although I like Tony’s refinement here better as well: Make it easy for anyone to repeat and/or falsify your results – so this encompasses writing readable code, sharing the code and data, etc.)
I also want to expand on the guideline for “developing with the end user in mind.” I think this is important, maybe my 4th choice, but it should read “developing and documenting with the end user in mind” instead. Or maybe these are two different guidelines. In either case, I think that whether the end user is a clinician, biologist, or computational scientist, the vocabulary that is used to describe the model must be geared for that user. It may be hard for them to buy into the model, but if their “language” is not used to describe the model, as well as what it does and doesn’t do, the barrier for acceptance is just that much greater. And so there may need to be multiple descriptions of the model to reach different user groups.
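On item 3 above (reproducibility), here is a minimal Python sketch of one way to make a run easy to repeat or challenge: fix the random seed and write out the provenance a reader would need. The file name and the stand-in "simulation" are hypothetical.

import json
import platform
import numpy as np

SEED = 20130923
rng = np.random.default_rng(SEED)

# Stand-in for a real simulation run.
result = rng.normal(loc=0.0, scale=1.0, size=1000).mean()

provenance = {
    "seed": SEED,
    "python": platform.python_version(),
    "numpy": np.__version__,
    "result": float(result),
}
with open("run_provenance.json", "w") as fh:
    json.dump(provenance, fh, indent=2)
# Sharing this file with the readable code and the data lets anyone repeat the run
# exactly, or vary the seed and inputs in an attempt to falsify the reported behavior.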

C. Anthony Hunt
Posts: 23
Joined: Sun Apr 21, 2013 2:18 pm

Re: Ten simple rules of credible practice

Post by C. Anthony Hunt » Sun Sep 22, 2013 3:49 pm

Joy, about your 9/20 posting, I'd like to explain why I have a problem with your 4th rule or guideline ["developing and documenting with the end user in mind"] within our specified context: "stakeholders within the (biomedical) M&S community."

For me, the phrase "end user" implies an M&S product (a product like GastroPlus™, for example) built for a particular set of currently identified customers. We can easily envision an M&S project's use cases stipulating particular users. By so doing, however, we risk limiting broader reusability. We also risk limiting the model's lifetime.

I prefer rules or guidelines that encourage us (those doing the M&S research) to keep the door open to the unanticipated user who has in mind exploring a creative new use. My suggested rules 2 & 7 had that in mind (2: Do not simply document your code. Make your code readable. 7: Make it easy for anyone to repeat and/or falsify your results.).

A separate task may be to add interfaces to a particular MSM to make particular uses by a particular set of end users easy. A good practice (for scientifically useful multiscale, multi-attribute MSMs) should be to enable the model to be independent of any particular user interface.
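A minimal sketch of that separation, with hypothetical names: the model is a plain Python class with no knowledge of its callers, and a command-line interface is one thin adapter layered on top; a web or notebook interface could be added the same way without touching the model.

import sys

class GrowthModel:
    """Core model: pure computation, with no knowledge of who calls it or how."""

    def __init__(self, rate):
        self.rate = rate

    def run(self, initial_size, steps):
        sizes = [initial_size]
        for _ in range(steps):
            sizes.append(sizes[-1] * (1.0 + self.rate))
        return sizes

def cli_interface(argv):
    """One possible end-user interface, kept outside the model itself."""
    rate, size, steps = float(argv[0]), float(argv[1]), int(argv[2])
    for step, value in enumerate(GrowthModel(rate).run(size, steps)):
        print(f"step {step}: {value:.3f}")

if __name__ == "__main__":
    cli_interface(sys.argv[1:])    # e.g. python growth_model.py 0.1 100 10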

C. Anthony Hunt
Posts: 23
Joined: Sun Apr 21, 2013 2:18 pm

Re: Ten simple rules of credible practice

Post by C. Anthony Hunt » Sun Sep 22, 2013 4:22 pm

joyku wrote: ... I don’t know if I would call it “falsification trumps validation” ...
Joy, let me expand on why I say that. Envision a two-circle Venn diagram: a small circle (A) overlapping a much larger circle (B). Let A represent the "phenotype" of a multi-attribute MSM, and B represent the corresponding phenotypic attributes of a wet-lab model that includes living parts. Envision several stick pins in the area of overlap. They represent results of experiments documenting that the MSM has achieved multiple validation targets.
Validation provides no new knowledge.

Of particular value is knowledge of the location and extent of the A-B overlap/non-overlap border. In A's area of non-overlap, the MSM's phenotype is not biomimetic: the hypothesis that results observed during A's execution are biomimetic is false. Getting that information provides new knowledge.

The better we are able to specify the A-B overlap/non-overlap, the greater the credibility of that MSM for uses consistent with the area of overlap.
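A minimal, hypothetical Python sketch of recording that border: for each phenotypic attribute, check whether the MSM's value falls inside the wet-lab reference range; attributes outside the range mark the non-overlap region (where the biomimesis hypothesis is falsified). Attribute names and ranges are invented.

# Wet-lab reference ranges (B) and simulated values (A); all names/numbers invented.
wet_lab_ranges = {
    "doubling_time_h": (18.0, 26.0),
    "apoptosis_fraction": (0.05, 0.15),
    "migration_speed_um_per_h": (10.0, 30.0),
}
msm_results = {
    "doubling_time_h": 22.5,
    "apoptosis_fraction": 0.02,
    "migration_speed_um_per_h": 18.0,
}

for attribute, (low, high) in wet_lab_ranges.items():
    value = msm_results[attribute]
    status = ("overlap (validation target met)" if low <= value <= high
              else "non-overlap (biomimesis hypothesis falsified here)")
    print(f"{attribute}: {value} -> {status}")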

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Ten simple rules of credible practice

Post by Jacob Barhak » Mon Sep 23, 2013 3:57 am

Thanks to Tony, Joy, and Pras, there is sufficient material to make a first draft of our team ranking.

To meet the deadlines posted on the wiki and to allow the committee leaders to merge these with the other teams' results for the MSM-IMAG meeting, I decided to post this today. Special thanks to Tony, who nudged me a bit.

Here is the link to our combined ranking:
http://wiki.simtk.org/cpms/Ten_Simple_R ... ences_Team

On that wiki page you will first find all of our top ten ranked rules. Then you will find the draft rules, with the suggested alternatives and the scores that correspond to your preferences. Since no one could stick to the original request of only the 3 most important rules, scoring was somewhat hard and I had to do some balancing. I used the following scoring key (a small sketch of the computation follows the list):

- Top 3 rules = 12, 11, 10 points - this is to distinguish them from the others
- 4th rule, if only 4 were specified = 5 points - this is to distinguish Joy, who was very particular and defined what was important after the first 3 rules
- Every rule after the 3rd, if more than 4 rules were specified = 2 points - this is because focus was lost, yet the information is still valuable for sub-ranking. Pras and Tony had such lists and I wanted to give those value.
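For concreteness, a minimal Python sketch of that scoring key applied to invented example rankings (not the actual team rankings):

from collections import defaultdict

# Hypothetical example rankings (ordered lists of rule numbers), not the real ones.
rankings = {
    "member_A": [1, 7, 2, 5, 9, 3],
    "member_B": [1, 6, 2, 10],
    "member_C": [4, 1, 8, 5, 7],
}

scores = defaultdict(int)
for member, ranked_rules in rankings.items():
    for position, rule in enumerate(ranked_rules):
        if position < 3:
            scores[rule] += [12, 11, 10][position]   # top 3 rules
        elif len(ranked_rules) == 4:
            scores[rule] += 5                        # a deliberately short list of 4
        else:
            scores[rule] += 2                        # longer lists: later picks still count a little

for rule, points in sorted(scores.items(), key=lambda item: -item[1]):
    print(f"rule {rule}: {points} points")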

You will find that the first 9 rules were each specified by at least one of us as important in the top 3. The first rule was specified by 3 of us in the top 3, and the second rule by 2 of us. Rules 3-9 were each specified by one of us in the top 3. Rule 10 was specified by Joy as 4th most important.

Note that this snapshot is accurate as of 23-Sep-2013, around 5:30am CST. It may still change: not all of our team members have responded, and after seeing the results you may wish to suggest different scoring systems or analyses. Nevertheless, this is the best I could do with the information I had.

I urge you to accept these results as a draft and let this information flow and merge with the other teams. Recall that this is a draft that we will discuss at the MSM meeting - not our final set.

Also, please do not confuse these with the rules at the parent link, which will later include the summary for our entire committee and currently holds the original draft:
http://wiki.simtk.org/cpms/Ten_Simple_R ... e_Practice

I hope you find this summary/ranking objective enough - at least for now.

C. Anthony Hunt
Posts: 23
Joined: Sun Apr 21, 2013 2:18 pm

Re: Ten simple rules of credible practice

Post by C. Anthony Hunt » Mon Sep 23, 2013 8:26 am

jbarhak wrote: I urge you to accept these results as a draft and let this information flow and merge with the other teams. Recall that this is a draft that we will discuss at the MSM meeting - not our final set. ... I hope you find this summary/ranking objective enough - at least for now.
I accept, Barhak.

I really appreciate your efforts as chair of our Mathematical and Computational Sciences Team.

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Ten simple rules of credible practice

Post by Martin Steele » Tue Sep 24, 2013 10:50 am

joyku wrote:
Make sure your results are reproducible (although I like Tony’s refinement here better as well: Make it easy for anyone to repeat and/or falsify your results – so this encompasses writing readable code, sharing the code and data, etc.)
This wording in the "Top Ten Ranked" does not read well: "Make it easy for anyone to ... falsify your results." I think you may mean "Make it easy for anyone to repeat and/or disprove your results."

I know I'm a little late coming into this, but I made a post today at: https://simtk.org/forums/viewtopic.php? ... t=0#p10390

John Rice
Posts: 8
Joined: Thu May 30, 2013 10:08 pm

Re: Ten simple rules of credible practice

Post by John Rice » Fri Oct 11, 2013 7:36 pm

New rule candidate. I don't think I have seen this in your list, nor anywhere else until this book. (Also posted under the user perspective topic.)

Thou shalt not distribute a model, or output from a model, that has ever produced unexpected results that have not been documented, investigated, and explained. (JR)


Quoted (from the introduction, page 11 I think):

"c. Ignoring Unexpected Behavior

Although a validation process is recognized to be an essential stage in any modelling and simulation project, its main thrust generally is to confirm that expected behavior does occur. On the other hand, testing for unexpected behaviour is never possible. Nevertheless such behaviour can occure and when it is observed there is often a tendency to dismiss it particularly when validation test have provided satisfactory results. Ignoring such counterintuitive, or unexpected observations can lay the foundation for failure."
Brita, Louis G. and Arbez, G.
Modeling and Simulaiton: Exploring Dynamic System behavior, Springer-Verlag, London Limited 2007
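One way John's rule could be operationalized, sketched minimally in Python with hypothetical output names and ranges: check every simulation output against its expected range and log anything unexpected, so that it must be documented, investigated, and explained before the model or its output is distributed.

import logging

logging.basicConfig(filename="unexpected_behavior.log", level=logging.WARNING)

# Hypothetical expected ranges for two model outputs.
EXPECTED_RANGES = {"cardiac_output_L_min": (3.0, 8.0), "heart_rate_bpm": (40, 180)}

def check_outputs(outputs):
    """Return True if all outputs fall within expectation; log anything unexpected."""
    all_expected = True
    for name, value in outputs.items():
        low, high = EXPECTED_RANGES[name]
        if not low <= value <= high:
            logging.warning("Unexpected %s = %s (expected %s-%s); investigate before release.",
                            name, value, low, high)
            all_expected = False
    return all_expected

if not check_outputs({"cardiac_output_L_min": 11.2, "heart_rate_bpm": 72}):
    print("Unexpected results recorded; do not distribute until they are explained.")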

William Lytton
Posts: 6
Joined: Wed Jul 17, 2013 12:09 am

Re: Ten simple rules of credible practice

Post by William Lytton » Tue Oct 15, 2013 3:16 pm

I thought I would elaborate on the skepticism re the 10 simple rules that I expressed during the meeting today. My notes below were from a prior version, so I realize that some of these rules have already been deprecated. Overall I have no strong objection to any of this, but I find it hard to see that this project need take much time and effort (my naivete here exposed, I fear).

I also only appreciated today that the committee (and the rules) are meant to apply to 4 different groups (basic research, clinical research, ???, patient use). I'm not sure the same 10 rules apply to all 4, or even to all organ systems within any one -- e.g., cardiology and pulmonary are well ahead while psychiatry and neurology are far behind; they are at different stages of scientific development: in brain theory we can't even agree on which measures in a model or in physiology are relevant and which are irrelevant (pre-paradigmatic).

Here are a few of my earlier notebook notes on the topic. I see now that I should have posted these, but at the time I didn't think any of my points were particularly constructive. I also don't have strong feelings about any of these -- e.g., I strongly agree that version control is a necessity, but I perhaps don't appreciate the need to explain this in a computational biology journal whose readers might be expected to be fairly sophisticated.

prior notes:

Standard software management techniques, obvious to any software developer:
Use version control
Document your code
Develop with the end user in mind
Make sure your results are reproducible
Get it reviewed by independent users/developers/members
Provide examples of use
Provide user instructions whenever possible and applicable
Disseminate whenever possible (source code, test suite, data, etc.) -- specific to open source

too general:
Learn from discipline-independent examples // true for anyone doing anything
Practice what you preach
Use consistent terminology or define your terminology
