Ten Simple Rules – Practice Standard and Guidelines team

The Committee on Credible Practice of Modeling & Simulation in Healthcare aims to establish a task-oriented collaborative platform to outline good practice of simulation-based medicine.
Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Ten Simple Rules – Practice Standard and Guidelines team

Post by Lealem Mulugeta » Tue Sep 17, 2013 10:18 pm

In an effort to narrow down the Ten Simple Rules of Credible Practice of M&S in Healthcare as seen by the "Practice Standards and Guidelines" community, I propose the following steps:
  1. Start by providing a 1-3 sentence description of what the following rules mean to us. You are not required to define what they all mean to you; you can define just the top 10 you think are important. You may also propose a new simple rule instead of sticking to the list below.
  2. Once everyone has had a chance to contribute definitions of the simple rules they believe are most important, we will generate a survey to down-select the top ten from the list of rules we've defined as a task team.
I also propose that we all contribute something to this by the end of September.
  • Use version control
  • Use credible solvers
  • Explicitly list your limitations
  • Define the context the model is intended to be used for
  • Define your evaluation metrics in advance
  • Use appropriate data (input, validation, verification)
  • Attempt validation within context
  • Attempt verification within context
  • Attempt uncertainty (error) estimation
  • Perform appropriate level of sensitivity analysis within context of use
  • Disseminate whenever possible (source code, test suite, data, etc)
  • Report appropriately
  • Use consistent terminology or define your terminology
  • Get it reviewed by independent users/developers/members
  • Learn from discipline-independent examples
  • Be a discipline-independent/specific example
  • Follow discipline-specific guidelines
  • Conform to discipline-specific standards
  • Document your code
  • Develop with the end user in mind
  • Provide user instructions whenever possible and applicable
  • Practice what you preach
  • Make sure your results are reproducible
  • Provide examples of use
  • Use traceable data that can be traced back to the origin
  • Use competition of multiple implementations to check and balance each other

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Lealem Mulugeta » Wed Sep 18, 2013 11:15 am

I forgot to mention also that I will be posting my definitions in the next day or so. I'm about half done as we speak.

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Lealem Mulugeta » Sun Sep 22, 2013 11:54 am

Hello everyone,

As promised, I've gone through the list and developed some definitions which I think are appropriate. I also noticed that many of the rules seem to be subcomponents of higher-level rules, so I started developing subcategories that I think should be included as part of the higher-level rule. This is what I came up with; please note that the rules are not prioritized in any way.
  • Use version control – it is important to ensure that we know exactly what version of the model or simulation was used to run a specific analysis
  • Use appropriate data (input, validation, verification)
    o Define your evaluation metrics in advance – your evaluation criteria should define what data you need to appropriately develop, verify and validate a model or simulation
    o Use traceable data that can be traced back to the origin – the data used to develop, verify and validate a model or simulation should be traceable to a source that is considered reliable and appropriate to the context or intended use of the model or simulation
  • Attempt validation within context – if the model was validated against analysis criteria for an application of interest that is vastly different from the specific condition under consideration, then the model or simulation would be deemed not appropriately applicable for that application. Therefore the model or simulation needs to be validated for the context or application domain that is under consideration.
    o Attempt uncertainty (error) estimation – when performing validation, it is typically necessary to qualitatively or quantitatively assess the uncertainty of the predictions made by the M&S. This is always dependent on the context of use; some application scenarios, such as general analyses of trends and non-clinical applications, may not need uncertainty analysis. That context, however, needs to be explicitly stated from the onset of model development and application. (A small sketch follows this list.)
    o Conform to discipline-specific standards – when performing validation, it should also be done in the context of discipline-specific needs, standards and guidelines.
    o Follow discipline-specific guidelines – see the previous bullet; this bullet and the previous one should probably be combined.
  • Attempt verification within context – the descriptions provided for validation are applicable to verification as well, including the following sub-bullets.
    o Attempt uncertainty (error) estimation
    o Conform to discipline-specific standards
    o Follow discipline-specific guidelines
  • Perform appropriate level of sensitivity analysis within context of use – this is self-explanatory.
  • Report appropriately – the M&S needs to be documented appropriately for the context of use, including the following, so that the M&S can be applied appropriately for the intended use or expanded upon without compromising or violating the underlying stability, assumptions and V&V criteria.
    o Define the context the model is intended to be used for
    o Explicitly list your limitations
    o Provide user instructions whenever possible and applicable
    o Make sure your results are reproducible
    o Provide examples of use
    o Conform to discipline-specific standards
    o Follow discipline-specific guidelines
  • Use consistent terminology or define your terminology – it is important that others understand the M&S and its application intent so that the M&S can be leveraged appropriately.
  • Get it reviewed by independent users/developers/members – this is important to ensure that the M&S is not evaluated in a biased manner. To get independent review, it is also important to make the M&S available to the fullest extent possible so that the greater community or the independent reviewers can sufficiently evaluate the M&S.
    o Disseminate whenever possible (source code, test suite, data, etc)
  • Document your code – this differs from "Report Appropriately" in that it refers to documentation of the source code. This allows other users or developers to navigate the code appropriately, make augmentations and/or re-adaptations of the code, and reproduce the results of the original model.
    o Make sure your results are reproducible – this is to ensure that others can take your M&S methods and build upon or re-adapt them reliably.
  • Develop with the end user in mind – this relates to defining the context or intended use of the model from the beginning. This will then determine the V&V criteria and other sub-elements related to V&V, sensitivity analysis, etc.
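
To make the uncertainty and sensitivity bullets above a bit more concrete, here is a minimal sketch of estimating output uncertainty by Monte Carlo sampling of one uncertain input. Everything in it (the toy model, the parameter names, the distribution, the values) is a hypothetical assumption for illustration, not a recommended method or a real physiological model.

Code:

    # Minimal, hypothetical sketch: propagate uncertainty in one input parameter
    # through a toy model by Monte Carlo sampling, and report a spread rather
    # than a single point prediction.
    import random
    import statistics

    def model(dose_mg, clearance_l_per_hr):
        """Toy stand-in model: steady-state concentration (mg/L)."""
        return dose_mg / clearance_l_per_hr

    def estimate_uncertainty(dose_mg, clearance_mean, clearance_sd,
                             n_samples=10000, seed=42):
        rng = random.Random(seed)  # fixed seed so the estimate itself is reproducible
        outputs = []
        for _ in range(n_samples):
            clearance = max(rng.gauss(clearance_mean, clearance_sd), 1e-6)  # keep positive
            outputs.append(model(dose_mg, clearance))
        return statistics.mean(outputs), statistics.stdev(outputs)

    mean_conc, sd_conc = estimate_uncertainty(dose_mg=100.0,
                                              clearance_mean=5.0, clearance_sd=0.5)
    print(f"Predicted concentration: {mean_conc:.2f} +/- {sd_conc:.2f} mg/L")

Whether a spread like this is even needed, and how it should be interpreted, depends entirely on the context of use, as noted in the bullet above.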

The following points did not seem as high a priority to me, either because they are typically done automatically, because they did not quite make sense to me, or because they did not seem as important for standards and guidelines development.
  • Use credible solvers
  • Practice what you preach
  • Use competition of multiple implementations to check and balance each other
  • Learn from discipline-independent examples
  • Be a discipline-independent/specific example
These are just my views and I'm very open to alternative ways of looking at this problem. So please feel free to provide feedback on what I've written, or present your own definitions or organization of the rules. I look forward to everyone's contribution to this effort.

Lealem

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Martin Steele » Tue Sep 24, 2013 9:25 am

I started out this task without looking at anyone’s input. Keeping this list to 10 “simple” rules is quite difficult for such an involved and complex subject. The idea behind these rules is that any M&S practitioner who follows them will produce more credible results.

The following items in the original list might be considered “givens” or perhaps axioms to the credible practice of M&S, and may not necessarily be included in the “10”:
  • Use consistent terminology or define your terminology
  • Document your code
  • Develop with the end user in mind
  • Define your evaluation metrics in advance
The “attempt” items (V&V & Uncertainty Estimation), if they are kept as they are, should be “perform,” with “and report the methods and results thereof” appended.

These items, to me, easily fall below the “10” line:
  • Learn from discipline-independent examples
  • Be a discipline-independent/specific example
  • Provide examples of use
  • Disseminate whenever possible (source code, test suite, data, etc)
  • Practice what you preach
After quickly reviewing the ideas on the Forum, I updated my “10 Items,” which includes the remaining items in the original list.
  1. Define & Document the requisite characteristics of the real world system
  2. Plan & Manage the M&S Project & Products
  3. Identify, Document, & Control source information/data and how it is changed from acquisition to use
  4. Follow & Document a rigorous M&S development process that includes:
    a. Use of recommended practices, guidelines, or standards, as appropriate
    b. Production of a User’s Guide
  5. Test (Verify & Validate) the M&S and Document its Intended Use
  6. Assess the M&S’s application and input data
  7. Develop complete Scenarios (Design of Experiment) to run & Perform Sensitivity Analysis
  8. Reproduce and/or Compare Results, as necessary, to source data or comparable analyses (see the sketch after this list)
    a. The level of independence of the reproduced or comparable results from the original should be considered
  9. Independently Review:
    a. The M&S
    b. The Use of the M&S
    c. The Analysis using the M&S
  10. Along with Reporting the Results of an M&S-based Analysis, also
    a. Assess & Report the end-to-end Uncertainty in the M&S
    b. Report aspects of credibility (i.e., anything that supports or detracts from accepting) of the M&S-based analysis results
    c. Assess & Report the Risk of accepting the M&S-based analysis results
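
As a rough illustration of items 5 and 8 (testing the M&S against source data, with the evaluation metric fixed in advance), here is a minimal sketch. The data values, the metric, and the 10% tolerance are hypothetical assumptions for illustration only, not a prescribed standard.

Code:

    # Hypothetical sketch: compare simulation output against reference (source)
    # data using an evaluation metric and acceptance criterion chosen in advance.
    reference = [1.00, 1.80, 2.50, 3.10]   # measured values from a traceable source (made up here)
    simulated = [1.05, 1.75, 2.60, 3.00]   # corresponding model predictions (made up here)

    def max_relative_error(ref, sim):
        """Evaluation metric chosen in advance: worst-case relative deviation."""
        return max(abs(s - r) / abs(r) for r, s in zip(ref, sim))

    TOLERANCE = 0.10  # acceptance criterion agreed on before the comparison was run

    error = max_relative_error(reference, simulated)
    verdict = "PASS" if error <= TOLERANCE else "FAIL"
    print(f"Max relative error: {error:.1%} ({verdict} against the {TOLERANCE:.0%} criterion)")

Reporting the metric, the criterion, and the verdict together is what makes the comparison reviewable by someone else (item 9).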

Jerry Myers
Posts: 3
Joined: Wed Sep 05, 2007 7:58 am

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Jerry Myers » Tue Sep 24, 2013 2:40 pm

Lots of good stuff in the first three posts.

A couple of comments:

In general, I agree with the content of the rules as listed thus far.

Under Report Appropriately, one bullet says "State Limitations." In my experience with medical personnel, this almost needs to be a rule by itself. Most users will not have the background to accept the output of a computational model, or will assume capability beyond the model's intent, because their context is often the integrated human physiology and not a limited-scope model. I put up for discussion the idea that we state the following:

"Always clearly and concisely document all significant assumptions and known model limitations"

-----------------------------------

Martin,

Your #2 on the top 10 gave me pause - Plan & Manage the M&S Project & Products

I do not disagree about planning and managing the project and products, and I firmly believe this is a necessary evil of all projects and tasks, especially within NASA. But how acceptable is this going to be in an academic/research environment where the development of the model is in itself part of the research endeavor? I guess I have seen too many researchers and physiologists who, when faced with the overhead of project management, just toss it out the window altogether. Not a universal response by far, but one seen often enough that we should take care to explicitly argue the reasoning behind this statement and how it equates in importance to the more technical and research-oriented rules we have put forth.

=================================

#1 on Martin's list is by far one of the most important in the early stages of this effort. I might have said it as "Ask the appropriate question" or "What questions are you trying to address with this model with respect to the real-world system?"

I also think that Martin's initial comment on defining the evaluation metrics falls under the purview of this first category and should be included as such. Basically, this comment becomes: what is the scope of the model, and how does the customer expect success to be measured?

We should consider combining and including both as a "step one" definition-type rule.

TTFA - more later as I have time to cogitate.

Jerry

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Martin Steele » Wed Sep 25, 2013 6:50 am

jgmyers wrote: "Always clearly and concisely document all significant assumptions and known model limitations"
Slight modification (I like to distinguish abstractions & assumptions): "Always clearly and concisely document all significant abstractions, assumptions, and known model limitations"
jgmyers wrote: Martin's Item #2 on the top 10 - Plan & Manage the M&S Project & Products
... How acceptable is this going to be in an academic/research environment where the development of the model is in itself part of the research endeavor?
There are some general M&S steps that can still be adhered to.
jgmyers wrote: I guess I have seen too many researchers and physiologists who, when faced with the overhead of project management, just toss it out the window altogether. Not a universal response by far, but one seen often enough that we should take care to explicitly argue the reasoning behind this statement and how it equates in importance to the more technical and research-oriented rules we have put forth.
Yes, many people prefer to "do their own thing" or "shoot from the hip," but credible practice includes planning, following, & adapting/adjusting as we go.

Analogy: Great trips of exploration must have some framework (e.g., ships, navigation tools) for the journey with provisions (food, at least) to last until the next re-supply opportunity. M&S (even novel M&S) are no different (something defined to model, an initial direction as to how, and resources to at least start).

Marlei Walton
Posts: 2
Joined: Sun May 12, 2013 10:08 pm

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Marlei Walton » Thu Sep 26, 2013 2:33 pm

I, too, largely agree with what has been presented above.

It is very difficult to stay on the task of developing ten “simple” rules, since so much of credible M&S practice is a matter of “the devil in the details”. If we don’t have to keep it too simple, I’d like to advocate for assuming nothing, with the thought that including additional guidance will be more beneficial to everyone who might be attempting to apply these rules. That would translate into integrating Martin’s list of givens into the top 10 list, as well as combining information from both Lealem’s and Martin’s posts above. (I concur with Lealem’s list of lower priorities not to be included here, and with Martin’s “below the 10 line” list.)

I would actually put Martin’s #2 (Plan & Manage the M&S Project & Products) in the #1 slot, perhaps reworded as “Establish the planning and management of M&S project and products”. I don’t think the planning and management necessarily has to be performed by the team doing the actual M&S development (this may be useful to add as a clarifying subpoint, and it addresses some of Jerry’s concerns). I do think that an established framework is imperative for the success of M&S projects. I would specify “communication with stakeholders (i.e., end users, customers, owners)” as an important subpoint as well.

That would move “Define and document the requisite characteristics of the real world system” to a close second, with the understanding that if #1 is accomplished, these RWS requisite characteristics for the M&S will have stakeholder input and not solely be the M&S team’s responsibility.

I would like to add one more edit to the “state limitations” sentence and remove the word “significant”: “Always clearly and concisely document all abstractions, assumptions, and known model limitations”. I think it’s fine to include only the more relevant or “significant” model abstractions, assumptions, and limitations in a standardized simulation report; however, there should be a master document where all of them are described.

I don’t have strong feelings for prioritizing the remaining eight rules, although it did seem logical to list them in order of typical M&S development chronology.

Donna Lochner
Posts: 2
Joined: Tue May 28, 2013 10:08 am

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Donna Lochner » Fri Sep 27, 2013 5:10 am

Very nice discussion already, so I hope I am not too late to the game. My choices are below; I guess I only came up with nine rules! I tried to group some items that seemed the same or similar.

The items under 'Disseminate whenever possible' are relevant to credible models as they relate to reproducibility; these could probably be stated differently to better capture this.

• Define the context the model is intended to be used for
• Explicitly list your limitations
• Use version control
• Use appropriate data (input, validation, verification)
  o Use traceable data that can be traced back to the origin
• Attempt validation within context
• Attempt verification within context
• Attempt uncertainty (error) estimation
  o Perform appropriate level of sensitivity analysis within context of use
• Disseminate whenever possible (source code, test suite, data, etc)
  o Use competition of multiple implementations to check and balance each other
  o Get it reviewed by independent users/developers/members
  o Make sure your results are reproducible
  o Report appropriately
• Use consistent terminology or define your terminology

Tina Morrison
Posts: 6
Joined: Mon May 07, 2007 4:35 pm

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Tina Morrison » Fri Sep 27, 2013 6:06 pm

The following, in my opinion, are geared towards model creation with an end use (specific use) in mind. I tried to list them to align with model formulation:
• Define the context of use
--- How are the results from the M&S going to be used?
• Based on how the results from the M&S are going to be used, define your evaluation metrics in advance
• Use appropriate data (input, validation, verification)
• Use data that can be traced back to the origin (e.g., citations, access)
• Use credible solvers
• For the following, try to utilize discipline-specific guidelines
--- Attempt verification within context of use
--- Identify sources of uncertainty (either in parameters or error)
--- Attempt uncertainty (error) estimation
--- Perform appropriate level of sensitivity analysis within context of use
--- Attempt validation within context
• Document assumptions, decisions, limitations, outcomes, and confidence in the results from the M&S
--- In the documentation, define the domain of validity (or invalidity)
--- Use consistent terminology or define your terminology
• Get it reviewed by independent users/developers/members

The following seem geared toward model development with sharing in mind (multiple-use model):
• Disseminate whenever possible (source code, test suite, data, etc)
• Use version control
• Learn from discipline-independent examples
• Be a discipline-independent/specific example
• Conform to discipline-specific standards
• Document your code
• Develop with the end user in mind
• Provide user instructions whenever possible and applicable
• Make sure your results are reproducible (see the sketch after this list)
• Provide examples of use
• Use competition of multiple implementations to check and balance each other
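
To make the "use version control" and "make sure your results are reproducible" items a little more concrete, here is a minimal sketch of recording a run's provenance alongside its results. The git command, the file name, and the toy simulation are illustrative assumptions, not a required workflow.

Code:

    # Hypothetical sketch: fix the random seed and save it, the model version,
    # and the inputs next to the results so the run can be reproduced exactly.
    import json
    import random
    import subprocess

    def get_model_version():
        """Return the current git commit hash if available, else 'unknown'."""
        try:
            return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
        except Exception:
            return "unknown"

    def run_simulation(inputs, seed):
        rng = random.Random(seed)  # seeded so reruns produce identical results
        return [inputs["baseline"] + rng.gauss(0.0, 0.1) for _ in range(5)]

    inputs = {"baseline": 2.0}
    seed = 12345
    record = {
        "model_version": get_model_version(),  # which code produced the result
        "inputs": inputs,                      # what data went in
        "seed": seed,                          # how to reproduce the run exactly
        "results": run_simulation(inputs, seed),
    }
    with open("run_record.json", "w") as f:
        json.dump(record, f, indent=2)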

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Ten Simple Rules – Practice Standard and Guidelines team

Post by Lealem Mulugeta » Tue Oct 01, 2013 6:57 am

Hey everyone,

After some thought, without reviewing other people's comments, I've attempted to prioritize the initial list of simple rules based on my perspectives. This is what I came up with:
  1. Develop with the end user in mind (appropriate planning and defining context of use)
  2. Use appropriate data (input, validation, verification)
    o Define your evaluation metrics in advance
    o Use traceable data that can be traced back to the origin
  3. Attempt/perform verification within context
    o Attempt uncertainty (error) estimation
    o Conform to discipline-specific standards
    o Follow discipline-specific guidelines
  4. Attempt/perform validation within context
    o Attempt uncertainty (error) estimation
    o Conform to discipline-specific standards
    o Follow discipline-specific guidelines
  5. Perform appropriate level of sensitivity analysis within context of use
  6. Use version control
  7. Report appropriately
    o Define the context the model is intended to be used for
    o Explicitly list your limitations
    o Provide user instructions whenever possible and applicable
    o Make sure your results are reproducible
    o Provide examples of use
    o Conform to discipline-specific standards
    o Follow discipline-specific guidelines
  8. Document your code (see the sketch after this list)
    o Make sure your results are reproducible
  9. Get it reviewed by independent users/developers/members
    o Disseminate whenever possible (source code, test suite, data, etc)
  10. Use consistent terminology or define your terminology
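
As a small, hypothetical illustration of "document your code" (and of reporting context of use and limitations where a later user will actually see them), a single documented function might look like the sketch below; the physiology and the numbers are placeholders only, not a validated model.

Code:

    # Hypothetical sketch: the docstring states the intended context of use,
    # key assumptions, and known limitations right next to the code itself.
    def cardiac_output_l_per_min(heart_rate_bpm, stroke_volume_ml):
        """Estimate cardiac output in L/min.

        Context of use: resting adult physiology in a teaching simulation (illustrative).
        Assumptions: stroke volume is constant over the interval of interest.
        Limitations: not validated for exercise, pediatric, or pathological cases.
        """
        return heart_rate_bpm * stroke_volume_ml / 1000.0

    print(cardiac_output_l_per_min(70, 70))  # about 4.9 L/min for typical resting values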

After I provided my prioritized list, I reviewed everyone's comments and it seems that the following four themes/broad rules are prominent across the board:
  1. Defining the purpose for which the M&S is intended, and then planning and executing the development and application of the M&S accordingly.
  2. Stating limitations as an independent rule
  3. Clear documentation in general (code, model, limitations, etc)
  4. Testing of the M&S (V&V, UQ and sensitivity analysis)
In addition, I found Tina's perspective about dividing the rules into two categories quite interesting: context-based (application-specific) models and multiple-use/framework models. In fact, Grace Peng had very similar comments about this, and I've heard others in the other teams mention something along similar lines. Maybe there is something there... With that said, I interpret this as falling under the first broad perspective of "Defining the purpose for which the M&S is intended, and then planning and executing its development/application accordingly."

I'm going to dig deeper into your comments in an attempt to arrive at an overall summary, with a possible general prioritization. This will then be integrated with the results of the other two teams to condense the results further into some kind of survey to solicit input from the greater community. Given the government shutdown, we will unfortunately not be able to present the results and get input from the greater community at the IMAG/MSM meeting that was scheduled to take place this week. However, it is a blessing in disguise in that we will have some more time to summarize the results of the three task teams and design a survey that will help us gain insight into what we should consider in developing the "Guidelines for Credible Practice of Modeling and Simulation in Healthcare".

Very interesting discussions indeed. Thank you for all of your contributions.

Lealem
