Review of summary document providing Committee overview

The Committee on Credible Practice of
Modeling & Simulation in Healthcare aims to establish a task-oriented collaborative platform to outline good practice of simulation-based medicine.
Ahmet Erdemir
Posts: 77
Joined: Sun Sep 10, 2006 1:35 pm

Re: Review of summary document providing Committee overview

Post by Ahmet Erdemir » Tue Apr 30, 2013 7:13 am

mjsteele wrote:All:
One of the things I’m finding interesting is the terminology this community is using. In several of my recent efforts, I’ve found the compilation of a domain glossary/lexicon very important; if it’s not done early, we discover later that we should have done it sooner. Perhaps we can designate an IMAG/MSM glossary/lexicon location within this site.
Hi Martin,

We anticipate that one of the first tasks of the Committee will be compiling a glossary. We will likely have a forum topic for "definitions", with a couple of Committee members leading the effort to synthesize those discussions and start a glossary document in the source code repository.

Cheers,
ahm.

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Review of summary document providing Committee overview

Post by Jacob Barhak » Wed May 01, 2013 3:03 am

One more important thought regarding the committee overview document.

It seems that training in healthcare is not specifically mentioned.

I am now at MODSIM World, and I am encountering many simulations for training purposes. There are many examples, including games and non-computational simulation using manikins and actors.

Do you wish to specifically include those in the committee scope?

Also, do biological disease models fall in the scope?

It would be interesting to gain insight from non-computational people on this.

Jacob

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Review of summary document providing Committee overview

Post by Lealem Mulugeta » Wed May 01, 2013 3:44 am

Jacob,

I will elaborate more when I have sufficient time, but the way I view it, our goal is to focus on models and simulations used in research and clinical practice for therapeutics development and implementation. More specifically, the models and simulations of interest should be predictive, with the intent of driving toward a decision in research or clinical intervention. So I don't think training tools fall under this category. There are other reasons why I believe this, but I will get to them later when I have more time to respond in detail.

Biological disease models, on the other hand, fall within our scope, and our intent was to include them, since IMAG and MSM are heavily focused on understanding the biological mechanisms of various disease pathways. In order to conduct in-silico investigations of potential therapeutics, it is generally important to have a virtual representation of the condition you are trying to treat (e.g., metastasis growth). Once you have these models, you can perturb them via a mathematical representation of the treatments of interest (e.g., pharmaceutical) to gain insight into the outcome of the treatment, or you can play what-if games to identify a series of treatments that may work for the given disease.

With that said, I can be convinced otherwise regarding my first point if compelling arguments are made for why the Committee needs to tackle this. Also, training models are more established than the types of models the MSM community is focused on, so we may want to check whether or not there are any organizations that are already focused on standardizing the use of training models...

Lealem

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Review of summary document providing Committee overview

Post by Martin Steele » Wed May 01, 2013 4:31 am

lealem wrote:Jacob,

in-silico investigations for potential therapeutics

Lealem
"in-silico"? This is the 1st time I'm exposed to this word - makes me think we're growing something in a petri dish with a silicon base (ever hear of a Horta?). To be technically accurate, I'd want to avoid this term, which means it will probably ... (insert biological-humor sub-routine) ... go viral.
lealem wrote:Jacob,

check whether or not there are any organizations that are already focused on standardizing the use of training models...

Lealem
The military has invested extensively in simulation training, with one specialty being simulation training for battlefield injuries. They have a large area for exhibitors at the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) every year (1st week of December in Orlando, FL). http://www.iitsec.org/Pages/default.aspx

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Review of summary document providing Committee overview

Post by Lealem Mulugeta » Wed May 01, 2013 5:15 am

mjsteele wrote: "in-silico"? This is the 1st time I'm exposed to this word - makes me think we're growing something in a petri dish with a silicon base (ever hear of a Horta?) To be technically accurate, I'd want to avoid this term, which means it will probably ... (insert biological-humor sub-routine) ... go viral.
In-silico (or in silico) is a standard term in M&S of biological systems, so it is technically correct. Here's a Wikipedia article to give you a quick overview: http://en.wikipedia.org/wiki/In_silico

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Review of summary document providing Committee overview

Post by Jacob Barhak » Thu May 09, 2013 11:46 pm

Hi Lealem, and all,

Your idea of keeping distance from some topics such as education and training makes sense. Yet there is a grey area there that may need attention.

If a predictive computational model is used to power an educational model, for example to predict blood flow and calculate blood pressure in a manikin, is it of interest to the committee?

Note that such a manikin would be examined by doctors, who can assess its credibility.

In a situation where doctors disagree amongst themselves or with the predictive model, what is considered credible: the model or the doctor?

I heard this discussion at MODSIM, and it seemed relevant to our work. This brings up another issue: any credibility guidelines we suggest should be accepted by the end-user community for these models. How do we make adoption easier for this community? How do we deal with pushback from a community that will not change its ways easily? Do we have enough representatives from the target communities to help with this?

Our committee is about practice as well as credibility. How do we make sure something becomes practical if it is theoretically credible?

What is the human's role, and what is the role of scientific backing? What if there is a conflict, as in the example I suggested, where different specialists make different claims?

I hope this is not too much of a detour from the original topic of the committee presentation, yet it is very much related.

As requested, I will try to sum up the overview presentation topics discussed in the next post to help focus.

Jacob

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Short Summary of the discussion so far

Post by Jacob Barhak » Fri May 10, 2013 12:54 am

As the chairs requested, I am attempting to briefly summarize the discussion so far. Details can be seen in the previous posts.

Tony, Martin, and I posted replies to the committee presentation.

Martin mentioned NASA definitions that may be helpful in defining the scope. He also raised several standards documents from NASA and even ASME.

Tony was interested in "Need: Clinical urgency". Ahmet gave examples of:
1) individualized medicine
2) expedited delivery of healthcare products

After some discussion, it seems that a topic requiring further attention is innovation and how computing technology replaces human decision-making. Ahmet phrased it as:
"Identify and promote innovative game changing technologies establishing model credibility".

Ahmet and Jacob reached an understanding that proposing guidelines and procedures for credible practice is a step towards endorsing models that directly tie claims to results.

There was also agreement about promoting a culture of self-criticism and admitting error.

"Promoting good practice" is a key phrase that Ahmet pointed out.

The committee members wish to learn from multiple disciplines. Tony has provided a reference to a relevant paper with examples. A discussion has started on which types of models to include in the scope of the committee.

There was also an in-depth discussion regarding credibility and reproducibility that spun off from the main discussion.

I hope this is a good enough summary and that I am representing the authors correctly.

Ahmet Erdemir
Posts: 77
Joined: Sun Sep 10, 2006 1:35 pm

Re: Review of summary document providing Committee overview

Post by Ahmet Erdemir » Fri May 10, 2013 2:47 am

jbarhak wrote:
Your idea of keeping distance from some topics such as education and training makes sense. Yet there is a grey area there that may need attention.

If a predictive computational model is used to power an educational model, for example to predict blood flow and calculate blood pressure in a manikin, is it of interest to the committee?
Hi Jacob,

I need to clarify my position. Computational models for the purpose of training and education are certainly of interest (at least to me), and we should be inclusive. These can range from models for virtual surgery (coupled to physical systems and hardware) to a demo model that, while not necessarily appropriate to support decision making, can be used to get students accustomed to modeling practice.

What is not of interest to me (at this moment and within the context of the Committee) are physical models, i.e., mock-ups, manikins, or cadaver representations of lifelike situations. I value these areas as separate disciplines that can be coupled to computational simulations. For example, in cadaver simulations of walking to explore healthy and dysfunctional foot mechanics, we used simplified computational models of muscle load sharing to identify the tendon forces that loaded the foot. I would consider that muscle model of interest to the Committee, but not the whole cadaver experimentation setup.

Hope this is clear.
ahm.

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Review of summary document providing Committee overview

Post by Jacob Barhak » Fri May 10, 2013 10:47 pm

Thanks Ahmet,

Your approach seems to fit the ideas Lealem put in writing. And I understand the difference. I am just trying to figure out our scope.

It seems that we are focusing on computational models alone, and if they assist other modeling, we stop at the computational part.

What about simulation models that simulate queues, or visual models for training medical teams to react to situations such as mass casualties from a disaster, if those are based on predictive models of physiology or biology? They are somewhat computational and predictive, yet they are not the mainstream we have been discussing so far and are far from multiscale modeling.

Also, non-computational biological models have not yet been ruled out, if I understood correctly. What about those?

Should we define these as a gray area that we do not mean to address directly, yet whose modelers can be influenced by our work? In other words, should we stick with the scope definition we have and gauge our intended influence on a model according to its distance from our definition? This would distance all education models, yet bring those with predictive computational components closer, and perhaps within scope.

Again, I am trying to explore our boundaries by examples.

I hope you find my questioning useful.

Jacob

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Review of summary document providing Committee overview

Post by Martin Steele » Mon May 13, 2013 8:36 am

Regarding Jacob's & Ahmet's Discourse:

Any model, physical (person or manikin) or computational, will be assessed for credibility, either explicitly or implicitly. The criteria for credibility assessment may be somewhat different between physical & computational models.

To be credible is to be believed. Being credible to one person and not another is a normal condition. Assessing the credibility of the assessor of credibility is a long and winding road. What is needed is a consistent and objective manner for assessing credibility of: the practice of modeling and simulation, a model and/or simulation, and/or an M&S-based analysis.

As for Ahmet's remarks on 'cadaver experimentation setup' - if such a setup was used to gather data for constructing and validating the (muscle) model, then that is important to credible M&S practice. If that setup is limited in some way, the use of the model should also be limited, i.e., constrained to the limits for which it’s validated.

The practice of modeling & simulation is broad in scope, from the start of system analysis and model building to the use of models & simulations for their intended purpose (e.g., analysis, training, etc.). The building of physical models is quite different from the building of computational models, as is the verification & validation process for these different types of models.

Perhaps a Scope Topic would be useful for this discussion, which, once finalized, could serve as a point of reference.
