
Link between credibility and statements of model uses

Posted: Mon Jun 10, 2013 1:35 pm
by huntatucsf
We can all envision a particular model being judged credible when put to particular uses in one biomedical research, development, or clinical context, but not when put to different uses in another context.

To stimulate discussion, I suggest that a prerequisite for discussion of a model's credibility is having clear practical (application-focused) and technical text stating commitments covering the uses envisioned by the model's engineers. I suggest that a prerequisite for actually building credibility for that model is evidence in support of those commitments.

I see model credibility as involving a chain of connected quality issues: the quality (and credibility) of the M&S results depends on the quality of the executable, which depends on the quality of the model, which in turn depends on the quality of the use cases.

From a technical perspective, all models of interest to the Committee will be characterizable as engineered, executable software (SW) devices. Thus, I see model credibility as having at least two somewhat independent, separately addressable components: the device and the model, which can be independent of the device that implements it.

I suggest that a prerequisite to building model credibility is having documentation supporting the credibility of the executable. Software device engineering begins with identification of the device's use cases. In the larger healthcare M&S context, the quality of SW device use case statements will depend on the identification and description of the variety of model use cases, immediate and envisioned.

Note: my expectation is that the Committee's credibility discussions will necessarily focus on the reuse (and improvement) of computational models and their components, both within a particular research group and by others.

If the Committee agrees with the above position, then clear statements of best practices can be developed.

Re: Link between credibility and statements of model uses

Posted: Mon Jun 10, 2013 2:15 pm
by jbarhak
Hi Tony,

You have my support for this.

And may I add that the software and documentation should include examples of use, shipped with the implementation, to demonstrate capabilities and contribute to credibility?

In other words, I suggest promoting credibility through examples of immediate applications. Envisioned applications should be documented as you suggested, while immediate applications should be supplied as working examples.
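
As a minimal sketch of what such a shipped example of an immediate application might look like (the one-compartment dose model inside it is a hypothetical stand-in for whatever model a group actually ships, and all names and parameter values are illustrative, not taken from any real package):

    # example_use_immediate.py
    # Hypothetical "shipped example of use": a runnable script demonstrating one
    # documented immediate application. The one-compartment oral-dose model below
    # is a stand-in; a real example would import the group's actual model package.
    import math

    def plasma_concentration(dose_mg, ka, ke, vd_l, t_h):
        """Stand-in one-compartment model: plasma concentration (mg/L) at time t_h."""
        return (dose_mg * ka) / (vd_l * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

    def main():
        # Immediate application stated in the documentation:
        # predict plasma concentration over 24 h for a 100 mg oral dose.
        times = [0.5 * i for i in range(49)]  # 0 to 24 h in 0.5 h steps
        conc = [plasma_concentration(100, ka=1.0, ke=0.2, vd_l=40.0, t_h=t) for t in times]
        peak = max(conc)
        print(f"peak concentration: {peak:.3f} mg/L at t = {times[conc.index(peak)]} h")

    if __name__ == "__main__":
        main()

A reader can run such a script, compare its printed output against the reference output published with the documentation, and thereby see one documented use demonstrated end to end.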

I hope you accept this extension to your idea.

Re: Link between credibility and statements of model uses

Posted: Tue Jun 11, 2013 7:48 am
by mjsteele
huntatucsf wrote: discussion of a model's credibility is having clear practical (application-focused) and technical text stating commitments covering the uses envisioned by the model's engineers. ... quality of the use cases.
This discussion is what the more general M&S community refers to as defining & documenting the Intended Use of an M&S and then, when used, performing and documenting the Use Assessment of the M&S to ensure that it is used as intended and within the domain of validation of the M&S.
huntatucsf wrote:
model credibility as having at least two somewhat independent, separately addressable components: the device and the model, which can be independent of the device that implements it.
What do you mean by "device"?
huntatucsf wrote:
... documentation supporting credibility ...
My personal physician once said to me, about the results of medical tests: "If it's not documented, it isn't done."
Many of the requirements of the NASA Standard for Models & Simulations are for documentation.

Re: Link between credibility and statements of model uses

Posted: Sun Oct 13, 2013 11:36 am
by johnmrice3
In the years of M&S (good and bad) in defense, and presumably NASA, there has been GREAT difficulty communicating the concept of the purpose (Spirit and Intent) of the need to determine the credibility of a model. It is unfortunate that we did not start out M&S with a new word, credibilityforeachuse. Credibility for each use was always the intention. It is what makes credibility a separate construct from validity or verification. Making it more difficult, the intent was that the burden for determining Credibility for Use was on the USER, not the developer. It is conceivable, and likely, that many good questions have been answered constructively with models that were not valid at all. But for the purpose for which such a model was used, it DID provide useful answers, or insights into answers, to the QUESTION being asked of it.

However, in the way the credibility characteristic of a model is used, and I think implied in Tony's start to this discussion, one should only ever make a statement to the effect that: in working on this kind of problem, the XYZ model has a good record of having been found credible in the past. As a user I must ask whether or not I am going to ask the model the same question someone else has already asked. (If it is truly the same question, why ask it again, since you will get the same answer?) But if you are asking another question, or even feeding it different data, you are going to have to check and then accredit the model for what you are doing with it or asking of it. This is not trivial, but it is not always hard or very time consuming. The model likely has assumptions about the data it will be given. Are the data I have consistent with those assumptions? Assumptions are most often at the heart of the decision to personally ACCREDIT a model for my use. If the model's assumptions and mine do not match, I must determine how the mismatch will affect the answer and its value to me.
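
As a minimal sketch of that assumption check (all field names and ranges here are hypothetical, purely for illustration): the developer ships the model's documented input assumptions in machine-checkable form, and a prospective user runs their own data against them before deciding whether to accredit the model for that use.

    # check_assumptions.py
    # Hypothetical sketch: the developer ships ASSUMPTIONS with the model;
    # the user runs check_data() on their own dataset to find mismatches
    # before deciding whether to accredit the model for their use.

    ASSUMPTIONS = {
        "age_years":  {"min": 18,  "max": 80,  "note": "calibrated on adults only"},
        "weight_kg":  {"min": 40,  "max": 150, "note": "outside this range extrapolates"},
        "sampling_h": {"min": 0.0, "max": 24,  "note": "validated over a 24 h window"},
    }

    def check_data(records):
        """Return a list of mismatches between the user's data and the model's assumptions."""
        mismatches = []
        for i, record in enumerate(records):
            for field, rule in ASSUMPTIONS.items():
                value = record.get(field)
                if value is None:
                    mismatches.append(f"record {i}: missing '{field}'")
                elif not (rule["min"] <= value <= rule["max"]):
                    mismatches.append(
                        f"record {i}: {field}={value} outside "
                        f"[{rule['min']}, {rule['max']}] ({rule['note']})"
                    )
        return mismatches

    if __name__ == "__main__":
        my_data = [{"age_years": 34, "weight_kg": 72, "sampling_h": 12},
                   {"age_years": 16, "weight_kg": 55, "sampling_h": 6}]
        for problem in check_data(my_data):
            print("assumption mismatch:", problem)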

So what about the model's keepers? They can NOT accredit a model for my use, but it may not be possible for me to accredit it without their cooperation. That cooperation may come in my real time, or in their time, during which, by being well-behaved M&S professionals, they kept notes every time they thought "well, assuming ..... then ......" and then documented how each assumption was formulated and how it was handled in the model. Other mathematicians and SMEs can validate and verify without the developer. BUT no one can accredit without knowing the assumptions the developer made and how they affect the model's output.