jbarhak wrote: Credibility points:
- Reproducibility
- Publicly available Test Suite
- Documentation with examples
- Good service indicated by Responsiveness of developers
- Improvement with versions
- Open error reporting
- The system is blind tested
- The system is competitive compared to other systems
- Traceability of data to its source
- Open source
Qualification of my responses: my work with NASA’s Standard for Models & Simulations, which includes a credibility assessment, focuses my thoughts on the “credibility of M&S results.” This effort is on the “credible practice of M&S.” There is a difference, but I may confuse them on occasion.
Jacob – first, regarding your credibility points:
Reproducibility – To what level is reproducibility required? I can re-run my M&S-based analysis, or I can have a separate, isolated, and independent team replicate the model and analysis on a different hardware and software platform. The latter is safest, but more costly & time-consuming.
Good service indicated by Responsiveness – this is not required for M&S results (or the practice of M&S) to be credible, even though it is a preferred trait. You can receive a “good answer” from a cantankerous personality (or a hard-to-use system), and it can arrive late (or slowly), while still having high credibility.
Competitive compared to other systems AND
Open Source – again, this is not required for the practice of M&S to be credible, even though it is a preferred trait. A single proprietary practice of modeling & simulation can be credible.
As you may know, NASA developed a Standard for Models & Simulations that includes a defined assessment of credibility, as well as requirements for reporting M&S-based results. We define credibility as “the quality to elicit belief or trust in M&S results.” As such, we also acknowledge that credibility is not something that can be directly determined. However, it is possible to assess key factors that contribute to a person’s own assessment of credibility.
The core (minimal) set of credibility factors for M&S was determined to be:
• Verification
• Validation
• Input Pedigree
• Results Uncertainty
• Results Robustness
• Use History
• M&S Management
• People Qualifications
• Technical Review
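For concreteness, here is a minimal sketch (in Python) of how one might record an assessed level for each of these factors. It is purely illustrative: the 0-4 levels echo the spirit of the NASA credibility assessment scale, but the structure, the roll-up to the weakest factor, and the example values are assumptions for illustration, not part of the standard.

```python
# Illustrative sketch only (not part of the standard): record an assessed
# level for each credibility factor and flag the weakest one.

FACTORS = [
    "Verification",
    "Validation",
    "Input Pedigree",
    "Results Uncertainty",
    "Results Robustness",
    "Use History",
    "M&S Management",
    "People Qualifications",
    "Technical Review",
]

def summarize(assessment):
    """Print each factor's assessed level (0-4) and the lowest level found."""
    for factor in FACTORS:
        level = assessment[factor]  # raises KeyError if a factor was not assessed
        if not 0 <= level <= 4:
            raise ValueError(f"{factor}: level {level} is outside the 0-4 scale")
        print(f"{factor:>22}: {level}")
    # A conservative roll-up (an assumption here, not a rule from the standard):
    # credibility is no stronger than its least-supported factor.
    print("Lowest factor level:", min(assessment[f] for f in FACTORS))

# Example with made-up levels, for illustration only:
summarize({
    "Verification": 3, "Validation": 2, "Input Pedigree": 3,
    "Results Uncertainty": 2, "Results Robustness": 1, "Use History": 2,
    "M&S Management": 3, "People Qualifications": 3, "Technical Review": 2,
})
```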
The requirements for reporting M&S-based results specify that reports must include:
a. Any un-achieved acceptance criteria.
b. Violation of any assumptions of any model.
c. Violation of the limits of operation.
d. Execution warnings and error messages.
e. Unfavorable outcomes from the intended use and setup/execution assessments.
f. Waivers to any of the requirements in this standard.
g. An estimate of uncertainty in the M&S results.
h. An assessment of M&S results credibility.
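As a small illustration of how one might check a draft report against items (a) through (h), here is a sketch. The item keys and the report structure are hypothetical; the standard itself, not this code, defines what must be reported.

```python
# Illustrative only: a tiny completeness check against reporting items (a)-(h).
# The keys below are hypothetical labels for those items.

REQUIRED_ITEMS = {
    "unachieved_acceptance_criteria",
    "violated_model_assumptions",
    "violated_limits_of_operation",
    "execution_warnings_and_errors",
    "unfavorable_assessment_outcomes",
    "waivers",
    "results_uncertainty_estimate",
    "credibility_assessment",
}

def missing_report_items(report):
    """Return the required items that the report does not address."""
    return REQUIRED_ITEMS - set(report)

# Example: a draft report that omits the uncertainty estimate and the waivers.
draft = {k: "..." for k in REQUIRED_ITEMS - {"results_uncertainty_estimate", "waivers"}}
print(sorted(missing_report_items(draft)))
```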