Ten Simple Rules of Credible Practice/Practice Standards and Guidelines Team
---lealem 2013-10-01 19:30:00
Ranking Method
The Practice Standards and Guidelines Team used a combination of general discussion and a ranking scheme to identify the simple rules its members felt were important. Since not all of the members explicitly listed ten rules, the feedback from the members was first summarized into ten general messages, which were then boiled down to ten simple rules.
The ten rules/messages were ranked based on:
- The number of people who explicitly stated that the given rule or M&S practice was important. Higher weight was given to rules or practices that were ranked highly by most people and/or that were pervasive in the feedback from all contributors.
- If there was a tie, or close to a tie, in how many people highlighted a particular rule or practice, the rank given to that rule/practice was taken into consideration (for those that were ranked).
- Rules or practices that were viewed as important only indirectly, through general consensus, received lower weight.
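The weighting scheme above can be sketched in code. This is a minimal, illustrative sketch only: the candidate rules, counts, ranks, and the 0.5 weight for indirect consensus are hypothetical assumptions, not the team's actual data or weights.

```python
# Hypothetical sketch of the team's ranking scheme. All numbers below are
# made-up examples; the team did not publish exact counts or weights.

# For each candidate rule: how many members explicitly listed it, the ranks
# those members gave it (lower = more important), and how many supported it
# only indirectly through general consensus.
candidates = {
    "Test the M&S":        {"explicit": 4, "ranks": [1, 2, 2, 3], "indirect": 2},
    "Use version control":  {"explicit": 3, "ranks": [7, 8, 9],    "indirect": 2},
    "Disseminate the M&S":  {"explicit": 3, "ranks": [9, 10, 10],  "indirect": 2},
}

INDIRECT_WEIGHT = 0.5  # assumption: indirect consensus counts for less

def sort_key(entry):
    # Primary criterion: total (weighted) support, explicit mentions first.
    support = entry["explicit"] + INDIRECT_WEIGHT * entry["indirect"]
    # Tie-breaker: better (lower) average rank among those who ranked it.
    avg_rank = sum(entry["ranks"]) / len(entry["ranks"])
    return (-support, avg_rank)

ranked = sorted(candidates, key=lambda name: sort_key(candidates[name]))
print(ranked)
```

With these example numbers, "Use version control" and "Disseminate the M&S" tie on support, so the tie is broken by average rank, mirroring the second criterion above.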
Discussion Summary from the Practice Standards and Guidelines Team
Overall, it seems that all contributors place a high priority on defining the purpose for which the M&S is intended, and then planning, developing and applying it accordingly. (Martin, Donna, Tina, Grace, Lealem, Jerry, Marlei)
For now, based on the general definitions/explanations offered in the original forum discussion, this seems to fit best under "Develop with the end user in mind". Would it be best to reword this rule as "Plan and develop the M&S with the intended purpose/context, as well as the end user, in mind"?
In general, there seems to be a high priority placed on the quality/appropriateness of the data used (Martin, Tina, Donna and Lealem; Jerry, Marlei through general consensus with Martin and Lealem)
Although the priority placed on the different elements of testing the M&S to show reliability (V&V, UQ, sensitivity analysis (SA), test cases) varies, it is clear that testing is a key element. So, to summarize the team’s values, V&V, UQ and SA were lumped under a general simple rule called "Test the M&S". (Martin, Donna, Tina and Lealem; Jerry, Marlei through general consensus with Martin and Lealem)
Documentation at all levels (code, limitations, abstractions, user guide, etc.) seems to be echoed quite heavily. So perhaps "Document important elements of the M&S (domain of validity/invalidity, intended use, user guide, code documentation, etc.)" should be a simple rule of its own, with sub-bullets to specify what should be documented? (Martin, Jerry, Marlei, Tina and Lealem)
Based on discussions and general feedback from the contributors, one of the simple rules that may need to stand out on its own is "Explicitly list your limitations" (Jerry, Marlei, Lealem, Donna and Tina; Martin agrees via additional discussion)
Three out of the six contributing team members explicitly point out that having the M&S independently reviewed is important. However, among the individuals who did list and rank it as a simple rule of credible practice, it seems to be ranked closer to the bottom of the ten. (Martin, Tina, Lealem; Marlei and Jerry through general consensus)
Version control was explicitly listed by three people. (Donna, Tina and Lealem; Jerry and Marlei seem to agree based on general consensus)
In reading all of the feedback, it seems that adhering to appropriate discipline-specific guidelines and standards needs to be incorporated (Martin, Tina, Lealem; Marlei and Jerry through general consensus)
Using consistent terminology, or defining terminologies, seems to be valued, but it generally ranks pretty low on the list (Donna, Tina, and Lealem; Marlei and Jerry through general consensus)
Dissemination of the M&S was expressed by some to be valuable (Martin, Tina, and Lealem; Marlei and Jerry through general consensus)
Top Ten Simple Rules
- Plan and develop the M&S with the intended purpose/context, as well as the end user, in mind
- Use appropriate data (input, validation, verification)
- Test the M&S appropriately within context (V&V, UQ, sensitivity analysis (SA), test cases)
- Document important elements of the M&S (domain of validity/invalidity, intended use, user guide, code documentation, etc.)
- Explicitly list your limitations
- Have the M&S reviewed by independent users/developers/members
- Use version control
- Use appropriate discipline-specific guidelines and standards
- Use consistent terminology or define terminologies
- Disseminate the M&S
In addition to the above outcomes, two team members expressed that the rules may need to be divided into two categories: “context-based (application-specific) models” and “multiple-use or framework models”. If similar comments have come up in the other two teams, we may need to explore this further. One possible way of dealing with this is to address it under the first rule, “Plan and develop the M&S with the intended purpose/context, as well as the end user, in mind”.
For more information, please see the forum discussions here: http://wiki.simtk.org/cpms/Ten_Simple_Rules_of_Credible_Practice/Practice_Standards_and_Guidelines_Team