Survey Design

The Committee on Credible Practice of Modeling & Simulation in Healthcare aims to establish a task-oriented collaborative platform to outline good practice of simulation-based medicine.
Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Survey Design

Post by Lealem Mulugeta » Sat Nov 09, 2013 2:40 pm

Hi Pras and Tina,

Thanks for your thoughtful feedback. You both bring up excellent points. This is exactly the kind of input we need to design a survey that will give the Committee the data it needs to accomplish its goal.

I will respond to each of your points in full when I have a bit more time, and I'm sure Ahmet and others will as well. For now, I'd like to address Tina's second question.
morrisontm wrote: 2. Images: is there a reason the logos were presented in that order?
The logos were ordered by the relevance of each organization's activities and interests to M&S research and application in the healthcare/medical fields. It makes sense to have the NIH logo first based on this criterion and the fact that the Committee is under IMAG/NIH. There could be errors in the order of the other logos, so if you think the order is not quite right based on the stated criterion, please feel free to suggest a refinement.

The other option is to leave the NIH logo first and list the rest in alphabetical order. That is: NIH, DoD, DOE, FDA, IARPA, MITACS, NASA, NSF, USDA, VA.
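For illustration only, a minimal Python sketch of the pinned-first alphabetical ordering described above (the list literal is just the agencies named in this post):

    # Order agency logos: NIH pinned first, the rest alphabetical (case-insensitive).
    agencies = ["NASA", "NIH", "FDA", "DoD", "DOE", "NSF", "IARPA", "MITACS", "USDA", "VA"]
    ordered = ["NIH"] + sorted((a for a in agencies if a != "NIH"), key=str.lower)
    print(ordered)  # ['NIH', 'DoD', 'DOE', 'FDA', 'IARPA', 'MITACS', 'NASA', 'NSF', 'USDA', 'VA']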

Lealem

Tina Morrison
Posts: 6
Joined: Mon May 07, 2007 4:35 pm

Re: Survey Design

Post by Tina Morrison » Sun Nov 10, 2013 10:07 am

Hi Lealem, thanks for the response. I don't think we have to change the order; I just wanted to know how they were ordered. Thanks for the info. I think the process makes sense. --tina

Jeff Bischoff
Posts: 4
Joined: Wed Nov 15, 2006 3:05 pm

Re: Survey Design

Post by Jeff Bischoff » Tue Nov 12, 2013 2:15 pm

Hi all - a couple of comments:
- In motivating this study, a target of 20k respondents was indicated. I think we would be very fortunate to get 5% of that. Is that enough for our purposes?
- In the actual implementation, all questions are relative, but the respondent does not know the full field of comparisons before making a judgment. This makes me think that responses are going to drift to the high end, and we start to lose the ability to learn because of the lack of discrimination. I would propose two changes:
1. A single table that has all entries and responses in full view and editable, until all are submitted at once.
2. More laborious, but no less important: can we enforce a distribution here? Without this, it is too easy to say that everything is important (why not?). Our goal is the tougher question of which are the most important, and I think we should pass that challenge on to each respondent. While prescribing a full distribution might be overkill, it would be appropriate to say that no more than <x> responses can be in any given ranking (where maybe <x> is ranking-specific).
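For illustration, a minimal sketch of the per-rank cap proposed in point 2, assuming the survey's 1-5 ratings; the cap values and function name are hypothetical:

    from collections import Counter

    # ratings: one 1-5 rating per survey item; caps: max items allowed per rank.
    def check_rank_caps(ratings, caps):
        counts = Counter(ratings)
        return {rank: counts[rank] for rank in caps if counts[rank] > caps[rank]}

    # Example: at most 3 items may be rated 5, at most 5 items rated 4.
    violations = check_rank_caps([5, 5, 5, 5, 4, 3], caps={5: 3, 4: 5})
    print(violations)  # {5: 4} -> four items rated 5, but only three allowed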

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Survey Design

Post by Martin Steele » Wed Nov 13, 2013 12:27 pm

I just took the Beta-Survey. My comments are:

I think the survey works well and does not take too much time.

The definition of ‘credibility’ and/or ‘credible practice of M&S’ should be provided in the preamble to this survey.

A description of the ‘key concept’ or ‘term’ contained in each survey item may be helpful to the respondents – to the left of each item.

A comment box for each question is preferred – to the right of each item – but also leave the ‘end of survey’ comment box (never enough opportunity to comment in surveys!)

Consider a two-stage survey. First, allow respondents to answer the survey without restriction. Then, have them either a) re-evaluate their initial input or b) place only x-number in each 1-5 group. This may make the survey more difficult. The Stage 1 & 2 results could be analyzed separately.
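As a rough sketch of analyzing the two stages separately, assuming each stage yields one 1-5 rating per item (the item names and numbers below are illustrative, not survey data):

    # Compare per-item mean ratings between the unrestricted Stage 1
    # and the constrained Stage 2 (illustrative data only).
    stage1 = {"Use credible solvers": [5, 5, 4], "Report appropriately": [5, 4, 4]}
    stage2 = {"Use credible solvers": [5, 4, 4], "Report appropriately": [3, 3, 4]}

    for item in stage1:
        m1 = sum(stage1[item]) / len(stage1[item])
        m2 = sum(stage2[item]) / len(stage2[item])
        print(f"{item}: stage 1 mean {m1:.2f}, stage 2 mean {m2:.2f}, shift {m2 - m1:+.2f}")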

Change “Use credible solvers” to “Use credible solvers (tools, code, applications, etc.)”

There was nothing in the survey about qualifications of the people developing, using, or analyzing with the M&S. To credibly practice M&S, qualified people are required.

We may want to add an ‘end of survey’ question asking whether the respondent would like to see the results of this survey.

For the last question (How easy/difficult was the survey to understand and complete?), there was nothing preceding it (like on all the pages of the main part of the survey) to orient the response, and “Very Easy” and “Very Hard” are grey and differ from the previous 0 – 5 grading.

David Eckmann
Posts: 2
Joined: Mon Apr 29, 2013 8:32 am

Re: Survey Design

Post by David Eckmann » Thu Nov 14, 2013 4:50 pm

I agree with Martin's comments, which lend clarity for the survey participants. Could we change "4 = Highly Important; 5 = Very Important" to "4 = Very Important; 5 = Extremely Important" throughout? I often hear colleagues use the adverb "highly" to mean "the most", so the ranking of 4 and 5 may be confusing (although numerically the scale should be obvious).

The survey question stating "Practice what you preach" is very ambiguous. Could this be rephrased to indicate our intended meaning more directly?

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Survey Design

Post by Lealem Mulugeta » Thu Nov 14, 2013 7:15 pm

Hello everyone,

Thanks for your detailed feedback on the survey.

Martin,
I believe we had a definition of ‘credible practice of M&S’ prior to the initial release, but the upfront text was getting too long, so we eliminated it. Let's see what we can do about making the definition more succinct.

There is really no way of adding a comment box to the right of each question; it would have to be below each question. We initially looked at adding a comment box for each question, but the survey became more cumbersome to complete and manage, so we decided to use the single comment box as a catchall. Maybe we will add a statement encouraging the survey taker to make any comments they may have about any specific rule.

Regarding people's qualifications - can you perhaps suggest wording we can use as a simple rule to be surveyed?

We plan to publish the results of the survey, so I don't know if it would be necessary to ask the survey taker. Is there any specific reason why we should ask anyway?

The last question was only meant for internal testing, to help us gauge the user-friendliness of the survey. We don't plan to keep it as part of the official survey. But good point about the wording.

David:
Great rewording suggestion on the scaling; we will move forward with it.

You are not the first to point out that "Practice what you preach" is not very clear. Let's see what we can do about clarifying it. If one cannot get a general sense of what the rule is intended to mean, then it is not a simple rule.

Tina and Pras,
I have not forgotten about your feedback. I need a bit more time to process your goldmine of information.

Thanks for the excellent feedback, everyone. This is great!

Lealem

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Survey Design

Post by Martin Steele » Fri Nov 15, 2013 6:17 am

eckmanndavid wrote: The survey question stating "Practice what you preach" is very ambiguous. Could this be rephrased to indicate our intended meaning more directly?
I agree, this item is way too general, and sounds like an admonition. Unless it can be adequately defended in this context, we should consider deleting it from this survey.

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Survey Design

Post by Martin Steele » Fri Nov 15, 2013 6:46 am

lealem wrote: There is really no way of adding a comment box to the right of each question; it would have to be below each question. We initially looked at adding a comment box for each question, but the survey became more cumbersome to complete and manage, so we decided to use the single comment box as a catchall. Maybe we will add a statement encouraging the survey taker to make any comments they may have about any specific rule.
Question: Does the one comment box in the current survey have a size limit (number of characters)?

A comment box beside or below each question is preferred, as it encourages comments/explanations. With the breadth of respondents expected, they will respond from a variety of contexts, and this gives them the opportunity to explain their responses. I realize that could be a lot of information to sift through, but the insight gained could be great. We may need to separate the objective response analysis from the textual, at first.

From the couple of surveys I've initiated, the information obtained from such comment boxes has proved its worth. From the surveys I've taken, there are always some questions for which I feel the 'objective response' is wholly inadequate; only with the ability to qualify my answer was I satisfied.
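As a rough sketch of separating the objective responses from the textual ones, assuming a simple per-item record (the layout is illustrative, not the actual survey export):

    # Split survey records into numeric ratings and free-text comments
    # so the two analyses can proceed separately (illustrative records).
    records = [
        {"item": "Use credible solvers", "rating": 5, "comment": "Depends on the solver."},
        {"item": "Report appropriately", "rating": 3, "comment": ""},
    ]

    ratings = [(r["item"], r["rating"]) for r in records]
    comments = [(r["item"], r["comment"]) for r in records if r["comment"]]

    print(ratings)   # objective responses, ready for statistical analysis
    print(comments)  # textual responses, to be read and coded by hand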

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Survey Design

Post by Martin Steele » Fri Nov 15, 2013 10:01 am

lealem wrote: Regarding people's qualifications - can you perhaps suggest wording we can use as a simple rule to be surveyed?

We plan to publish the results of the survey, so I don't know if it would be necessary to ask the survey taker. Is there any specific reason why we should ask anyway?
This suggestion is for the main part of the survey about the qualifications of the M&S Practitioners, not those of the survey respondents.

1st: The 'statement' at the top of each page is a 'question' and should have a "?" instead of a period.
How important are the following "simple rules" for credible practice of M&S in healthcare?

2nd: The survey item for people qualifications could be a single item or multiple items. Depending on the situation, the M&S Practitioner (e.g., developers, users, & analysts) could be as few as a single person or as many as a team of people for each role.

A Single Item is less simple:
M&S Practitioners are educated, trained, & experienced in both the modeling & analysis methodology and the scientific domain that is being modeled and analyzed.

Multiple Items are simpler, but make the survey longer:
We could 1st ask what qualifies someone’s expertise in M&S:
Education of the M&S Practitioner
Training of the M&S Practitioner
Experience of the M&S Practitioner

And then ask about the domain of expertise:
Expertise in the modeling & analysis methodology
Expertise in the scientific domain modeled and analyzed

If we do not include at least one item on M&S Practitioner Qualification, then we implicitly assume it is not important to the Credible Practice of M&S.

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Survey Design

Post by Martin Steele » Fri Nov 15, 2013 10:30 am

pras wrote: > Perform uncertainty (error) estimation/quantification within context of use

I don't understand this question. Uncertainty quantification and numerical error estimation/quantification are two separate things.
This was an attempt to be succinct. We should separate them.
pras wrote: > Report appropriately

This should be rephrased in my opinion. What do you mean by appropriately? I'm not sure what this option is after. A high score might only tell you the scientific community doesn't advocate 'reporting inappropriately'.
As with other survey items, having a description of some of the key terms could be instructive to the respondent. Perhaps "Report completely the results of an M&S-based analysis" would be better.

What I'm trying to avoid is reporting simply "the answer" of an M&S-based analysis. There is other information pertinent to ascertaining the credibility of the results. The NASA Standard for Models & Simulations requires the following (somewhat reworded and updated):

Reports of M&S-based analysis results shall include explicit warnings for any of the following occurrences, accompanied by at least a qualitative estimate of the impact of the occurrence:
a. Any unachieved acceptance criteria
b. Violation of any assumptions of any model
c. Violation of the limits of operation
d. Execution warning and error messages
e. Unfavorable outcomes from the intended use and setup/execution assessments
f. Waivers to any of the requirements in this standard.

Reports of M&S-based analysis results shall include an estimate of their uncertainty and a description of any processes used to obtain this estimate.
a. Reported uncertainty estimates shall include one of the following:
(1) A quantitative estimate of the uncertainty in the M&S results, or
(2) A qualitative estimate of the uncertainty in the M&S results, or
(3) A clear statement that no quantitative or qualitative estimate of uncertainty is available.

Reports of M&S-based analysis results shall include an assessment of the risk associated with accepting or rejecting the results of the M&S-based analysis.
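Purely as an illustration of how these reporting requirements might be checked mechanically, a sketch with hypothetical field names (they are not drawn from the NASA standard itself):

    # Hypothetical checklist mirroring the reporting requirements above.
    REQUIRED_FIELDS = [
        "unachieved_acceptance_criteria",   # a. with impact estimates
        "model_assumption_violations",      # b.
        "limits_of_operation_violations",   # c.
        "execution_warnings_and_errors",    # d.
        "unfavorable_assessment_outcomes",  # e.
        "requirement_waivers",              # f.
        "uncertainty_estimate",             # quantitative, qualitative, or "none available"
        "risk_assessment",                  # risk of accepting or rejecting the results
    ]

    def missing_report_fields(report):
        # Return the required fields absent from a report (a dict).
        return [f for f in REQUIRED_FIELDS if f not in report]

    report = {"uncertainty_estimate": "qualitative", "risk_assessment": "low"}
    print(missing_report_fields(report))  # the six warning fields are missing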
