Survey Design

The Committee on Credible Practice of
Modeling & Simulation in Healthcare aims to establish a task-oriented collaborative platform to outline good practice of simulation-based medicine.
Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Fri Nov 01, 2013 5:21 am

Hi Martin,

In short, this is intended for the following reasons:
- Make the contribution of each person the same.
- We are interested in relative importance rather than absolute importance; i.e., a person ranking all rules 5 is treated the same as one ranking all rules 1.
- It simulates a person having a limited budget of points to distribute among the rules, with more important rules receiving more of the weight. Think of it as 100 percentage points, where you give each rule the percentage of importance you think it deserves.
- A 0 or an unanswered question will have no influence on a rule's ranking, while any other weight will improve it.

I hope our internal testing will prove this idea works well.
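To illustrate, here is a minimal sketch of the normalization I have in mind (JavaScript, with hypothetical data; this is not our actual analysis script):

    // Divide each person's ratings by their total so every respondent
    // contributes the same overall weight.
    function normalize(ratings) {
      // Unanswered questions are treated as 0 and gain no weight.
      var total = ratings.reduce(function (sum, r) { return sum + (r || 0); }, 0);
      if (total === 0) return ratings.map(function () { return 0; });
      return ratings.map(function (r) { return (r || 0) / total; });
    }

    // A person rating all rules 5 contributes the same as one rating all rules 1:
    console.log(normalize([5, 5, 5, 5])); // [0.25, 0.25, 0.25, 0.25]
    console.log(normalize([1, 1, 1, 1])); // [0.25, 0.25, 0.25, 0.25]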

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Survey Design

Post by Martin Steele » Fri Nov 01, 2013 6:55 am

jbarhak wrote: In short, this is intended for the following reasons: ... I hope our internal testing will prove this idea works well.
It may be beneficial to consider the results as truly categorical, which they are, and NOT as numbers, which they really are not. The only group being surveyed is "modelers," so their weight is equal by default - a single population of respondents. We're not comparing the responses from different groups; therefore, the only math we should do is count the number of each response. Weighting responses can be an exercise in abstraction.

Internal testing can be of great benefit. Perhaps producing a very simplified example of the data analysis would also be helpful.
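For instance, something along these lines, where the responses are treated as categories and simply counted (the data here is made up):

    // Count how many times each response category was chosen.
    var responses = ["Very important", "Important", "Important", "Not important"];
    var counts = {};
    responses.forEach(function (r) {
      counts[r] = (counts[r] || 0) + 1;
    });
    console.log(counts); // { "Very important": 1, "Important": 2, "Not important": 1 }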

Joy Ku
Posts: 80
Joined: Tue Oct 02, 2007 5:22 pm

Re: Survey Design

Post by Joy Ku » Fri Nov 01, 2013 10:18 am

Hi Jacob,

Thanks for the thorough survey design document. A couple of questions:

1) Does the survey have a mechanism to prevent multiple voting?
2) 32 seems like a large number of questions for a survey. Do we have any way to pare down that number? If we feel like we absolutely have to have that large a number, then I might suggest randomizing the order in which they appear. For someone like me, I tend to answer things in the order that they appear. So, if I got tired of answering all the questions and quit, I'd always be skipping the last questions. If most people respond similarly, the items at the end of the survey will end up appearing not important or not applicable due to the survey design, rather than being reflective of people's beliefs.

Joy

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Survey Design

Post by Lealem Mulugeta » Fri Nov 01, 2013 11:06 am

Hey Joy,

Thought I'd respond since Jacob turned the survey over to me a while ago and I've since been working on refining it.

To answer your first question, no, there is currently no way of preventing people from filling out the survey more than once. I'm not sure how we can do that without tracking some kind of token or identifier code, and as discussed earlier, for us to maintain IRB exemption we cannot track any unique identifiers. Given that you have more experience with surveys, do you have a recommendation on how we can tackle this challenge?

Yes, 32 does sound like a lot. But the limited testing we've done so far indicates that the survey statements and questions are brief enough that a person can complete the survey in 5-10 minutes. That said, we're hoping to get better insight on this from the internal testing.

You have a good point about randomization, and I agree with your reasoning. I had also considered this and tried to figure out a way to randomize the survey questions, but Google Forms does not have a feature that allows us to do this, at least not easily and in an automated way. As a partial solution, I randomized the statements so that they don't appear in any particular order (e.g., our initial rankings or alphabetical) prior to finalizing the survey form. But I realize this does not fully address the problem.

Tina had suggested first letting people read all of the simple rules before filling out the survey. Doing this would add to the time participants need to complete the survey, but it also gives them an opportunity to think about the survey questions more while they are filling it out. This may solve some of these problems.

The other option is for one of us to manually randomize the order of the questions every few days.
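A short script could at least hand us a fresh order to apply by hand each time, along these lines (the statements shown are placeholders, not our real list):

    // Fisher-Yates shuffle: produce a fresh random order for the statements.
    function shuffle(items) {
      for (var i = items.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = items[i];
        items[i] = items[j];
        items[j] = tmp;
      }
      return items;
    }

    console.log(shuffle(["Rule A", "Rule B", "Rule C"]));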

But before we do any of this, perhaps we should first see how the internal evaluation turns out?

Martin Steele
Posts: 37
Joined: Tue Apr 23, 2013 9:52 am

Re: Survey Design

Post by Martin Steele » Fri Nov 01, 2013 12:20 pm

I've always found it useful to provide an optional comment block with each question in the survey, so participants can provide additional information, rationale, or explanation for their responses.

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Survey Design

Post by Lealem Mulugeta » Fri Nov 01, 2013 1:47 pm

Hi Martin,

Good point, and that has been provided.

Hope all is well!

Lealem

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Survey Design

Post by Lealem Mulugeta » Fri Nov 01, 2013 2:21 pm

joyku wrote: then I might suggest randomizing the order in which they appear.
Although Google Forms does not have a way of randomizing the survey, Jacob's idea about creating multiple versions gave me an idea. But it will take some web scripting.

Basically, if we were to create several different versions of the survey, we could use a script that randomly directs each survey participant to one of the survey links. But to do this, we would need a temporary webpage with a back-end script to randomly select among the links and keep the survey URLs fully hidden from the public.
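Something along these lines is what I have in mind (a minimal Node.js sketch; the URLs are placeholders, not real form links):

    // Redirect each visitor to one of the survey versions at random,
    // keeping the real URLs out of any public page source.
    var http = require("http");

    var surveyUrls = [
      "https://docs.google.com/forms/d/VERSION_A/viewform",
      "https://docs.google.com/forms/d/VERSION_B/viewform",
      "https://docs.google.com/forms/d/VERSION_C/viewform"
    ];

    http.createServer(function (req, res) {
      var target = surveyUrls[Math.floor(Math.random() * surveyUrls.length)];
      res.writeHead(302, { Location: target });
      res.end();
    }).listen(8080);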

I explored a couple of possible sources on my end to do this, but no luck so far. Do you know if SimTK can facilitate this? I don't know, just a thought...

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Sat Nov 02, 2013 9:59 am

Hi Joy,

It is possible to add JavaScript code to the Google spreadsheet, similar to how Visual Basic for Applications can be embedded in an Excel spreadsheet.

However, this is not trivial and requires some programming effort.
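For example, even a minimal embedded script to read the collected responses would look roughly like this (Google Apps Script; the sheet name is a placeholder):

    // Read the response rows from the spreadsheet behind the form.
    function countResponses() {
      var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Form Responses");
      var rows = sheet.getDataRange().getValues();
      // The first row is the header; the rest are survey responses.
      Logger.log((rows.length - 1) + " responses so far");
    }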

A randomized URL redirection may be another simple solution.

If you can program a simple and quick solution, it will be worth testing. However, programming our own survey tools may be counterproductive.

I hope you are successful in this.

Pras Pathmanathan
Posts: 6
Joined: Thu Apr 25, 2013 4:23 pm

Re: Survey Design

Post by Pras Pathmanathan » Fri Nov 08, 2013 5:35 pm

For the record, the survey didn't feel long when I tried it, since the statements are short.

Some comments about specific questions:

> Use appropriate data (input, validation, verification, etc.)
> Validate the M&S within context of use
> Verify the M&S within context of use

How do you know all those surveyed will know the definitions of validation vs. verification? Also, I would personally prefer the phrasing 'Perform validation of' and 'Perform verification of'.

> Perform uncertainty (error) estimation/quantification within context of use

I don't understand this question. Uncertainty quantification and numerical error estimation/quantification are two separate things.

> Report appropriately

This should be rephrased, in my opinion. What do you mean by "appropriately"? I'm not sure what this option is after. A high score might only tell you the scientific community doesn't advocate 'reporting inappropriately'.

> Use consistent terminology or define your terminology

Again, this should get high marks just because a low score here seems to mean that you (the respondent) prefer inconsistent terminology or not defining your terminology.

> Practice what you preach

This seemed out of place.

> Be a discipline specific example of good practice
> Learn from discipline specific and/or independent guidelines for good practice

Should be 'discipline-specific'

Completely separately, something to consider regarding unbiased results: you will potentially have a small but unknown number of respondents googling CPMS before taking the survey, coming across these forums, and being influenced by the discussions here when answering the questions.

Tina Morrison
Posts: 6
Joined: Mon May 07, 2007 4:35 pm

Re: Survey Design

Post by Tina Morrison » Sat Nov 09, 2013 8:00 am

Good day to you all.

Lealem, thank you for your detailed and careful post about IRB - excellent research, friend!

Survey design team, I apologize that I was unable to participate in the initial design of the survey. Thank you for your efforts. I have the following comments.

1. Introduction: I'd like to change "filling" to "bridging" in the first sentence. Following the last sentence of the first paragraph, I'd like to add this sentence: "We invite you to participate by taking our brief survey."

2. Images: is there a reason the logos were presented in that order?

3. After the opening question, I'd like to add the following sentence (or something to this effect) before you present the scale: "In an effort to identify the best practices and simple rules, we've compiled a list of potentially important considerations. The following scale will enable you to rank each consideration from not important to very important."

4. Overall comments: after marching through the survey, it became clear to me that, depending on the context of use of the model (i.e., the intended use of the outcomes of the M&S results), the responders might rank the considerations/rules differently. For example, if my goal is to contribute this model to an open source community, then "engage potential end-user base" might be very important. If the results of my M&S are to be used to determine whether I should invest in some test equipment or define the bounds of an experiment, then this might not be very important. I believe that if we provide a few different, general scenarios, then we can better elucidate the "simple rules" needed and get more meaningful results. Therefore, I propose the following scenarios (and these can be discussed as well):

A. M&S is used for research purposes (e.g., hypothesis development)
B. M&S is being developed/used by an open-source community
C. M&S outcomes are being used in a regulatory setting
D. M&S outcomes are being used for clinical decision support
E. M&S outcomes are being used for product development

I propose that we present ALL of the simple rules (I think we can shorten the list - some seem redundant, like #3 and #25) for each of the scenarios (I know this lengthens the survey, but we could ask participants to pick the one that they have the most expertise with). The end result would be a ranking of simple rules based on how M&S is used.

I welcome your comments. I'd also propose that we spend some time modifying the list - maybe we can aim for 25.

Respectfully prepared,
Tina
