Survey Design

The Committee on Credible Practice of Modeling & Simulation in Healthcare aims to establish a task-oriented collaborative platform to outline good practice of simulation-based medicine.
Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Fri Nov 15, 2013 2:50 pm

There was very good feedback in this forum.

I myself posted my comments as part of the survey form. Yet allow me to comment on some of the issues discussed here.

I also think it is important to channel the user to keep the number of high scores to a minimum. Otherwise the planned analysis may not work as well as intended. You cannot easily direct the user into weighted ranking, yet it will help to explain what you expect in the introduction.

And I suggest leaving the rule "practice what you preach" in. If it is really not that important, we will get confirmation of that in the results. The idea is to rank the rules, so many will sink to the bottom of the list.

I do think the open question about rules is important. Lealem gave this as an argument once when I pushed for more rules to go in. He has a good point: with an open question the user can add new rules we did not think of.

Also, regarding changes in the survey tool: you should understand that the Google Forms tool we are using is easy to use as long as we keep a certain format. There are more complex technological solutions we could adopt, yet those may take a lot of effort and delay us for quite a while. So compromises will be made to allow launching in reasonable time. In general, text changes are easy, format changes are limited, and logic changes are harder.

We are already a bit behind schedule, so I suggest we focus on critical issues that are easy to fix.

I also urge everyone to contribute additional forums/groups to our external contact list so that we have the largest possible pool of candidates.

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Fri Nov 22, 2013 1:44 pm

Tina wanted to add more context to the survey through additional questions related to the modeling context. This was raised in the team meeting.

The idea was to use this additional information during analysis - probably for stratification. Lealem and Tina were assigned to work on defining these questions.

My request is that all stratifying questions at the end of the survey be reviewed by Lealem and Tina and reformatted to capture what exactly is of most interest. Please do try to keep the number of options to a minimum.

Please consider both the expected number of responses and the size of the invited population so as to maintain anonymity - even though such questions are not considered identifying under current law, the more questions we add, the closer we get to identification. The lawmakers may not have considered new technologies and the digital visibility of people when the regulations were written, and we do want to respect the spirit of the law by avoiding identification if we are aiming for IRB exemption. Please recall that we intend to release the raw data to the public, so we need to be thoughtful.

Also consider that others may be analyzing our raw data after publication, so we should be extra thoughtful. We may need another round of testing before release that takes all the changes into account - I looked at the number of comments, and there are lots of them already.

And we should also update the design wiki once we make all the changes. We may even link to the design page from the survey itself in case someone has questions.

I hope we can get a stable version next month.

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Mon Dec 16, 2013 6:46 pm

The newer version of the survey is better from several perspectives:

1. All questions are on the same page, which allows visual comparison between scores.

2. The score is nicely visible on the right side of the screen.

3. The red text that asks to limit the number of high-ranking answers is crucial to help float the best rules to the top. The red text at the bottom is also a helpful reminder - good idea.

4. The questions are randomized in this version.

However, this version has the following disadvantages:

1. The survey does not fit the screen of a smartphone. On an iPhone 3GS the user has to swipe left and right constantly, to the point that it is annoying, even in landscape mode. This may decrease participation, especially since it makes smartphones harder to use, which matters at conferences and other non-office settings where people are more attuned to this kind of work.

2. The red instruction text should be more specific. We really want only a very few scores of 5 - fewer than 7, and perhaps 3 would be best. It is acceptable to have around 7 rules scored at 5. Note that the current red instructions can be ambiguous: are 7 rules total allowed at scores 4-5, or are 14 rules allowed (7 at each of 4 and 5)?

3. The text box near the "other" entry in the education and training questions may be too much from an anonymity point of view. If someone does not fall into the broad categories above, then asking for specifics may essentially de-anonymize that person, especially if free text is allowed. Also, there are now 5 questions that address personal information of sorts, leading to 7x7x9x6x5 = 13,230 stratification categories. Even if the population were totally uniform, inviting only that many participants would leave about one person per category, which conflicts with anonymity. I suggest a safety factor of 10 to maintain anonymity, which means that with the current version we would need more than 130,000 invited participants (see the sketch after this list). I am not sure our target population is that large - at least our external contact list does not reflect this. In other words, if we are interested in the breakdown of modelers and simulators, it may be better to design a different survey or use another mechanism. If we want to know what certain groups think, we had better focus on the important aspects. For example, what Ahmet suggested - using professional society membership as an indicator of opinion - may be useful here to resolve the anonymity issue.

4. The back button on the last page of the survey leads to the first page (the age question and the human identification step) rather than back to the questions. This should be simple to fix. Note that time limits, if they exist and cause this issue, should be ignored; we do not wish to lose work for someone who spent time answering the survey.

5. The top panel has options such as "ignore validation" and "do not show hidden questions". These seem to be design tools and should not appear in the final survey.
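
To make the stratification arithmetic in point 3 explicit, here is a minimal JavaScript sketch of the category count and the invited-population bound it implies (the option counts are the ones cited above; the safety factor of 10 is my own suggestion, not an established standard):

    // Option counts for the 5 personal questions (values cited in point 3).
    var optionCounts = [7, 7, 9, 6, 5];

    // The number of stratification categories is the product of the option counts.
    var categories = optionCounts.reduce(function (product, n) {
      return product * n;
    }, 1);
    console.log(categories); // 13230

    // Safety factor so each category can hold several respondents.
    var safetyFactor = 10;

    // Lower bound on the invited population needed to preserve anonymity.
    console.log(categories * safetyFactor); // 132300, i.e. more than 130k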


Finally, a few neutral notes:

- Note that the government agency logos are not visible in this version. This may simplify things from the approval perspective and resolve affiliation issues.

- The ability to save a response is interesting - how does this work? How is this related to anonymity?

Some of these issues can be fixed easily, while others should be weighed against alternatives.

Finally, I wish to add that the best survey tool would be one that allows the user to rearrange the rules in order of importance using drag and drop or another easy user interface. I have seen such an Internet tool, yet I do not know whether it is suitable for anonymous surveys. I guess we will have to make do with the best tool available within our time window.

In any case, this is progress toward a better survey tool.

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Thu Dec 19, 2013 2:08 pm

During the committee meeting we addressed the issue of anonymity.

I mentioned that I am uncomfortable releasing a survey that asks too many personal questions - even if these do not fall within the specific criteria Lealem noted as identifying under current law.

I suggest one of the following courses of action to resolve this tension:

A. Remove one of the questions in the current version
B. Increase the invited population size
C. Not release the raw data to the public
D. Invest efforts in taking precautions and preventative actions before releasing the data to the public
E. Change the questions to have fewer options, or to something else such as affiliated groups

In any case, I would move toward removing open-ended questions other than the notes question that allows suggesting new rules - answers may be so unique that anonymity becomes questionable.

Note that my suggested cutoff criterion of (Number of Categories) x (Confidence Factor) is somewhat arbitrary, yet it seems reasonable for keeping the survey anonymous - especially if we are about to release the raw data.

I would really rather compromise on a path that supports the spirit of the law in this day of digital visibility.

Ahmet Erdemir
Posts: 77
Joined: Sun Sep 10, 2006 1:35 pm

Re: Survey Design

Post by Ahmet Erdemir » Thu Dec 19, 2013 4:07 pm

Thanks, Jacob, for providing the various options. Here are some responses:
jbarhak wrote: A. Remove one of the questions in the current version
If the Committee can identify any question that can be removed without loss of interpretability of the data - essentially anything that does not have an acceptable justification - we should indeed remove it.
jbarhak wrote: B. Increase the invited population size
We should increase the invited population size, not only because of anonymity-related issues but also for the likelihood of accumulating a broader response database.
jbarhak wrote: C. Not release the raw data to the public
This is a possibility. I recently browsed survey data in a processed form; see http://static.wileyprojects.com/oasurvey/. Maybe this is a good way to share the dataset for navigation. It may not allow others to reanalyze the data, but that may be something we have to live with. One issue is that if we do not share the raw data with the public, committee members from federal agencies may not be able to look at the data in raw form either. We need to follow up on this, in particular if the advice we seek from them requires access to the raw data.
jbarhak wrote: D. Invest efforts in taking precautions and preventative actions before releasing the data to the public
Certainly.
jbarhak wrote: E. Change the questions to have less options or to something else such as affiliated groups
See my remark for A.

ahm.

Lealem Mulugeta
Posts: 42
Joined: Tue Dec 21, 2010 11:03 am

Re: Survey Design

Post by Lealem Mulugeta » Mon Jan 06, 2014 2:48 pm

Hello everyone,

Sorry I took so long to respond to this.

Before we consider whether to implement Jacob's recommendation of limiting the number of questions or increasing our population reach based on the formula he proposes, I think we need to step back and consider why the committee decided to implement the survey, and how we can implement it to give us the information we need while still maintaining anonymity.

1. Why is the committee conducting this survey?
The reason we are implementing the survey is to poll the global stakeholders in order to narrow down the "Ten Simple Rules of Credible Practice", which the Committee will use as the foundation for developing the "Guidelines for Credible Practice of Modeling and Simulation in Healthcare" - the Committee's primary deliverable for its first two-year term.

2. How do we implement the survey to give us the key information we need to establish "Ten Simple Rules of Credible Practice" and "Guidelines for Credible Practice of Modeling and Simulation in Healthcare" that capture the global stakeholder community's interests and perspectives?
To gauge how well we are capturing the perspectives of the global stakeholders, and to interpret the results so that the guidance document appropriately represents the perspectives of the greater community, key questions should be incorporated into the survey. These help the Committee assess whether the survey results appropriately represent the greater stakeholder community - that is, whether we have input from stakeholders of varying:
  1. Geographical location - M&S for healthcare-related applications has become an international endeavor. Moreover, perspectives on how M&S should be applied differ around the world, based on the research methodologies and clinical practices established in different locations.
  2. Professional environment influences - the perspectives and interests of a given stakeholder can vary drastically based on their professional environment (e.g. clinical, research, commercial, government, etc.)
  3. Academic/professional training - one's education and training tend to have a heavy influence on one's perspectives.
  4. Level of education - a person's perspectives can vary drastically depending on the level/depth of education that person has in their field of interest.
  5. Degree of familiarity/experience with computational M&S - one's level of familiarity with M&S will clearly influence one's perspective. Capturing this information is important because the committee is targeting both experienced modelers and clinicians and researchers who may have no experience with M&S but are interested in incorporating it into their clinical practice and/or research.
  6. Degree of interest in applying M&S - the degree to which a stakeholder intends to use M&S will also influence their perspective on how much confidence should be placed in it.
Therefore, to gauge these factors, the following six questions were initially added to the survey.

What is your geographical location?
What is the primary setting you work in?
What is your primary field of academic/professional training?
What is your highest level of education?
How familiar are you with Computational Modeling and Simulation (M&S)?
How interested are you in leveraging M&S for healthcare research and practice?


After further discussion, some people suggested that a question should also be added to gauge how the survey taker intends to use M&S (i.e. context of use), since the context within which one plans to use M&S heavily influences both perspective and the rigor of credibility assessment required. Therefore, I would recommend adding a seventh question to this effect.

If we do not collect this information, I fear we may end up establishing ten simple rules and a guidance document that do not accurately represent the greater community, or that have substantial limitations (e.g. skewed input due to a larger contribution from one group). Even if we are not able to get balanced input from the global community, the information from the above questions will help us appropriately caveat our findings. This in turn will allow the stakeholder community to apply the guidance document with a clearer understanding of the strengths and limitations of the processes outlined therein. Moreover, it will also help the Committee identify where further input from different stakeholders will be needed for future updates of the guidance document.

3. How do we maintain anonymity?
Based on past discussions and the extensive research I did on 45 CFR, it was clearly demonstrated that the responses we are asking for in the above questions do not fall under personally identifiable data: https://simtk.org/forums/viewtopic.php? ... 474#p10641

Therefore, according to the CFR, if we keep the above questions (which I view as important), we are not compromising the anonymity of the survey takers.

I think limiting the number of questions based on population reach, or any other criterion, in an attempt to further increase anonymity for what is already deemed anonymous is not the right way to go about things. Careful thought needs to be given to the purpose for which the different questions were designed before arbitrarily deciding to drop them. Not doing so can substantially compromise the intended purpose of the survey.

Furthermore, I have yet to find a protocol in any standard, regulation, or guideline that supports Jacob's recommendations. Additionally, it is important to note that the data we are collecting are not personally sensitive, so I fail to see why there is a need to add an additional layer of rigor to ensure anonymity.

In conclusion, my recommendation/advice to the survey design manager is to incorporate as many or as few questions as necessary to allow appropriate interpretation of the survey results, so that the Committee can meet its goal of establishing "Ten Simple Rules of Credible Practice" and "Guidelines for Credible Practice of Modeling and Simulation in Healthcare" that appropriately reflect the perspectives of the stakeholder community. However, adherence to 45 CFR should be maintained when designing such questions.

Thanks,
Lealem

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Mon Jan 06, 2014 10:55 pm

Lealem provided a good explanation that makes sense.

Yet we are digressing and need to refocus. To resolve this minor internal conflict over anonymity, I have suggested several simple solutions. The best is to increase the invited population size, which should be doable considering the size of the committee and its connections - if everyone contributed lists of contacts to the pool, we would not be having this discussion. We have specialists from many groups; if everyone contributed one larger organization they are affiliated with, we would be in a much better position. I suggest we focus on a positive resolution here. And I thank those who have already added to the external committee contact list. Here is a link to the list we have so far:
http://wiki.simtk.org/cpms/Committee_Ex ... ntact_List

If you are reading this and have not contributed the name of a relevant organization that deals with modeling/simulation in healthcare, please update the wiki, or, if easier, send me the organization's name and contact and I will update the wiki for you.

For example, I know from a medical source that some radiation treatments today are calculated by computer for precision - I believe there is plenty of modeling there. Do any of the medical people on the committee have contact with the professional organization that gathers specialists in that field? I believe there are other similar examples. So this is a call to action for our well-connected committee members.

Yet let us refocus. Our primary goal with the survey is to figure out the best rules; our secondary goal is to connect with the modeling community - and I am paraphrasing here.

To achieve the main goal we still need to go through a few hoops of internal testing to establish that our tool is good enough and captures what we want. I have more pressing concerns there, since we have not yet established, by analyzing real data internally, that our scoring system actually works.

I do hope we can all act quickly to resolve this and allow us to capture the opportunities ahead.

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Mon Feb 03, 2014 12:25 pm

The survey's purpose is to rank rules. This is what we actually did when we decided on rule rankings internally.

The current implementation as a survey form with scoring was imposed by the availability of the tool and the technology.

Since we are so delayed, and since I just recently became aware of new technology geared specifically toward ranking, I want to send you an example of a simple drag-and-drop ranking web page:

http://jqueryui.com/draggable/#sortable

Josh Marshall from uStudio pointed me in that direction on the flight back from IMSH. I asked a web programmer I know personally to play with this example and it is easy to do.

If we were to create a survey based on this technology:
1. We would get exactly what we want - ranking rules rather than scoring them.
2. The human interface would be quicker and easier to use than a survey form.

You can see from the code that this example is simple, and an experienced web programmer could turn it into a good ranking survey tool in no time.
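
For illustration only, here is a minimal sketch along the lines of that demo: a sortable list built with jQuery UI. The rule texts, element ids, and library versions are placeholders of my own, not something the committee has settled on:

    <!doctype html>
    <html>
    <head>
      <!-- jQuery and jQuery UI loaded from the jQuery CDN -->
      <script src="https://code.jquery.com/jquery-1.10.2.js"></script>
      <script src="https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
    </head>
    <body>
      <p>Drag the rules into order of importance, most important on top:</p>
      <ul id="rules">
        <!-- placeholder rule texts; the real page would list our candidate rules -->
        <li id="rule-1">Rule 1 (placeholder)</li>
        <li id="rule-2">Rule 2 (placeholder)</li>
        <li id="rule-3">Rule 3 (placeholder)</li>
      </ul>
      <script>
        $(function () {
          // Make the list reorderable by drag and drop.
          $("#rules").sortable();
        });
        // On submission, $("#rules").sortable("toArray") returns the item
        // ids in their current order - that ordered list is the ranking.
      </script>
    </body>
    </html>

The ranking is then simply the final order of the list items, so there is no scoring step to explain to the user.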

This is directed specifically at Joy. Joy, you support much more complicated web interfaces, and you supported the ranking approach in the past - in fact, your words made me think of this interface as an option. Considering the simplicity and the advantages, what would you say about a tool like this for the survey?

We have delayed this long already; why not get something that better matches what we need, with a little more effort?

I hope this opens minds and options.

Jacob Barhak
Posts: 64
Joined: Wed Apr 17, 2013 4:14 pm

Re: Survey Design

Post by Jacob Barhak » Tue Feb 11, 2014 4:20 pm

Apparently there is already an implementation of a ranking question in an existing survey tool.

Here is a link to a demo:
http://m.youtube.com/watch?v=F4KU0ytv5yI#

I hope the tool we use has this option.

A ranking question is superior to the previous scoring approach from several perspectives:
1. It is our final goal
2. The user visually sees the ranking and can compare and contrast
3. We can save time for the user by asking them to rank only the first 3, or the first N. This will be sufficient to analyze the data (see the sketch below). Giving a score to the least important rules is fruitless toward our goal.
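
To illustrate why top-N rankings would be enough, here is a small JavaScript sketch of one possible aggregation: simple points for the top 3, most important first. The point scheme and the sample responses are my own illustration, not a committee decision:

    // Each response lists a respondent's top 3 rules, most important first.
    // The sample responses and rule names below are made up for illustration.
    var responses = [
      ["rule-2", "rule-1", "rule-5"],
      ["rule-2", "rule-5", "rule-3"],
      ["rule-1", "rule-2", "rule-4"]
    ];

    // Award 3 points for first place, 2 for second, 1 for third.
    var totals = {};
    responses.forEach(function (ranking) {
      ranking.forEach(function (rule, position) {
        totals[rule] = (totals[rule] || 0) + (3 - position);
      });
    });

    // Sort rules by total points, highest first - the aggregate ranking.
    var aggregate = Object.keys(totals).sort(function (a, b) {
      return totals[b] - totals[a];
    });
    console.log(aggregate); // ["rule-2", "rule-1", "rule-5", ...]
    console.log(totals);    // { "rule-2": 8, "rule-1": 5, "rule-5": 3, ... }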

I also saw a randomization option that may be useful.
