Review of ISPOR task force drafts discussing credibility

The Committee on Credible Practice of
Modeling & Simulation in Healthcare aims to establish a task-oriented collaborative platform to outline good practice of simulation-based medicine.

Post by Jacob Barhak » Fri Oct 04, 2013 12:05 pm

This is a public response to the ISPOR requests to review the following good practice task force draft reports:

PROSPECTIVE OBSERVATIONAL STUDY QUESTIONNAIRE TO ASSESS RELEVANCE AND CREDIBILITY TO INFORM HEALTHCARE DECISION-MAKING: AN ISPOR-AMCP-NPC GOOD PRACTICE TASK FORCE REPORT

MODELING STUDY QUESTIONNAIRE TO ASSESS RELEVANCE AND CREDIBILITY TO INFORM HEALTHCARE DECISION-MAKING: AN ISPOR-AMCP-NPC GOOD PRACTICE TASK FORCE REPORT.

INDIRECT TREATMENT COMPARISON / NETWORK META-ANALYSIS STUDY QUESTIONNAIRE TO ASSESS RELEVANCE AND CREDIBILITY TO INFORM HEALTHCARE DECISION-MAKING: AN ISPOR-AMCP-NPC GOOD PRACTICE TASK FORCE REPORT.

I will start by noting that the task forces are generating good tools to help decision makers. This is commendable. Nevertheless, there is room for improvement in the drafts.

There was a common theme and rhetoric in all these reports: they addressed the issue of credibility through a questionnaire, and they all dismissed scoring systems in favor of looking for fatal flaws.

This approach can itself be seen as a fatal flaw. Our understanding of the phenomena we observe is still limited by the little data we have. Any fixed set of questions reduces the amount of information we work with, and with sufficient reduction we may reach the wrong conclusions even though we asked all the right questions, simply because we reduced our input data. Questions also change periodically, so a less strict scoring system may survive better over time.

Providing such criticism without offering an alternative would not constitute a good review on my part. Therefore I am pointing the authors back to the alternative of scoring. Smart scoring systems can easily provide a good overview while still flagging fatal flaws. For example, consider a score of 1-10 per question, where a fatal flaw is worth the out-of-scale value of -1000. Such a scoring system is much more flexible, yet still allows comparison of multiple less-than-perfect variants. It may also age better, since it can persist when the questions change - and any questionnaire will eventually change and have many variants. Moreover, a scoring system is better when there are multiple decision makers or sources of information.
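
To make the suggestion concrete, here is a minimal sketch of such a scoring scheme in Python. The question labels, the example answers, and the -1000 penalty are illustrative assumptions of mine and are not taken from the task force questionnaires.

# Ordinary questions score 1-10; a fatal flaw contributes an out-of-scale
# penalty so it always dominates the total while other scores stay comparable.
FATAL_FLAW_PENALTY = -1000

def score_study(answers):
    # answers maps a question label to an integer in 1..10,
    # or to the string "fatal" for a fatal flaw.
    total = 0
    for question, answer in answers.items():
        if answer == "fatal":
            total += FATAL_FLAW_PENALTY
        else:
            total += answer
    return total

# Two hypothetical studies: one with a fatal flaw, one merely mediocre.
study_a = {"data source": 9, "model validation": "fatal", "transparency": 8}
study_b = {"data source": 6, "model validation": 5, "transparency": 4}

print(score_study(study_a))  # -983: the fatal flaw remains clearly visible
print(score_study(study_b))  # 15: a less-than-perfect study is still rankable

The point of the sketch is that the fatal flaw is never hidden by the other answers, yet studies without fatal flaws can still be compared on a continuous scale.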


The Reference Model is an example of a model that compares multiple models and populations using a scoring system. The scoring system can test multiple assumptions against what was observed in reality. The modeling task force should be aware of this approach by now; nevertheless, the meta-analysis group should also look at this alternative approach of comparing information while considering multiple assumptions and fitness to reality. Here is a recent link describing this modeling technique:

http://sites.google.com/site/jacobbarha ... _09_23.pdf
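
For readers unfamiliar with the idea, the following is a small hypothetical sketch of ranking competing model assumptions by how close their predictions come to an observed outcome. It is not the actual implementation of The Reference Model, and all numbers are made up for illustration.

def fitness(predicted, observed):
    # Smaller absolute error between prediction and observation = better fit.
    return abs(predicted - observed)

# Hypothetical 10-year outcome rates predicted under two assumptions,
# compared against a rate observed in a study population.
observed_rate = 0.18
assumptions = {"assumption A": 0.21, "assumption B": 0.17}

scores = {name: fitness(pred, observed_rate) for name, pred in assumptions.items()}
best = min(scores, key=scores.get)
print(scores)                              # roughly {'assumption A': 0.03, 'assumption B': 0.01}
print("Best fitting assumption:", best)    # assumption B fits reality more closely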


As for credibility, the ISPOR task forces may wish to look at the following link that discusses ten best practices:

http://wiki.simtk.org/cpms/Ten_Simple_R ... e_Practice

Note that this link is a wiki and therefore subject to constant flux, yet you will find best modeling practices there that may help you adjust your reports. This is of specific interest to the modeling task force, and it would be appropriate for them to address it in their report.

In this context I would like to thank the task forces for tying conflict-of-interest issues to credibility. This is an important connection that the CPMS committee should learn from and incorporate into its candidate rules with more direct reference and phrasing.

It would be advisable for the task forces to provide the questionnaires to the public in an electronically accessible format such as a form. Many decision makers, especially at managerial levels, may skip reading the full length of these documents and jump straight to the conclusions. It should take only a few hours of effort to accommodate this crowd. The task forces can easily share an electronic form that gathers user responses and provides a final score for the decision maker. Such a form would also allow combining decisions from multiple decision makers, and if it is shared on a common platform such as Google Forms, other users could easily create their own versions of it. This would simplify things so much for the decision maker, with so little effort, that it seems a reasonable service for the task forces to provide. In other words, a published implementation of these papers is highly desirable and attainable, and it would certainly help dissemination of the work.
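
As an illustration of how little machinery such a form would need behind it, here is a hypothetical sketch of combining the responses of several decision makers into one score. The field names and values are assumptions of mine and do not come from the task force reports or from any existing ISPOR tool.

def combine_responses(responses):
    # Average the per-reviewer totals across all submitted forms.
    totals = [sum(answers.values()) for answers in responses]
    return sum(totals) / len(totals)

# Hypothetical answers from three decision makers filling in the same form.
reviewer_1 = {"relevance": 8, "credibility": 7}
reviewer_2 = {"relevance": 6, "credibility": 9}
reviewer_3 = {"relevance": 7, "credibility": 7}

print(combine_responses([reviewer_1, reviewer_2, reviewer_3]))  # about 14.67

A spreadsheet attached to a shared form could compute the same aggregate without any programming at all, which is why this seems an attainable service.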

Please note that this review was made public to avoid concealed conflict of interest and hidden influence on the task forces. This forum can accommodate additional discussion. This response represents only the opinion of the author, yet hopefully others will follow up in this discussion thread.
