
Criteria for Evaluating Clinical Outcome Studies

Review Article | DOI: https://doi.org/10.31579/2835-2882/014


  • Nelson Hendle *

Mensana Clinic, 1718 Green Spring Valley Road, Stevenson, Maryland

*Corresponding Author: Nelson Hendle, Mensana Clinic, 1718 Green Spring Valley Road, Stevenson, Maryland.

Citation: Nelson Hendle (2023), Criteria for Evaluating Clinical Outcome Studies, Clinical Research and Studies, 2(1). DOI: 10.31579/2835-2882/014

Copyright: © 2023, Nelson Hendle. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: 03 February 2023 | Accepted: 10 January 2023 | Published: 17 February 2023


Abstract


Clinical outcome measures evaluate the status of a patient after a clinical intervention, whether there is improvement, no change, or even a worsening of the treated condition. The researcher reporting the results of a clinical intervention, be it a trial of a medication, the result of a surgery, or the results of a cluster of treatments, has a vested interest in reporting positive results. Many factors influence the reported outcomes.

Some of the most obvious variables influencing outcome reporting are patient selection, criteria for improvement, quantification of improvement, inter-rater reliability, and the use of objective versus subjective measurements. These factors are compounded when reporting outcomes for patients with pain, which is an entirely subjective experience and must be further divided into acute versus chronic pain populations.

A few examples of published clinical outcome studies, with their areas of potential reporting error, follow.

A meta-analysis is a review of the medical literature that selects papers reporting a particular type of treatment for a particular type of disease. One would think this method would give a good idea of the proper treatment of a disease. However, Payne reported a surgical meta-analysis in which the positive response to sympathectomy as a treatment for reflex sympathetic dystrophy (RSD) ranged from 12% to 97% [1]. Why was there such a wide range of efficacy? Hendler and Dellon et al. reported that 71% to 80% of patients referred with a diagnosis of reflex sympathetic dystrophy (RSD) or complex regional pain syndrome (CRPS) actually had nerve entrapments [2,3]. While Payne made a valiant effort to select articles for the meta-analysis that conformed to consistent criteria, he could not control for faulty patient selection on the part of the surgeons who reported their results in the medical literature. Therefore, if a research group did not use precise diagnostic criteria for selecting its surgical patient population, it most likely included patients mistakenly "diagnosed" with RSD or CRPS who really had nerve entrapments 71%-80% of the time. Then, instead of performing the appropriate surgery for nerve entrapment, which is a nerve decompression [3], the surgeons performed the surgery appropriate for RSD or CRPS. It is no wonder that their success rate was only 12%. On the other hand, if a surgical group was very precise in its selection of RSD or CRPS patients, using criteria well defined by Hendler [9], then all of the patients it selected were RSD or CRPS patients, all of whom received the appropriate surgery for the correct diagnosis, resulting in a 97% improvement rate. Of course, one may cynically interpret these results to mean that physicians should never refer a patient to the surgical group with the 12% cure rate because they were poor surgeons, or that the surgeon with the 97% cure rate was overstating his successes. Once again, it is clear that the definition of the patient population, and the diagnostic and outcome criteria, need to be questioned and precisely defined.
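To make the arithmetic of this argument explicit, the following short Python sketch shows how the fraction of correctly diagnosed patients in a surgical series drives the reported success rate of sympathectomy. It is illustrative only: the benefit probabilities (roughly 0.97 when the correct operation is performed for a correctly diagnosed RSD/CRPS patient, and essentially 0 when a sympathectomy is performed on a misdiagnosed nerve-entrapment patient) are assumptions made for the sake of the example, not data from the cited studies.

# Illustrative sketch only: how diagnostic accuracy drives a reported
# surgical success rate. Benefit probabilities below are assumed values.

def expected_success(correct_dx_fraction,
                     benefit_if_true_rsd=0.97,    # assumed benefit with correct diagnosis and correct operation
                     benefit_if_entrapment=0.0):  # assumed benefit of sympathectomy for a misdiagnosed nerve entrapment
    """Expected reported success rate of sympathectomy in a mixed cohort."""
    return (correct_dx_fraction * benefit_if_true_rsd
            + (1 - correct_dx_fraction) * benefit_if_entrapment)

# Per Hendler and Dellon et al. [2,3], 71%-80% of referred "RSD/CRPS" patients
# actually had nerve entrapments, i.e. only 20%-29% were correctly diagnosed.
for correct in (0.20, 0.29, 1.00):
    print(f"correctly diagnosed: {correct:.0%} -> expected reported success: {expected_success(correct):.0%}")

Even under these simplified assumptions, a series in which 71%-80% of patients are misdiagnosed cannot report much better than a 20%-30% success rate, while a precisely selected series approaches the benefit rate of the correct operation.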

A study should state its inclusion and exclusion criteria and the basis on which exclusion occurs. One pain center will not see patients involved in litigation. Litigation impairs return-to-work statistics [4], so excluding any patient involved in litigation will improve outcome results. Another center will not see patients out of work for more than six months; however, the insurance industry literature reports that 80% of injured workers return to work within six months of their injury, even without any treatment. A third clinic excludes from its statistics patients who do not complete its 10-hour-a-day, 7-day-a-week physical therapy program. Of 100 patients, 85% do not complete the program, but these data are not published; this information was available to the author only after five patients who had attended the clinic reported it to him. Patients who fail to complete the pain program are labeled "uncooperative." Of the 15% who do complete the program, 85% return to work. The center claims an "85% return to work rate," but 85% of 15 patients is roughly 13 patients, so the true return-to-work rate is about 13% (13/100), as the calculation below makes explicit. The patient selection criteria were therefore very important. The program costs about $30,000 per month, and the clinic accepted patients primarily on referral from insurance companies, which clearly have a vested interest in the outcome of treatment. When the clinic sends a letter saying a patient is uncooperative, the insurance company uses that as a reason to discontinue workers' compensation lost-wage payments and to prevent the patient from seeking additional medical care. For obvious reasons, these outcome studies are not referenced, to protect the identity of this questionable clinic.
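The arithmetic behind the advertised figure can be laid out explicitly. The following short Python calculation uses only the numbers reported above and shows how quoting the rate among program completers alone inflates the apparent outcome:

# Worked arithmetic for the "85% return to work" claim described above.
patients_referred = 100
completion_rate = 0.15           # only 15% finish the 10 hr/day, 7 day/week program
rtw_among_completers = 0.85      # 85% of those who finish return to work

completers = patients_referred * completion_rate         # 15 patients
returned_to_work = completers * rtw_among_completers     # ~13 patients

overall_rtw_rate = returned_to_work / patients_referred  # ~0.13
print(f"Advertised rate: {rtw_among_completers:.0%} of completers")
print(f"Actual rate: {overall_rtw_rate:.1%} of all {patients_referred} referred patients")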

Another variable is the length of time after treatment at which efficacy is assessed. In one study of the benefit of epidural steroids, 24 patients with chronic cervical radicular pain of more than 12 months' duration were randomized to receive epidural saline or steroids. Follow-up continued for as long as 48 months. There was transient improvement in 86% of patients (lasting one to three months) but no long-lasting benefit in any of the patients [5].

Just as with RSD (reflex sympathetic dystrophy) and CRPS (complex regional pain syndrome), patient selection criteria for studies of fibromyalgia are compromised by a lack of proper diagnosis. Two former presidents of the American Academy of Pain Management, one of whom was on the committee that established the diagnostic criteria for fibromyalgia for the American Rheumatological Society, evaluated 38 patients referred to them with a "diagnosis" of fibromyalgia. Of these 38 patients, only one met the diagnostic criteria for fibromyalgia. In the other 37 patients, the physicians found 133 other medical diseases which had been overlooked, all of which would have required surgery to improve [6]. Despite this 97% overdiagnosis rate, researchers continue to advocate the use of pregabalin, alone or in combination with other medications, to "treat" fibromyalgia [7]. The same phenomenon is seen with the use of extracranial onabotulinumtoxinA (Botox) for the treatment of "migraine" [8]. It is well established that true migraine is due to intracranial arteriospasm, so it is illogical to expect that an injection into extracranial muscles treats intracranial arteriospasm. Logically, the onabotulinumtoxinA is reducing the muscle spasm of a mixed muscle tension-vascular headache that has been misdiagnosed as migraine. A number of articles in the literature report that "migraines" are over-diagnosed 35%-70% of the time [9,10].

The most objective method of reporting outcome results is to use objective measures of improvement, verified by third-party observers, such as the referring doctor, the patient, or an attorney.

The best outcome reporting should include the demographics of the study population as well as the inclusion and exclusion criteria. Once the population is defined, the more objective the reporting of the results, the more credible they are. Objective indicators would include:

  1. Unsolicited comments posted on social media (copy of posting)
  2. Unsolicited comments mailed to the doctor (copy of letter)
  3. Mention of previous errors of testing and/or diagnosis
  4. Failure to improve with previous treatment
  5. Mention of improved level of activity after treatment
  6. Mention of reduced drug use after treatment
  7. Expressed thank you to doctor
  8. Referral of other patients 

It is nearly impossible to present a compilation of these third-party results in summary form. One method would be to tabulate these eight outcome criteria on a spreadsheet and mark each item as present or absent, thus obtaining a score per patient out of the highest possible total score, expressed as a percentage (a minimal sketch of this tabulation follows below). The only way to document these results is to display actual copies of the original correspondence, posted on Dropbox or on SlideShare.net. In this fashion, readers can judge for themselves the credibility of the reported improvement after reading the third-party verification of the results. Samples of third-party verification of improvement are found in Appendix A. The author has over 1,000 of these unsolicited comments on file, some of which are posted on SlideShare.net under the title "Third Party Reporting of Patient Improvement."
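As a minimal sketch of the tabulation described above, the eight criteria could be scored per patient as follows in Python. This is not the author's actual spreadsheet; the patient data and function names are hypothetical and serve only to illustrate the present/absent scoring converted to a percentage.

# Minimal sketch (hypothetical data) of tabulating the eight third-party
# outcome criteria listed above; each patient is scored as the percentage
# of criteria documented.

CRITERIA = [
    "unsolicited comment posted on social media",
    "unsolicited comment mailed to doctor",
    "mention of previous errors of testing and/or diagnosis",
    "failure to improve with previous treatment",
    "improved level of activity after treatment",
    "reduced drug use after treatment",
    "expressed thank you to doctor",
    "referral of other patients",
]

def outcome_score(flags):
    """Percentage of the eight criteria marked present (True) for one patient."""
    assert len(flags) == len(CRITERIA)
    return 100.0 * sum(bool(f) for f in flags) / len(CRITERIA)

# Hypothetical patient with documentation for 6 of the 8 items.
example_patient = [True, True, False, True, True, True, True, False]
print(f"Outcome score: {outcome_score(example_patient):.0f}%")   # -> 75%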

Appendix A 

References
