The Bayesian approach allows external and
subjective information about the size of the treatment
effect, expressed as a prior probability distribution,
to be combined with trial evidence to give a posterior
probability distribution for the size of the treatment
effect. This approach ensures that the trial is reducing
the uncertainty about the treatment effect from a level
that already exists. For example, if there is relatively
strong prior evidence that the treatment is effective, either
through subjective belief or by summarizing existing
evidence, then a small trial that supports this may be all
that is required to change clinical practice. However, the
situation becomes more complex if the trial data and prior
evidence conflict. An additional key advantage with the
Bayesian approach is that the results from a trial can be
expressed in terms of direct probabilities of the treatment
effect being a certain size. For example, for a survival
outcome one might conclude that, given the prior
evidence and the trial data,
there is a 70% chance that the treatment truly reduces
the hazard of death by at least 10% (i.e., hazard ratio
<0.9). In small studies, this type of reporting could be
used practically by clinicians in discussions with patients
to enable evidence-based treatment decisions, whilst
a non-significant result from hypothesis testing would
simply be regarded as inconclusive or, at worst, evidence
of no treatment effect.
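To make this concrete, here is a minimal sketch of such a calculation, combining a normal prior on the log hazard ratio with a normal approximation to the trial likelihood (a standard conjugate update). All numbers are hypothetical and chosen only for illustration, not taken from any real trial.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical prior belief about the log hazard ratio, centred on HR = 0.80.
prior_mean, prior_sd = np.log(0.80), 0.25
# Hypothetical trial estimate of the log hazard ratio with its standard error.
trial_mean, trial_se = np.log(0.85), 0.30

# Conjugate normal update: precisions (1/variance) add, and the
# posterior mean is a precision-weighted average of prior and data.
prior_prec = 1 / prior_sd**2
trial_prec = 1 / trial_se**2
post_prec = prior_prec + trial_prec
post_mean = (prior_prec * prior_mean + trial_prec * trial_mean) / post_prec
post_sd = post_prec**-0.5

# Direct probability statement: P(true HR < 0.9 | prior, trial data).
p_hr_below_09 = norm.cdf(np.log(0.9), loc=post_mean, scale=post_sd)
print(f"P(HR < 0.9) = {p_hr_below_09:.2f}")
```

The posterior standard deviation is smaller than both the prior's and the trial estimate's, reflecting the reduction in uncertainty that the update delivers.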
“…although small sample sizes are not ideal, there
are ethical arguments to consider.”
Further to the proposal by Lilford and colleagues, a
strategy was developed for designing trials to evaluate
interventions in rare cancers, specifically in terms of
survival time as an outcome measure [7–9]. It proposed
a methodology for creating a prior distribution from
existing evidence. The strategy suggests searching the
literature for all evidence relating to a proposed trial,
even including studies where there are only tentative
similarities in terms of type of cancer, treatment and
end points, and including all levels of evidence from
randomized controlled trials to single case study
reports. This evidence can then be combined into a
prior distribution for the treatment effect with weights
allocated in relation to pertinence, validity and precision.
In principle this idea is sensible, but in practice such an
approach is problematic, as was discovered when applying
this methodology to the design of a trial of adjuvant
chemotherapy in stage I–III Merkel cell carcinoma. Such broad search
strategies can produce large numbers of potentially
relevant papers and in rare diseases it is unlikely that
any of these will be high-level evidence. From around
27,000 references identified in searches related to the
planned Merkel cell carcinoma trial, approximately 1000
were found to be potentially relevant and the majority
were case studies with a single-arm study as the best
level of evidence. Reviewing these and extracting data is
extremely time-consuming and estimating hazard ratios
from such studies without direct treatment comparisons
is not straightforward. More importantly, such evidence
is potentially so biased that the prior probability
distribution would not be believable. In addition, such
poor-quality evidence is allocated very low weights in
the strategy (0.3 for a single-arm study down to 0.05 for
a case study, compared with 1 for a randomized controlled
trial) and, therefore, despite the large effort needed to extract
and combine such information, it ends up contributing
very little to the prior. One has to question the value of
undertaking such a strategy.
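To illustrate why down-weighted evidence contributes so little, here is a minimal sketch of one way such weights could be applied: each study's precision on the log hazard ratio scale is multiplied by its weight before the evidence is pooled into a normal prior (a device akin to a power prior). The study estimates are entirely hypothetical, and this precision-scaling scheme is an illustrative assumption, not the published strategy's exact algorithm.

```python
import numpy as np

# Hypothetical (log hazard ratio estimate, standard error, weight) triples;
# the weights mirror the scale quoted above: 1 for a randomized controlled
# trial, 0.3 for a single-arm study, 0.05 for a case study.
evidence = [
    (np.log(0.75), 0.40, 1.00),  # small randomized trial (hypothetical)
    (np.log(0.70), 0.35, 0.30),  # single-arm series (hypothetical)
    (np.log(0.60), 0.80, 0.05),  # case study (hypothetical)
]

# Scale each study's precision (1/SE^2) by its weight, then pool
# with precision weighting into a single normal prior.
precisions = [w / se**2 for _, se, w in evidence]
prior_prec = sum(precisions)
prior_mean = sum(p * est for p, (est, _, _) in zip(precisions, evidence)) / prior_prec
prior_sd = prior_prec**-0.5

# Share of the prior's total information contributed by the case study.
case_share = precisions[2] / prior_prec
```

With these illustrative numbers the case study supplies under 1% of the prior's information, echoing the point that heavily down-weighted evidence repays very little of the effort spent extracting it.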
“If one starts from the premise that there is
considerable uncertainty regarding this unknown
quantity, then data from even small numbers of
patients in a well-designed clinical trial will make
steps towards reducing that uncertainty.”
Given this difficulty in producing an evidence-based
prior and the fact that many clinicians find it difficult
to accept the inclusion in the analysis of a prior based on
subjective beliefs, we need to consider the alternatives.
In fact, the Bayesian approach can still be applied
by using a noninformative prior distribution. This is
effectively a uniform probability distribution reflecting
the position that, in the absence of evidence to the
contrary, every size of treatment effect is equally likely. An
analysis with this type of prior ensures that the posterior
probability distribution for the treatment effect is totally
dominated by the data from the trial. Technically, the
posterior distribution then coincides with the likelihood, which is a
probability function that shows how strongly the data
support every possible value of the treatment effect.
When combined with a noninformative prior, this is
often referred to as a ‘standardized likelihood’ [4] and
such an approach could be called a ‘likelihood-based
Bayesian analysis’. The reason that such an approach
is still useful is that, as specified earlier, it enables the
results to be expressed in terms of direct probabilities of
the treatment effect size being within a certain range
but this time based purely on the results from the trial.
This type of approach can be effective in maximizing
the value of the information from a trial that has failed
to recruit. In terms of rare diseases, if such an analysis is
planned, then a sample size can be chosen that is feasible
and ensures that the posterior probability distribution
has an acceptable level of uncertainty that will enable
clinical decisions. The standardized likelihood has been
suggested before as a useful way of presenting trial
results to clinicians [10,11], not only
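That planning step can be sketched as follows, assuming the standard approximation that with a flat prior the posterior for the log hazard ratio is roughly normal with variance 4/d for d events split evenly between arms; the precision target below is a hypothetical stand-in for whatever level of uncertainty the clinical team judges acceptable.

```python
import numpy as np
from scipy.stats import norm

def posterior_sd(events):
    # Flat-prior posterior sd for the log hazard ratio:
    # approximately sqrt(4 / d) with d events split 1:1 between arms.
    return np.sqrt(4 / events)

def credible_interval_width(events, level=0.95):
    # Width of the central credible interval for the log hazard ratio.
    z = norm.ppf(0.5 + level / 2)
    return 2 * z * posterior_sd(events)

# Choose the smallest event count whose 95% credible interval is no
# wider than a pre-agreed (hypothetical) target on the log HR scale.
target_width = 1.0
d_required = next(d for d in range(10, 1000)
                  if credible_interval_width(d) <= target_width)
```

Because the calculation runs on the number of events rather than patients, it makes explicit what level of posterior precision a feasible trial in a rare disease can realistically buy.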
www.future-science.com
EDITORIAL Billingham, Malottki & Steven