
Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children ‐ a review of clinical study reports

This version is not the most recent

Table 1. Reporting bias testing framework for comparing evidence from multiple regulators, manufacturer clinical study reports, trial registries, and published trials for the following outcomes: harms, complications, and compliance, by priority of testing. First priority null hypotheses to test.

Null hypothesis

Definition

Potential impact

Framework to test hypothesis

There is no under‐reporting (overview hypothesis) (Hopewell 2009; McGauran 2010)

Under‐reporting is an overall term including all types of bias when there is an association between results and what is presented to the target audience

Tailoring methods and results to the target audience may be misleading. The direction of the effect could change or the statistical significance of the effect could change or the magnitude of the effect could change from clinically worthwhile to not clinically worthwhile and vice versa

1. Is there evidence of under‐reporting?
2. What types of under‐reporting are apparent (list and describe them)?
3. What is the overall impact of the under‐reporting on the results of a meta‐analysis (compare estimates of effects using (under)reported data and all data)?
4. What is the impact of under‐reporting on the conclusions of a meta‐analysis, i.e. are conclusions changed when all data are reported?
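Point 3 above can be made concrete with a small sketch: pooling trial results with a standard inverse‐variance fixed‐effect meta‐analysis of log risk ratios, first using "published" trials only and then using all trials. The trial counts and the function name are hypothetical illustrations, not data from the review.

```python
import math

def pooled_log_rr(trials):
    """Inverse-variance fixed-effect pool of log risk ratios.

    Each trial is (events_treat, n_treat, events_ctrl, n_ctrl).
    Uses the standard large-sample variance approximation for log RR.
    """
    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2
        num += log_rr / var   # weight each log RR by 1/variance
        den += 1 / var
    return num / den

# Hypothetical trials: two published, one unpublished
published = [(12, 100, 20, 100), (8, 150, 15, 150)]
unpublished = [(18, 120, 16, 120)]

rr_published = math.exp(pooled_log_rr(published))
rr_all = math.exp(pooled_log_rr(published + unpublished))
```

With these made-up numbers the pooled risk ratio shifts noticeably once the unpublished trial is included, which is exactly the comparison the framework asks for: estimate the effect from (under)reported data, re-estimate from all data, and check whether conclusions change.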

There is no difference between analysis plan in the protocol and final report (or the differences are listed and annotated) (McGauran 2010)

When protocol violations, especially those not reported and justified, are not associated with study results

Post hoc analyses and changes of plan lead to manipulation of reporting and choice of what is and is not reported

1. List any discrepancies between what is pre‐specified in protocol and what was actually done
2. Can these discrepancies be explained by documented changes or amendments to the protocol?
3. Were these changes made prior to observing the data?
4. What is the perceived impact of these changes on the results and conclusions?

There is no difference between published and unpublished conclusions of the same study (McGauran 2010)

A specific bias relating to the selective reporting of data in association with target audience

Results have been tailored to the intended recipient audience

1. Compare reporting of important outcomes (harms, complications) between published reports and other reports such as those to regulatory bodies e.g. FDA
2. Document any differences in conclusions based on separate reports of the same studies

Presentation of the same data set is not associated with differences in spelling, or with incomplete, discrepant, contradictory, or duplicate entries (Doshi 2009; Golder 2010; Jefferson 2009a)

Different versions of the same data set are associated with discrepancies

Raises the question of whether these discrepancies are mistakes or deliberate

1. Document any differences or similarities in separate reports of important outcomes (harms, complications) based on the same studies
2. Report any discrepancies to the manufacturer and ask them to clarify and correct any errors
3. What is the impact on the evidence base of including or excluding material with similar discrepancies?

There is no evidence of publication bias (Hopewell 2009; McGauran 2010)

Publication status is not associated with size and direction of results

Negative or positive publication bias can have a major impact on the interpretation of the data at all levels

1. Are there studies that have not been published (yes/no)?
2. How many studies have not been published (number and proportion of trials not published and proportion of patients not published)?
3. Construct a list of all known studies indicating which are published and which are not
4. What is the impact on the evidence base of including or excluding unpublished material?

There is no evidence of outcome emphasis bias (McGauran 2010)

When over‐ or under‐emphasis of outcomes is not associated with size or direction of results

Can lead to wrong conclusions because of over‐emphasis of certain outcomes

1. Are all of the pre‐specified outcomes in the study protocol reported?
2. Are the outcomes reported in the same way as specified in the study protocol?
3. Are there any documented changes to outcome reporting listed in the study protocol?
4. What is the impact on the evidence base of including or excluding emphasised outcomes?

There is no evidence of relative versus absolute measure bias (McGauran 2010)

When choice of effect estimates is not associated with size or direction of results

Can lead to wrong conclusions because of apparent under or overestimation of effects (e.g. in the use of relative instead of absolute measures of risk)

1. Are both relative and absolute measures of effect size used to report the results?
2. Is the incidence of each event reported for each treatment group?
3. What is the impact on the evidence base of including estimates of effect expressed in either relative or absolute measures?
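The contrast the first question probes can be illustrated with a minimal sketch; the function name and the event counts are hypothetical:

```python
def effect_measures(events_treat, n_treat, events_ctrl, n_ctrl):
    """Relative and absolute summaries of the same 2x2 trial result."""
    risk_t = events_treat / n_treat
    risk_c = events_ctrl / n_ctrl
    rr = risk_t / risk_c                      # relative risk
    arr = risk_c - risk_t                     # absolute risk reduction
    nnt = 1 / arr if arr else float("inf")    # number needed to treat
    return rr, arr, nnt

# A 50% relative risk reduction can coexist with a tiny absolute benefit
# when the event is rare: 1 vs 2 events per 1000 patients.
rr, arr, nnt = effect_measures(1, 1000, 2, 1000)
```

Here the relative measure ("halves the risk") sounds far more impressive than the absolute one (one event prevented per 1000 treated), which is why the framework asks for both measures and for the raw incidence in each arm.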

There is no evidence of follow up bias (McGauran 2010)

When there is no evidence that length of follow up is related to size and direction of results

Can lead to wrong conclusions due to over or under emphasis of results

1. Are reported results based on the complete follow up of each patient?
2. Are important events (harms, complications) unreported because they occurred in the off‐treatment period?
3. What is the impact on the evidence base of including or excluding material with complete follow up?

There is no evidence of data source bias (Chou 2005; McGauran 2010)

There is no difference between the evidence base presented to regulators (for approval for an indication) and that produced by or in the possession of the drug's manufacturer (Chou 2005)

Can lead to approved indications inconsistent with full data set

1. Have regulators been presented with all data sets resulting from trials sponsored by the drug’s manufacturer?
2. Have all national regulatory agencies been presented with the same trial data sets?
3. Can differences between national regulatory agencies be explained by access to different data sets?

FDA: Food and Drug Administration

Table 2. Reporting bias testing framework for comparing evidence from multiple regulators, manufacturer clinical study reports, trial registries, and published trials for the following outcomes: harms, complications, and compliance, by priority of testing. Second priority null hypotheses to test.

Null hypothesis

Definition

Potential impact

Framework to test hypothesis

There is no difference by funder (Jefferson 2009b; McGauran 2010)

When results and tone of conclusions are associated with type of funder

Funder influences results, conclusions and study visibility

1. Are there substantial numbers of comparable trials with different funding?
2. Is the type of funder associated with study quality, with the relationship between conclusions and the data presented, or with the prestige of the journal of publication?
3. Is the type of funder associated with publication status?

There is no evidence of authorship musical chairs bias (Cohen 2009; Doshi 2009; Jefferson 2009a; MacLean 2003)

When different authors for the same data set are presented to different target audiences

Raises an accountability question: who is responsible for the study?

1. Are the names of the people responsible for the unpublished report the same as those of the published reports?
2. Is the responsibility for conducting the trial clear?

There is no evidence of time lag bias (McGauran 2010)

When result reporting time frame is not associated with size or direction of results

Can lead to wrong conclusions

1. Are there significant differences between on‐t and off‐t data?
2. Does the reporting or non‐reporting of on‐t and off‐t data impact the conclusions?

There is no evidence of location bias (Higgins 2011)

The publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of results

Can lead to wrong conclusions in a specific setting or mislead generalisation to another context

1. Is there an association between publishing trials in journals with similar ease of access and database indexing and the size or direction of results?
2. How does this relate to unpublished material?

There is no evidence of disclosure pressure bias (McGauran 2010)

When external stimuli to publish or not are not associated with size or direction of results

Can lead to wrong conclusions because of blocks on what is reported or not

1. Why were some data and/or studies not published?
2. What impact do these motives have on interpretation of the evidence base?

There is no evidence of off‐label bias (McGauran 2010)

When reporting is not associated with a higher or lower probability of use for unregistered indications, or of recommendations thereof

Can lead to wrong conclusions because of reporting of data which leads to off‐label use or is a product of off‐label use

1. Is there any difference in the on‐label indications and dosage between published and unpublished clinical study reports?
2. If so, how does the inclusion or exclusion of off‐label data impact conclusions from the evidence base of this drug?

There is no evidence of commercial confidentiality bias (McGauran 2010)

When commercial confidentiality rules do not impact on presentation of results

Can lead to wrong conclusions because IPR or commercial confidentiality prevent full disclosure of results

1. Is there evidence of commercial confidentiality being invoked in the decision to publish or not?
2. If so, how does the inclusion or exclusion of commercially confidential data impact conclusions from the evidence base of this drug?

There is no evidence of inclusion of previously unpublished data bias (Golder 2010; McGauran 2010)

When there is no evidence of inclusion of heterogeneous unpublished data of variable quality that are sometimes difficult to interpret, either because of swamping or because of the absence of methods chapters

Can lead to wrong conclusions because of the inclusion of biased data not clearly identified as such

1. Is there any evidence of published review studies (particularly meta‐analyses) containing previously unpublished data?
2. If so, what is the impact of including or excluding unpublished data on the conclusions from the evidence base of this drug?

There is no evidence of blank cheque bias

When there is no evidence that third‐party independent researchers agree to having a trial’s sponsor fill in their data extraction sheets for unpublished data

Can lead to wrong conclusions because of the impossibility of independently assessing data. If the practice is not declared, it can mislead readers, giving conclusions a spurious impression of robustness

1. Are there unpublished data included in the third‐party data set or meta‐analysis that were gained without independent verification?
2. If so, how does the inclusion or exclusion of such data taken on trust impact conclusions from the evidence base of this drug?

There is no evidence of competition bias (McGauran 2010)

When there is no evidence that any type of reporting bias is related to market competition, leading to a better positioning of the drug

Can lead to wrong conclusions because what you see may be due to market pressures

1. Do the types of bias detected (outcome emphasis, time lag, etc.) favour neuraminidase inhibitors (NIs) versus other drugs or interventions in particular ways?
2. Do they present a picture or tell a story which is different from all the evidence and position the NI favourably or the competitor unfavourably?
3. How does competition bias impact conclusions from the evidence base of this drug?

There is no evidence of language bias (Higgins 2011)

When there is no evidence that reporting is associated with language of target audience

Can lead to wrong conclusions because what you see may be due to the type of market being targeted

1. Is there evidence of presentation of unpublished (e.g. slide shows, product inserts) or published evidence in a particular language?
2. If so, does the text in the source language differ from the destination language?
3. If so, how does language bias impact conclusions from the evidence base of this drug?

There is no evidence of differences in methodological quality (McGauran 2010)

When there is no evidence of differences in methodological quality by source and outcome

Can lead to wrong conclusions because methodological quality affects estimates of effect, so if quality is not in fact equivalent, then differences ascribed to drug performance may be false

1. Is there difference in methodological quality between published and unpublished data?
2. How do differences in methodological quality impact conclusions from the evidence base of this drug?

There is no evidence of differences in sample size bias (McGauran 2010)

When there is no evidence of differences in sample size in association with the size and direction of results

Same potential impact as methodological quality, but with respect to sample size

1. Are there significant differences in sample sizes between published and unpublished material?
2. If so, do these impact on conclusions drawn from the evidence base?

There is no evidence of multi‐centre status bias (McGauran 2010)

When there is no evidence that the presence of many or few centres is associated with size and direction of results

Can lead to wrong conclusions because what you see may be due to selection of centres and may not be generalisable

1. Are the methods used different from centre to centre?
2. If so, how do different methods impact conclusions from the evidence base of this drug?

There is no evidence of citation bias

When there is no evidence that citation of a selected study is associated with size and direction of results

Pressure is placed on authors of study reports to provide an unbalanced interpretation or perspective by selecting citations or misreporting their content

1. Are the references in the published studies comprehensive?
2. Do they refer to unpublished material?
3. If so, how does the inclusion or exclusion of cited unpublished material impact conclusions from the evidence base of this drug?

There is no association between affiliation of authors and positive research conclusions (McGauran 2010)

When there is no evidence that differences in the affiliation/employer of authors are associated with differences in the size and direction of results or the conclusions drawn

This form of bias is particularly dangerous when readers’ understanding or policy are based solely on the abstracts or conclusions of studies

Are there differences in study conclusions associated with affiliation of authors?

There is no evidence of publication constraints (McGauran 2010)

When there is no evidence that obstacles to publication are associated with size and direction of results

What you see has been filtered on the basis of its results

1. If unpublished studies exist, why were they not published?
2. Were data presented to regulators not published? If so, why?

There is no evidence of study design bias

When there is no evidence that there may be differences in design to emphasise size and direction of selected results

Can be misleading, as design affects results and generalisability, and the choice of design may be influenced by considerations other than the study objective and ethics

1. Is there any relationship between study design and study conclusions?
2. If so, how does the relationship impact conclusions from the evidence base of this drug?

on‐t: on‐treatment time frame
off‐t: off‐treatment time frame
IPR: intellectual property rights
