
Hepatitis B vaccination during pregnancy for preventing infant infection

Appendices

Appendix 1. Methods for data collection and analysis in subsequent updates of this review

Selection of studies

Two review authors will independently assess for inclusion all the potential studies identified as a result of the search strategy. We will resolve any disagreement through discussion or, if required, we will consult the third review author.

Data extraction and management

We will design a form to extract data. For eligible studies, two review authors will extract the data using the agreed form. We will resolve discrepancies through discussion or, if required, we will consult the third review author. We will enter data into Review Manager software (RevMan 2014) and check for accuracy.

When information regarding any of the above is unclear, we plan to contact the authors of the original reports to request further details.

Assessment of risk of bias in included studies

Two review authors will independently assess risk of bias for each study using the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011). We will resolve any disagreements by discussion or by involving a third assessor.

(1) Random sequence generation (checking for possible selection bias)

We will describe for each included study the method used to generate the allocation sequence in sufficient detail to allow an assessment of whether it should produce comparable groups.

We will assess the method as:

  • low risk of bias (any truly random process, e.g. random number table; computer random number generator);

  • high risk of bias (any non‐random process, e.g. odd or even date of birth; hospital or clinic record number);

  • unclear risk of bias.

(2) Allocation concealment (checking for possible selection bias)

We will describe for each included study the method used to conceal allocation to interventions prior to assignment and will assess whether intervention allocation could have been foreseen in advance of, or during, recruitment, or changed after assignment.

We will assess the methods as:

  • low risk of bias (e.g. telephone or central randomization; consecutively numbered sealed opaque envelopes);

  • high risk of bias (e.g. open random allocation; unsealed or non‐opaque envelopes; alternation; date of birth);

  • unclear risk of bias.

(3.1) Blinding of participants and personnel (checking for possible performance bias)

We will describe for each included study the methods used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. We will consider that studies are at low risk of bias if they were blinded, or if we judge that the lack of blinding would be unlikely to affect results. We will assess blinding separately for different outcomes or classes of outcomes.

We will assess the methods as:

  • low, high or unclear risk of bias for participants;

  • low, high or unclear risk of bias for personnel.

(3.2) Blinding of outcome assessment (checking for possible detection bias)

We will describe for each included study the methods used, if any, to blind outcome assessors from knowledge of which intervention a participant received. We will assess blinding separately for different outcomes or classes of outcomes.

We will assess methods used to blind outcome assessment as:

  • low, high or unclear risk of bias.

(4) Incomplete outcome data (checking for possible attrition bias due to the amount, nature and handling of incomplete outcome data)

We will describe for each included study, and for each outcome or class of outcomes, the completeness of data including attrition and exclusions from the analysis. We will state whether attrition and exclusions were reported and the numbers included in the analysis at each stage (compared with the total randomized participants), reasons for attrition or exclusion where reported, and whether missing data were balanced across groups or were related to outcomes. Where sufficient information is reported, or can be supplied by the trial authors, we will re‐include missing data in the analyses which we undertake.

We will assess methods as:

  • low risk of bias (e.g. no missing outcome data; missing outcome data balanced across groups);

  • high risk of bias (e.g. numbers or reasons for missing data imbalanced across groups; ‘as treated’ analysis done with substantial departure of intervention received from that assigned at randomization);

  • unclear risk of bias.

(5) Selective reporting (checking for reporting bias)

We will describe for each included study how we investigated the possibility of selective outcome reporting bias and what we found.

We will assess the methods as:

  • low risk of bias (where it is clear that all of the study’s pre‐specified outcomes and all expected outcomes of interest to the review have been reported);

  • high risk of bias (where not all the study’s pre‐specified outcomes have been reported; one or more reported primary outcomes were not pre‐specified; outcomes of interest are reported incompletely and so cannot be used; study fails to include results of a key outcome that would have been expected to have been reported);

  • unclear risk of bias.

(6) Other bias (checking for bias due to problems not covered by (1) to (5) above)

We will describe for each included study any important concerns we have about other possible sources of bias.

(7) Overall risk of bias

We will make explicit judgements about whether studies are at high risk of bias, according to the criteria given in the Handbook (Higgins 2011). With reference to (1) to (6) above, we will assess the likely magnitude and direction of the bias and whether we consider it is likely to impact on the findings. We will explore the impact of the level of bias through undertaking sensitivity analyses ‐ see Sensitivity analysis.

We will use the GRADE approach (Schunemann 2009) to assess the quality of the body of evidence relating to the following outcomes for the main comparisons:

  1. Incidence of hepatitis B virus infection in infants.

We will use GRADEprofiler (GRADE 2008) to import data from Review Manager 5.3 (RevMan 2014) and create 'Summary of findings' tables. We will produce a summary of the intervention effect and a measure of quality for each of the above outcomes using the GRADE approach. The GRADE approach uses five considerations (study limitations, consistency of effect, imprecision, indirectness and publication bias) to assess the quality of the body of evidence for each outcome. The evidence can be downgraded from 'high quality' by one level for serious (or by two levels for very serious) limitations, depending on assessments of risk of bias, indirectness of evidence, inconsistency, imprecision of effect estimates or potential publication bias.

Measures of treatment effect

Dichotomous data

For dichotomous data, we will present results as summary risk ratios with 95% confidence intervals.
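As a purely illustrative sketch of the calculation behind a summary risk ratio (the analyses themselves will be run in RevMan), the following Python snippet computes a risk ratio and its 95% confidence interval for a single trial from hypothetical 2 × 2 counts, using the usual log‐normal approximation for the standard error.

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio with a 95% CI for one trial, via the log-normal approximation."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR)
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts: 4/120 infected infants in the vaccinated arm, 12/118 in the control arm
print(risk_ratio_ci(4, 120, 12, 118))
```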

Continuous data

For continuous data, we will use the mean difference if outcomes are measured in the same way between trials. We will use the standardized mean difference to combine trials that measure the same outcome, but use different methods.
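The two measures differ only in whether the difference in means is divided by a pooled standard deviation. The sketch below, using hypothetical summary statistics, illustrates the distinction; the SMD shown is Cohen's d (RevMan's default additionally applies a small‐sample adjustment).

```python
import math

def mean_difference(mean_tx, mean_ctrl):
    """Mean difference: usable when trials measure the outcome on the same scale."""
    return mean_tx - mean_ctrl

def standardized_mean_difference(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    """SMD (Cohen's d): the mean difference divided by the pooled standard deviation,
    usable when trials measure the same outcome on different scales."""
    pooled_sd = math.sqrt(
        ((n_tx - 1) * sd_tx ** 2 + (n_ctrl - 1) * sd_ctrl ** 2) / (n_tx + n_ctrl - 2)
    )
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical summary statistics from one trial
print(mean_difference(120.0, 95.0))
print(standardized_mean_difference(120.0, 30.0, 60, 95.0, 28.0, 58))
```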

Unit of analysis issues

Cluster‐randomized trials

We will include cluster‐randomized trials in the analyses along with individually‐randomized trials. We will adjust their sample sizes using the methods described in the Handbook (Section 16.3.4 or 16.3.6), using an estimate of the intracluster correlation coefficient (ICC) derived from the trial (if possible), from a similar trial or from a study of a similar population. If we use ICCs from other sources, we will report this and conduct sensitivity analyses to investigate the effect of variation in the ICC. If we identify both cluster‐randomized trials and individually‐randomized trials, we plan to synthesize the relevant information. We will consider it reasonable to combine the results from both if there is little heterogeneity between the study designs and the interaction between the effect of intervention and the choice of randomization unit is considered to be unlikely.
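A minimal sketch of the Handbook's approximate adjustment, with hypothetical numbers and an assumed ICC of 0.02: each arm's sample size is divided by the design effect 1 + (M − 1) × ICC, where M is the average cluster size.

```python
def effective_sample_size(n, avg_cluster_size, icc):
    """Approximate adjustment for clustering: divide the sample size by the
    design effect 1 + (M - 1) * ICC, where M is the average cluster size."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n / design_effect

# Hypothetical arm: 400 women recruited across 20 clinics (average cluster size 20),
# with an assumed intracluster correlation coefficient of 0.02
print(effective_sample_size(400, 20, 0.02))  # ~290 "effective" participants
```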

We will also acknowledge heterogeneity in the randomization unit and perform a subgroup analysis to investigate the effects of the randomization unit.

Cross‐over trials

It is unlikely that cross‐over designs will be a valid study design for Pregnancy and Childbirth reviews, and so if identified, we will exclude them.

Dealing with missing data

For included studies, we will note levels of attrition. We will explore the impact of including studies with high levels of missing data in the overall assessment of treatment effect by using sensitivity analysis.

For all outcomes, we will carry out analyses, as far as possible, on an intention‐to‐treat basis, i.e. we will attempt to include all participants randomized to each group in the analyses, and all participants will be analyzed in the group to which they were allocated, regardless of whether or not they received the allocated intervention. The denominator for each outcome in each trial will be the number randomized minus any participants whose outcomes are known to be missing.

Assessment of heterogeneity

We will assess statistical heterogeneity in each meta‐analysis using the Tau², I² and Chi² statistics. We will regard heterogeneity as substantial if the I² is greater than 30% and either the Tau² is greater than zero, or there is a low P value (less than 0.10) in the Chi² test for heterogeneity.
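For reference, a minimal sketch of how these statistics relate to one another, using hypothetical per‐trial log risk ratios and variances: Cochran's Q supplies the Chi² test and its P value, I² re‐expresses the excess of Q over its degrees of freedom as a percentage, and Tau² is the DerSimonian‐Laird between‐trial variance.

```python
from scipy.stats import chi2

def heterogeneity(effects, variances):
    """Cochran's Q (Chi² test), its P value, I², and the DerSimonian-Laird Tau²
    from per-trial effect estimates (e.g. log risk ratios) and their variances."""
    w = [1 / v for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    p_value = chi2.sf(q, df)                                  # Chi² test for heterogeneity
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau_squared = max(0.0, (q - df) / c) if c > 0 else 0.0    # DerSimonian-Laird estimator
    return q, df, p_value, i_squared, tau_squared

# Hypothetical log risk ratios and variances from three trials
print(heterogeneity([-0.9, -0.4, -0.7], [0.10, 0.08, 0.12]))
```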

Assessment of reporting biases

If there are 10 or more studies in the meta‐analysis we will investigate reporting biases (such as publication bias) using funnel plots. We will assess funnel plot asymmetry visually. If asymmetry is suggested by a visual assessment, we will perform exploratory analyses to investigate it.
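Funnel plots will be generated in RevMan, but the idea is simple enough to sketch: each trial's effect estimate is plotted against its standard error, with the larger, more precise trials at the top, and marked asymmetry around the pooled effect suggests possible reporting bias. A minimal matplotlib sketch with hypothetical data:

```python
import matplotlib.pyplot as plt

# Hypothetical per-trial log risk ratios and their standard errors
log_rr = [-0.9, -0.4, -0.7, -0.2, -1.1, -0.5, -0.6, -0.3, -0.8, -0.45]
se = [0.45, 0.20, 0.35, 0.15, 0.50, 0.25, 0.30, 0.18, 0.40, 0.22]

plt.scatter(log_rr, se)
plt.axvline(sum(log_rr) / len(log_rr), linestyle="--")  # crude reference line at the unweighted mean
plt.gca().invert_yaxis()                                # most precise trials at the top
plt.xlabel("log risk ratio")
plt.ylabel("standard error")
plt.title("Funnel plot (visual check for asymmetry)")
plt.show()
```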

Data synthesis

We will carry out statistical analysis using the Review Manager software (RevMan 2014). We will use fixed‐effect meta‐analysis for combining data where it is reasonable to assume that studies are estimating the same underlying treatment effect: i.e. where trials are examining the same intervention, and the trials’ populations and methods are judged sufficiently similar. If there is clinical heterogeneity sufficient to expect that the underlying treatment effects differ between trials, or if substantial statistical heterogeneity is detected, we will use random‐effects meta‐analysis to produce an overall summary if an average treatment effect across trials is considered clinically meaningful. The random‐effects summary will be treated as the average of the range of possible treatment effects and we will discuss the clinical implications of treatment effects differing between trials. If the average treatment effect is not clinically meaningful, we will not combine trials.

If we use random‐effects analyses, the results will be presented as the average treatment effect with 95% confidence intervals, and the estimates of Tau² and I².
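The two models differ only in the weights used for inverse‐variance pooling: the fixed‐effect model weights each trial by 1/variance, while the random‐effects model adds the between‐trial variance Tau² to each trial's variance before weighting, which widens the confidence interval when heterogeneity is present. A minimal sketch with hypothetical data, in which an assumed Tau² of 0.05 stands in for the estimated value:

```python
import math

def inverse_variance_pool(effects, variances, tau_squared=0.0):
    """Inverse-variance pooled estimate with a 95% CI.
    tau_squared = 0 gives the fixed-effect model; a positive Tau²
    gives the random-effects model."""
    w = [1 / (v + tau_squared) for v in variances]
    estimate = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return estimate, estimate - 1.96 * se, estimate + 1.96 * se

# Hypothetical log risk ratios and variances from three trials
effects, variances = [-0.9, -0.4, -0.7], [0.10, 0.08, 0.12]
print([round(math.exp(x), 2) for x in inverse_variance_pool(effects, variances)])        # fixed-effect, RR scale
print([round(math.exp(x), 2) for x in inverse_variance_pool(effects, variances, 0.05)])  # random-effects, assumed Tau² = 0.05
```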

Subgroup analysis and investigation of heterogeneity

If we identify substantial heterogeneity, we will investigate it using subgroup analyses and sensitivity analyses. We will consider whether an overall summary is meaningful, and if it is, use random‐effects analysis to produce it.

We plan to carry out the following subgroup analyses:

  1. low risk versus high risk of hepatitis B virus (HBV) infection (high risk as defined by the trial authors, e.g. injection drug users, healthcare workers);

  2. low endemic setting versus high endemic setting of HBV infection;

  3. vaccination schedule (e.g. three‐dose versus two‐dose regimen);

  4. mothers negative versus positive for hepatitis B virus serological markers.

The following outcomes will be used in subgroup analysis:

  1. Incidence of hepatitis B virus infection in infants.

We will assess subgroup differences by interaction tests available within RevMan (RevMan 2014). We will report the results of subgroup analyses quoting the Chi² statistic and P value, and the interaction test I² value.
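As an illustration of the interaction test (the actual test will be the one implemented in RevMan), the sketch below treats each subgroup's pooled estimate as a single 'study' and tests for heterogeneity between subgroups with a Chi² statistic; the pooled log risk ratios and standard errors shown are hypothetical.

```python
from scipy.stats import chi2

def subgroup_interaction_test(subgroup_estimates, subgroup_ses):
    """Chi² test for subgroup differences, treating each subgroup's pooled
    estimate as a single 'study' and testing heterogeneity between them."""
    w = [1 / se ** 2 for se in subgroup_ses]
    overall = sum(wi * ei for wi, ei in zip(w, subgroup_estimates)) / sum(w)
    q = sum(wi * (ei - overall) ** 2 for wi, ei in zip(w, subgroup_estimates))
    df = len(subgroup_estimates) - 1
    p_value = chi2.sf(q, df)
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, df, p_value, i_squared

# Hypothetical pooled log risk ratios (and SEs) for low- versus high-endemicity settings
print(subgroup_interaction_test([-0.8, -0.3], [0.25, 0.20]))
```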

Sensitivity analysis

We plan to carry out sensitivity analyses to explore the effect of trial quality, as assessed by concealment of allocation, high attrition rates, or both, excluding poor-quality studies from the analyses in order to assess whether this makes any difference to the overall result.