
Figure 1. Study flow diagram.

Table 1. Unused methods sections

Unused methods

Description of methods

Data extraction and management

The data extraction form will include subheadings relating to the following areas.

  1. Study methods (including methods of randomisation, allocation concealment, and blinding of research personnel or participants).

  2. Ethical approval (provision of informed consent or assent).

  3. Referral route (method through which individuals are referred or present for family therapy).

  4. Participant demographics and clinical diagnoses (including ASD and comorbid diagnoses).

  5. Instruments used to diagnose ASD (including clinician‐administered assessments with either participants or informants).

  6. Active and comparator interventions (modality, content and duration of the active and comparator interventions).

  7. Outcome measurements (for individuals with ASD and their family members; and health outcome data, if cited).

  8. Results (including descriptive and inferential statistical data, as well as study results).

  9. Adverse events (e.g. whether there has been an increase in mental health morbidities).

  10. Treatment fidelity (e.g. whether a manualised treatment approach was used, if treatment sessions were independently reviewed for adherence to the theoretical model, and the frequency and nature of clinical supervision for trial therapists).

We will attempt to separate the outcomes and results between sites for any multi‐centre studies. If the data described in any report appear ambiguous, we will contact the authors for clarification. If we are unable to liaise with the report authors, we will document this within the review, and the review team will discuss the discrepancies.

For any non‐English language studies, we will endeavour to arrange for report translation.

Assessment of risk of bias in included studies

DS and JS will independently assess the risk of bias of all included studies across seven domains: random sequence generation; allocation concealment; blinding of participants and trial staff; blinding of outcome assessments; incomplete outcome data; selective outcome reporting; and any other potential sources of bias. For each included study, we will assign each of these domains one of three ratings: high risk of bias, low risk of bias, or unclear risk of bias. We have detailed criteria for rating various domains of bias below, with examples drawn from Chapter 8.5 of the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011a).

Random sequence generation

  1. High risk of bias: a non‐random method is used to generate the sequence such as allocation by alternate days or geographical location of entry to the trial.

  2. Low risk of bias: random methods (e.g. random number table or computer random number generator) are used to generate the sequence to produce comparable groups.

  3. Unclear risk of bias: no or insufficient information is provided on the methods used to generate the sequence to permit a judgement of high or low risk of bias.

Allocation concealment

  1. High risk of bias: participants and researchers may have been able to foresee assignment to intervention groups due to insufficient measures used to conceal allocation (such as open random allocation schedule, unsealed or non‐opaque envelopes).

  2. Low risk of bias: adequate methods are used to conceal the allocation (e.g. opaque envelope procedure, central allocation or by independent personnel outside of the research team), so that participants and researchers are unable to foresee or influence the assignment of intervention groups.

  3. Unclear risk of bias: no or insufficient detail is provided on methods used to conceal the allocation sequence to permit a judgement of high or low risk of bias.

Blinding of participants and research personnel

  1. High risk of bias: neither participants nor research personnel are blinded to the treatment group allocation or study hypotheses, and outcomes are likely to be influenced by such lack of blinding; or blinding is attempted and subsequently broken; or some participants and personnel are blinded while others are not blinded, which may introduce bias.

  2. Low risk of bias: effective measures (e.g. placebo or sham therapy sessions) are used to blind study participants and research personnel from knowing intervention group allocation and study hypotheses; or when blinding is not possible, study authors are able to justify that the outcome is unlikely to be influenced by the lack of blinding.

  3. Unclear risk of bias: either the study did not address this outcome or insufficient details are provided on methods of blinding to permit a judgement of low or high risk of bias.

Blinding of outcome assessment

  1. High risk of bias: outcome assessors are not blinded to treatment allocation of the study participants and the study hypothesis, and the outcomes are likely to be influenced by the lack of blinding.

  2. Low risk of bias: objective measures (such as biomedical measures of cortisol levels), which are unlikely to be influenced by a lack of blinding of outcome assessors, are used; participants are unaware of which intervention they have been allocated to; or participants' knowledge of which intervention they are receiving does not mediate their response to subjective outcome measures.

  3. Unclear risk of bias: there is a lack of detail on methods of blinding to permit a judgement of high or low risk of bias.

Incomplete outcome data

  1. High risk of bias: reasons for missing data are likely to be related to the true outcome; missing data are not balanced across groups; or inappropriate methods are used to impute missing data.

  2. Low risk of bias: no incomplete outcome data for each main outcome; reasons for missing data are unlikely to be related to true outcome; missing data are balanced across groups; or appropriate methods have been used to impute the data.

  3. Unclear risk of bias: either the study did not address this outcome, or there is insufficient detail as regards to the amount, nature, and handling of incomplete outcome data to permit a judgement of low or high risk of bias.

Selective reporting

  1. High risk of bias: not all prespecified outcomes are reported; or outcomes are reported using methods not prespecified and for only a subgroup of the sample; or outcomes are reported that were not prespecified; or outcomes are reported incompletely and cannot be included in a meta‐analysis.

  2. Low risk of bias: all outcomes are reported as prespecified in published protocol, or the protocol is not available but there is convincing text that suggests that all prespecified outcomes have been reported.

  3. Unclear risk of bias: there is insufficient information (e.g. no protocol available) to permit a judgement of high or low risk of bias.

Other sources of bias

  1. High risk of bias: the study raises other important concerns, such as bias relating to the study design or claims of fraudulence, or other sources of bias that are not covered by the above domains.

  2. Low risk of bias: there is no evidence to suggest there are any other important concerns about bias not addressed in the domains stated above.

  3. Unclear risk of bias: there may be an additional risk of bias, but there is insufficient information to fully assess this risk, or it is unclear whether the risk would introduce bias into the study results.

We will obtain a third opinion from EP, MF or FH should there be disagreement about the 'Risk of bias' assessment or a lack of consensus about any of the individual domains per study or in terms of the overall appraisal of the trial. We will also attempt to contact report authors to provide clarification about aspects of the trial, as needed.

'Summary of findings' tables

We will import data from Review Manager (Review Manager 2014) into GRADEprofiler (GRADEpro GDT), and use this software to create 'Summary of findings' tables. These tables will provide outcome‐specific information concerning the overall quality of the body of evidence from the studies included in the comparison, the magnitude of effect of the interventions examined, and the sum of available data on outcomes rated as relevant to patient care and decision making.

We will employ the GRADE approach to assess the quality of evidence (Schünemann 2011), using the following ratings: high quality (RCTs or quasi‐RCTs with a very low risk of bias), moderate quality (RCTs or quasi‐RCTs with some evidence of risk of bias such as inadequate allocation concealment), low and very low quality (RCTs or quasi‐RCTs that have significant threats to internal study validity such as failure to adequately randomise participants, lack of blinding of outcome assessors, or selective outcome reporting) (Schünemann 2011, Table 12.2.a).

We will include the following outcomes in the 'Summary of findings' tables.

  1. Quality or quantity of social interaction or communication.

  2. Mental health morbidity, including stress, anxiety, or depression.

  3. Quality of life.

  4. Confidence in or attributions about coping.

  5. Adverse effects or events.

Measures of treatment effect

Dichotomous data

For dichotomous outcomes, such as the presence or absence of challenging behaviour(s), we will use the Mantel‐Haenszel method for computing the pooled risk ratio (RR) (Mantel 1959). We will use the RR in meta‐analyses, rather than the odds ratio (OR), because the OR can be susceptible to misinterpretation, which can lead to overestimation of the benefits and harms of the intervention (Deeks 2011, Section 9.4.4.4). We will report the RR with 95% CIs.
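
As an illustration only, the sketch below (written in Python; not the RevMan routine we will actually use) computes a Mantel‐Haenszel pooled RR with a Greenland‐Robins variance for the 95% CI. The function name is ours and the event counts are hypothetical.

    import numpy as np

    def mantel_haenszel_rr(events_t, n_t, events_c, n_c):
        """Pool 2x2 tables (one entry per study) into a Mantel-Haenszel risk ratio."""
        a, n1 = np.asarray(events_t, float), np.asarray(n_t, float)
        c, n2 = np.asarray(events_c, float), np.asarray(n_c, float)
        N = n1 + n2
        R, S = a * n2 / N, c * n1 / N               # per-study numerator and denominator terms
        rr = R.sum() / S.sum()                      # pooled risk ratio
        P = (n1 * n2 * (a + c) - a * c * N) / N**2
        se_log_rr = np.sqrt(P.sum() / (R.sum() * S.sum()))   # Greenland-Robins SE of log(RR)
        ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
        return rr, ci

    # Hypothetical events/totals from three trials (treatment arms first, then control arms)
    rr, (lo, hi) = mantel_haenszel_rr([12, 8, 20], [40, 35, 60], [18, 15, 27], [38, 36, 58])
    print(f"Pooled RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")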

Continuous data

We will calculate the mean difference and 95% CI when all outcomes are measured using the same scale in the same way. When different measures are used to assess the same outcome, we will calculate the standardised mean difference and 95% CI.
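
For illustration, the Python sketch below shows the standard single‐trial calculations behind these two statistics: the mean difference with its 95% CI, and a standardised mean difference computed as Hedges' g with an approximate standard error. The summary statistics are hypothetical, and this is a minimal example rather than the routine RevMan applies.

    import math

    def mean_difference(m1, sd1, n1, m2, sd2, n2):
        md = m1 - m2
        se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
        return md, (md - 1.96 * se, md + 1.96 * se)

    def standardised_mean_difference(m1, sd1, n1, m2, sd2, n2):
        # Pooled SD, Cohen's d, then Hedges' small-sample correction
        s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / s_pooled
        j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
        g = j * d
        se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))  # approximate SE
        return g, (g - 1.96 * se, g + 1.96 * se)

    # Hypothetical group summaries: mean, SD, n for intervention and comparator
    print(mean_difference(24.1, 6.2, 30, 27.8, 5.9, 32))
    print(standardised_mean_difference(24.1, 6.2, 30, 27.8, 5.9, 32))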

Unit of analysis issues

Cluster trials

In cluster trials, the independence of individuals cannot be assumed (Higgins 2011b). As we are examining the effectiveness of an intervention for both individuals and family members, we may identify cluster‐randomised trials.

If clustering has been incorporated into the analyses of primary studies, we plan to present these data as if from a non‐cluster‐randomised study, but adjust for the clustering effect. We will contact study authors for more information if needed. If we identify cluster trials that have been analysed using incorrect statistical methods (i.e. not taking the clustering into account), we will contact study authors to request individual participant data so that we may calculate an estimate of the intracluster correlation coefficient (ICC). If we are unable to obtain this information, we will adjust sample sizes using an estimate of the ICC from the trial or from a trial of a similar population, with advice from a statistician, and use this to re‐analyse the data. In the event that we are unable to adjust for the incorrect statistical methods used by the cluster trials, and therefore cannot estimate the ICC with any degree of confidence, we will exclude the trial (Higgins 2011b).
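
The approximate adjustment referred to above can be illustrated with the design effect 1 + (m - 1) x ICC, where m is the mean cluster size: sample sizes (and event counts) are divided by this factor before entry into the meta‐analysis. The Python sketch below uses hypothetical numbers and an assumed ICC, and illustrates the principle rather than the exact analysis we will run.

    def design_effect(mean_cluster_size, icc):
        return 1 + (mean_cluster_size - 1) * icc

    def effective_sample(n, events, mean_cluster_size, icc):
        de = design_effect(mean_cluster_size, icc)
        return n / de, events / de   # "effective" totals for entry into the meta-analysis

    # e.g. 120 participants in 12 families (mean cluster size 10), with an assumed ICC of 0.05
    print(effective_sample(n=120, events=36, mean_cluster_size=10, icc=0.05))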

We will investigate the robustness of our results by conducting sensitivity analyses, for example, to explore the impact of different types of cluster‐randomisation units (such as families or health practitioners) (Higgins 2011b). We will also compare the results with and without cluster trials that have not been analysed correctly by the trialists (i.e. where the ICC is estimated from other trials to adjust for the cluster effect) (see Sensitivity analysis).

Cross‐over trials

Due to the issue of carry‐over, whereby the effectiveness of a second intervention may be influenced by the first intervention, we will exclude cross‐over trials.

Multiple comparisons

Where a trial involves more than two treatment (or comparator) arms, we will first assess which intervention (or comparator) groups are relevant to our review. We will use data from the arms of the trial that are relevant to the review objectives, but present all intervention groups in the 'Characteristics of included studies' tables, providing a detailed description of why we have selected particular groups and excluded others. In the event that a study has two or more intervention groups and a control group that are relevant to the review, we will split the control group data proportionately between the intervention groups, so that control participants are not counted more than once.
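
As an illustration of this splitting rule, the Python sketch below divides a shared control arm between two eligible intervention arms in proportion to their sample sizes. The counts and the helper function are hypothetical examples, not a prescription.

    def split_control(control_n, control_events, arm_sizes):
        total = sum(arm_sizes)
        shares = []
        for n in arm_sizes:
            frac = n / total
            # Rounding may need manual adjustment so the shares sum to the original totals
            shares.append((round(control_n * frac), round(control_events * frac)))
        return shares  # one (n, events) share of the control group per intervention arm

    # Control arm of 60 participants (15 events) shared between arms of 40 and 50 participants
    print(split_control(60, 15, [40, 50]))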

Repeated measures

When a trial reports outcome data obtained at more than one time point, we will conduct analyses separately for each time point (e.g. postintervention and at follow‐up, if follow‐up is specified by the trialist).

Dealing with missing data

We will consider the possible impact of missing data on the results of the review.

Data may be missing either because (1) they have been insufficiently or inadequately reported, or (2) because of dropout or attrition. In the event of insufficient or inadequate reporting, we will first try to obtain any missing data from the trial authors, including unreported data (e.g. group means and SDs), details of dropouts, and interventions provided. We will describe the missing data in the 'Risk of bias' table.

In either case outlined above, and when we cannot obtain data, we will conduct analyses using ITT principles. For dichotomous outcomes (those not deemed to be missing at random), we will impute the outcomes for the missing participants using both the most optimistic (i.e. assuming participants with missing data improve) and the most pessimistic (i.e. assuming participants with missing data deteriorate) scenarios.
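
To make the two scenarios concrete, the Python sketch below imputes missing participants in a single arm under the optimistic assumption (missing participants improved) and the pessimistic assumption (missing participants deteriorated). The counts are hypothetical, and in practice each arm of a trial would be handled in this way.

    def impute_scenarios(observed_events, observed_n, missing_n):
        # "Events" here are improvements, so the optimistic scenario adds the missing
        # participants to both the event count and the denominator; the pessimistic
        # scenario adds them to the denominator only.
        optimistic = (observed_events + missing_n, observed_n + missing_n)
        pessimistic = (observed_events, observed_n + missing_n)
        return {"optimistic": optimistic, "pessimistic": pessimistic}

    # 18 of 40 followed-up participants improved; 6 participants were lost to follow-up
    print(impute_scenarios(observed_events=18, observed_n=40, missing_n=6))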

When data are missing for continuous outcomes (e.g. data pertaining to means or SD), we will attempt to calculate them based on the standard errors, CIs, and t values, according to the rules described in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011c). If this information is missing, and we are unable to obtain it from trial authors, we will report it as missing data in the review.
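
For illustration, the Python sketch below applies the standard back‐calculations described in the Handbook: SD from a standard error, the standard error of a mean recovered from its 95% CI, and the standard error of a difference recovered from its t value. The numbers are hypothetical, and the CI conversion assumes a large‐sample (1.96) multiplier.

    import math

    def sd_from_se(se, n):
        return se * math.sqrt(n)                      # SD = SE * sqrt(n)

    def se_of_mean_from_ci(lower, upper):
        return (upper - lower) / (2 * 1.96)           # large-sample 95% CI of a single mean

    def se_of_md_from_t(mean_difference, t_value):
        return abs(mean_difference / t_value)         # SE of a difference from its t statistic

    print(sd_from_se(se=1.4, n=36))                       # SD recovered from a reported SE
    print(sd_from_se(se_of_mean_from_ci(21.3, 26.7), 36)) # SD recovered via a reported 95% CI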

We will also conduct a sensitivity analysis to compare the results of the ITT analysis with imputation against those of an 'available case' analysis (see Sensitivity analysis). If these analyses yield similar treatment effects, we will present the results of the available case analyses.

Assessment of heterogeneity

Within each comparison, we will first assess clinical heterogeneity (e.g. variability in the active and comparator interventions, participant characteristics, or outcome measures used) and methodological heterogeneity (e.g. variability in study design, including differences in the nature of the randomisation unit and the size of the clusters randomised, and in risk of bias, which we will assess according to the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions (Deeks 2011)). If there is clinical or methodological heterogeneity, we will extract and document these characteristics on the data extraction form and synthesise the results narratively. We will then assess statistical heterogeneity using the I² and Chi² statistics and by visually inspecting the forest plots. If we identify a substantial level of heterogeneity across trials (e.g. an I² value greater than 30% to 60%, a P value less than 0.10 in the Chi² test for heterogeneity, or effects in different directions), we will conduct prespecified subgroup analyses (see Subgroup analysis and investigation of heterogeneity).
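
The statistical heterogeneity measures named above can be computed from per‐study effect estimates and standard errors as in the Python sketch below (hypothetical effect sizes; not the RevMan implementation): Cochran's Q, its Chi² P value, and I² = (Q - df)/Q.

    import numpy as np
    from scipy import stats

    def heterogeneity(effects, ses):
        effects, ses = np.asarray(effects, float), np.asarray(ses, float)
        w = 1 / ses**2                                # inverse-variance weights
        pooled = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
        q = np.sum(w * (effects - pooled)**2)         # Cochran's Q
        df = len(effects) - 1
        p = stats.chi2.sf(q, df)                      # P value of the Chi² test for heterogeneity
        i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
        return q, p, i2

    # Hypothetical standardised effect estimates and their standard errors from four trials
    print(heterogeneity([-0.42, -0.10, -0.55, 0.05], [0.20, 0.18, 0.25, 0.22]))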

Assessment of reporting biases

We will assess reporting biases, including (multiple) publication, selective reporting, outcome and language biases (Sterne 2011, Table 10.1.a). First, we will try to locate protocols of included trials. If the protocol is available, we will compare outcomes documented in the protocol and the published report. If the protocol is not available, we will compare outcomes listed in the methods section of the trial report with the reported results. In addition, we will create funnel plots to investigate the possibility of publication bias and other small‐study effects when there is a sufficient number of trials (10 or more). While funnel plots may be useful in investigating reporting biases, there is some concern that tests for funnel plot asymmetry have limited power to detect small‐study effects, particularly when there are fewer than 10 studies, or when all studies are of a similar sample size (Sterne 2011). In the event that funnel plots are possible, we will produce them and seek statistical advice in their interpretation.
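
If funnel plots are feasible, they can be drawn as in the Python sketch below: each study's effect estimate plotted against its standard error, with the standard‐error axis inverted so that larger studies sit near the top. The effect sizes are hypothetical, and interpretation of any asymmetry would still be undertaken with statistical advice.

    import matplotlib.pyplot as plt

    # Hypothetical per-study effect estimates and standard errors from ten trials
    effects = [-0.42, -0.10, -0.55, 0.05, -0.30, -0.62, -0.15, -0.48, -0.05, -0.35]
    ses = [0.20, 0.18, 0.25, 0.22, 0.15, 0.30, 0.12, 0.27, 0.10, 0.19]

    plt.scatter(effects, ses)
    pooled = sum(e / s**2 for e, s in zip(effects, ses)) / sum(1 / s**2 for s in ses)
    plt.axvline(pooled, linestyle="--")    # fixed-effect pooled estimate as a reference line
    plt.gca().invert_yaxis()               # smallest standard errors (largest studies) at the top
    plt.xlabel("Effect estimate (SMD)")
    plt.ylabel("Standard error")
    plt.show()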

Data synthesis

We will conduct random‐effects meta‐analyses to produce the average effect size of the intervention across trials. A random‐effects model is considered more appropriate than a fixed‐effect model because the population and setting of trials are likely to be different, and therefore the effects are also likely to be different (Deeks 2011).
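
As an illustration of how such an average effect can be produced, the Python sketch below implements a DerSimonian‐Laird random‐effects pooled estimate from per‐study effects and standard errors. This is one common estimator, shown with hypothetical effect sizes, and is not necessarily the exact routine used by RevMan.

    import numpy as np

    def random_effects_dl(effects, ses):
        effects, ses = np.asarray(effects, float), np.asarray(ses, float)
        w = 1 / ses**2
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed)**2)          # Cochran's Q
        df = len(effects) - 1
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                 # between-study variance (DerSimonian-Laird)
        w_star = 1 / (ses**2 + tau2)                  # random-effects weights
        pooled = np.sum(w_star * effects) / np.sum(w_star)
        se = np.sqrt(1 / np.sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

    print(random_effects_dl([-0.42, -0.10, -0.55, 0.05], [0.20, 0.18, 0.25, 0.22]))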

Subgroup analysis and investigation of heterogeneity

Depending on the sample size and heterogeneity of study populations, we propose to undertake subgroup analyses as follows:

  1. children and adolescents (aged 17 years and under) versus adults (aged 18 years and above) with ASD; and

  2. individuals with ASD who have a concurrent learning disability (i.e. IQ below 70) versus individuals with ASD and no learning disability.

To limit the risk of multiple comparisons, we will conduct subgroup analyses on primary outcomes only.

Sensitivity analysis

We will undertake sensitivity analyses to evaluate the impact of excluding trials (or trial data) that are judged to have a high risk of bias (e.g. in terms of the domains of random sequence generation, allocation concealment, blinding, or outcome reporting). We will also undertake sensitivity analyses to assess the potential impact of missing outcome data.

ASD: autism spectrum disorder; CI: confidence interval; GRADE: Grades of Recommendation, Assessment, Development and Evaluation; ITT: intention‐to‐treat; IQ: intelligence quotient; RCTs: randomised controlled trials; SD: standard deviation.
