
Families and Schools Together (FAST) for improving outcomes for children and their families



Abstract

This is a protocol for a Cochrane Review (Intervention). The objectives are as follows:

To assess the effectiveness of the Families and Schools Together (FAST) programme on improving outcomes for children and their families.

Background

Description of the condition

The Melbourne Declaration on Educational Goals for Young Australians is not unusual in recognising the important role played by schools in nurturing the "intellectual, physical, social, emotional, moral, spiritual, and aesthetic development and well‐being" of young people (MCEETYA 2008). In the UK, the Education Act 2002 and the Academies Act 2010 require schools to provide a curriculum that promotes "the spiritual, moral, cultural and physical development of pupils at [...] school and of society, and prepares pupils ... for the opportunities, responsibilities and experiences of later life". Schools are not, therefore, just about educational attainment; how well children engage with school and, in turn, how they do at school is predictive of a range of outcomes in later life, including future employment, income, and physical and mental health (see, for example, Heckmann 2014; Nores 2010).

Investment in education and training is essential to creating sustainable economic growth, competitiveness, and increased productivity (European Commission 2012b). One of the five flagship initiatives of Europe's Growth Strategy is to reduce school dropout rates to below 10% in member nations where, on average, 73 million adults have low levels of education and 20% of 15‐year‐olds lack basic reading skills (European Commission 2013). The Education and Training Monitor supports efforts to combat early school leaving, to increase participation in education, to improve early education experiences, to reduce inequalities in achievement, and to promote skills‐based learning and participation in education (European Commission 2012a). As part of the global drive to increase the uptake of education (ACARA 2013; Eurydice 2012; National Committee on Inuit Education 2011; USDoE 2002), many education systems have introduced some form of objective‐based educational standards designed to record levels of achievement or 'outcomes' in basic skills such as literacy, numeracy, science, languages, and social skills (The World Bank 2009).

Children whose families live in poverty, particularly persistent poverty, are at increased risk of low educational achievement. The reasons for this are complex, reflecting the interaction of individual, family, school and community level factors. Examples of individual factors that have been implicated in children's educational attainment include a child's working memory (Alloway 2010), information‐processing efficiency in infancy, general mental development in toddlerhood and behaviour difficulties in early childhood (Bornstein 2013), and their health status (Basch 2011); family factors include mother's educational attainment (Bornstein 2013), school absences and mobility (Ou 2008), and parental aspirations (Goodman 2011). The quality of the school environment also influences outcomes for children (Sylva 2011), as do community factors such as the level of social disorganisation (Nieuwenhuis 2016).

The adverse impacts of poverty make themselves felt in children's earliest years, with children from economically disadvantaged backgrounds performing well behind their non‐disadvantaged peers in terms of literacy and communication skills, and school readiness, by three years of age (Hirsh 2007). These differences persist, and the gap between those in poverty and their more affluent peers increases over time. Good quality parenting can play an important role in mitigating the effects of poverty, and help to ensure that children reach their educational potential (Kiernan 2011). Sensitive, positive parenting can help ensure that children are ready for school, and support their learning once there (e.g. Jeynes 2005). However, the stresses of poverty may adversely impact parents' ability to parent their children effectively, and this can be exacerbated by an absence of support networks (Gutman 2010).

Since the publication of the Coleman Report (Coleman 1966), which asserted that schools have less influence upon student outcomes than family background and other environmental factors (Emerson 2012), policy makers have adopted increasingly broad‐based approaches to improving educational outcomes, with increasing emphasis upon family involvement within a supported learning environment. Multi‐systemic approaches, which involve staff, students, parents and the wider community, are thought to have a greater chance of success at improving child outcomes.

Parents and carers have an important role in the socialisation of young people (Foxcroft 2011). They are a major influence on children's learning and development from birth, through the school years, and long into adulthood (DEEWR 2011). Parental contributions to education include: providing a secure environment in which to learn, providing intellectual stimulation, transmitting social norms and values, shaping the child's resilience through fostering literacy and problem solving, and encouraging personal and social aspiration (Kim 2012). Increasingly, providers of formalised education are recognising the primary role of parents, carers and the wider family, as well as peers and the environment, in shaping children's education, health and life experiences (Desforges 2003; Sisco 2012). Research demonstrates that high levels of parental and community involvement in education are related to improved student performance, learning outcomes, attendance and behaviour, regardless of cultural and social background (DEEWR 2011; Weiss 2010).

Description of the intervention

FAST was developed in the USA in 1988 at Family Service Incorporated, Madison, Wisconsin, by Dr Lynn McDonald, in collaboration with the Prevention‐Intervention Centre for Alcohol and other Drug Abuse. It was originally designed as a targeted intervention for children at risk of failure at school (Layzer 2001). Since that time, FAST has evolved into a multi‐family, after‐school programme, although it is primarily implemented in schools with populations experiencing multiple risk factors of deprivation (Layzer 2001; McDonald 2009; WSIPP 2012). The programme has incorporated cultural adaptations to suit White, Hispanic/Latino, American Indian, African American, Southeast Asian American, Alaskan Native and Australian Indigenous people, and has been translated into French, German, Japanese, Russian and Spanish, as well as being delivered to multi‐lingual groups and to groups for whom English is a second language (CBCAP 2009; Mupotsa 2010). FAST is in use in over 2000 schools across 11 countries, including the USA, Canada, Australia, Brazil, Russia, the UK and other European countries (McDonald 2010). FAST is delivered in five age‐specific versions, as shown in Table 1 below.

Table 1: FAST versions.

Baby FAST: for young parents and their infants from birth to three years of age.

Pre‐K FAST: for families and their children aged three to six years.

Kids' FAST: for parents and students from Kindergarten to fifth grade (USA).

Middle School FAST: for families and students from sixth to eighth grade (USA).

Teen FAST: for high school youth and their families (USA).

FAST is designed to prevent children from experiencing school failure by empowering parents in their role as educators, by fostering closer relationships between families and school, and by encouraging improvements in children's behaviour and educational outcomes. FAST works with health professionals to prevent substance abuse by increasing knowledge and awareness of the effects of substance abuse on child development, and by generating links between the programme and local substance abuse and mental health services. FAST also aims to reduce family stress by developing ongoing supports for parents and children, by facilitating parents' access to local supports and agencies, and by fostering personal achievement and self‐esteem in participants. This element of the programme is referred to as FASTWORKS. The aims of each version of FAST for the child and the family are shown below in Tables 2 and 3 respectively.

Table 2: Explicit FAST core outcomes for the child across FAST versions (Baby, Pre‐K, Kids, Middle, Teen).

Aims in the child realm:

  1. Improved interaction with education and scholastic outputs (four of the five versions).

  2. Reduction in unhealthy behaviours (all five versions).

  3. Reduced stress, aggression and violence (three of the five versions).

  4. Improved self‐esteem and coping skills (three of the five versions).

Table 3: Explicit FAST core outcomes in the family realm across FAST versions (Baby, Pre‐K, Kids, Middle, Teen).

Aims in the family realm:

  1. Reduce compulsive/impulsive behaviours (aggression, anxiety, depression) (all five versions).

  2. Reduce conflict and stress (two of the five versions).

  3. Reduce substance abuse (three of the five versions).

  4. Improve parenting skills (two of the five versions).

  5. Improve family cohesion (four of the five versions).

  6. Improve communications (four of the five versions).

  7. Increase child development and learning environment (all five versions).

  8. Improve parental self‐esteem and coping skills (two of the five versions).

  9. Improved community/social capital (four of the five versions).

The FAST model incorporates an eight‐week FAST programme delivered at a school by a team comprising a professional from the host school (a teacher), a mental health professional from FAST, an alcohol or drugs professional, and a 'parent liaison' who has graduated from the FAST programme. The programme requires that the implementation site has one certified trainer for each FAST team. The programme is generally delivered to between 8 and 25 families over 8 to 12 sessions of 2.5 hours each. Family size varies across studies from two to 'nine or more' children (McDonald 2009): for example, in McDonald 2010 the average number of children per family was 4.9, and in Crozier 2010 it was 2.4.

FAST uses repeated, structured, personal and interactive sessions between child, family and peers, designed to create an enjoyable learning environment and encourage further participation. A typical session consists of a family meal (where each family takes turns to prepare a meal for the group), family communication games, a parental self‐help group session, supervised children's play, one‐to‐one mediated parental play therapy, opening and closing routines and family rituals.

In each school where FAST is offered, when a teacher identifies a child at risk, he or she informs the parent of their concerns and of the availability of the FAST programme. This 'targeted recruitment' is followed by a 30‐minute home visit from a member of FAST staff, accompanied by a parent graduate. Layzer 2001 argued that, due to the highly structured nature of the FAST home visit, the recruitment process itself could be considered part of the intervention. In Baby FAST, parents are enrolled by means of outreach programmes in the community, as well as open enrolment. Those parents who agree to participate meet weekly for eight weeks, with each session lasting around 2.5 hours. Baby FAST sessions take place in community locations such as churches, clinics and children's centres.

The FAST curriculum comprises a set of core elements (accounting for 40% of the programme), which must be implemented 'precisely' in each of the age‐specific versions. The remaining 60% can be adapted to meet the needs of the target population. At the end of the eight‐week programme, the participant families are encouraged to meet monthly over a two‐year period under the acronym FASTWORKS (Families and Schools Together, Working, Organising, Relaxing, Knowing, Sharing). FASTWORKS is organised by a Parent Advisory Council, made up of elected FAST 'graduates', each of whom is given a budget and responsibility to plan and implement the two‐year programme. For each FAST version there are two levels of implementation: 'standard' for a single site, or 'multi‐hub' for multiple, "simultaneous cycles at the same location" (www.familiesandschools.org/programs). Furthermore, each version can be delivered in one of three editions, as listed below.

  1. Parent Involvement FAST, with an emphasis on drug and alcohol misuse.

  2. Healthy FAST, concentrating on physical and mental health.

  3. Achieve FAST, for families with children with special needs.

In all its versions, participation is voluntary, with families invited to attend after‐school FAST sessions on the school premises.

All FAST programmes must be licensed and certified by Families and Schools Together Incorporated, described as a 'non‐profit' organisation that designs and distributes family strengthening and parent involvement programmes. Training consists of four days over four months, including two days' attendance at workshops, three site visits, and one review day.

Typically, standard costs are incurred for technical assistance, four days' training for a pilot scheme, travel of the FAST team to the site, three visits from the FAST trainer, evaluation consultations, questionnaires, data analysis and an evaluation report for the local pilot FAST programme, manuals and supplies for FAST training, and the cost of the team members' time to be trained. A range of implementation costs has been reported, varying from USD 300 to USD 1800 per family unit (Yellow Horse 2003); USD 1200 for each family over the full two‐year programme, including USD 100 per site for evaluation services (CBCAP 2009); GBP 33,158 for 40 families (GBP 828 per family) (Cummings 2012); or an average cost per parent completing the programme of approximately GBP 1658 (Lindsay 2011). Costs for training the FAST team are recorded as being between USD 4295 and USD 4595 (depending on the curriculum), plus further significant travel costs estimated at USD 4290, though savings can be made depending on the ability of the local team to use 'creative budgeting', access free services, and barter for local goods and services.

How the intervention might work

FAST was developed to address the limitations of unidimensional (school or home) and unidirectional (school to home) interventions. With children spending, on average, 30 hours a week at school and 138 hours within their family and community, an intervention that recognised the relatively limited influence that schools can maintain, and aimed to capture the influence of wider family and social networks, was considered to have greater potential to sustain positive outcomes (see above). Coote 2000 describes the main working characteristics of FAST as an early intervention, ideally reaching at‐risk children early on in their life course, but also providing strategic support at key transition points such as adolescence. FAST is designed to promote prosocial development in children and families by creating a collaborative support system of family, school, peers, community and professional help, to develop protective behaviours that promote family resilience and prevent maladaptive behaviours becoming entrenched.

FAST draws on systemic models, such as family therapy, in which family functioning and relationships are seen as interconnected and interdependent, and situated within wider relationships with school and community (Pritchett 2011).

FAST aims to nurture high levels of participation and completion of the intervention by encouraging voluntary participation and supportive networks, combined with incentives and structured, enjoyable, interactive group programmes. Family support may include transport to FAST meetings, a meal, child care, prize winning, and access to social supports. Engagement is encouraged through interactive tasks with clear learning goals, and learning takes place in an atmosphere of mutual support rather than passive parental‐education or training. Conflict and criticism are actively discouraged whilst positive and supportive behaviours, including the establishment of parental roles, are reinforced through repetition and task completion. Family members are encouraged to act out and discuss their emotions through a positive 'Feelings Charades' activity, designed to break down barriers and facilitate talking openly in family groups about each individual's feelings. A short time is set aside during each session for one‐to‐one play sessions ('Special Play Time'), where adults must let the child lead the play activity, refrain from bossing, teaching or directing the child, instead learning to let the child direct their time together.

Parents are encouraged to strengthen their relationships with each other by communicating during 'Buddy Time', a ring‐fenced session to allow adults to talk about their day in a controlled, child‐free environment, supported by a further 45‐minute parental self‐help group exercise. FAST aims to create family cohesion through family group tasks such as developing a family flag, cooking a family meal, singing, and structured, family communication games. At the end of the FAST session, community is reinforced through announcements (birthdays or other notable events) and a closing ritual. Once families have completed the eight‐week programme, a graduation ceremony is held within the school and family and friends are encouraged to attend and support the graduating families. Once completed, a further two years of support are provided to the families through the FASTWORKS programme, made up of families who have completed the programme.

FAST uses the features described above to increase child‐parent bonds, increase family cohesiveness, generate closer parental bonds, encourage the use of self‐help groups, provide closer links to the school and the community, and to empower family members to be able to seek out and access services through an increased positive attitude and greater self‐esteem.

Why it is important to do this review

Whilst reviews have been conducted on interventions aimed at whole‐school approaches to health (Langford 2014), on specific behaviours such as smoking, anti‐social behaviour, alcohol and drug use, and sexual behaviour (Carney 2014; Mason‐Jones 2011; Thomas 2013), and on health‐associated topics such as fitness, mobility, and exercise (Dobbins 2013; Waters 2011), there is a lack of robust evaluation of the effectiveness of family‐school interventions (MacArthur 2012). This, in turn, has led to a dearth of empirical evidence about the effects of involving parents in schools, as a means of changing their own behaviour or that of teachers or students, and thereby improving student achievement.

The uptake of FAST is growing. For example, in the UK, Save the Children and 'Families and Schools Together' have formed an alliance with the aim of establishing 430 FAST groups across the country, training some 8000 practitioners, and delivering the programme to 50,000 children. The UK assessment is being carried out by a team from Middlesex University headed by the programme's founder, Dr Lynn McDonald, using a standardised tool originally developed by Dr McDonald in collaboration with Dr Stephen Billingham, and subsequently adapted for use in the UK (McDonald 2010). In the USA, programme evaluation is the sole responsibility of Families and Schools Together Incorporated, and results are sent to the FAST National Training and Evaluation Centre for analysis and publication (McDonald 2009). The growth in the use of FAST, and promising results from evaluations conducted by the programme developers and associates, merit an independent synthesis of the available studies. As far as we know, no systematic review of FAST studies has, to date, been conducted. This review will fill that gap.

Objectives

To assess the effectiveness of the Families and Schools Together (FAST) programme on improving outcomes for children and their families.

Methods

Criteria for considering studies for this review

Types of studies

Randomised controlled trials (RCTs) and quasi‐RCTs (in which methods of allocation to groups are not truly random, e.g. day of week, case number).

Types of participants

Families with children from birth to the age of completion of compulsory education, from all ethnic backgrounds and of any family size, however defined by the trialists.

Types of interventions

FAST programmes compared with a waiting list, usual services, an alternative service, or no treatment.

Types of outcome measures

We will assess outcome measures in the short term (up to one year follow‐up), the medium term (between one and two years' follow‐up), and long term (over two years post FASTWORKS). We will record the timing of outcome assessment as presented in studies.

Primary outcomes
Child outcomes

  1. Improved school performance*, as measured by grades or marks that students earn, standardised educational tests, performance tests or other objective measures of educational attainment. Grades/marks that describe academic performance in at least two classes in the same timeframe are eligible (e.g. the average grade across academic courses, or across all classes); grades/marks in a single course (e.g. the grade in a maths course) are not eligible.

  2. Adverse outcomes. Any reported increase in targeted negative child behaviours or, conversely, any reported decrease in promoted positive behaviours, including school performance (which may be indicative of a group contagion effect).

Parent outcomes

  1. Reduced parental substance use*, as measured by any standardised self‐report or objective measure of substance use, not including indirect attitude, perception or awareness measures (Foxcroft 2011).

  2. Reduced parental stress*, as measured by any standardised measure of parental stress such as the Parenting Stress Index (Loyd 1985).

Secondary outcomes
Child outcomes

  1. Improved internalising behaviours or symptoms at school or at home*, as recorded on a standardised measure such as the internalising subscale of the Child Behaviour Checklist (Achenbach 1991) or a similar standardised measure.

  2. Improved externalising behaviours or symptoms at school or at home*, as recorded on a standardised measure such as the externalising subscale of the Child Behaviour Checklist (Achenbach 1991) or a similar standardised measure.

  3. Reduced substance use*, as measured by any self‐report or objective measure of alcohol consumption, including quantity, frequency or incidence of drunkenness (Foxcroft 2011).

  4. Increased school attendance, as measured by any objective record of school attendance such as a school or class register.

  5. Reduced youth delinquency, as measured by self‐reports or official records of contacts with the juvenile justice system or other similar law enforcement agency.

Parental outcomes

  1. Increased parental self‐efficacy*, as measured by improved scores on a standardised measure of parental self‐efficacy such as the Self‐Efficacy for Parenting Tasks Index (SEPTI; Coleman 2000) or similar standardised instrument.

  2. Improved parent‐child relationship*, as measured by improved scores on a standardised measure of the parent‐child relationship such as the Parent‐Child Relationship Inventory (PCRI; Gerard 1994) or similar standardised measure.

  3. Increased parental engagement with education*, as measured by both teachers' and parents' reports of parental involvement with education, including attendance at school‐based events, correspondence between parent and teacher, parental engagement with homework, learning activities and educational materials as well as extracurricular activities, objective measures of parental values and attitudes to education and the aspirations they have for their child's development.

  4. Increased parental uptake of services (mental health, drug and alcohol), as measured through reported referrals to, or attendance at, mental health, drug and/or alcohol services.

  5. Increased parental involvement in community‐based activities.

Family outcomes

  1. Improved family relationships*, as measured by improved scores on a standardised measure of family relationships such as the Family Environment Scale (FES; Moos 1994) or a similar standardised instrument.

  2. Reduction in child abuse and neglect, as measured by reduced incidence of child maltreatment on a standardised measure of child abuse and neglect such as the Juvenile Victimisation Questionnaire (JVQ; Finkelhor 2005) or a similar standardised measure, or by official records from law enforcement or social welfare agencies.

We will use those outcomes marked with an asterisk to populate the 'Summary of findings' table for the review.

Search methods for identification of studies

We will conduct electronic searches of bibliographic databases, government policy databanks, and professional websites. We will not apply any geographical, language or publication restrictions, and will seek translations of reports published in languages other than English. We will confine our searches to 1988 onwards, the year in which FAST was developed.

The search strategy is based upon terms relating to FAST, its authors, FAST versions, FAST outcomes and outcome measures.

Electronic searches

We will search the electronic resources listed below.

  1. Cochrane Central Register of Controlled Trials (CENTRAL; current issue) in the Cochrane Library, which includes the Cochrane Developmental, Psychosocial and Learning Problems Specialised Register.

  2. MEDLINE Ovid (1946 onwards).

  3. Embase Ovid (1980 onwards).

  4. PsycINFO Ovid (1806 onwards).

  5. ERIC EBSCOhost (Education Resources Information Center; 1966 onwards).

  6. British Education Index EBSCOhost (BEI; 1950 onwards).

  7. ProQuest Education Database (1988 onwards).

  8. Education Abstracts (HW Wilson) EBSCOhost (1983 onwards).

  9. Social Science Citation Index Web of Science (1970 onwards).

  10. Conference Proceedings Citation Index Social Science and Humanities Web of Science (1990 onwards).

  11. EPPI‐Centre Database of Education Research (eppi.ioe.ac.uk/webdatabases/Search.aspx).

  12. Campbell Library of Systematic Reviews (www.campbellcollaboration.org/library.html).

  13. Cochrane Database of Systematic Reviews (CDSR; current issue), part of the Cochrane Library.

  14. Database of Abstracts of Reviews of Effectiveness (DARE; current issue), part of the Cochrane Library.

  15. Epistemonikos (www.epistemonikos.org).

  16. UK Clinical Trials Gateway (www.ukctg.nihr.ac.uk/clinical‐trials).

  17. ClinicalTrials.gov (clinicaltrials.gov).

  18. World Health Organisation International Clinical Trials Registry Platform (WHO ICTRP; www.who.int/ictrp/en).

Searching other resources

We will examine reference lists of reports, reviews and primary studies, and will contact the FAST programme developers, FAST practitioners and independent researchers to identify studies not retrieved by the electronic searches. In addition, we will search the Families and Schools Together website (familiesandschoolstogether.com), the What Works Clearinghouse (ies.ed.gov/ncee/wwc/FWW), and other government, education, health and social services websites, as well as NGOs (nongovernmental organisations) with an education, child or family remit in which FAST is, or has been, employed. We will also search using Google Scholar.

Data collection and analysis

We will use Review Manager 5 (RevMan 5) to organise and analyse our data (Review Manager 2014). We will use EndNote to manage our bibliographical data (EndNote 2013).

Selection of studies

Two reviewers will independently review all titles and abstracts to determine all potentially relevant studies. Any citations deemed potentially relevant by at least one reviewer will be retrieved in full text. The same two reviewers will then independently read all retrieved papers of potentially relevant studies to determine whether they satisfy the inclusion criteria (Criteria for considering studies for this review). The reviewers will resolve disagreements by discussion with GM and JV. When the reviewers exclude a retrieved study, they will document the reasons for its exclusion. We will record our decisions in a PRISMA diagram (Moher 2009).

Data extraction and management

For each included study, two review authors will independently extract and record all relevant data on a specifically designed and piloted data collection form. The review authors will resolve any disagreements in discussion with GM or JV. The reviewers will extract the following data.

  1. Study characteristics: study author(s), year of publication, journal or source, contact details, study design, study duration, attrition details, language.

  2. Child characteristics: age, gender, ethnicity, special educational needs or disability.

  3. Parent characteristics: age, gender, ethnicity, educational attainment or qualifications or both, employment status.

  4. Family characteristics: family size, marital status, annual income.

  5. School characteristics: population served, size, other interventions, location.

  6. Outcomes and measures used: details on all primary and secondary outcome measures, including measures used, length of follow‐up, summary data, means and standard deviations.

  7. Cost incurred by the intervention.

We will collect information on study design and implementation in a format suited to completion of the 'Risk of bias' tables to appear in the completed review (Higgins 2011a). We will collect raw (unadjusted) results in preference to adjusted results, for reasons of consistency of interpretation across studies and because this choice of analysis appears to be less susceptible to selective reporting bias (for example, it prevents potentially biased selection of covariates for inclusion in the model). This decision may, however, increase the risk of bias that may be attributable to baseline differences, such as those arising from differential dropout.

Assessment of risk of bias in included studies

We will assess the risk of bias of included studies using Cochrane's 'Risk of bias' tool (Higgins 2011b). For each included study, two review authors will independently assess the risk of bias based on the seven domains listed below, presenting their judgements as 'high', 'low' or 'unclear' risk of bias (see Table 1). Where the authors' judgements disagree, they will seek resolution in discussion with the Cochrane Developmental, Psychosocial and Learning Problems Editorial Team.

Table 1. Judgements underpinning 'Risk of bias' assessments

Random sequence generation

  1. Where robust methods of sequence allocation were employed, we will record the risk of bias as 'low' (Schultz 2002).

  2. Where nonrandom or nonsystematic approaches were employed, we will record the risk of bias as 'high'.

  3. Where insufficient detail is provided to make a judgement, we will record the risk of bias as 'unclear'.

Allocation concealment

  1. Where robust methods of concealment were employed, and participants and investigators could not determine assignment prior to allocation, we will record the risk of bias as 'low'.

  2. Where the possibility for allocation disclosure and consequent selection bias was present, we will record the risk of bias as 'high'.

  3. Where insufficient detail is provided to make a judgement, we will record the risk of bias as 'unclear'.

Blinding of participants and personnel

  1. Where blinding of participants and study personnel was maintained, or where no blinding or incomplete blinding occurred but the review authors judge that the outcome was not likely to have been influenced by the lack of blinding, we will record the risk of bias as 'low'.

  2. Where no or incomplete blinding occurred and could have affected outcomes, or where blinding occurred but there is a likelihood that it could have been broken and the outcome influenced as a result, we will record the risk of bias as 'high'.

  3. Where insufficient detail is provided to make a judgement, we will record the risk of bias as 'unclear'.

Blinding of outcome assessment

  1. Where blinding was robustly applied, where there was partial blinding of participants or key personnel, or where no blinding took place but the review authors judge that the lack of blinding is unlikely to affect the measures employed or the reported outcomes of the study, we will record the risk of bias as 'low'.

  2. Where incomplete or inefficient blinding occurred, and the measures or outcomes are likely to be affected as a result, we will record the risk of bias as 'high'.

  3. Where insufficient detail is provided to make a judgement, we will record the risk of bias as 'unclear'.

Incomplete outcome data

  1. Where there are no missing data, the reasons for missing data are unlikely to be related to the true outcome, or the effect of missing data is not enough to have a clinically relevant impact, we will record the risk of bias as 'low'.

  2. Where the reason for missing data is likely to be related to outcomes, or is sufficient to produce a clinically relevant bias, we will record the risk of bias as 'high'.

  3. Where insufficient detail is provided to make a judgement, we will record the risk of bias as 'unclear'.

Selective outcome reporting

  1. Where outcomes have been reported in accordance with the protocol, or all the expected outcomes have been presented, we will record the risk of bias as 'low'.

  2. Where there is some variance in reporting outcomes from that specified in the protocol, reporting is incomplete, or the study fails to include results for a key outcome, we will record the risk of bias as 'high'.

  3. Where insufficient detail is provided to make a judgement, we will record the risk of bias as 'unclear'.

  1. Random sequence generation. We will describe the methods used to generate the allocation sequence in detail, in order to assess whether it was likely to produce comparable groups of participants. The question: was the allocation sequence adequately generated?

  2. Allocation concealment. We will describe the methods used to conceal the allocation sequence in detail, in order to determine whether intervention allocation has been concealed before and during the allocation process. The question: was the allocation adequately concealed?

  3. Blinding of participants and personnel. Given the nature of the intervention, it is not possible to blind participants or personnel to knowledge of the allocated intervention, and we will examine the extent to which this may have introduced a high risk of bias. The question: was performance biased due to knowledge of the allocated interventions by participants and personnel during the study?

  4. Blinding of outcome assessment. We will provide a description of the methods used to blind outcome assessors to knowledge of the allocated intervention to ascertain whether adequate protection of concealment was maintained throughout the study. The question: was knowledge of the allocated intervention adequately prevented during the study?

  5. Incomplete outcome data. We will describe the completeness of outcome data for each main outcome, noting reported attrition and exclusions in each intervention group, the reasons for attrition or exclusions, and any reinclusions in analysis employed by the review authors. The question: were incomplete outcome data adequately addressed?

  6. Selective outcome reporting. We will examine the comprehensiveness of outcome reporting in relation to published reports or available study protocols to ascertain whether selective outcome reporting was employed. The question: are reports of the study free from selective outcome reporting?

  7. Other sources of bias not addressed under the preceding domains. We will examine study protocols in sufficient detail to ascertain whether other sources of bias are present. The question: are reports of the study free from the sources of bias listed in Table 4 below?

Table 4: Other potential sources of bias.

Cluster‐randomised trials: recruitment bias; baseline imbalance; loss of clusters; incorrect analysis; comparability with RCTs.

Early stopping of trial: results show statistically significant, large effects; harm has occurred; the study stops contrary to protocol.

Randomised block design: in certain circumstances, the blocking process can introduce selection bias and compromise blinding.

RCT: randomised controlled trial

Measures of treatment effect

Where possible, we will calculate intervention effects using RevMan 5 (Review Manager 2014).

Dichotomous data

Where dichotomous data are presented, we will calculate an odds ratio (OR) with a 95% confidence interval (CI), comparing treatment group to comparison group for each outcome (Deeks 2011).
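For illustration only, the sketch below (in Python, with made‐up counts) shows how an OR and its 95% CI can be derived from a 2 x 2 table; in practice RevMan 5 performs this calculation.

```python
# Illustrative sketch (made-up counts): an odds ratio with a 95% confidence
# interval from a 2x2 table; in practice RevMan 5 performs this calculation.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = events/non-events in the treatment group; c/d = in the comparison group."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical trial: 30/70 events in the FAST arm, 45/55 in the comparison arm
print(odds_ratio_ci(30, 70, 45, 55))  # OR ≈ 0.52, 95% CI ≈ 0.29 to 0.94
```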

Continuous data

We will calculate mean differences (MD) if all studies use the same measurement scale, or standardised mean differences (SMD) if studies use different measurement scales, and 95% CIs. Where necessary, we will compute effect measures from P values, t statistics, analysis of variance (ANOVA) tables or other statistics.
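For reference, a minimal LaTeX sketch of the standardised mean difference in its pooled‐standard‐deviation form; RevMan 5 applies a small‐sample correction to give Hedges' (adjusted) g.

```latex
% Standardised mean difference (Cohen's d form); the x-bar terms are group means,
% s_1, s_2 group standard deviations and n_1, n_2 group sizes.
\[
\mathrm{SMD} = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]
% Hedges' g multiplies the SMD by the small-sample correction
% J = 1 - 3 / (4(n_1 + n_2) - 9).
```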

Multiple outcomes

When an included study provides multiple, interchangeable measures of the same construct at the same point in time (e.g. multiple measures of self‐efficacy), we will, if possible, calculate the average SMD across all relevant outcomes, and the average of their estimated variances. This strategy is intended to avoid the need to select a single measure, and to avoid inflated precision in any meta‐analyses that might arise from placing more weight on studies that report on more than one outcome measure than others that rely on a single measure. We anticipate too few studies to support robust variance estimation.
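A minimal sketch (hypothetical numbers) of the within‐study averaging described above; averaging the variances ignores the usually unreported correlation between measures, which is one reason robust variance estimation is unlikely to be feasible here.

```python
# Illustrative sketch (hypothetical numbers) of averaging interchangeable
# effect sizes within one study. Averaging the variances ignores the (usually
# unreported) correlation between measures, so it is an approximation.
def average_effect(smds, variances):
    k = len(smds)
    return sum(smds) / k, sum(variances) / k

# Hypothetical study reporting two interchangeable self-efficacy measures
print(average_effect([0.35, 0.20], [0.040, 0.050]))  # ≈ (0.275, 0.045)
```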

We anticipate that some studies will measure outcomes at multiple points in time. We will analyse outcomes separately for the following three time periods: (a) immediately after the end of the intervention (zero to two months), (b) short‐term follow‐up (three to nine months), and (c) long‐term follow‐up (10+ months after the end of the intervention).

Economic issues

We will record any costs incurred by the FAST programme reported within the studies under review.

Unit of analysis issues

Cluster‐randomised trials

Where clustering has been appropriately accounted for within the analysis of the original study data, clustered data can be used in a meta‐analysis. However, a 'unit of analysis' error occurs when data from cluster‐randomised trials have been analysed as though the unit of allocation has been the individual rather than the cluster. In these circumstances, corrections are required to produce accurate effect size estimates (Higgins 2011c, section 16.3.4). To calculate the design effect, we need a measure of the relative variation both within and between clusters. This is known as the intraclass correlation coefficient (ICC). Where the ICC from the original trial is not available, we will use external estimates from similar studies to calculate the design effect. If there are no reported estimates in the literature, we will perform a sensitivity analysis using low (0.01), medium (0.05), and high (0.10) values for ICC. However, as the design effect must be rounded up for entry into RevMan 5, this approach may be unsuitable for small studies and we may need to employ an alternative approach that multiplies the standard errors (SEs) of the effect size by the square root of the design effect. In either case, where we include cluster‐randomised trials in the meta‐analysis, we will clearly identify them and explain the method of calculating effect size estimates and their standard errors. In these circumstances, we will employ a sensitivity analysis to test the robustness of any conclusions deduced from these methods (Sensitivity analysis).
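As an illustration of the adjustment described above (hypothetical average cluster size and standard error; ICC values as in the planned sensitivity analysis), the design effect is 1 + (m − 1) x ICC, and the alternative approach inflates the standard error by its square root.

```python
# Illustrative sketch of the cluster-randomisation adjustment described above.
# The average cluster size (12 families per school) and the SE (0.15) are
# hypothetical; the ICC values match the planned sensitivity analysis.
import math

def design_effect(avg_cluster_size, icc):
    return 1 + (avg_cluster_size - 1) * icc

def inflated_se(se, avg_cluster_size, icc):
    # Alternative approach: multiply the SE by the square root of the design effect
    return se * math.sqrt(design_effect(avg_cluster_size, icc))

for icc in (0.01, 0.05, 0.10):
    de = design_effect(12, icc)
    print(f"ICC={icc:.2f}: design effect={de:.2f}, "
          f"adjusted SE={inflated_se(0.15, 12, icc):.3f}")
# design effects: 1.11, 1.55, 2.10; adjusted SEs: 0.158, 0.187, 0.217
```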

Studies with multiple treatment groups

We do not anticipate finding studies with multiple treatment groups. However, should we identify such studies, we will first combine all eligible intervention arms and compare these with all control arms, making a single, pair‐wise comparison. If such a strategy seems likely to prevent the investigation of important sources of heterogeneity, we will keep intervention arms separate and compare each with a common control group, dividing the sample size of the latter proportionately across each comparison, thereby preventing double counting of individuals (Higgins 2011c, section 16.5.5).

Dealing with missing data

Where necessary, we will contact the corresponding authors of included studies to secure any unreported data (e.g. group means and standard deviations, details of dropouts, and reasons for attrition). We will contact other authors as necessary. Where a study reports outcomes only for those participants completing the trial, or only for those who followed the protocol, we will endeavour to obtain the additional information necessary to facilitate analyses according to intention‐to‐treat (ITT) principles. We will describe missing data and attrition/dropout rates for each included study in the 'Risk of bias' tables and discuss the extent to which missing data could affect the results of the review or the conclusions drawn.
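Where authors report a standard error or a 95% confidence interval rather than a standard deviation, standard conversions can recover the missing value; a minimal sketch with hypothetical numbers follows.

```python
# Illustrative conversions (hypothetical numbers) for recovering a group standard
# deviation when a study reports a standard error or a 95% confidence interval
# for a mean; both assume a sample large enough for the normal approximation.
import math

def sd_from_se(se, n):
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    se = (upper - lower) / (2 * z)  # width of a 95% CI for a mean is 2 * z * SE
    return sd_from_se(se, n)

# Hypothetical group of n = 50 with reported SE = 1.2, or 95% CI = (10.6, 15.4)
print(round(sd_from_se(1.2, 50), 2))         # 8.49
print(round(sd_from_ci(10.6, 15.4, 50), 2))  # 8.66
```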

Where we are certain that missing data are 'missing at random' and unlikely to be related to the characteristics of the participants or the study design, we will analyse the available data and ignore the missing data (Higgins 2011c). Conversely, where there is reason to believe that data are not missing at random (for example, as a result of publication or selective reporting bias), we will work with a statistician to select replacement values using imputed mean values or multiple imputation methods.

Assessment of heterogeneity

We will assess and describe clinical variation across included studies (variability in participants and in delivery of the FAST programme) and methodological diversity (randomisation, allocation concealment, blinding of outcome assessment, losses to follow‐up, etc.). We will describe statistical heterogeneity by computing the I² statistic (Deeks 2011, section 9.5), a quantity that broadly describes the proportion of variation in point estimates that is due to heterogeneity rather than sampling error. In addition, we will use a Chi² test of homogeneity to determine the strength of evidence that heterogeneity is genuine (both quantities are sketched after the list below). Inconsistency between studies may be ambiguous and depend upon several factors; we will therefore interpret I² values roughly as follows.

  1. 0% to 40% might not be important.

  2. 30% to 60% may represent moderate heterogeneity.

  3. 50% to 90% may represent substantial heterogeneity.

  4. 75% to 100% represents considerable heterogeneity.
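For reference, the two quantities referred to above can be written as follows, where k is the number of studies, w_i is the inverse‐variance weight of study i, and theta‐hat denotes the effect estimates.

```latex
% Cochran's Q (the Chi-squared test of homogeneity) and the I-squared statistic:
% \hat{\theta}_i is the effect estimate of study i, w_i = 1/v_i its inverse-variance
% weight, \hat{\theta} the pooled fixed-effect estimate, and k the number of studies.
\[
Q = \sum_{i=1}^{k} w_i \left( \hat{\theta}_i - \hat{\theta} \right)^2,
\qquad
I^2 = \max\!\left( 0,\; \frac{Q - (k - 1)}{Q} \right) \times 100\%
\]
```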

Assessment of reporting biases

Where 10 or more studies provide data on a particular outcome, we will draw funnel plots (estimated differences in treatment effects against their standard error). Symmetrical funnel plots are associated with low levels of bias. Asymmetric funnel plots may be due to publication bias, but they can also reflect a real relationship between trial size and effect size, such as when larger trials have lower compliance, and compliance is positively related to effect size. If we have reason to think that this is happening, we will look for a possible explanation in clinical variation across studies.

To test directly for publication bias, we will conduct a Sensitivity analysis to compare results from published data with unpublished data and data from other sources.

Data synthesis

We will synthesise the data using RevMan 5 (Review Manager 2014). We will use both a fixed‐effect model and a random‐effects model and compare the two to assess the impact of statistical heterogeneity. Unless contraindicated by the presence of funnel plot asymmetry, we will present the results from the fixed‐effect model, given that the focus of the review is on a single intervention. Where we encounter serious funnel plot asymmetry, we will assume that neither model is appropriate and present the results of both the fixed‐effect and random‐effects analyses. Where both indicate the presence or absence of an effect, we will assume that we can have some confidence in the results. Where they disagree, we will report this.

We will calculate all overall effects using inverse variance methods. If some included studies report an outcome using dichotomous outcome measures and others use a continuous measure, we will convert the results from the former, from an OR to an SMD, as long as there is reason to assume that the underlying continuous measure approximates a normal or logistic distribution. Where this is not the case we will conduct separate analyses.
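For reference, a LaTeX sketch of the inverse‐variance pooling described above, together with the standard logistic‐distribution conversion from a log odds ratio to an SMD.

```latex
% Inverse-variance pooling (fixed-effect form); the random-effects model replaces
% w_i with w_i* = 1 / (v_i + \tau^2), where \tau^2 is the between-study variance.
\[
\hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i},
\qquad
w_i = \frac{1}{v_i}
\]
% Re-expressing a log odds ratio as an SMD under the logistic-distribution
% assumption noted above:
\[
\mathrm{SMD} = \frac{\sqrt{3}}{\pi} \, \ln(\mathrm{OR})
\]
```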

'Summary of findings' table

We will present the main findings of the review in a 'Summary of findings' table, developed using the GRADEpro Guideline Development Tool (GRADEpro GDT 2017). As stated in the outcomes section of this protocol (Types of outcome measures), we will use those outcomes marked with an asterisk to populate the 'Summary of findings' table for the review. The table will describe the population, setting, intervention and comparison/control for each included study before setting out a summary and assessment of the quality of the main results, overall completeness and applicability of evidence, quality of the evidence and potential sources of bias for each outcome (Schünemann 2011a, section 11.5.6). Space will be provided for any further comments.

Using the GRADE approach (Schünemann 2011b, section 12.2), two review authors will independently grade the quality of the evidence as high, moderate, low or very low, according to the presence of the following five factors: limitations in the design and implementation of available studies; indirectness of evidence; inconsistency of results; imprecision of results; and high likelihood of publication bias.

As empirical evidence suggests that relative effect measures are both more consistent and more readily understood and used by practitioners, we will present the ORs for dichotomous data in the 'Summary of findings' table in terms of a percentage relative risk reduction (RRR).
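Expressing an OR as a percentage RRR requires an assumed comparator (control group) risk; a minimal Python sketch with hypothetical values follows.

```python
# Illustrative sketch (hypothetical values): re-expressing an odds ratio as a
# percentage relative risk reduction, given an assumed comparator (control) risk.
def rrr_from_or(odds_ratio, assumed_control_risk):
    rr = odds_ratio / (1 - assumed_control_risk * (1 - odds_ratio))
    return (1 - rr) * 100  # percentage relative risk reduction

# Hypothetical: OR = 0.60 for a poor outcome, assumed control risk = 30%
print(round(rrr_from_or(0.60, 0.30), 1))  # 31.8 (% reduction)
```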

Subgroup analysis and investigation of heterogeneity

As the overuse of subgroup analysis is problematic (Deeks 2011), we will use subgroup analyses to investigate only a small number of potential effect modifiers.

We will conduct the following four subgroup analyses.

  1. Differences in treatment effect between each of the FAST variants (corresponding to the ages of child participants), namely:

    1. Baby FAST;

    2. Pre‐K FAST;

    3. Kids' FAST;

    4. Middle School FAST; and

    5. Teen FAST.

  2. Programmes evaluated by teams independent of the programme developer, versus those involving the programme developer, as there is evidence to suggest that the effect sizes reported in studies involving the programme developer are larger than those reported in studies conducted entirely independently.

  3. Location, exploring the possible impacts of FAST in countries at differing stages of economic development.

  4. Ethnicity. Since Moberg 2007 noted that Latino families are 12% more likely than African American families to graduate from FAST, and more than twice as likely to attend FASTWORKS, we will consider a subgroup analysis of the ethnicity of participants or cultural adaptation of the programme (or both) and implementation of FASTWORKS.

As no family size effects are noted in McDonald 2009, McDonald 2010 and Crozier 2010, despite reported differing average family sizes, we will not include family size in the subgroup analysis.

Sensitivity analysis

We will use sensitivity analyses to explore the impact of studies at high risk of bias on the robustness of the results of the review, restricting the analyses to (a) studies or outcomes with low risk of assessment bias, (b) studies with low risk of attrition bias, and (c) studies with low risk of reporting bias. In addition:

  1. where RCTs and quasi‐RCTs are included in a meta‐analysis, we will explore the impact of removing the quasi‐RCT studies;

  2. where one or two studies appear to be 'outliers' (have results very different from the remainder), we will examine the impact of excluding these from the meta‐analysis;

  3. where the results of a meta‐analysis appear to be heavily dependent on one particular trial, we will repeat the analysis excluding this trial (which may be the largest, or the earliest); and

  4. we may examine the effect of different ICCs for cluster‐randomised trials.
