
Strategies for enhancing the implementation of school-based policies or practices targeting risk factors for chronic disease

This is not the most recent version


Abstract

Background

A number of school-based policies or practices appear to be effective in improving child diet and physical activity, and in preventing excessive weight gain, tobacco use or harmful alcohol use. Schools, however, frequently fail to implement such evidence-based interventions.

Objectives

The primary aims of the review are to examine the effectiveness of strategies aiming to improve the implementation of school-based policies, programs or practices to address child diet, physical activity, obesity, tobacco or alcohol use.

Secondary objectives of the review are to: examine the effectiveness of implementation strategies on health behaviour outcomes (e.g. fruit and vegetable consumption) and anthropometric outcomes (e.g. body mass index (BMI), weight); describe the impact of such strategies on the knowledge, skills or attitudes of school staff involved in implementing health-promoting policies, programs or practices; describe the cost or cost-effectiveness of such strategies; and describe any unintended adverse effects of such strategies on schools, school staff or children.

Search methods

We searched all electronic databases on 16 July 2017 for studies published up to 31 August 2016. The following electronic databases were searched: the Cochrane Library, including the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE; MEDLINE In-Process & Other Non-Indexed Citations; Embase Classic and Embase; PsycINFO; Education Resource Information Center (ERIC); Cumulative Index to Nursing and Allied Health Literature (CINAHL); Dissertations and Theses; and SCOPUS. We screened the reference lists of all included trials for citations of other potentially relevant trials. We hand-searched all publications between 2011 and 2016 in two specialty journals (Implementation Science and Journal of Translational Behavioral Medicine), and searched the WHO International Clinical Trials Registry Platform (ICTRP) (http://apps.who.int/trialsearch/) and the US National Institutes of Health registry (https://clinicaltrials.gov). We consulted with experts in the field to identify other relevant research.

Selection criteria

'Implementation' was defined as the use of strategies to adopt and integrate evidence-based health interventions and to change practice patterns within specific settings. We included any trial (randomised or non-randomised) conducted at any scale, with a parallel control group, that compared a strategy to improve the implementation by school staff of policies or practices addressing diet, physical activity, overweight or obesity, tobacco use or alcohol use versus 'no intervention', 'usual' practice or a different implementation strategy.

Data collection and analysis

Citation screening, data extraction and risk of bias assessment were performed by review authors in pairs. Disagreements between review authors were resolved by consensus or, where necessary, by a third author. Considerable heterogeneity across trials precluded meta-analysis. Trial findings were summarised narratively, describing the effect size of the trial's primary implementation outcome measure (or the median of such measures where a single primary outcome was not specified).

Main results

We included 27 trials, 18 of which were conducted in the USA. Nineteen studies employed randomised controlled trial (RCT) designs. Fifteen trials tested strategies to implement healthy eating policies, practices or programs; six trials tested strategies targeting physical activity policies or practices; and three trials focused on tobacco policies or practices. Three trials targeted a combination of risk factors. None of the included trials sought to increase the implementation of interventions to delay the initiation of, or reduce the consumption of, alcohol. All trials examined multi-component implementation strategies, and no two trials examined the same combination of implementation strategies. The most common implementation strategies included educational materials, educational outreach and educational meetings. For all outcomes, the overall quality of evidence was very low, and the risk of performance and detection bias was high for the majority of trials.

Among the 13 trials reporting dichotomous implementation outcomes (the proportion of schools or school staff, e.g. classes, implementing a targeted policy or practice), the median unadjusted (improvement) effect sizes ranged from 8.5% to 66.6%. Of seven trials reporting the percentage of a practice, program or policy that had been implemented, the median unadjusted effect (improvement), relative to control, ranged from -8% to 43%. The effect, relative to control, reported in two trials assessing the impact of implementation strategies on the time per week teachers spent delivering targeted policies or practices ranged from 26.6 to 54.9 minutes per week. Among trials reporting other continuous implementation outcomes, findings were mixed. Four trials tested strategies seeking to achieve implementation 'at scale', that is, across samples of at least 50 schools; three of these reported improvements in implementation.

The impact of the interventions on student health behaviour or weight status was mixed. Three of the eight trials with physical activity outcomes reported no significant improvements. Two trials reported reductions in tobacco use in the intervention group relative to control. Seven of nine trials reported no between-group differences in student overweight, obesity or adiposity. Positive improvements in child dietary intake were generally reported among trials reporting such outcomes. Three trials assessed the impact of implementation strategies on school staff attitudes and found mixed effects. Two trials specified an assessment of potential unintended adverse effects in their study methods; neither reported any. One trial reported that implementation support did not significantly increase school revenue or expenses, and another undertook a formal economic evaluation, reporting the intervention to be cost-effective. Trial heterogeneity, and the absence of consistent terminology describing implementation strategies, were important limitations of the review.

Authors' conclusions

Given the very low quality of the available evidence, it is uncertain whether the strategies tested improve the implementation of the targeted school-based policies or practices, student health behaviours, or the knowledge or attitudes of school staff. It is also uncertain whether strategies to improve implementation are cost-effective or whether they lead to unintended adverse consequences. Further research is required to guide efforts to facilitate the translation of evidence into practice in this setting.


Plain language summary

Improving the implementation of school-based policies and practices to improve student health

Review question: This review sought to assess the effectiveness of strategies to support the implementation of school-based policies and practices addressing student diet, physical activity, excessive weight gain, tobacco use or alcohol use. We also assessed whether these strategies led to improvements in these student health behaviours or in student weight status, improved the attitudes or knowledge of school staff regarding implementation, had any adverse effects, and whether they were cost-effective.

Background: Research has identified a range of school-based policies and practices that are potentially effective in improving student health behaviours. Despite this, such policies and practices often are not implemented in schools, even where it is mandatory to do so. Unless evidence-based policies and practices are implemented, they cannot deliver public health benefits.

Study characteristics: We included 27 trials, 18 of which were conducted in the USA. Fifteen trials tested strategies to implement healthy eating policies, practices or programs; six trials tested strategies targeting physical activity policies or practices; and three trials focused on tobacco policies or practices. Three trials targeted a combination of health behaviours. None of the included trials sought to increase the implementation of interventions to delay the initiation of, or reduce the consumption of, alcohol. The trials assessed a range of implementation support strategies, including educational materials, educational meetings, use of opinion leaders, external funding, local consensus processes and tailored interventions.

Search date: The evidence is current to 31 August 2016.

Key results: It is uncertain whether the strategies tested improve the implementation of the targeted school-based policies or practices, student health behaviours, or the knowledge or attitudes of school staff. It is also uncertain whether the strategies tested lead to unintended adverse effects or whether they are cost-effective.

Limitations: Trial heterogeneity, and the absence of consistent terminology describing implementation strategies, were important limitations of the review.

Quality of the evidence: The overall quality of the evidence was rated as very low for all outcomes, including the effects reported by the trials.

Authors' conclusions

Implications for practice

The review provides little clear guidance for policy makers or practitioners responsible for implementing initiatives in school settings to reduce the risk of chronic diseases. The findings suggest that achieving improvements in the implementation of policies and practices is possible, although the overall quality of evidence is poor, and the characteristics of effective implementation strategies and the contexts in which they may operate remain unknown. Furthermore, the effects of implementation strategies were, in most cases, moderate (10% to 20% absolute improvement) based on definitions described by Grimshaw and colleagues (Grimshaw 2004). In many instances such improvements were not sufficient to achieve reductions in student health risks, particularly with regard to weight status.

In the absence of clear guidance, the likelihood of effective implementation may be maximised through thorough formative evaluation and consultation with schools and school systems to identify barriers or enablers to policy or program implementation, and through the co-development of appropriate, contextually relevant implementation support strategies. A number of implementation frameworks are currently available to assist in identifying the factors that may impede implementation and in selecting strategies to overcome them. Among the most commonly used are the Theoretical Domains Framework and the Consolidated Framework for Implementation Research (CFIR) (Cane 2012; Damschroder 2009; French 2012). Given the modest improvements in policy and practice implementation identified by strategies in this review, practitioners should also pay careful attention to the health promotion policy or practice that is the subject of implementation. The CFIR suggests that interventions that are too complex, time consuming or expensive, or that require skills or expertise uncommon in schools, may be less likely to be implemented and sustained (Damschroder 2009). The selection of interventions that are simple, do not require significant resourcing and can be integrated into existing school procedures should therefore be preferred. Additionally, Milat's guide to implementing health promotion programs at scale suggests that other factors, including organisational infrastructure and resources, planning, and stakeholder engagement, are important determinants of successfully implementing population health interventions at scale (Milat 2016). Such frameworks may provide good guidance until empirical evidence testing such recommendations is available.

Finally, the review identified a need for the development and use of robust measures for the assessment of implementation outcomes. A number of the included trials used self-report measures completed by school staff, such as questionnaires, teacher-completed log books and telephone interviews, of which just two were reported to have been validated. The reliability and validity of self-reported measures of policy or practice implementation are questionable, particularly for use in trials, given the potential for socially desirable responding (Greene 2008). While direct observation represents a more objective measure, such assessments can be subject to research reactivity and are cost prohibitive. Relative to direct observation, the use of video or audio recordings in situ may reduce reactivity bias while still providing objective measures of implementation at lower cost. Further, the use of routinely collected data from licensing agencies or authorities could be considered for large-scale trials. In instances where the use of a single robust measure is not feasible, measurement triangulation may provide a more comprehensive assessment of the effects of an implementation strategy.

Implications for research

Schools are one of the most valuable settings for population-level interventions to improve child health. Despite this, there remains a surprising lack of controlled trials examining the impact of strategies to implement initiatives addressing chronic disease risks in this setting. Previous bibliographic studies have suggested that trials of implementation strategies represent just 3% of public health research publications (Wolfenden 2016b), and the findings of this review underscore the need for more trials in the field. For example, Cochrane reviews have identified 134 randomised trials of school-based smoking prevention interventions (Thomas 2013) and 53 randomised trials of school-based programs to prevent alcohol misuse (Foxcroft 2011). However, we did not identify any trials of strategies to implement alcohol prevention policies and practices in schools, and just four trials (Gingiss 2006; Mathur 2016; McCormick 1995; Saraf 2015), three of which were randomised, assessed strategies to implement school-based tobacco policies or practices. These findings demonstrate an immature evidence base and a need to re-orient research investment to fund not only trials of interventions to improve health behaviours, but also trials of strategies to get such interventions implemented in routine school practice.

The lack of evidence regarding the effects of strategies to improve implementation in schools is surprising, given that most interventions in this setting would involve some form of implementation strategy. A number of included studies targeted multiple health behaviours but did not assess the impact of implementation strategies on policies and practices for each health behaviour. For example, Simons-Morton incorporated intervention components to improve the school nutrition and physical activity environment; however, strategies and outcomes were reported only for the implementation of nutrition policies and practices (Simons-Morton 1988). For such trials, the unreported impact of efforts to implement policies and practices targeting other health behaviours, and their effects on student outcomes (e.g. physical activity), represents a missed opportunity to learn from implementation experiences. Anecdotally, a number of trials were excluded because they described an implementation strategy but only assessed implementation in process evaluations within the intervention group. The greater application of hybrid research designs has been suggested as one means of improving the availability of research evidence to guide implementation efforts (Wolfenden 2016c). Hybrid designs simultaneously plan and collect data on the impact of interventions on individual health behaviours or clinical outcomes, as well as the impact (or potential impact) of strategies to enhance their implementation. The routine collection of such information in future trials seeking to test the effects of school-based interventions delivered by usual teaching staff could efficiently build the evidence base (Wolfenden 2016c). Furthermore, application of the recently released Standards for Reporting Implementation Studies (StaRI) Statement may improve the availability and usability of implementation information in future trials (Pinnock 2017).

Another potential explanation for the lack of trials is that policy implementation often occurs at large scale and at the discretion of policy makers and practitioners, and may not easily be examined using controlled trial designs. As such, many evaluations of policy implementation occur only after the policy has been implemented, or do not use comparison groups (Watts 2014).

While not unique to the field of implementation (Lau 2015), of particular concern was the lack of consideration of the costs of implementing health-promoting policies or practices, or of their unintended adverse effects. Information regarding costs and adverse effects is particularly salient for health decision makers, who must weigh the benefits of an intervention against its harms and costs to the community (Wolfenden 2010; Wolfenden 2015). Approaches to implementation are not immune to unintended consequences. Surveys of teaching staff suggest a range of adverse outcomes are possible. For example, policies restricting the sale of unhealthy foods from school kiosks or canteens have been suggested to compromise food sale profits, which are often re-invested in the school for other student initiatives (Pettigrew 2012). Furthermore, the introduction of new policies or practices in schools may displace the implementation of other policies or practices of proven benefit to students. Future research should incorporate logic models to identify potential harms associated with implementing health promotion programs in schools, and include measures to prospectively assess both harms and implementation costs.

Summary of findings

Summary of findings for the main comparison.

Strategies for enhancing the implementation of school‐based policies or practices targeting risk factors for chronic disease

Patient or population: School-aged children (5 to <18 years)

Settings: School

Intervention: Any strategy (e.g. educational materials, educational meetings, audit and feedback, opinion leaders, education outreach visits) with the intention of improving the implementation of health promoting policies, programs or practices for physical activity, healthy eating, obesity prevention, tobacco use prevention or alcohol use prevention in schools

Comparison: No intervention or usual practice (22 trials), alternate intervention (2 trials) or minimal support comparison group (3 trials)

Outcomes: Implementation of school-based policies, practices or programs that aim to promote healthy or reduce unhealthy behaviours relating to child diet, physical activity, obesity, or tobacco or alcohol use

Impact: We are uncertain whether strategies improve the implementation of school-based policies, practices or programs that aim to promote healthy or reduce unhealthy behaviours relating to child diet, physical activity, obesity, or tobacco or alcohol use. Among 13 trials reporting dichotomous implementation outcomes (the proportion of schools or school staff, e.g. classes, implementing a targeted policy or practice), the median unadjusted (improvement) effect sizes ranged from 8.5% to 66.6%. Of seven trials reporting the percentage of a practice, program or policy that had been implemented, the median unadjusted effect (improvement), relative to the control, ranged from -8% to 43%. The effect, relative to control, reported in two trials assessing the impact of implementation strategies on the time per week teachers spent delivering targeted policies or practices ranged from 26.6 to 54.9 minutes per week.

Number of participants (trials): 1599 schools (27 trials)

Quality of the evidence (GRADE)^e: Very low^a,b

Outcomes: Measures of student physical activity, diet, weight status, tobacco or alcohol use

Impact: We are uncertain whether strategies to improve the implementation of school-based policies, practices or programs targeting risk factors for chronic disease impact on measures of student physical activity, diet, weight status, tobacco or alcohol use.

Number of participants (trials): 29,181 students^f (21 trials)

Quality of the evidence (GRADE): Very low^a,b,c

Outcomes: Knowledge, skills or attitudes of school staff involved regarding the implementation of health-promoting policies or practices

Impact: We are uncertain whether strategies to improve the implementation of school-based policies, practices or programs targeting risk factors for chronic disease impact on the knowledge, skills or attitudes of school staff.

Number of participants (trials): 1347 stakeholders (3 trials)

Quality of the evidence (GRADE): Very low^a,b

Outcomes: Cost or cost-effectiveness of strategies to improve implementation

Impact: We are uncertain whether strategies to improve the implementation of school-based policies, practices or programs targeting risk factors for chronic disease are cost-effective.

Number of participants (trials): 42 schools (1 trial); 473 students (1 trial)^g

Quality of the evidence (GRADE): Very low^a,b,d

Outcomes: Unintended adverse effects of strategies to improve implementation on schools, school staff or children

Impact: We are uncertain whether strategies to improve the implementation of school-based policies, practices or programs targeting risk factors for chronic disease result in unintended adverse effects or consequences.

Number of participants (trials): 68 schools and 4603 students^h (2 trials)

Quality of the evidence (GRADE): Very low^b,c

High quality: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate quality: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low quality: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low quality: We are very uncertain about the estimate.

^a Downgraded one level due to limitations in the design.
^b Downgraded one level due to unexplained heterogeneity.
^c Downgraded one level due to indirectness.
^d Downgraded one level due to imprecision.
^e GRADE Working Group grades of evidence.
^f Two trials measured student behaviour through the use of non-student data (e.g. purchases) and did not provide student sample sizes.
^g One trial reported on the impact of an intervention on school-level revenue. One trial reported on cost-effectiveness.
^h One trial measured adverse events through the use of non-student data (i.e. canteen profits) and did not provide student sample sizes.

Background

Description of the condition

Five health risks (physical inactivity, poor diet, tobacco smoking, risky alcohol consumption and obesity) are the most common modifiable causes of chronic disease (Lim 2012). These risk factors, all among the top 20 risk factors contributing to global death and disability, each account for a significant proportion of the total global disease burden: physical inactivity (2.8%), dietary risks (9.2%), tobacco smoking (5.5%), alcohol use (3.8%), and high body-mass index (BMI) (3.8%) (IHME 2013). Together, they were responsible for more than 580 million years lived with disability and 24 million deaths in 2010 (IHME 2013). As a consequence, reducing the impact of these modifiable health risks in the community has been identified as a public health priority (WHO 2011).

Targeting health risks in children is an important chronic disease prevention strategy, as health behaviours established in childhood are likely to track into adulthood (Swinburn 2011). Schools are an attractive setting for the implementation of child-focused chronic disease-prevention initiatives, as they offer continuous and intensive contact with children for prolonged periods (WHO 2012). Furthermore, evidence from systematic reviews supports a range of benefits from school-based health programs (Dobbins 2013; Dusenbury 2003; Foxcroft 2011; Jaime 2009; Kahn 2002; Thomas 2013; Waters 2011). For instance, comprehensive physical activity interventions can improve child activity during the school day, their movement skill proficiency, and their knowledge for lifetime physical activity (Kahn 2002). A Cochrane review of school-based smoking programs found that interventions (more than one year in duration) aiming to prevent smoking uptake reduced smoking rates by up to 12% (Thomas 2013). Similarly, Cochrane reviews of obesity and alcohol prevention programs include examples of interventions with positive protective effects on child BMI and alcohol misuse (Dusenbury 2003; Foxcroft 2011; Waters 2011). Finally, systematic review evidence also suggests that, when implemented, school food policies are generally effective in improving the food environment and dietary intake of school students (Jaime 2009).

Despite such evidence, the implementation of policies, intervention programs or recommended practices to reduce these health risks in usual community contexts is poor (AONSW 2012; De Silva-Sanigorski 2011; Downs 2012; Gabriel 2009; Nathan 2011). Research conducted in Brazil, Canada and Australia, for example, suggests that less than 10% of schools are compliant with legislation, policy or nutrition guidelines regarding the sale and promotion of healthy foods in schools (De Silva-Sanigorski 2011; Downs 2012; Gabriel 2009). In Australia, a recent report highlighted that around 30% of schools did not provide the recommended planned physical activity to children (AONSW 2012). Further, in the USA, less than 17% of schools effectively implement substance misuse prevention programs, including those related to tobacco and alcohol use (Ennett 2003). The failure to implement evidence-based programs in the community denies the public the benefits such health research is intended to deliver. Improving the translation of research findings, characterised by the transition of evidence regarding an intervention to its application in the real world, represents a significant challenge for 21st century medicine (Wolfenden 2015).

Description of the intervention

Research about a treatment or intervention cannot lead to health outcomes if health systems, organisations or professionals do not use interventions with known health benefits (Eccles 2009). The process of research translation is, however, complex. As a conceptual guide, the US National Institutes of Health has described five phases of the translation process (T0 to T4), from research discovery to population health impact (Glasgow 2012; Khoury 2010). Earlier phases (T0 to T2) focus on basic science, epidemiology and testing the efficacy of health interventions. Translation Phase 3, known as 'T3', is dedicated to research designed to increase the implementation of evidence-based interventions, practices, policies or programs in practice (Glasgow 2012). This is achieved through 'implementation strategies': techniques designed to change practice patterns within specific settings to improve the 'implementation' of evidence-based health interventions (Glasgow 2012; Rabin 2008). A range of implementation strategies could improve the likelihood that schools implement policies and practices to promote student health and reduce the risk of future disease, including those listed in the Cochrane Effective Practice and Organisation of Care (EPOC) taxonomy (EPOC 2015). Such strategies might include continuous quality improvement processes, educational materials, performance monitoring, local consensus processes and educational outreach visits.

Why it is important to do this review

Studying the effectiveness of implementation strategies, and why these strategies succeed or fail, provides important information for future implementation research and informs the decisions of policy makers and practitioners interested in ensuring that evidence-based chronic disease prevention programs are sufficiently implemented to yield health benefits. A number of systematic reviews have described the effectiveness of strategies to implement practice guidelines and improve the professional practice of clinicians in clinical settings, such as audit and feedback (Ivers 2012), reminders (Arditi 2017), educational meetings and workshops (Forsetlund 2009), and incentives (Scott 2011). However, implementation research in non-clinical community settings has largely been overlooked (Buller 2010). To our knowledge, few systematic reviews concerning the implementation of community interventions have been conducted; only one has examined strategies to implement chronic disease prevention programs in schools (Rabin 2010), and another has done so within childcare settings (Wolfenden 2016). The school-based review was limited to studies investigating cancer prevention strategies and identified only nine school-based implementation strategies. Moreover, it only included studies published up to the beginning of 2008. To guide optimal implementation of school-based health initiatives, further synthesis of evidence is warranted to ensure the inclusion of all relevant studies within the school setting. In doing so, this review aims to provide evidence for how health promotion practitioners and education systems can design and optimally implement policies, programs and practices in the school setting to promote healthy behaviours in children.

Objectives

The primary aims of the review are to examine the effectiveness of strategies aiming to improve the implementation of school‐based policies, programs or practices to address child diet, physical activity, obesity, tobacco or alcohol use.

Secondary objectives of the review are to:

  • examine the effectiveness of implementation strategies on health behaviour (e.g. fruit and vegetable consumption) and anthropometric outcomes (e.g. BMI, weight);

  • describe the impact of such strategies on the knowledge, skills or attitudes of school staff involved in implementing health‐promoting policies, programs or practices;

  • describe the cost or cost‐effectiveness of such strategies; and

  • describe any unintended adverse effects of strategies on schools, school staff or children.

Methods

Criteria for considering studies for this review

Types of studies

Strategies to improve the implementation of policies, programs or practices are often complex in nature and have been evaluated with a wide variety of methods and designs. While results of randomised controlled trials (RCTs) are considered more robust, using this study design is often impractical or inappropriate for complex public health interventions (Glasgow 1999). We are aware of an ongoing RCT evaluating implementation strategies in schools; however, we envisaged that there would be a paucity of completed trials of this kind. To overcome this, we included any trial (randomised or non‐randomised) with a parallel control group published in any language including the following trial designs:

  • RCTs and cluster‐RCTs;

  • quasi‐RCTs and cluster quasi‐RCTs; and

  • controlled before and after studies (CBAs), cluster‐CBAs.

Studies assessing any strategy aiming to improve the implementation of policies, programs or practices in a school setting that target healthy eating, physical activity, obesity prevention, or tobacco or alcohol use prevention (or a combination of these) were eligible. To be included, trials were required to report the impact of a defined implementation strategy on an implementation outcome, compared between experimental groups.

Types of participants

We included studies set in schools (e.g. elementary, primary, secondary, middle, high and central schools) where the age of students was typically between five and 18 years. Study participants could be any stakeholders who may influence the uptake, implementation or sustainability of the target health‐promoting policy, practice or program in schools, including teachers, managers, cooks or other staff of schools and education departments. Study participants may also include administrators, officials or representatives of school services, or other health, education, government or non‐government personnel responsible for encouraging or enforcing the implementation of health promoting programs, policies or practices in schools. Studies or arms of trials assessing implementation performed by research staff were excluded.

Types of interventions

We included studies that compared school‐based strategies intended to improve the implementation of health‐promoting policies, programs or practices for physical activity, healthy eating, obesity prevention, tobacco use prevention or alcohol use prevention with either: 1) other implementation strategies; 2) no implementation strategy; or 3) 'usual' practice. For trials that did not describe the comparison conditions, but reported findings against a comparison group, we assumed that the comparison was usual practice.

To be eligible for inclusion, studies had to include strategies to improve implementation by those involved in the delivery, uptake or use of policies, programs or practices in schools. Strategies could include quality improvement initiatives, education and training, performance feedback, prompts and reminders, implementation resources (e.g. manuals), financial incentives, penalties, communication and social marketing strategies, professional networking, the use of opinion leaders, implementation consensus processes or other strategies. Strategies could be singular or multi‐component and could be directed at individuals, classes or whole schools.

Types of outcome measures

The review examined a range of primary and secondary outcomes of school policy, program or practice implementation. 'Implementation' was defined as the use of strategies to adopt and integrate evidence‐based health interventions and to change practice patterns within specific settings (Glasgow 2012). To be included, outcomes were required to report an action undertaken by a school or school personnel (e.g. the proportion of schools operating canteen services consistent with dietary guidelines, or the mean number of lessons of teaching curricula implemented). Measures of individual child behaviour (e.g. the proportion of children who were moderately or vigorously physically active) were not considered implementation outcomes. Implementation could have occurred at any scale (local, national or international). We included trials reporting only follow‐up data for an implementation outcome (i.e. no baseline data) where the trial used a randomised design, as baseline values were assumed to be equivalent (or to differ only due to chance), or where baseline values of implementation outcomes could be assumed to be zero, for example, implementation of a curriculum resource not available to schools at baseline.

Primary outcomes

  • Any objectively or subjectively (self‐reported) assessed measure of school policy, program or practice implementation.

Measures relating to successful implementation, including uptake, partial or complete uptake (e.g. consistency with protocol or design), or routine use, were included. Such data may be obtained from audits of school records, questionnaires or surveys of staff, direct observation or recordings, examination of routinely collected information from government departments (such as compliance with food standards or breaches of department regulations), or other sources.

Secondary outcomes

Data on secondary outcomes were extracted only for measures corresponding to implementation outcomes. For example, in a trial of an intervention targeting physical activity and healthy eating, where an implementation strategy and implementation outcome data were reported only for healthy eating policies or practices, only data on secondary trial outcomes related to diet (foods or beverages consumed by students, or student BMI) were extracted. Secondary outcomes could be measured objectively or subjectively (self‐reported) and included:

  • measures of health behaviours or risk factors relevant to policies, programs, or practices being implemented (i.e. diet; physical activity; tobacco or alcohol use; or measures of excessive weight gain);

  • any measure of school staff knowledge, skills or attitudes related to the implementation of policies, programs or practices supportive of diet, physical activity, or healthy weight, or tobacco or alcohol use prevention;

  • estimates of absolute costs or any assessment of the cost‐effectiveness of strategies to improve implementation of policies, programs or practices in schools; and

  • any reported unintended adverse consequences of a strategy to improve implementation of policies, programs or practices in schools; these could include adverse impacts on child health (e.g. unintended changes in other risk factors, injury), school operation or staff attitudes (e.g. impacts on staff motivation or cohesion following implementation), or the displacement of other key programs, curricula or practice.

We summarise data for all relevant risk factors targeted by the review. Where there were differences in published information between peer‐reviewed and grey literature for the same trial, we preferentially used data from peer‐reviewed publications.

Search methods for identification of studies

We performed a comprehensive search for both published and unpublished research studies across a broad range of information sources to reflect the cross‐disciplinary nature of the topic. Articles published in any language were eligible and there were no restrictions regarding article publication dates.

Electronic searches

We searched the following electronic databases:

  • Cochrane Library including the Cochrane Central Register of Controlled Trials (CENTRAL) (up to Sept 1st 2016);

  • MEDLINE (up to Sept 1st 2016);

  • MEDLINE In‐Process & Other Non‐Indexed Citations (up to Sept 1st 2016);

  • Embase Classic and Embase (up to Sept 1st 2016);

  • PsycINFO (up to Sept 1st 2016);

  • Education Resource Information Center (ERIC) (up to Sept 1st 2016);

  • Cumulative Index to Nursing and Allied Health Literature (CINAHL) (up to Sept 1st 2016);

  • Dissertations and Theses (up to Sept 1st 2016); and

  • SCOPUS (up to Sept 1st 2016).

We adapted the MEDLINE search strategy for each database using database‐specific subject headings, where available (Appendix 1). We included filters used in other systematic reviews for research design (Waters 2011), population (Guerra 2014), physical activity and healthy eating (Dobbins 2013; Guerra 2014; Jaime 2009), obesity (Waters 2011), tobacco use prevention (Thomas 2013), and alcohol misuse (Foxcroft 2011). A search filter for intervention (implementation strategies) was developed based on previous reviews (Wolfenden 2016), and common terms in implementation and dissemination research (Rabin 2008).

Searching other resources

We screened reference lists of all included trials for citations of potentially relevant studies and contacted authors of included studies for other potentially relevant trials. We handsearched all publications between July 2011 and July 2016 in the journals Implementation Science and Journal of Translational Behavioral Medicine. We also searched the WHO International Clinical Trials Registry Platform (ICTRP) (http://apps.who.int/trialsearch/) and the US National Institutes of Health registry (https://clinicaltrials.gov). One unpublished study identified in these searches was listed in the 'Characteristics of ongoing studies' table. We consulted with experts in the field to identify other relevant research. To identify companion papers of identified eligible trials, we also conducted Google Scholar searches of the first 100 citations identified by a search of the trial name or title.

Data collection and analysis

Selection of studies

Initially, one review author (CW) screened the titles and abstracts retrieved from the literature search to exclude duplicate records and clearly ineligible articles (i.e. studies of non‐humans or inappropriate settings). The remaining titles and abstracts were then screened independently by two review authors (AF, AG, LW, NN, RS, RW, SY, or TD). We obtained full texts of all remaining potentially relevant or unclear articles, and authors independently reviewed these against our inclusion criteria, in duplicate (AF, AG, LW, NN, RS, RW, RH, SY, or TD). We used Google Translate for abstracts or obtained translations from non‐English‐speaking collaborators. At each stage, disagreements were resolved by discussion between the two review authors and, where required, by consulting a third review author (CW or LW). We recorded reasons for exclusion of studies in the 'Characteristics of excluded studies' table.

Data extraction and management

Two review authors (CW, NN, PB, RS, RW, SY, RH, BP or TD) independently extracted data using a data extraction form adapted from the Cochrane Public Health Group Methods Manual (CPHG 2011). Any disagreements in data extraction were resolved by discussion or by consulting a third author (LW), where required.

Where key data were missing from the study reports, we attempted to contact the authors to obtain the information. Where multiple reports of the same trial were published, we extracted data from those deemed the most applicable. We extracted data comprehensively to cover all relevant outcomes and methods reported across studies.

We extracted and reported the following study characteristics:

  • information regarding study eligibility as well as the study design, date of publication, school type, country, participant/school demographic/socioeconomic characteristics, number of experimental conditions, as well as information to allow assessment of risk of study bias;

  • information describing the characteristics of the implementation strategy, including the duration, and intervention (policy, program, practice), the theoretical underpinning of the strategy (if noted in the study), information to allow classification against the EPOC Group 'Taxonomy of Interventions', as well as data describing consistency of the execution of the strategy with a planned delivery protocol (EPOC 2015);

  • information on trial primary and secondary outcomes, including the data collection method, validity of measures used, effect size and measures of outcome variability, costs and adverse outcomes; and

  • information on the source(s) of research funding and potential conflicts of interest.

Assessment of risk of bias in included studies

Assessment of risk of bias considered study design and reporting characteristics relevant to the implementation outcomes of the included studies only. For included trials, we used Cochrane's tool for assessing risk of bias, which includes domain‐based assessments (selection bias, performance bias, detection bias, attrition bias and reporting bias) (Higgins 2011). We also applied additional criteria for cluster‐RCTs: 'recruitment to cluster', 'baseline imbalance', 'loss of clusters', 'incorrect analysis', 'contamination' and 'comparability with individually randomised trials'. We applied an additional criterion, 'potential confounding', in the assessment of risk of bias for non‐randomised trial designs. We assessed studies as having 'low', 'high', or 'unclear' risk of bias in accordance with the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011).

Two pairs of authors (FT, TCM and AG, AF) assessed risk of bias independently for each study. Any disagreement was resolved by discussion, or if required, by involving an additional author (LW).

Measures of treatment effect

Considerable differences in study measures and primary and secondary outcomes reported by included studies precluded the use of summary statistics to describe treatment effects. As such, we synthesised study findings narratively based on the outcomes reported in the included trials. For dichotomous implementation outcomes, these included absolute differences in the proportion of schools or teachers implementing a policy, practice or program. Continuous outcomes were reported as absolute, non‐standardised differences (mean difference) for measures including an implementation score, the percentage of policy or program implementation, or the frequency or time in which a policy, practice or program implementation occurred.
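As an illustration only (the numbers below are hypothetical and not taken from any included trial), the two absolute, non‐standardised effect measures described above can be computed as follows:

```python
def risk_difference(events_int, n_int, events_ctl, n_ctl):
    """Absolute difference in the proportion of schools (or teachers)
    implementing a policy, practice or program: intervention minus control."""
    return events_int / n_int - events_ctl / n_ctl

def mean_difference(mean_int, mean_ctl):
    """Absolute, non-standardised difference in a continuous
    implementation measure (e.g. an implementation score)."""
    return mean_int - mean_ctl

# Hypothetical example: 18 of 25 intervention schools versus
# 10 of 25 control schools implemented the target policy.
rd = risk_difference(18, 25, 10, 25)   # 0.32, i.e. a 32 percentage-point difference
md = mean_difference(7.5, 6.0)         # 1.5 points on a hypothetical implementation score
```

These are simple descriptive contrasts; as noted above, they were synthesised narratively rather than pooled.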

Unit of analysis issues

We examined cluster trials for unit of analysis errors and identified trials with such errors in the 'Risk of bias' summary.

Dealing with missing data

When outcomes, methods, or results of the studies were missing or unclear, we contacted the corresponding authors of the published trial to supply the data. Any information provided was incorporated into the review as appropriate. Any evidence of potential selective reporting or incomplete reporting of trial data was documented in the 'Risk of bias' tables.

Assessment of heterogeneity

We were unable to examine heterogeneity quantitatively through the I² statistic or forest plots, given considerable differences in the implementation strategies, outcomes, measures and comparators that precluded pooling of data. The clinical heterogeneity of the included studies was therefore described narratively.

Assessment of reporting biases

We compared published reports with information in trial registers and protocols to assess reporting bias where such information was available. Where we suspected reporting bias (via assessment of risk of bias in included studies), we attempted to contact study authors and ask them to provide missing outcome data. Instances of potential reporting bias were recorded in the 'Risk of bias' summary.

Data synthesis

Primarily, trial heterogeneity precluded meta‐analysis. The target populations in trials varied, including teaching staff, school food service staff and principals. No two trials employed the same implementation strategies. Included studies compared implementation strategies with a different strategy, a minimal‐support control or usual practice. Substantial heterogeneity was particularly evident for trial outcomes in terms of assessment methods and measures, which often applied at multiple levels (at the school level and/or the teacher/class level). The availability of data to pool was further limited by the reporting of dichotomous and continuous outcomes that could not be combined. Further, the review identified studies with both randomised and non‐randomised designs, and pooling data across such trial designs is not recommended (Higgins 2011). Finally, meta‐analysis with a small number of studies (fewer than five) is problematic and can produce imprecise estimates of effect, given the underlying assumptions of random‐effects models (Higgins 2008).

As such, and consistent with the approach of a previous Cochrane review of implementation strategies in the childcare setting (Wolfenden 2016), we narratively synthesised trial findings based on the outcomes reported. As trial heterogeneity precluded meta‐analysis, we described the effects of interventions for individual trials by reporting the absolute effect size of the primary outcome measure for policy, practice or program implementation for each study. We focused on specified primary outcomes where available because the intervention (implementation strategy) was designed to directly influence these outcomes, because trials are (or should be) powered to detect meaningful effects on these measures, and because pre‐specified primary (as opposed to secondary) outcomes are considered most appropriate for hypothesis testing. We calculated the effect size by subtracting the change from baseline on the primary implementation outcome in the control (or comparison) group from the change from baseline in the experimental or intervention group. For trials with multiple follow‐up periods, we used data from the final follow‐up period reported. If data to enable calculation of change from baseline were unavailable, we used the differences between groups post‐intervention. Where there were two or more primary implementation outcome measures, we used the median effect size of the primary outcomes and also reported the range. Where the primary outcome measure was not identified by the study authors in the published manuscripts, we used the implementation outcome on which the trial sample size calculation was based or, in its absence, we took the median effect size of all measures judged to be implementation outcomes reported in a manuscript and also reported the range.
Such an approach was previously used in the Cochrane review of the effects of audit and feedback on professional practices published by the Cochrane EPOC Group (Ivers 2012), and in our previous review of implementation strategies in the childcare setting (Wolfenden 2016). Where subscales of an overall implementation score were reported in addition to a total scale score, we used the total score as the primary outcome to provide a more comprehensive measure of implementation. We reverse‐scored implementation measures for which an increase did not represent an improvement (e.g. the proportion of schools without a healthy menu) in the calculation of median effects. Where self‐reported and observed data assessed the same implementation outcome, observational measures were extracted in place of self‐report, given that observation represents a more objective measure of implementation.
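To make the calculation concrete, the following sketch (using made‐up values for a hypothetical trial with three primary implementation outcomes, each a proportion of schools) computes the change‐from‐baseline effect for each outcome and summarises them as a median and range:

```python
from statistics import median

def change_from_baseline_effect(int_base, int_follow, ctl_base, ctl_follow):
    """Absolute effect size: change from baseline in the intervention
    group minus change from baseline in the control (comparison) group."""
    return (int_follow - int_base) - (ctl_follow - ctl_base)

# Hypothetical (baseline, follow-up) proportions per group for three
# primary implementation outcomes:
outcomes = [
    # (int_base, int_follow, ctl_base, ctl_follow)
    (0.20, 0.60, 0.22, 0.30),  # e.g. proportion of schools with a compliant canteen menu
    (0.10, 0.50, 0.12, 0.20),
    (0.30, 0.55, 0.28, 0.40),
]
effects = [change_from_baseline_effect(*o) for o in outcomes]

summary = median(effects)                      # median effect reported for the trial
effect_range = (min(effects), max(effects))    # range also reported
```

Under this approach, a trial with a single primary outcome contributes that outcome's effect directly; trials with several contribute the median (and range), as described above.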

We present the effects of interventions grouped according to the outcome data (continuous or dichotomous) and implementation measure reported. For individual studies where there is no single primary implementation outcome, we describe the median as well as report the range of effects across all comparable measures (description of within‐trial effects). To characterise the effects of interventions across studies (description of between‐study effects), we report an unadjusted median and range of the absolute effects across included trials. The median and range for between‐study effects were calculated using the absolute effect size of the primary implementation outcome of individual trials, or the median of such measures where a single primary outcome was not reported. Such synthesis is intended for descriptive, rather than interpretative purposes, as it does not consider the trial characteristics (e.g. variance) for which trial weights are applied in formal meta‐analysis.
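The between‐study summary described above amounts to taking an unadjusted median and range over the trial‐level absolute effects; a minimal sketch with hypothetical values:

```python
from statistics import median

# Hypothetical trial-level absolute effects on the primary implementation
# outcome (or the within-trial median where no single primary outcome was
# reported), one value per included trial:
trial_effects = [0.32, 0.05, 0.21, -0.02, 0.45, 0.11, 0.27]

between_study_median = median(trial_effects)                     # 0.21
between_study_range = (min(trial_effects), max(trial_effects))   # (-0.02, 0.45)
```

Because each trial contributes equally regardless of its size or variance, this summary is descriptive only and is not a weighted pooled estimate of the kind a formal meta‐analysis would produce.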

A 'Summary of findings' table was generated to present the key findings of included studies (summary of findings Table for the main comparison), based on recommendations of the Cochrane EPOC Group and the Cochrane Handbook for Systematic Reviews of Interventions. It included a list of the primary and secondary outcomes of the review, a description of the intervention effect, the number of participants and studies addressing each outcome, and a grade for the overall quality of evidence. We used the GRADE system to assess the quality of the body of evidence through consideration of study limitations, consistency of effect, imprecision, indirectness and publication bias. Two review authors (LW, RH) assessed the overall quality of evidence using the GRADE system and consulted a third review author (CW) where consensus on any issues arising could not be reached. The quality of the body of evidence for each individual outcome was graded from 'high' to 'very low' in accordance with the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011). Given the variability in the denominator for various implementation outcomes across and within included trials, we report the total number of schools providing data in the 'Summary of findings' table, as all trials allocated schools to experimental groups.

Subgroup analysis and investigation of heterogeneity

Quantitative examination of heterogeneity could not be conducted as we were unable to pool outcome data from trials. However, clinical and methodological heterogeneity of studies is described narratively based on participant, intervention, outcome and study design characteristics. In order to investigate the impact of implementation strategies in improving implementation of policies, practices or programs at scale (defined as targeting implementation in 50 or more schools), we performed a narrative synthesis on a subgroup of studies where implementation occurred at scale.

Sensitivity analysis

We did not carry out a sensitivity analysis removing studies at high risk of bias, as no quantitative synthesis was conducted.

Results

Description of studies

See Characteristics of included studies; Characteristics of excluded studies; Characteristics of ongoing studies.

Results of the search

Full details of each of the included trials are presented in the Characteristics of included studies table. The reasons for excluding trials are reported in the Characteristics of excluded studies table. One ongoing study was identified and details are presented in the Characteristics of ongoing studies table. The electronic search, conducted to 31 August 2016, yielded 22,056 citations (Figure 1). We identified an additional 3125 records from handsearching key journals, checking reference lists of included trials and Google Scholar searches. Through our contact with authors of included trials or experts in the field, we identified two additional articles: one was in press and the other was published after our search dates; both contained eligible trials. The information obtained through contact with trial authors was incorporated into the Characteristics of included studies table and used in assessments of risk of bias and trial outcomes. Following screening of titles and abstracts, we obtained the full texts of 385 manuscripts for further review, of which we included 81 manuscripts describing 27 individual trials.


Study flow diagram.


Included studies

Types of studies

Of the 27 included trials, 18 were conducted in the USA (Alaimo 2015; Cunningham‐Sabo 2003; Delk 2014; French 2004; Gingiss 2006; Heath 2002; Hoelscher 2010; Lytle 2006; McCormick 1995; Mobley 2012; Perry 1997; Perry 2004; Sallis 1997; Saunders 2006; Simons‐Morton 1988; Story 2000; Whatley Blum 2007; Young 2008), with the remaining trials undertaken in India (Mathur 2016; Saraf 2015), Australia (Nathan 2012; Nathan 2016; Sutherland 2017; Wolfenden 2017; Yoong 2016), Canada (Naylor 2006), and South Africa (De Villiers 2015). Trials were conducted between 1985 (Simons‐Morton 1988) and 2015 (Nathan 2016). In the assessment of implementation outcomes, eight studies employed randomised controlled trial (RCT) designs (Cunningham‐Sabo 2003; De Villiers 2015; Lytle 2006; Mobley 2012; Nathan 2016; Saunders 2006; Wolfenden 2017; Yoong 2016), 11 used cluster‐RCT designs (Delk 2014; French 2004; Mathur 2016; McCormick 1995; Naylor 2006; Perry 1997; Perry 2004; Saraf 2015; Story 2000; Sutherland 2017; Young 2008), and eight used non‐RCT designs. The trial designs used to evaluate implementation outcomes at times differed from those used to assess behavioural trial outcomes. For example, Saunders and colleagues assessed school‐level implementation outcomes (RCT design) as well as the impact of intervention implementation on outcomes of individual students located within schools (cluster‐RCT). There was considerable variability in the types of participants, implementation strategies and outcomes reported.

Participants

Trials recruited samples of between four (Simons‐Morton 1988) and 828 (Nathan 2012) schools. In four trials, 50 or more schools were allocated to the intervention group to receive implementation support (Alaimo 2015; Gingiss 2006; Nathan 2012; Perry 1997). The majority of trials were conducted in elementary (or primary) schools catering for children aged between five and 12 years (Cunningham‐Sabo 2003; De Villiers 2015; Heath 2002; Hoelscher 2010; Nathan 2012; Nathan 2016; Naylor 2006; Perry 1997; Perry 2004; Sallis 1997; Simons‐Morton 1988; Story 2000; Sutherland 2017; Wolfenden 2017; Yoong 2016). Six were conducted in middle schools, where children are aged between 11 and 14 years (Alaimo 2015; Delk 2014; Lytle 2006; Mobley 2012; Saraf 2015; Young 2008), two in both middle and high schools (Gingiss 2006; McCormick 1995), and four in high schools only, typically catering for children aged 13 or 14 to 18 years (French 2004; Mathur 2016; Saunders 2006; Whatley Blum 2007). A number of trials reported that they were conducted in low‐income regions or in schools with students from predominantly low‐income households (Alaimo 2015; De Villiers 2015; Heath 2002; Hoelscher 2010; Lytle 2006; Mathur 2016; Mobley 2012; Nathan 2012; Sutherland 2017).

Interventions

There was considerable heterogeneity in the implementation strategies employed. All trials examined multi‐component implementation strategies, with the most common being educational materials, educational outreach and educational meetings. No two trials examined the same combination of implementation strategies (Table 1). The EPOC taxonomy descriptors of the implementation strategies employed by included trials are described in Table 2. In the trial reported by McCormick and colleagues, the duration of implementation support ranged from four months to more than four years. Seven trials did not report the use of any theory or theoretical framework. Eight trials used explicit implementation or dissemination theories and frameworks, including the Charter and Jones framework (institutional commitment, structural context, role performance, learning activities) (Simons‐Morton 1988), the Theoretical Domains Framework (TDF) (Nathan 2016; Wolfenden 2017), consolidated frameworks for practice change (Nathan 2012), social‐ecological theory (Sutherland 2017), diffusion of innovation and/or organisational change (McCormick 1995; Young 2008), and control theory (Yoong 2016). Other trials reported the use of operant learning theory (Young 2008), the Social Contextual Model of Health Behavior Change (Mathur 2016), social‐ecological models (De Villiers 2015; Hoelscher 2010; Mobley 2012; Naylor 2006; Saunders 2006; Young 2008), social cognitive theory (Hoelscher 2010; Lytle 2006; Perry 2004; Story 2000; Young 2008), and social learning theory and/or organisational change (Cunningham‐Sabo 2003; Heath 2002; Perry 1997; Story 2000), although often in the description of intervention content rather than as a framework to guide an implementation strategy.

Table 1. Interventions across studies

[Matrix of the 27 included trials (rows) against the implementation strategy categories each employed (columns): audit and feedback; clinical practice guidelines; continuous quality improvements; distribution of supplies; external funding; education games; education materials; education meetings; education outreach visits; inter‐professional education; length of consultation; local consensus process; local opinion leader; managerial supervision; monitoring performance of delivery; pay for performance; tailored intervention; the use of communication technology; other. Trials: Alaimo 2015; Cunningham‐Sabo 2003; De Villiers 2015; Delk 2014; French 2004; Gingiss 2006; Heath 2002; Hoelscher 2010; Lytle 2006; Mathur 2016; McCormick 1995; Mobley 2012; Nathan 2012; Nathan 2016; Naylor 2006; Perry 1997; Perry 2004; Sallis 1997; Saraf 2015; Saunders 2006; Simons‐Morton 1988; Story 2000; Sutherland 2017; Whatley Blum 2007; Wolfenden 2017; Yoong 2016; Young 2008.]

Table 2. Definition of EPOC subcategories utilised in the review

Audit and feedback: A summary of health workers' performance over a specified period of time, given to them in a written, electronic or verbal format. The summary may include recommendations for clinical action.

Clinical practice guidelines: 'Clinical guidelines are systematically developed statements to assist healthcare providers and patients to decide on appropriate health care for specific clinical circumstances' (US IOM).

Educational materials: Distribution to individuals, or groups, of educational materials to support clinical care, i.e. any intervention in which knowledge is distributed; for example, this may be facilitated by the Internet, learning critical appraisal skills, skills for electronic retrieval of information, diagnostic formulation or question formulation.

Educational meetings: Courses, workshops, conferences or other educational meetings.

Educational outreach visits, or academic detailing: Personal visits by a trained person to health workers in their own settings, to provide information with the aim of changing practice.

External funding: Financial contributions such as donations, loans, etc. from public or private entities from outside the national or local health financing system.

Inter‐professional education: Continuing education for health professionals that involves more than one profession in joint, interactive learning.

Length of consultation: Changes in the length of consultations.

Local consensus processes: Formal or informal local consensus processes, for example agreeing a clinical protocol to manage a patient group, adapting a guideline for a local health system or promoting the implementation of guidelines.

Local opinion leaders: The identification and use of identifiable local opinion leaders to promote good clinical practice.

Managerial supervision: Routine supervision visits by health staff.

Monitoring the performance of the delivery of healthcare: Monitoring of health services by individuals or healthcare organisations, for example by comparing with an external standard.

Other: Strategies were classified as 'other' if they did not clearly fit within the standard subcategories.

Pay for performance (target payments): Transfer of money or material goods to healthcare providers conditional on taking a measurable action or achieving a predetermined performance target, for example incentives for lay health workers.

Procurement and distribution of supplies: Systems for procuring and distributing drugs or other supplies.

Tailored interventions: Interventions to change practice that are selected based on an assessment of barriers to change, for example through interviews or surveys.

The use of information and communication technology: Technology‐based methods to transfer healthcare information and support the delivery of care.

Fifteen trials tested strategies to implement healthy eating policies, programs or practices (Alaimo 2015; Cunningham‐Sabo 2003; De Villiers 2015; French 2004; Heath 2002; Lytle 2006; Mobley 2012; Nathan 2012; Nathan 2016; Perry 2004; Simons‐Morton 1988; Story 2000; Whatley Blum 2007; Wolfenden 2017; Yoong 2016), six tested strategies targeting physical activity policies or practices (Delk 2014; Naylor 2006; Sallis 1997; Saunders 2006; Sutherland 2017; Young 2008), and three targeted tobacco policies and practices (Gingiss 2006; Mathur 2016; McCormick 1995). Three trials targeted a combination of health behaviours, with two examining implementation of healthy eating and physical activity policies or practices (Hoelscher 2010; Perry 1997), and one trial examining policies or practices to improve implementation of tobacco control, healthy eating and physical activity initiatives (Saraf 2015). None of the included trials sought to increase the implementation of interventions to delay initiation or reduce the consumption of alcohol.

Outcomes

Implementation outcome follow‐up data were collected six months post‐baseline in one trial (Sutherland 2017), and 12 to 14 months post‐baseline in 11 trials (Alaimo 2015; Hoelscher 2010; Mathur 2016; Nathan 2016; Saraf 2015; Saunders 2006; Simons‐Morton 1988; Story 2000; Whatley Blum 2007; Wolfenden 2017; Yoong 2016). Another seven trials collected follow‐up data between 16 months and two years post‐baseline (Cunningham‐Sabo 2003; French 2004; Gingiss 2006; Nathan 2012; Naylor 2006; Perry 2004; Young 2008), and the remaining eight trials collected data between two and a half and four years post‐baseline (Delk 2014; De Villiers 2015; Heath 2002; Lytle 2006; McCormick 1995; Mobley 2012; Perry 1997; Sallis 1997). Four trials used observation‐based measures to assess implementation outcomes (Perry 2004; Sallis 1997; Story 2000; Whatley Blum 2007), a further three trials used school records or documents (Mobley 2012; Nathan 2016; Wolfenden 2017), and one trial used a combination of observation methods and school records (Lytle 2006). In contrast, 13 trials relied on self‐report instruments to assess policy or practice implementation, including surveys, questionnaires, semi‐structured interviews or teacher/staff completion of log‐books (Alaimo 2015; Cunningham‐Sabo 2003; De Villiers 2015; Delk 2014; Gingiss 2006; Heath 2002; Hoelscher 2010; Mathur 2016; McCormick 1995; Nathan 2012; Naylor 2006; Saunders 2006; Simons‐Morton 1988). Only one trial using these measures reported that the instrument had been validated (Nathan 2012). A further six trials used both objective (direct observation or school records) and self‐report (staff completion of log‐books, surveys, questionnaires or interviews) techniques for implementation outcome assessment (French 2004; Perry 1997; Saraf 2015; Sutherland 2017; Yoong 2016; Young 2008). Only one of these trials reported that the self‐report measures had been validated (Young 2008).

Eight trials assessed student physical activity (Hoelscher 2010; Naylor 2006; Perry 1997; Sallis 1997; Saraf 2015; Saunders 2006; Sutherland 2017; Young 2008). Physical activity behaviours were assessed using accelerometers (Sallis 1997; Sutherland 2017; Young 2008), pedometers (Naylor 2006), student questionnaires (Naylor 2006; Perry 1997; Saraf 2015; Saunders 2006), observation (Hoelscher 2010; Perry 1997; Sallis 1997) and fitness tests (Naylor 2006; Perry 1997; Sallis 1997). Fourteen trials assessed child nutritional intake or food selection using questionnaires (Alaimo 2015; Cunningham‐Sabo 2003; De Villiers 2015; French 2004; Hoelscher 2010; Lytle 2006; Mobley 2012; Perry 1997; Saraf 2015; Simons‐Morton 1988; Story 2000; Whatley Blum 2007), observation (Cunningham‐Sabo 2003; Perry 2004; Story 2000; Wolfenden 2017), or sales data (French 2004). Objectively assessed anthropometric measures were collected from participants in nine trials (Cunningham‐Sabo 2003; Heath 2002; Hoelscher 2010; Mobley 2012; Naylor 2006; Perry 1997; Sallis 1997; Saunders 2006; Young 2008). Tobacco use was assessed in two trials, using questionnaires (Saraf 2015) and observation (Mathur 2016). No trials assessed student alcohol use. Two trials included a measure that was specified in the study methods as an assessment of potential unintended adverse effects (Mobley 2012; Wolfenden 2017), and two trials reported cost analyses (Heath 2002; Mobley 2012). Three trials reported on the knowledge, skills or attitudes of school staff regarding implementation (Delk 2014; Gingiss 2006; McCormick 1995).

Types of comparisons

The majority of trials (n = 22) compared implementation strategies against usual practice or waiting‐list control (Alaimo 2015; Cunningham‐Sabo 2003; French 2004; Gingiss 2006; Heath 2002; Lytle 2006; Mathur 2016; Mobley 2012; Nathan 2016; Naylor 2006; Perry 1997; Perry 2004; Sallis 1997; Saraf 2015; Saunders 2006; Simons‐Morton 1988; Story 2000; Sutherland 2017; Whatley Blum 2007; Wolfenden 2017; Yoong 2016; Young 2008), while two compared against different implementation strategies (Delk 2014; Hoelscher 2010) and three trials used a minimal support comparison group (De Villiers 2015; McCormick 1995; Nathan 2012). Among trials using a minimal support control group, all schools in the study by Nathan and colleagues, including those allocated to control, could have received support from a non‐government agency to assist with implementation of a fruit and vegetable break if they sought out such support. In the trial by McCormick and colleagues, control schools received curricula in the mail and technical assistance upon request. Finally, in the trial by De Villiers and colleagues, principals at schools in the control arm received a booklet with “tips” for healthy schools and a guide to resources that could be accessed to assist in creating a healthier school environment. Seven trials did not describe the comparison condition, and so we assumed that the comparison was usual practice (Cunningham‐Sabo 2003; Gingiss 2006; Heath 2002; Saraf 2015; Simons‐Morton 1988; Story 2000; Young 2008).

Five trials included more than two trial arms (Alaimo 2015; Delk 2014; Naylor 2006; Perry 1997; Sallis 1997). The School Nutrition Advances Kids (SNAK) study included four conditions, three interventions and one control (Alaimo 2015). The three intervention conditions all sought to improve the implementation of nutrition policies and practices of schools. In all intervention conditions, implementation support included local consensus processes (convened by a coordinated school health team), educational outreach (visits from a trained facilitator), external funding ($1000 to implement aspects of the intervention) and a tailored intervention (individualised action plans). In the second intervention group, such support was more intensive, for example, more frequent contact with the trained facilitator and additional funding ($400) for implementation. In the third group, the more intensive implementation support was also offered, but schools were asked to implement additional changes to their cafeteria à la carte lines and were provided with a further $1500 (Alaimo 2015). The implementation outcomes reported in the paper combined all intervention conditions into a single group for comparison against the control group, and are reported accordingly in this review.

The Action Schools! British Columbia (BC) program randomised schools into three conditions (Naylor 2006). Two groups received implementation support. The implementation strategies utilised were identical in these two intervention groups; however, in one group post‐training support was provided directly to school teachers via a school facilitator, while in the second group post‐training support was provided to a designated champion who was asked to activate and support their teacher colleagues (Naylor 2006). The third group served as a usual practice control. For this trial, we combined intervention groups by calculating, relative to the control, the unadjusted median effect (and range) across intervention conditions.
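The arithmetic of this pooling approach can be sketched briefly. The two arm‐versus‐control effects below are a hypothetical reconstruction: a reported median of 54.9 minutes/week with a range of 46.4 to 63.4 across two comparisons (as listed for this trial in Table 3) implies arm effects of 46.4 and 63.4.

```python
import statistics

# Hypothetical arm-vs-control effects (minutes/week of classroom physical
# activity), reconstructed from the reported median and range for this trial.
arm_effects = [46.4, 63.4]

combined = statistics.median(arm_effects)        # unadjusted median effect
spread = (min(arm_effects), max(arm_effects))    # range across arms
print(combined, spread)  # 54.9 (46.4, 63.4)
```

With only two intervention arms the median equals the midpoint of the two effects; with three or more arms it is the middle value, which is why the range is reported alongside it.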

In the Child and Adolescent Trial for Cardiovascular Health (CATCH) trial, schools were randomly assigned to either an intervention condition or a control condition (Perry 1997). Of the 56 schools assigned to intervention, 28 schools were randomly assigned to an intervention arm targeting the same implementation outcomes and including the same implementation strategies, but were also asked to implement programs targeting families. Implementation data for the two intervention groups were combined in the reporting of the paper and the combined data used in this review.

The Sports, Play, and Active Recreation for Kids (SPARK) trial randomised schools to two intervention and one control condition (Sallis 1997). Data were only extracted for two of the three assigned groups; the control group where implementation support was not provided and an experimental group where implementation support was provided to usual teaching staff to implement the intervention. The third group, where physical activity practices were implemented by external specialist physical education teachers hired by the research team was excluded as per review inclusion criteria.

Finally, in the trial by Delk and colleagues, 30 schools were randomised into three conditions. As each condition contained different implementation strategies, data were extracted and reported across all conditions.

Other study design characteristics

In a number of trials, decisions regarding study inclusion and data extraction were particularly complex. The SPARK trial included post‐intervention implementation outcome data only; however, six of the seven included schools were randomly allocated to one of three conditions (Sallis 1997). The remaining school was allocated by the researchers to the control group. Despite the lack of baseline implementation data, given the use of random assignment and the similarity of other trial sample characteristics, the trial was retained in the review. The Texas Tobacco Prevention Initiative was a non‐randomised trial that also did not report baseline data for the implementation outcomes; however, it was included as the authors state that there were no differences between groups at baseline on these measures (Gingiss 2006). For the SPARK program, no single primary implementation outcome was reported. Implementation outcomes included measures of lesson context (management, general knowledge, fitness knowledge, fitness activity, skill drills and game play), measures of teacher behaviour (promotes fitness, demonstrates fitness, instructs generally, manages, observes and off task), as well as measures of lesson duration and frequency (Sallis 1997). However, only for lesson duration and frequency was the desired quantity or direction of effect specified in the published reports (three lessons per week of 30 minutes duration each). While improving lesson context and teacher behaviour was an objective of the support strategy, the desired direction of effect for each measure was not clear. For example, it was unclear whether teachers were supported to reduce time spent managing or observing children during class time and to spend more time promoting fitness. As such, only the frequency and duration of lessons were extracted as outcomes for this trial.
Identical measures of lesson context were also reported in the Child and Adolescent Trial for Cardiovascular Health (CATCH) project (Perry 1997), as well as the El Paso CATCH program (Heath 2002), and were similarly excluded.

A variety of outcomes pertaining to program implementation were reported across the published reports of the CATCH intervention (Perry 1997). At times there was inconsistency in the key implementation policies and practices reported as targeted by the program. Given this, implementation outcome data were extracted from the study published by Perry and colleagues, as the objective of that paper was specifically to report on program implementation, including measures of intervention 'fidelity'. The median effects of these outcomes are reported, as no single primary outcome was identified.

Finally, in the Lifestyle Education for Activity Program (LEAP) trial (Saunders 2006), implementation of targeted policies and practices in the experimental group was presented in subgroups of 'high' and 'low' implementers and could not be combined into a single group. As such, between‐group effect size estimates for the outcomes reported in this trial could not be calculated.

Excluded studies

Following screening of titles and abstracts, we obtained the full texts of 385 papers for further assessment of eligibility (Figure 1). Of these, 305 papers were considered ineligible. Primary reasons for exclusion were an inappropriate participant group (n = 17), intervention (n = 7), comparator (n = 30) or outcomes (n = 233). Studies were excluded based on 'inappropriate outcomes' if they: did not report any implementation outcomes; did not report implementation outcomes for both intervention and control groups; or did not report between‐group differences in implementation outcomes. We also excluded 11 papers that did not report the results of a trial, and a further seven studies that were non‐randomised and did not report the comparability of implementation outcomes between groups at baseline (i.e. it could not be assumed that differences between groups were zero) (Donnelly 1996; Harvey‐Berino 1998; Hoelscher 2003; Hoelscher 2004; Kelder 2003; O’Brien 2010; Osganian 2003).

Risk of bias in included studies

Assessment of risk of bias considered study design and reporting characteristics relevant to the implementation outcomes of the included studies (Figure 2; Figure 3).


Risk of bias summary: review authors' judgements about each risk of bias item for each included study.



Risk of bias graph: review authors' judgements about each risk of bias item presented as percentages across all included studies.


Allocation

Risk of selection bias differed across the 27 trials. All eight non‐randomised trials were considered to have a high risk of selection bias for both random sequence generation and concealment of allocation (Alaimo 2015; Gingiss 2006; Heath 2002; Hoelscher 2010; Nathan 2012; Sallis 1997; Simons‐Morton 1988; Whatley Blum 2007). Of the 11 trials with cluster‐RCT designs, only two were considered low risk for random sequence generation (Saraf 2015; Sutherland 2017), having used the drawing of lots or a computerised random number function to determine allocation to intervention or control groups, while four of the eight trials using RCT designs were considered low risk for random sequence generation (De Villiers 2015; Nathan 2016; Wolfenden 2017; Yoong 2016). The risk of bias for concealment of allocation was unclear for all RCTs (Cunningham‐Sabo 2003; De Villiers 2015; Lytle 2006; Mobley 2012; Nathan 2016; Saunders 2006; Wolfenden 2017; Yoong 2016) and cluster‐RCTs (Delk 2014; French 2004; Mathur 2016; McCormick 1995; Naylor 2006; Perry 1997; Perry 2004; Saraf 2015; Story 2000; Sutherland 2017; Young 2008).

Blinding

All 27 studies were considered to have a high risk of performance bias, as participants and research personnel were not blind to group allocation. Only four studies had a low risk of detection bias for implementation outcome assessment, as this was conducted by staff who were blind to group allocation (Mobley 2012; Nathan 2016; Wolfenden 2017; Yoong 2016). Of the remaining 23 studies, the risk of detection bias was high for 17 studies, primarily due to the use of self‐report measures (Cunningham‐Sabo 2003; Delk 2014; De Villiers 2015; French 2004; Gingiss 2006; Heath 2002; Hoelscher 2010; Mathur 2016; McCormick 1995; Nathan 2012; Naylor 2006; Perry 1997; Perry 2004; Saraf 2015; Saunders 2006; Story 2000; Young 2008). For three studies, the risk of detection bias was unclear (Lytle 2006; Sallis 1997; Whatley Blum 2007), and for the remaining three studies, the risk of detection bias was high, low or unclear across one or more outcome measures (Alaimo 2015; Simons‐Morton 1988; Sutherland 2017).

Incomplete outcome data

For the majority of studies, the risk of attrition bias was low, as all or most schools were still participating in the study at follow‐up and their data were included in the analyses. Two studies had a high risk of attrition bias (Delk 2014; Gingiss 2006). In particular, Gingiss and colleagues reported that 25 schools (19%) were lost to follow‐up for the principal survey and 50 schools (37%) for the health coordinator survey. For the remaining studies, the risk of attrition bias was high, low or unclear for some (McCormick 1995; Sutherland 2017), or unclear for all (Naylor 2006; Sallis 1997), of the reported outcome data.

Selective reporting

Seventeen trials did not have a published protocol paper or trial registration record and therefore it was unclear whether reporting bias had occurred. The risk of reporting bias was low for the remaining 10 studies as protocols, design papers, or reports were available, and all a priori determined outcomes were reported (Cunningham‐Sabo 2003; De Villiers 2015; Mathur 2016; Mobley 2012; Nathan 2016; Naylor 2006; Sutherland 2017; Wolfenden 2017; Yoong 2016; Young 2008).

Other potential sources of bias

Eleven studies used a cluster‐RCT design (Delk 2014; French 2004; Mathur 2016; McCormick 1995; Naylor 2006; Perry 1997; Perry 2004; Saraf 2015; Story 2000; Sutherland 2017; Young 2008), and we therefore assessed this group for additional potential biases. For recruitment (to cluster) bias, two studies had an unclear risk (Delk 2014; McCormick 1995), while nine studies were low risk as randomisation to groups occurred either post‐recruitment or post‐baseline assessment (French 2004; Mathur 2016; Naylor 2006; Perry 1997; Perry 2004; Saraf 2015; Story 2000; Sutherland 2017; Young 2008). Regarding risk of bias due to baseline imbalances, three studies were at unclear risk, while the remaining eight studies had a low risk due to the random allocation of schools to experimental groups, stratification by school characteristics, or adjustment for baseline differences within the analyses (Delk 2014; French 2004; McCormick 1995; Perry 1997; Perry 2004; Story 2000; Sutherland 2017; Young 2008). All studies except Delk 2014 and Sutherland 2017 had a low risk for loss of clusters. Only three studies had a low risk for incorrect analysis, as the appropriate statistical analysis was undertaken to account for clustering within groups (Naylor 2006; Perry 1997; Young 2008). Five studies were judged as having a high risk for incorrect analysis (Delk 2014; Mathur 2016; McCormick 1995; Saraf 2015; Story 2000), while for three studies the analysis performed was unclear (French 2004; Perry 2004; Sutherland 2017). The risk of contamination was judged as high for one trial (Perry 2004). All 11 cluster‐RCTs were at unclear risk for comparability with individually‐randomised trials, as we were unable to determine whether a herd effect existed.
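As background to why analyses ignoring clustering were judged 'incorrect': outcomes of students or staff within the same school are correlated, which inflates the variance of group estimates. A minimal sketch of the standard design‐effect adjustment follows; the cluster size and intra‐class correlation used are illustrative assumptions, not values from any included trial.

```python
# Design effect for cluster-randomised designs: with m observations per
# cluster and intra-class correlation rho, variance is inflated by
# 1 + (m - 1) * rho, shrinking the effective sample size accordingly.

def design_effect(m: float, rho: float) -> float:
    return 1 + (m - 1) * rho

def effective_n(n_total: int, m: float, rho: float) -> float:
    return n_total / design_effect(m, rho)

# Assumed example: 20 schools x 50 students, ICC of 0.05.
print(design_effect(50, 0.05))             # 3.45
print(round(effective_n(1000, 50, 0.05)))  # 290
```

An analysis treating the 1000 students as independent would behave as if roughly three times as much information were available as actually is, which is the unit‐of‐analysis error the risk‐of‐bias item targets.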

For the eight studies with non‐randomised designs, three studies were considered to have a high risk of bias due to potentially confounding factors (Gingiss 2006; Nathan 2012; Simons‐Morton 1988). For the remaining studies (n = 5) it was unclear whether confounders were adequately adjusted for.

Effects of interventions

See: Summary of findings for the main comparison

The majority of included trials reported significant improvements in at least one implementation outcome measure (Table 3). All primary implementation outcomes were significant in nine trials (Delk 2014; French 2004; Mathur 2016; Nathan 2012; Nathan 2016; Naylor 2006; Sallis 1997; Story 2000; Wolfenden 2017), as were the majority of outcomes reported across implementation measures in a further five trials (Gingiss 2006; Hoelscher 2010; Perry 1997; Saraf 2015; Whatley Blum 2007). In three trials there were no significant improvements on any primary implementation outcome (Alaimo 2015; McCormick 1995; Yoong 2016), and in six trials there were improvements in 50% or fewer of the implementation outcomes reported (Cunningham‐Sabo 2003; Heath 2002; Lytle 2006; Perry 2004; Sutherland 2017; Young 2008). Four trials did not report significance testing of between‐group comparisons of implementation outcomes (De Villiers 2015; Mobley 2012; Saunders 2006; Simons‐Morton 1988).

Table 3. Summary of intervention, measures and absolute intervention effect size in included studies

Trial

Trial name

Targeted risk factor

Implementation strategies

Comparison

Primary implementation outcome and measures

Effect size

P < 0.05

Alaimo 2015

School Nutrition Advances Kids (SNAK)

Nutrition

Clinical practice guidelines, educational materials, educational outreach visits, external funding, local consensus processes, tailored interventions

Usual practice or waiting‐list control

Continuous:

i) Nutrition policy score and

ii) Nutrition education and/or practice score (2 measures)

Median (range)

0.65 (0.2 to 1.1)

0/2

Cunningham‐Sabo 2003

Pathways

Nutritionc

Clinical practice guidelines, educational materials, educational meetings, educational outreach visits

Usual practice

Continuous:

Nutrient content of school meals % of calories from fat breakfast/lunch

(2 measures)

Median (range)

‐3% (‐3.3% to ‐2.7%)

1/2

De Villiers 2015

HealthKick

Nutritionc

Local opinion leaders, educational materials, educational outreach visits, education meetings

Minimal support control

Dichotomous:

% implementing a variety of policies and practices (3 measures)

Median (range)

25% (12.5% to 29.5%)

Not reported

Delk 2014

No trial name

Physical activity

Local consensus process, educational meetings, clinical practice guidelines, educational outreach visits, tailored interventions, other

Different implementation strategy

Continuous:

% of teachers that conducted activity breaks weekly (1 measure 2 comparisons)

Dichotomous:

% implementing a variety of policies and practices (2 measures 4 comparisons)

Median (range)

13.3% (11.1% to 15.4%)

Median (range)

26.5% (19.4% to 31.9%)

6/6

French 2004

Trying Alternative Cafeteria Options in Schools (TACOS)

Nutrition

Local consensus processes, tailored intervention, educational meetings, pay for performance

Usual practice or waiting‐list control

Continuous

% of program implementation (5 measures)

Median (range)

33% (11% to 41%)

5/5

Gingiss 2006

Texas Tobacco Prevention Initiative

Tobacco

Educational meetings, educational outreach visits, external funding, local consensus processes

Usual practice

Dichotomous:

% implementing a variety of policies and practices (10 measures)

Median (range)

18.5% (‐1% to 59%)

7/10

Heath 2002

El Paso Coordinated Approach to Child Health (El Paso CATCH)

Nutritionc

Educational materials, educational meetings, educational outreach visits

Usual practice

Continuous:

% fat in school meal

(2 measures)

Sodium of school meals

(2 measures)

Effect size

Median (range)

‐1.7% (‐4.4% to 1%)

Median (range)

‐29.5 (‐48 to ‐11)

1/4

Hoelscher 2010

Travis County Coordinated Approach To Child Health (CATCH) Trial

Nutrition and physical activity

Educational materials, educational meetings, educational outreach visits, pay for performance, other, the use of information and communication technology, local consensus process

Different implementation strategy

Continuous:

Mean number of lessons/or activities (5 measures)

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

0.8 (‐0.4 to 1.2)

Median (range)

4.4% (3.6% to 5.2%)

4/7

Lytle 2006

Teens Eating for Energy and Nutrition at School (TEENS)

Nutrition

Educational materials, educational meetings, local opinion leaders, local consensus processes

Usual practice or waiting‐list control

Dichotomous:

% of schools offering or selling targeted foods (4 measures)

Median (range)

8.5% (4% to 12%)

2/4

Mathur 2016

Bihar School Teachers Study (BSTS)

Tobacco

Local opinion leader, continuous quality improvement, education materials, education meeting, local consensus process

Usual practice or waiting‐list control

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

56.9% (36.3% to 77.5%)

2/2

McCormick 1995

The North Carolina School Health and Tobacco Education Project (SHTEP)/Skills Management and Resistance Training (SMART)

Tobacco

Educational meetings, local consensus processes, educational materials

Minimal support control

Dichotomous:

% later implementation of curriculum for school district (1 measure)

Continuous:

Mean extent later implementation for school district (% of total curriculum activities taught) (1 measure)

Effect Size (95%CI)

16.7% (‐37.7% to 64.1%)

Mean differencea

0.56%

0/2

Mobley 2012

HEALTHY study

Nutritionc

Educational games, educational meetings, external funding, tailored intervention, educational materials, educational outreach, other, the use of information and communication technology

Usual practice or waiting‐list control

Dichotomous:

% schools meeting various nutrition goals (12 measures)

Median (range)

15.5% (0% to 88%)

Not reported

Nathan 2012

Good for Kids. Good for Life

Nutrition

Educational materials, educational meetings, local consensus processes, local opinion leaders, other, monitoring the performance of the delivery of the healthcare, tailored interventions

Minimal support control

Dichotomous:

% Schools implementing a vegetable and fruit break (1 measure)

Mean difference (95%CI)

16.2% (5.6% to 26.8%)

1/1

Nathan 2016

No trial name

Nutrition

Audit and feedback, continuous quality improvement, education materials, education meeting, local consensus process, local opinion leader, tailored intervention, other

Usual practice

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

35.5% (30.0% to 41.1%)

2/2

Naylor 2006

Action Schools! British Columbia (BC)

Physical activity

Educational materials, educational meetings, educational outreach visits, local consensus process, other, tailored interventions

Usual practice or waiting‐list control

Continuous:

Minutes per week of physical activity implemented in the classroom (1 measure 2 comparisons)

Median (range)

54.9 minutes (46.4 to 63.4)

2/2

Perry 1997

Child and Adolescent Trial for Cardiovascular Health (CATCH)

Nutrition and physical activityd

Educational materials, educational meetings, educational outreach visits, other

Usual practice or waiting‐list control

Continuous:

% of kilocalories from fat in school lunch (1 measure)

Mean milligrams of sodium in lunches (1 measure)

Cholesterol milligrams in lunches (1 measure)

Quality of PE lesson % of 7 activities observed (1 measure)

Effect size

Mean difference (95%CI)

‐4.3% (‐5.8% to ‐2.8%)

Mean difference (95%CI)

‐100.5 (‐167.6 to ‐33.4)

Mean difference (95%CI)

‐8.3 (‐16.7 to 0.1)

Mean difference (95%CI)

14.3% (11.6% to 17.0%)

3/4

Perry 2004

The Cafeteria Power Plus project

Nutrition

Educational meetings, educational outreach visits, educational materials,

local consensus processes, other

Usual practice or waiting‐list control

Continuous:

% of program implementation (2 measures)

Mean number of fruit and vegetables available (2 measures)

Median (range)

14% (‐2% to 30%)

Median (range)

0.64 (0.48 to 0.80)

2/4

Sallis 1997

Sports, Play, and Active Recreation for Kids (SPARK)

Physical activity

Educational materials, educational meetings, educational outreach visits, length of consultation, other

Usual practice or waiting‐list control

Continuous:

Duration (minutes) per week of physical education lessons (1 measure)

Frequency (per week) of physical education lessons (1 measure)

Mean difference (95%CI)

26.6 (15.3 to 37.9)

Mean difference (95%CI)

0.8 (0.3 to 1.3)

2/2

Saraf 2015

No trial name

Nutrition, physical activity and tobacco

Educational games, educational materials, educational meetings, local consensus processes, local opinion leaders, tailored interventions, other

Usual practice

Dichotomous:

% implementing a variety of policies and practices (6 measures)

Median (range)

36.9% (‐5.3% to 79.5%)

5/6

Saunders 2006

Lifestyle Education for Activity Program (LEAP)

Physical activity

Educational materials, educational meetings, educational outreach visits, local consensus processes, local opinion leaders, other

Usual practice or waiting‐list control

Continuous:

School‐level policy and practice related to physical activity from the school administrator's perspective (9 measures)

N/Ab

Not reported

Simons‐Morton 1988

Go for Health

Nutritionc

Educational materials, educational outreach visits, local consensus processes, local opinion leaders, managerial supervision, monitoring of performance, other

Usual practice

Continuous:

Macronutrient content of school meals (2 measures)

N/Ab

Not reported

Story 2000

5‐a‐Day Power Plus

Nutrition

Educational meetings, other

Usual practice

Continuous:

Mean number of fruit and vegetables available (2 measures)

% of guidelines implemented and % of promotions held (4 measures)

Median (range)

1.15 (1 to 1.3)

Median (range)

38.4% (28.5% to 43.8%)

6/6

Sutherland 2017

Supporting Children’s Outcomes using Rewards, Exercise and Skills (SCORES)

Physical activity

Audit and feedback, education materials, education meeting, education outreach visits, local opinion leader, other

Usual practice or waiting‐list control

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Continuous:

Physical education lesson quality score

(1 measure)

% of program implementation (4 measures)

Median (range)

19% (16% to 22%)

Mean difference

21.5a

Median (range)

‐8% (‐18% to 2%)

0/2

1/1

0/4

Whatley Blum 2007

No trial name

Nutrition

Clinical practice guidelines, educational materials, educational meetings, educational outreach visits, external funding, distribution of supplies, local consensus process, other

Usual practice or waiting‐list control

Continuous:

% of food and beverage items meeting guideline nutrient and portion criteria (6 measures)

Median (range)

42.95% (15.7% to 60.6%)

5/6

Wolfenden 2017

No trial name

Nutrition

Audit and feedback, continuous quality improvement, external funding, education materials, education meeting, education outreach visits, local consensus process, local opinion leader, tailored intervention, other

Usual practice

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

66.6% (60.5% to 72.6%)

2/2

Yoong 2016

CAFÉ

Nutrition

Audit and feedback, continuous quality improvement, education materials, tailored intervention

Usual practice

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

21.6% (15.6% to 27.5%)

0/2

Young 2008

Trial of Activity for Adolescent Girls (TAAG)

Physical activity

Education materials, education meetings, educational outreach visits, inter‐professional education, local consensus processes, local opinion leaders

Usual practice

Dichotomous:

% implementing a variety of policies and practices (7 measures)

Continuous:

Average number of physical activity programs taught (1 measure)

Median (range)

9.3% (‐6.8% to 55.5%)

Effect size (95%CI)

5.1 (‐0.4 to 10.6)

1/8

aNo measure of variability.

bDid not report aggregate results by group.

cPhysical activity was also a targeted risk factor; however, this component did not meet our inclusion criteria (e.g. implementation outcomes unavailable, study staff implemented the physical activity component) and was therefore not considered in our review.

dTobacco use was also a targeted risk factor; however, this component did not meet our inclusion criteria (e.g. implementation outcomes unavailable) and was therefore not considered in our review.

Implementation strategies compared with waiting list, usual practice or minimal support controls

Dichotomous measures

Thirteen trials reported dichotomous implementation outcomes (De Villiers 2015; Gingiss 2006; Lytle 2006; Mathur 2016; McCormick 1995; Mobley 2012; Nathan 2012; Nathan 2016; Saraf 2015; Sutherland 2017; Wolfenden 2017; Yoong 2016; Young 2008). In most instances, such trials reported the proportion of schools or school staff (e.g. classes) implementing a targeted policy or practice. Across the trials, the unadjusted median effect size was 19% (range 8.5% to 66.6%) (Table 3).
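To make the summary statistic concrete: for each measure within a trial, the unadjusted effect is the simple between‐group difference in the proportion implementing, and the trial‐level figure is the median of those differences. A minimal sketch in Python — the two values are the Wolfenden 2017 outcome differences reported in this review; the code itself is illustrative and not the review's analysis:

```python
from statistics import median

# Between-group differences (percentage points) for the two
# implementation outcomes of Wolfenden 2017, as reported in this review.
wolfenden_diffs = [60.5, 72.6]

# Trial-level summary: the unadjusted median across the trial's measures.
# With two measures the median is their midpoint, 66.55 (reported as 66.6%).
trial_effect = median(wolfenden_diffs)

# The review-level summary is then the median of such trial-level effects
# across the thirteen trials (reported as 19%, range 8.5% to 66.6%).
```

The same median-of-differences summary underlies the "median (range)" entries throughout Table 3.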

The largest effect was reported in the trial by Wolfenden and colleagues, an RCT of 70 schools throughout the Hunter Region of New South Wales, Australia. In this trial, support was provided to implement a healthy school canteen policy restricting the availability of unhealthy foods sold in school canteens. The implementation support included a local opinion leader (principals), audit and feedback (menu reviews with feedback reports), an education meeting (one five‐hour training), external funding (reimbursements), education materials (printed instructions), education outreach visits (one and three months post‐canteen training), a local consensus process (with canteen staff), continuous quality improvement processes, a tailored intervention (individualised goal setting and action planning with managers) and other strategies (marketing in schools, provision of equipment and congratulatory letters for achieving change). Relative to control, the unadjusted median improvement in implementation across two implementation outcomes was 66.6% (range 60.5% to 72.6%), as assessed by copies of canteen menus (Wolfenden 2017).

The smallest effects were reported by Lytle and colleagues, who conducted an RCT in which local opinion leaders and consensus processes (school nutritional advisory councils to discuss improvements to the school food environment), educational meetings (food service staff training), and educational materials (information and tools) improved the proportion of schools selling targeted foods by an unadjusted median of 8.5% (range 4% to 12%), as assessed via school production records and periodic observation (Lytle 2006). Similarly, in the cluster‐RCT by Young and colleagues, the unadjusted median improvement, relative to control, in the proportion of schools implementing a variety of practices promoting physical activity in the classroom was 9.3% (range ‐6.8% to 55.5%), following implementation support strategies including education meetings (teacher training workshops), educational materials (classroom instructional materials), educational outreach visits (on‐site support for PE teachers), inter‐professional education (collaborations between schools, community agencies and university staff), local opinion leaders (recruitment of program champions), and local consensus processes (development of local implementation goals) (Young 2008).

Continuous data

Implementation Score

Three trials reported the effects of an implementation strategy using a score (Alaimo 2015; Saunders 2006; Sutherland 2017). The cluster‐RCT by Sutherland and colleagues found a significant improvement in the implementation of quality physical education lessons assessed via classroom observations (MD 21.5, P < 0.01) (Sutherland 2017). Implementation support included audit and feedback (reports on lesson quality), education materials (lesson booklets, posters, whistles, lanyards and fundamental motor skill cards), an education meeting (a 90‐minute professional learning workshop), education outreach visits (by staff with a physical education background to classroom teachers), a local opinion leader (school champion) and other strategies (equipment and ongoing support) (Sutherland 2017).

The other two trials reported little improvement in implementation score outcomes. The non‐randomised SNAK trial of 65 low‐income schools reported no difference in change scores on nutrition policy (mean difference (MD) 0.2, 95% confidence interval (CI) ‐0.7 to 1.1) or nutrition education and/or practice (MD 1.1, 95%CI ‐0.8 to 3.0), as assessed by the School Environment and Policy Survey (Alaimo 2015). Schools in the experimental group were supported to implement self‐selected nutrition policies and practices via educational outreach (visits from a trained facilitator), tailored intervention (assessment and development of an action plan), and payment for performance (receipt of funding) strategies. Some randomly selected experimental schools were also asked to convene local school health teams (local consensus processes) or to implement a nutrition policy specifically for à la carte items in their cafeteria. Control schools received support following the study.

In the LEAP RCT, implementation was assessed using an organisational assessment instrument designed to assess school‐level policy and practice related to physical activity from the school administrators' perspective. Implementation support provided as part of the trial included educational outreach visits (visits from program staff), local opinion leaders (use of a LEAP champion in the school to work with program staff), educational meetings (training before and during the school year), educational materials (including books and video tapes), a local consensus process (involving LEAP staff, the LEAP champion and LEAP school teams), as well as other equipment such as hand weights and pedometers. Scores for the nine essential intervention elements assessed by the instrument were provided for the control group; however, these were presented separately for 'high'‐ and 'low'‐implementing schools in the intervention group. No aggregated between‐group comparisons were reported. Nonetheless, scores for control schools fell within the scores for high‐ and low‐implementing schools of the intervention group for six of the nine measures (Saunders 2006).

Percentage of programs implemented

Seven trials reported the percentage of an intervention program or program content that had been implemented, the effects of which were mixed (French 2004; McCormick 1995; Perry 1997; Perry 2004; Story 2000; Sutherland 2017; Whatley Blum 2007). The unadjusted median effect, relative to control, in the proportion of program or program content implemented was 14.3% (range ‐8% to 43%). In their non‐randomised study, Whatley Blum and colleagues allocated four public high schools to receive support to implement a low‐fat, low‐sugar guideline in à la carte and vending programs. Implementation support included educational outreach (visits to the schools' food and beverage supplier), educational materials (suppliers were given healthy product lists), procurement and distribution of supplies (food service directors were given lists of vendors that met guidelines), practice guidelines (recipe preparation techniques), educational meetings (presentations to school staff), external funding ($1500 allocated to school liaison personnel), a local consensus process (establishment of a committee) and other EPOC strategies (early consultation with school staff to obtain co‐operation). Compared with four schools allocated to control, the unadjusted median proportion of food or beverage items meeting the nutrient and portion criteria of the guideline across à la carte, food vending, and beverage vending programs was 42.95% (range 15.7% to 60.6%), assessed via observation (Whatley Blum 2007). A large effect was also reported in the Trying Alternative Cafeteria Options in Schools (TACOS) cluster‐RCT of 20 schools conducted by French and colleagues. 
Implementation support to improve the foods available at school included local consensus processes (quarterly meetings between research and food service staff), tailored intervention (tailored lists of higher‐ and lower‐fat foods for schools), educational meetings (training for the students to facilitate implementation) and pay for performance (student groups were offered financial incentives of between $100 and $300 for completing each promotion). The unadjusted median effect, relative to a waiting‐list control group, across implementation measures in the trial was 33% (range 11% to 41%) (French 2004). Two trials reported no improvements in measures of the percentage of programs implemented (McCormick 1995; Sutherland 2017).

Measures of the frequency of implementation

Four trials compared the number or frequency with which programs or targeted practices were implemented (Perry 2004; Sallis 1997; Story 2000; Young 2008). Evaluation of the non‐randomised SPARK trial reported that intervention schools taught, on average, 0.8 more physical education lessons per week than control schools (Sallis 1997). Implementation support in the trial included educational outreach visits (by a physical education specialist), educational meetings (training), increasing the length of consultation (to include more classroom instruction and practice time), educational materials (yearly plan), as well as another non‐classifiable strategy (equipment was provided). In the cluster‐RCT titled the Trial of Activity for Adolescent Girls (TAAG), intervention schools taught five additional physical activity programs relative to control, although the difference was not statistically significant, following education meetings (workshops for teachers), educational materials (instructional and social marketing materials), educational outreach visits (regular on‐site support to conduct lessons), inter‐professional education (collaborations created between schools and community agencies), local opinion leaders (program champions recruited and trained to direct the intervention) and local consensus processes (Young 2008). Two trials sought to increase the availability of fruits and vegetables in school cafeterias. The unadjusted median improvement from two measures of fruit and vegetable availability (both significant) among intervention schools in the 5‐a‐Day Power Plus cluster‐RCT was 1.15 items, following educational meetings (a staff training workshop which staff were paid to attend) and other strategies (provision of free fruit and vegetables) (Story 2000). 
In the Cafeteria Power Plus cluster‐RCT, the unadjusted median improvement across two measures of fruit and vegetable availability (one of which was significant), relative to control schools, was 0.64 items, following educational meetings (monthly meetings with cooks), outreach visits (weekly visits from research staff), educational materials (flyers and posters), local consensus processes (monthly meetings with cook managers) and other strategies (special events) (Perry 2004).

Time‐based measures of implementation

Two trials reported the impact of implementation strategies on the time per week teachers implemented physical activity or physical education lessons, with improvements relative to control ranging from 26.6 to 54.9 minutes per week (Naylor 2006; Sallis 1997). In their non‐randomised trial, Sallis and colleagues reported a significant increase in the duration of physical education lessons, assessed via observation, among schools receiving educational outreach visits and education meetings as part of the SPARK trial (Sallis 1997). Similarly, in their cluster‐RCT of the Action Schools! BC intervention, Naylor and colleagues reported an improvement (averaged across two experimental conditions) of 54.9 minutes of physical activity implemented in the classroom per week, as assessed by teacher survey (Naylor 2006). Implementation support included tailored intervention (individualised action plans), educational meetings (teacher training), educational materials (planning guide and resources), a local consensus process (a committee of school stakeholders to support implementation), educational outreach visits (for teachers), as well as other resources.

Macronutrient content of food served

Four trials reported changes in the macronutrient content of food available at school (Cunningham‐Sabo 2003; Heath 2002; Perry 1997; Simons‐Morton 1988). A non‐randomised trial by Simons‐Morton and colleagues sought to improve implementation of specific practices regarding school lunch, physical education and classroom health education in the Go for Health project. However, only implementation of the school lunch initiatives (changes to the sodium and fat content of school lunches) was reported post‐intervention. Furthermore, the trial did not report data aggregated by group, instead reporting changes in the macronutrient content of school menus for each of the two experimental and two control schools separately. For the nutrition component, implementation support was primarily designed to facilitate implementation of low‐fat and low‐sodium school lunches. Analysis of lunch menus found that, pre‐ to post‐intervention, the sodium content of school meals fell by 1148.1 mg and 695.5 mg respectively in the two intervention schools, and remained stable in both control schools over the same period. Further, the intervention schools reduced the total fat content of school meals by 16.8 g and 11.6 g, compared with 8.9 g and 6.1 g among the control schools.

In the Pathways RCT, Cunningham‐Sabo and colleagues examined the impact of providing practice guidelines (for food service), educational outreach visits (on‐site training and professional development for school food service staff twice per year), educational materials (posters, videos, guides etc.), and educational meetings (a food service working group met monthly to establish and carry out the intervention). Relative to control, the intervention significantly reduced the percentage of calories from fat in meals served for school breakfast (‐3.3%, P = 0.03) but not lunch (‐2.7%, P = 0.10) (Cunningham‐Sabo 2003). Two trials examining the CATCH program assessed the macronutrient content of foods served to children at school (Heath 2002; Perry 1997). In the CATCH cluster‐RCT, schools received educational meetings (staff training), educational outreach visits (support visits to school staff to implement Eat Smart), and educational materials (Smart Choices manual) to improve the nutritional quality of school meals (Perry 1997). The intervention reduced the percentage of kilocalories from fat in school meals by 4.3%, sodium by 100 mg, and cholesterol in school lunches by 8.3 mg. The El Paso CATCH non‐randomised trial reported unadjusted median reductions, across measures of the macronutrient content of school breakfasts and lunches, of 1.7% (range ‐1% to 4.4%) in the percentage of fat in school meals and 29.5 mg (range 11 to 48) of sodium (Heath 2002).
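The 'percentage of calories from fat' outcomes above all rest on the same conversion: fat contributes roughly 9 kcal per gram (the Atwater factor), so fat energy is expressed as a share of total energy. A minimal illustrative sketch — the function name and example values are ours, not drawn from the trials:

```python
# Atwater factor: fat provides roughly 9 kcal per gram.
FAT_KCAL_PER_G = 9.0

def pct_kcal_from_fat(fat_g: float, total_kcal: float) -> float:
    """Percentage of a meal's energy contributed by fat."""
    return 100.0 * fat_g * FAT_KCAL_PER_G / total_kcal

# A hypothetical 600 kcal school lunch containing 20 g of fat:
# 20 g x 9 kcal/g = 180 kcal from fat, i.e. 30% of total energy.
print(pct_kcal_from_fat(20, 600))  # 30.0
```

A reduction such as the 4.3% reported in the CATCH trial can therefore reflect less fat served, more energy from other sources, or both.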

Comparisons of different implementation strategies

Two trials compared different implementation strategies (Delk 2014; Hoelscher 2010). The Travis County CATCH trial compared the effects of two different implementation strategies to support the implementation of the CATCH program, aimed at preventing child obesity, in a non‐randomised design (Hoelscher 2010). The first implementation strategy included educational meetings (training and booster sessions for team members from each school), educational materials (CATCH coordination kit providing “how‐to” implementation instructions), a local consensus process (community meetings), pay for performance ($2,000 to $5,000 for exemplary CATCH implementation), the use of information and communication technologies (social marketing strategies), educational outreach visits (facilitator visits), and other strategies (family fun events). The second strategy included the same implementation strategies; however, the level of support was more intense and often included elements to engage the community in supporting implementation. For example, there were more frequent educational outreach visits, educational meetings targeted community members, community members were engaged in consensus processes, and there were additional implementation resources such as guides and the inclusion of the Centers for Disease Control and Prevention School Health Index as a planning tool. There were small improvements on continuous measures of implementation favouring the more intensive implementation support. Specifically, four of the five continuous implementation measures, which reported the mean number of activities or practices implemented over the study period, significantly increased. The unadjusted median effect size across such measures was 0.8 activities over a 12‐month period (range ‐0.4 to 1.2). Of the two dichotomous measures reporting the proportion of schools or staff implementing a policy or practice, neither differed significantly between groups at follow‐up. 
The unadjusted median effect size across these two measures was 4.4% (range 3.6% to 5.2%).

The Central Texas CATCH trial compared the effects of three combinations of implementation strategies to promote the implementation of activity breaks by classroom teachers in a cluster‐RCT (Delk 2014). The basic arm included a local consensus process (a team developed at each school), clinical practice guidelines (activity break guidelines for teachers), and educational meetings (teacher training on the guidelines), while the basic plus arm consisted of all of the basic activities plus educational outreach visits (monthly facilitator visits) and tailored interventions (individualised strategies to promote activity breaks on school campuses). The third arm (basic plus‐SM) consisted of all of the aforementioned strategies plus an unclassifiable EPOC strategy (social marketing campaigns). Significant differences in the percentage of teachers reporting implementing weekly activity breaks throughout the school year were found across the basic, basic plus and basic plus‐SM arms (23.3%, 34.4% and 38.7%, respectively). Similarly, significant changes occurred in the percentage of teachers conducting at least one activity break per year, as well as in those conducting an activity break in the week prior to data collection. For these, significant changes occurred in four of four implementation outcomes (two measures, four comparisons), wherein the unadjusted median increase in effect size was 26.5% (range 19.4% to 31.9%).

Subgroup analyses of strategies to improve implementation 'at scale'

Four trials evaluated strategies that sought to achieve implementation 'at scale', that is, across samples of at least 50 schools (Alaimo 2015; Gingiss 2006; Nathan 2012; Perry 1997). Three trials reported significant improvements in the majority of the reported implementation outcomes (Gingiss 2006; Nathan 2012; Perry 1997), while one reported no improvements across any implementation outcome (Alaimo 2015). Among the two trials reporting dichotomous measures, the unadjusted median improvement in the proportion of schools implementing a policy or practice ranged from 16.2% in the study by Nathan and colleagues (Nathan 2012) to 18.5% in the trial by Gingiss and colleagues (Gingiss 2006).

The effectiveness of implementation strategies on health behaviour and anthropometric outcomes

Twenty‐one trials reported the effects of interventions on child health behaviour or anthropometric outcomes (Alaimo 2015; Cunningham‐Sabo 2003; De Villiers 2015; French 2004; Heath 2002; Hoelscher 2010; Lytle 2006; Mathur 2016; Mobley 2012; Naylor 2006; Perry 1997; Perry 2004; Sallis 1997; Saraf 2015; Saunders 2006; Simons‐Morton 1988; Story 2000; Sutherland 2017; Whatley Blum 2007; Wolfenden 2017; Young 2008). Seventeen studies were randomised trials (Cunningham‐Sabo 2003; De Villiers 2015; French 2004; Lytle 2006; Mathur 2016; Mobley 2012; Nathan 2016; Naylor 2006; Perry 1997; Perry 2004; Saraf 2015; Saunders 2006; Story 2000; Sutherland 2017; Wolfenden 2017; Yoong 2016; Young 2008), and six were non‐randomised trials (Alaimo 2015; Heath 2002; Hoelscher 2010; Sallis 1997; Simons‐Morton 1988; Whatley Blum 2007). Three studies targeted multiple health behaviours (Hoelscher 2010; Perry 1997; Saraf 2015), five physical activity only (Naylor 2006; Sallis 1997; Saunders 2006; Sutherland 2017; Young 2008), 12 nutrition only (Alaimo 2015; Cunningham‐Sabo 2003; De Villiers 2015; French 2004; Heath 2002; Lytle 2006; Mobley 2012; Perry 2004; Simons‐Morton 1988; Story 2000; Whatley Blum 2007; Wolfenden 2017), and one smoking only (Mathur 2016). Overall, eight studies assessed student physical activity or sedentary behaviour (Hoelscher 2010; Naylor 2006; Perry 1997; Sallis 1997; Saraf 2015; Saunders 2006; Sutherland 2017; Young 2008), 14 assessed student dietary intake (Alaimo 2015; Cunningham‐Sabo 2003; De Villiers 2015; French 2004; Hoelscher 2010; Lytle 2006; Mobley 2012; Perry 1997; Perry 2004; Saraf 2015; Simons‐Morton 1988; Story 2000; Whatley Blum 2007; Wolfenden 2017), nine assessed weight status, BMI or skin‐folds (Cunningham‐Sabo 2003; Heath 2002; Hoelscher 2010; Mobley 2012; Naylor 2006; Perry 1997; Sallis 1997; Saunders 2006; Young 2008), and two assessed tobacco smoking (Saraf 2015; Mathur 2016). 
Due to varying study designs, interventions and outcome measurements, pooling of results was not performed.

Physical activity and sedentary behaviour

Three of eight trials reported no improvements in student physical activity following strategies to enhance implementation of physical activity‐promoting policies and practices in schools (Hoelscher 2010; Saraf 2015; Young 2008). The other trials reported improvements in student physical activity on at least some measures (Naylor 2006; Perry 1997; Sallis 1997; Saunders 2006; Sutherland 2017). For example, in the non‐randomised trial of the SPARK program reported by McKenzie and colleagues and Sallis and colleagues, students in classrooms of teachers trained to implement the curricula‐based intervention had more minutes per week of observed moderate to vigorous physical activity (MVPA) than those in the usual physical education program condition (32.7 minutes versus 17.8 minutes, P < 0.001). There was also a significant difference in the time to complete a mile run for boys, but not girls, in the teacher‐led condition compared with control. In this trial, however, there were no significant differences in weekday or weekend physical activity as assessed via accelerometer. In the Action Schools! BC three‐arm randomised trial, relative to usual practice control, improvements in student step counts assessed via pedometer were reported for boys (MD 1175, 95%CI 97 to 2253, P = 0.03) but not girls (MD 730, 95%CI ‐648 to 2108, P = 0.30) attending schools where external liaison support was provided to facilitate implementation of physical activity policies and practices. No significant differences in step counts, relative to control, were reported for boys (MD 804, 95%CI ‐341 to 1949, P = 0.17) or girls (MD 540, 95%CI ‐874 to 1954, P = 0.45) attending schools where implementation support was provided by school staff ('champions'). When the intervention arms were combined, children in intervention schools demonstrated a significantly greater increase in fitness (20‐m shuttle run) and average physical activity score (Naylor 2006). 
Finally, in the CATCH trial of Perry and colleagues, significant improvements were reported across measures of student‐reported vigorous physical activity and observed MVPA during lessons (% of time), but not general physical activity, nine‐minute distance run, or self‐reported total minutes of daily physical activity (Perry 1997).

Three trials included measures of student sedentary behaviour outcomes (Hoelscher 2010; Saraf 2015; Young 2008). The cluster‐RCT by Saraf and colleagues, which examined strategies to implement an intervention consisting of school‐based policies and classroom activities as well as a family component, found that students in intervention schools spent almost 16 minutes less time watching television per day (P < 0.01) (Saraf 2015). Similarly, the Travis County CATCH trial compared different implementation support strategies to improve aspects of the school classroom, food service, PE activities, and the family and home environment (Hoelscher 2010). The trial found that the proportion of students spending more than two hours using a computer was 5.6% lower among those attending schools receiving support to implement the CATCH BPC program relative to the CATCH BP program (P = 0.003). In the TAAG trial, girls in control schools spent 8.2 more minutes per day in sedentary activities than girls in intervention schools (P = 0.050) (Young 2008). There was no difference between groups on measures of TV or video game use.

Overweight, obesity and adiposity

Of the trials reporting weight‐related outcomes (Cunningham‐Sabo 2003; Heath 2002; Mobley 2012; Naylor 2006; Perry 1997; Sallis 1997; Saunders 2006; Young 2008), only Mobley 2012 reported a positive impact of the intervention on BMI between groups. Similarly, no significant changes occurred in skin‐folds (Cunningham‐Sabo 2003; Sallis 1997; Perry 1997; Young 2008), or in percentage body fat or weight in the Pathways or TAAG trials (Cunningham‐Sabo 2003; Young 2008). In the Travis County CATCH Project, a comparative effectiveness trial, students of schools receiving support to implement the CATCH BPC program had 7% greater reductions in the proportion of overweight students (P = 0.051) and a 1.7% reduction in the proportion of students who were obese (P = 0.33) compared with those implementing CATCH BP (Hoelscher 2010). In the HEALTHY trial, there were no significant differences between groups in the odds of overweight or obesity, or in waist circumference; however, the percentage of students with a waist circumference at or above the 90th percentile was lower relative to the control group at follow‐up (Mobley 2012). In the El Paso CATCH trial, there were no differences in waist‐to‐hip ratio or weight between groups at follow‐up; however, the rate of increase for girls (2% versus 13%) and boys (1% versus 9%) in the CATCH schools was significantly lower compared with students in the control schools (Heath 2002). Sallis and colleagues assessed calf and triceps skin‐folds and found no significant difference following support provided to teachers to implement the SPARK program versus control. While impacts on BMI were not reported post‐intervention, interim analyses of the impact of the program suggest that the intervention had no impact on child BMI (Sallis 1997).

Diet

Three trials reported no improvements in measures of student dietary intake following implementation of a diet‐related policy, practice or program (De Villiers 2015; Lytle 2006; Whatley Blum 2007). The remaining trials reported improvements on at least one measure of dietary intake (Alaimo 2015; Cunningham‐Sabo 2003; French 2004; Hoelscher 2010; Mobley 2012; Perry 1997; Perry 2004; Saraf 2015; Simons‐Morton 1988; Story 2000; Wolfenden 2017). For example, a cluster‐RCT evaluating the impact of strategies to implement school policies, classroom activities, and a family component targeting multiple health behaviours found a higher proportion of students consuming fruits and vegetables three to four times a week (fruits +10%, P < 0.01; vegetables +7.2%, P = 0.01) among children of intervention schools relative to control (Saraf 2015). Significant reductions were also reported in the intake of deep‐fried foods, but not salty snacks (Saraf 2015). Similarly, strategies to improve implementation of practices in school food services as part of the CATCH trial significantly reduced students' total self‐reported energy intake and the proportion of intake from fat, saturated fat, polyunsaturated fat and monounsaturated fat, but not carbohydrate, protein, cholesterol, fibre or sodium intake (Perry 1997). In an exploratory analysis, there were significant improvements in the intake of 6 of the 17 vitamins and minerals measured in the study at follow‐up in the intervention schools. 
Moreover, the HEALTHY trial of Mobley and colleagues, which aimed to improve the nutritional quality of foods and beverages served to students via the National School Lunch Program, the School Breakfast Program and à la carte food services, found significant changes only in students' daily fruit consumption, but not in energy, macronutrients, fibre, grains, vegetables, legumes, sweets, sweetened beverages, fruit juice, or higher‐ or lower‐fat milk (Mobley 2012).

In the Cafeteria Power Plus project, there was an increase in total fruit serves consumed; however, there was no increase in total fruit and vegetable serves consumed (Perry 2004). Support to implement the 5‐a‐Day Power Plus program also yielded significant improvements in school lunch intake for a number of measures of fruit, and of fruit and vegetables, as well as vitamin C, calcium and percentage of total fat/kcal, via direct observation or 24‐hour recall assessment methods, but had no impact on the other macronutrients assessed on either measure (Story 2000). The objective of the Go for Health program was to reduce the sodium and fat content of school meals (Simons‐Morton 1988). Post‐intervention data collected via 24‐hour dietary recalls found that, relative to control, students in intervention schools reported reductions in intake of total fat (MD ‐11.4, 95%CI ‐23.9 to 1.09, P = 0.07), saturated fat (MD ‐5.4, 95%CI ‐10.4 to ‐0.4, P = 0.03), sodium (MD ‐505, 95%CI ‐962 to ‐48, P = 0.03) and total energy (MD ‐40.8, 95%CI ‐271 to 190, P = 0.73). In the Pathways study, the percentage of energy from fat was significantly lower among students in intervention schools relative to controls. Total energy intake was also significantly lower among students in intervention schools when assessed via 24‐hour dietary recall, but not when assessed via direct observation (Cunningham‐Sabo 2003). In the TACOS trial of an intervention to improve food availability, intervention school cafeterias showed a higher percentage of sales of lower‐fat foods in year one (27.5% versus 19.6%, P = 0.096) and a higher mean percentage of sales of lower‐fat foods in year two (33.6% versus 22.1%, P = 0.042) compared with controls, but no differences were found in self‐reported student food choices (French 2004). 
In the SNAK trial, students in schools randomised to complete an online self‐assessment and action planning template (HSAT) to implement a variety of nutrition practices (marketing of healthy foods, posters for healthy foods in the cafeteria and taste tests) reported consuming significantly more fruit and fibre, and less cholesterol, than students in the control schools (data not shown). Intake of other assessed macronutrients by experimental condition, however, was not reported (Alaimo 2015). Finally, in their comparative study, students in CATCH BPC schools had a significantly lower score on an unhealthy food index (a measure of unhealthy food intake) than those attending CATCH BP schools. There was, however, no difference between groups in healthy food index score, or in fruit, vegetable, milk or sweetened beverage consumption (Hoelscher 2010).

Tobacco

Two trials reported outcomes indicating changes in student tobacco smoking. In their cluster‐RCT, Saraf and colleagues found a reduction in 30‐day smoking prevalence (‐7.7%, 95%CI ‐10.7 to ‐4.7, P < 0.001) in the intervention group. In the Bihar School Teachers Study, wrappers from chewing tobacco, cigarette ashes, butts and discarded packages, as well as spit marks and staining from chewing tobacco, were counted throughout classrooms, corridors, toilets, dustbins and playgrounds within the schools (Mathur 2016). The authors reported a significant decrease on these tobacco use measures in the majority of locations (Mathur 2016). No other trial reported the effects of an implementation strategy on tobacco smoking.

The impact of implementation strategies on the knowledge, skills or attitudes of school staff

Three trials assessed the impact of implementation strategies on the attitudes of school staff and found mixed effects. A survey of staff participating in the Texas Tobacco Prevention Initiative found that, following the intervention, staff from intervention schools were more interested in professional development in tobacco prevention than those in control schools (87% versus 65%, P < 0.05) (Gingiss 2006). Conversely, in the study by McCormick and colleagues, interest in doing something about tobacco prevention did not differ significantly between staff in intervention and control schools following adoption and teacher training. In the three‐arm randomised trial of the Central Texas CATCH Middle School project (serving children 11 to 14 years), teachers in schools receiving more intensive implementation support (local consensus processes, clinical practice guidelines and educational meetings with educational outreach visits and a tailored intervention, with or without additional social marketing strategies) reported significantly higher confidence in implementing classroom physical activity breaks than teachers in schools receiving local consensus processes, clinical practice guidelines and educational meetings alone (Delk 2014). No other trial reported the effects of an implementation strategy on school staff knowledge, skills or attitudes regarding the implementation of policies or practices to reduce the targeted chronic disease risks.

Unintended consequences and adverse effects of strategies

Two trials included a measure that was specified in the study methods as an assessment of potential unintended adverse effects. One included aggregate measures of academic test performance, attendance and referral for disciplinary action, and found no significant difference between groups on these measures (Mobley 2012). The other reported changes in canteen profitability as a potential adverse outcome of canteen menu modification and found no significant differences between intervention and control schools (Wolfenden 2017). Four trials did not specify outcomes as measures of adverse effects in their study methods but nonetheless interpreted study findings as suggesting that implementation of the policies and practices did not cause unintended harms (Cunningham‐Sabo 2003; French 2004; Naylor 2006; Perry 1997). For example, French and colleagues reported that strategies to improve school food service did not adversely affect school revenue, Perry reported that implementation of a program to lower the fat and saturated fat content of school meals had no impact on the nutritional quality of the meals, and two trials reported no changes in height (Cunningham‐Sabo 2003; Perry 1997), weight (Cunningham‐Sabo 2003) or statural growth (Perry 1997).

Cost or cost‐effectiveness of strategies

In the HEALTHY study, cost data revealed no significant differences between groups in school revenue or expenses following the provision of implementation support (Mobley 2012). Only one trial conducted a formal economic evaluation. Brown and colleagues examined the cost‐effectiveness of the CATCH program from a societal perspective, using estimates from the CATCH El Paso trial. The study reported CATCH to be cost‐effective, with a cost‐effectiveness ratio of US$900 and a net benefit of US$68,125. No other trial reported the cost or cost‐effectiveness of an implementation strategy; however, the TACOS trial reported the revenue generated by sales in the cafeterias targeted by the intervention.

Discussion

Summary of main results

The primary objective of the review was to examine the effectiveness of strategies aiming to improve the implementation of school‐based policies, programs or practices that promote healthy, or reduce unhealthy, behaviours relating to child diet, physical activity, obesity, tobacco or alcohol. The review identified 27 unique trials. There was considerable heterogeneity in the implementation strategies examined, the policies and practices targeted for implementation, and the implementation outcomes assessed. No trials of strategies to implement policies or practices addressing alcohol use in schools were identified. Overall, findings on the impact of strategies on policy and practice implementation were equivocal, and the overall quality of evidence (GRADE) was considered very low. For the 13 trials reporting dichotomous implementation outcomes (the proportion of schools or school staff implementing a targeted policy or practice compared with waiting‐list, usual practice or minimal support controls), the unadjusted median absolute improvement in implementation across trials was 19% (range 8.5% to 66.6%). Among the seven trials reporting the percentage of intervention practices or content that had been implemented, the unadjusted median effect across trials, relative to control, was 14.3% (range ‐8% to 43%). The impact of interventions on student health behaviour and weight status was mixed. Three of the eight trials with physical activity outcomes reported no significant improvements; both trials reporting tobacco use found significant reductions in such measures in intervention schools; and seven of nine trials reported no between‐group differences on measures of student overweight, obesity or adiposity at follow‐up. Improvements were generally reported on measures of child dietary intake among the trials reporting these outcomes, and only two trials reported on, but did not find, adverse consequences.
Three trials assessed the impact of implementation strategies on the attitudes of school staff and found mixed effects. Only one trial conducted a formal cost‐effectiveness assessment.

Consistent with a previous Cochrane review examining implementation strategies in childcare services (Wolfenden 2016), the review team encountered a number of methodological issues that complicated synthesis and interpretation of the findings of the review. Among the most significant was the considerable heterogeneity of the implementation strategies examined. While a number of strategies, most notably educational materials, educational outreach and educational meetings, were commonly used, no two trials examined the same combination of implementation strategies. Implementation strategies were also often poorly described in included studies. Classification of strategies using the EPOC taxonomy was further complicated because the Taxonomy was developed to describe strategies for improving the implementation or professional practice of health services or practitioners, which were often not relevant to the school setting, while other strategies employed by trials did not match any Taxonomy descriptors (EPOC 2015). Variability in implementation measures, study designs and population characteristics was also evident; it not only precluded pooled quantitative analysis but also posed a challenge to narrative synthesis.

Overall completeness and applicability of evidence

The identified trials demonstrate an immature evidence base, as many of the included studies were not primarily designed to address the research questions posed in this review. Research examining implementation strategies in the school setting is dominated by studies conducted in the USA (18 of 27 trials). The applicability of the review to other countries, particularly low‐ and middle‐income countries, is therefore limited. Given the importance of contextual factors in implementation outcomes (Durlak 2008), more research in jurisdictions with schooling systems different from that of the USA is warranted. Furthermore, while a range of implementation strategies was examined in the included studies, there was a lack of studies testing individual strategies, or the same strategies in combination. The impact of individual strategies or multi‐strategic approaches will not be reliably discernible until more such trials accrue.

Quality of the evidence

The overall quality of evidence was judged to be very low across all implementation outcomes. The review included a combination of randomised trials and non‐randomised designs. The collective quality of evidence was downgraded owing to considerations of design, precision and heterogeneity. All 27 trials were considered to be at high risk of performance bias, and all non‐randomised designs were judged to be at high risk of selection bias arising from both random sequence generation and allocation concealment. Most trials recruited relatively small numbers of schools or school staff, limiting the precision of estimated effects. Furthermore, heterogeneity in trial designs, implementation strategies, populations and measures complicated comparisons across trials.

Potential biases in the review process

A number of strategies were employed in the conduct of the review to reduce the risk of bias. A comprehensive search was undertaken, including screening of over 18,000 citations, searches of trial registers and handsearching of journals. We also used published search filters to maximise the likely capture of relevant studies. Nonetheless, implementation science is a developing field and its terminology is still evolving, which may have increased the likelihood that relevant studies were not captured by the search strategy (Mazza 2013). The search did capture all relevant trials included in an earlier systematic review of implementation strategies conducted by the Agency for Healthcare Research and Quality (Rabin 2010), and only one additional trial was identified following contact with study authors and experts in the field, suggesting that omission of large numbers of relevant trials is unlikely. Nonetheless, as terminology in the field develops, search terms may need to be expanded in future review updates. The review also could not pool effects of interventions, and instead used a simple description of the unadjusted median and range of effects reported within and across studies. While useful, unadjusted medians treat all trials the same, regardless of factors such as trial size, and so should be viewed as descriptive. Formal meta‐analytic techniques that apply appropriate trial weights are required to provide robust quantitative estimates of between‐group effects.
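The unadjusted summary described above can be illustrated with a minimal sketch (the trial figures below are hypothetical, not values from the review): each trial contributes one absolute difference in implementation between intervention and control arms, and the median and range are taken across trials without weighting by trial size.

```python
from statistics import median

# Hypothetical per-trial results: (schools randomised, % implementing in the
# intervention arm, % implementing in the control arm). Illustrative only.
trials = [
    (10, 45.0, 30.0),
    (120, 62.0, 53.5),
    (24, 80.0, 13.4),
]

# Each trial contributes one unadjusted absolute difference (percentage points).
effects = [intervention - control for _, intervention, control in trials]

# The summary ignores trial size: the 10-school trial counts as much as the
# 120-school trial, which is why such medians are descriptive only.
print(median(effects), min(effects), max(effects))
```

A weighted meta‐analysis would instead pool these differences with weights reflecting each trial's precision; the unweighted median here is simply the middle of the three per‐trial effects.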

Agreements and disagreements with other studies or reviews

The findings of this review concur with the limited number of previous systematic reviews of controlled trials assessing the effectiveness of implementation strategies in schools and other community settings. They are consistent with an Agency for Healthcare Research and Quality systematic review, which included uncontrolled pre‐post trials examining the impact of dissemination or implementation strategies targeting policies or programs to address cancer risk behaviours (including diet and physical activity) across community settings, including 11 school studies (Rabin 2010). That review reported considerable heterogeneity among included studies, poor implementation measurement and methodological quality, and equivocal effects. Methodological issues, in particular those pertaining to definitions of implementation constructs and their measurement, have also been reported in a review of school‐based studies examining associations between implementation and individual outcomes in physical activity trials (Naylor 2015). The findings are also consistent with a previous Cochrane review of implementation strategies in childcare services, which identified 10 trials of strategies targeting the implementation of healthy eating or physical activity policies and practices and provided overall very low quality evidence regarding effectiveness (Wolfenden 2016). Among the four trials in that review reporting the proportion of childcare services or staff implementing a policy or practice, effect sizes ranged from 0% to 9.5%, lower than the unadjusted median effect for trials identified in this review (19%). The unadjusted median effect size is also within the range reported for other interventions used to change the professional practice of clinicians.
For example, in clinical settings the median improvement in professional practice following educational outreach visits is 23% (interquartile range (IQR) 12% to 39%) relative to control, while educational meetings and workshops have achieved median improvements of 10% (IQR 8% to 32%) (Lau 2015). Finally, similar to consolidated reviews in clinical settings, this review found little evidence of assessment or reporting of cost, cost‐effectiveness or adverse effects in implementation studies (Lau 2015).

Figure 1. Study flow diagram.

Figure 2. Risk of bias summary: review authors' judgements about each risk of bias item for each included study.

Figure 3. Risk of bias graph: review authors' judgements about each risk of bias item presented as percentages across all included studies.

Strategies for enhancing the implementation of school‐based policies or practices targeting risk factors for chronic disease

Patient or population: School aged children (5 ‐ <18 years)

Settings: School

Intervention: Any strategy (e.g. educational materials, educational meetings, audit and feedback, opinion leaders, education outreach visits) with the intention of improving the implementation of health promoting policies, programs or practices for physical activity, healthy eating, obesity prevention, tobacco use prevention or alcohol use prevention in schools

Comparison: No intervention or usual practice (22 trials), alternate intervention (2 trials) or minimal support comparison group (3 trials)

Outcome: Implementation of school‐based policies, practices or programs that aim to promote healthy or reduce unhealthy behaviours relating to child diet, physical activity, obesity, or tobacco or alcohol use
Impact: We are uncertain whether strategies improve the implementation of school‐based policies, practices or programs that aim to promote healthy or reduce unhealthy behaviours relating to child diet, physical activity, obesity, or tobacco or alcohol use. Among 13 trials reporting dichotomous implementation outcomes (the proportion of schools or school staff, e.g. classes, implementing a targeted policy or practice), the median unadjusted (improvement) effect sizes ranged from 8.5% to 66.6%. Of seven trials reporting the percentage of a practice, program or policy that had been implemented, the median unadjusted effect (improvement), relative to control, ranged from ‐8% to 43%. The effect, relative to control, reported in two trials assessing the impact of implementation strategies on the time per week teachers spent delivering targeted policies or practices ranged from 26.6 to 54.9 minutes per week.
Number of participants: 1599 schools (27 trials)
Quality of the evidence (GRADE)e: Very lowa,b

Outcome: Measures of student physical activity, diet, weight status, tobacco or alcohol use
Impact: We are uncertain whether strategies to improve the implementation of school‐based policies, practices or programs targeting risk factors for chronic disease impact on measures of student physical activity, diet, weight status, tobacco or alcohol use.
Number of participants: 29,181 studentsf (21 trials)
Quality of the evidence (GRADE)e: Very lowa,b,c

Outcome: Knowledge, skills or attitudes of school staff regarding the implementation of health promoting policies or practices
Impact: We are uncertain whether strategies to improve the implementation of school‐based policies, practices or programs targeting risk factors for chronic disease impact on the knowledge, skills or attitudes of school staff.
Number of participants: 1347 stakeholders (3 trials)
Quality of the evidence (GRADE)e: Very lowa,b

Outcome: Cost or cost‐effectiveness of strategies to improve implementation
Impact: We are uncertain whether strategies to improve the implementation of school‐based policies, practices or programs targeting risk factors for chronic disease are cost‐effective.
Number of participants: 42 schools (1 trial); 473 students (1 trial)g
Quality of the evidence (GRADE)e: Very lowa,b,d

Outcome: Unintended adverse effects of strategies to improve implementation on schools, school staff or children
Impact: We are uncertain whether strategies to improve the implementation of school‐based policies, practices or programs targeting risk factors for chronic disease result in unintended adverse effects or consequences.
Number of participants: 68 schools and 4603 studentsh (2 trials)
Quality of the evidence (GRADE)e: Very lowb,c

eGRADE Working Group grades of evidence
High quality: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate quality: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low quality: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low quality: We are very uncertain about the estimate.

aDowngraded one level due to limitations in the design.

bDowngraded one level due to unexplained heterogeneity.

cDowngraded one level due to indirectness.

dDowngraded one level due to imprecision.

fTwo trials measured student behaviour through the use of non‐student data (e.g. purchases) and did not provide student sample sizes.

gOne trial reported on the impact of an intervention on school level revenue. One trial reported on cost‐effectiveness.

hOne trial measured adverse events through the use of non‐student data (i.e. canteen profits) and did not provide student sample sizes.

Table 1. Interventions across studies

Trial
Audit and feedback
Clinical practice guidelines
Continuous quality improvements
Distribution of supplies
External funding
Education games
Education materials
Education meetings
Education outreach visits
Inter‐professional education
Length of consultation
Local consensus process
Local opinion leader
Managerial supervision
Monitoring performance of delivery
Pay for performance
Tailored intervention
The use of communication technology
Other

Alaimo 2015

X

X

X

X

X

X

Cunningham‐Sabo 2003

X

X

X

X

De Villiers 2015

X

X

X

X

Delk 2014

X

X

X

X

X

X

French 2004

X

X

X

X

Gingiss 2006

X

X

X

X

Heath 2002

X

X

X

Hoelscher 2010

X

X

X

X

X

X

X

Lytle 2006

X

X

X

X

Mathur 2016

X

X

X

X

X

McCormick 1995

X

X

X

Mobley 2012

X

X

X

X

X

X

X

X

Nathan 2012

X

X

X

X

X

X

X

Nathan 2016

X

X

X

X

X

X

X

Naylor 2006

X

X

X

X

X

X

Perry 1997

X

X

X

X

Perry 2004

X

X

X

X

X

Sallis 1997

X

X

X

X

X

Saraf 2015

X

X

X

X

X

X

X

Saunders 2006

X

X

X

X

X

X

Simons‐Morton 1988

X

X

X

X

X

X

X

Story 2000

X

X

Sutherland 2017

X

X

X

X

X

X

Whatley Blum 2007

X

X

X

X

X

X

X

X

Wolfenden 2017

X

X

X

X

X

X

X

X

X

Yoong 2016

X

X

X

X

Young 2008

X

X

X

X

X

X

Table 2. Definition of EPOC subcategories utilised in the review

EPOC subcategory

Definition

Audit and feedback

A summary of health workers’ performance over a specified period of time, given to them in a written, electronic or verbal format. The summary may include recommendations for clinical action.

Clinical practice guidelines

'Clinical guidelines are systematically developed statements to assist healthcare providers and patients to decide on appropriate health care for specific clinical circumstances' (US IOM).

Educational materials

Distribution to individuals, or groups, of educational materials to support clinical care, i.e. any intervention in which knowledge is distributed. For example this may be facilitated by the Internet, learning critical appraisal skills; skills for electronic retrieval of information, diagnostic formulation; question formulation.

Educational meetings

Courses, workshops, conferences or other educational meetings.

Educational outreach visits, or academic detailing

Personal visits by a trained person to health workers in their own settings, to provide information with the aim of changing practice.

External funding

Financial contributions such as donations, loans, etc. from public or private entities from outside the national or local health financing system.

Inter‐professional education

Continuing education for health professionals that involves more than one profession in joint, interactive learning.

Length of consultation

Changes in the length of consultations.

Local consensus processes

Formal or informal local consensus processes, for example agreeing a clinical protocol to manage a patient group, adapting a guideline for a local health system or promoting the implementation of guidelines.

Local opinion leaders

The identification and use of identifiable local opinion leaders to promote good clinical practice.

Managerial supervision

Routine supervision visits by health staff.

Monitoring the performance of the delivery of healthcare

Monitoring of health services by individuals or healthcare organisations, for example by comparing with an external standard.

Other

Strategies were classified as other if they did not clearly fit within the standard subcategories.

Pay for performance – target payments

Transfer of money or material goods to healthcare providers conditional on taking a measurable action or achieving a predetermined performance target, for example incentives for lay health workers.

Procurement and distribution of supplies

Systems for procuring and distributing drugs or other supplies.

Tailored interventions

Interventions to change practice that are selected based on an assessment of barriers to change, for example through interviews or surveys.

The use of information and communication technology

Technology based methods to transfer healthcare information and support the delivery of care.

Table 3. Summary of intervention, measures and absolute intervention effect size in included studies

Trial

Trial name

Targeted risk factor

Implementation strategies

Comparison

Primary Implementation outcome

and measures

Effect size

P < 0.05

Alaimo 2015

School Nutrition Advances Kids (SNAK)

Nutrition

Clinical practice guidelines, educational materials, educational outreach visits, external funding, local consensus processes, tailored interventions

Usual practice or waiting‐list control

Continuous:

i) Nutrition policy score and

ii) Nutrition education and/or practice score (2 measures)

Median (range)

0.65 (0.2 to 1.1)

0/2

Cunningham‐Sabo 2003

Pathways

Nutritionc

Clinical practice guidelines, educational materials, educational meetings, educational outreach visits

Usual practice

Continuous:

Nutrient content of school meals % of calories from fat breakfast/lunch

(2 measures)

Median (range)

‐3% (‐3.3% to ‐2.7%)

1/2

De Villiers 2015

HealthKick

Nutritionc

Local opinion leaders, educational materials, educational outreach visits, education meetings

Minimal support control

Dichotomous:

% implementing a variety of policies and practices (3 measures)

Median (range)

25% (12.5% to 29.5%)

Not reported

Delk 2014

No trial name

Physical activity

Local consensus process, educational meetings, clinical practice guidelines, educational outreach visits, tailored interventions, other

Different implementation strategy

Continuous:

% of teachers that conducted activity breaks weekly (1 measure 2 comparisons)

Dichotomous:

% implementing a variety of policies and practices (2 measures 4 comparisons)

Median (range)

13.3% (11.1% to 15.4%)

Median (range)

26.5% (19.4% to 31.9%)

6/6

French 2004

Trying Alternative Cafeteria Options in Schools (TACOS)

Nutrition

Local consensus processes, tailored intervention, educational meetings, pay for performance

Usual practice or waiting‐list control

Continuous

% of program implementation (5 measures)

Median (range)

33% (11% to 41%)

5/5

Gingiss 2006

Texas Tobacco Prevention Initiative

Tobacco

Educational meetings, educational outreach visits, external funding, local consensus processes

Usual practice

Dichotomous:

% implementing a variety of policies and practices (10 measures)

Median (range) 18.5% (‐1% to 59%)

7/10

Heath 2002

El Paso Coordinated Approach to Child Health (El Paso CATCH)

Nutritionc

Educational materials, educational meetings, educational outreach visits

Usual practice

Continuous:

% fat in school meal

(2 measures)

Sodium of school meals

(2 measures)

Effect size

Median (range)

‐1.7% (‐4.4% to 1%)

Median (range)

‐29.5 (‐48 to ‐11)

1/4

Hoelscher 2010

Travis County Coordinated Approach To Child Health (CATCH) Trial

Nutrition and physical activity

Educational materials, educational meetings, educational outreach visits, pay for performance, other, the use of information and communication technology, local consensus process

Different implementation strategy

Continuous:

Mean number of lessons/or activities (5 measures)

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

0.8 (‐0.4 to 1.2)

Median (range)

4.4% (3.6% to 5.2%)

4/7

Lytle 2006

Teens Eating for Energy and Nutrition at School (TEENS)

Nutrition

Educational materials, educational meetings, local opinion leaders, local consensus processes

Usual practice or waiting‐list control

Dichotomous:

% of schools offering or selling targeted foods (4 measures)

Median (range)

8.5% (4% to 12%)

2/4

Mathur 2016

Bihar School Teachers Study (BSTS)

Tobacco

Local opinion leader, continuous quality improvement, education materials, education meeting, local consensus process

Usual practice or waiting‐list control

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

56.9% (36.3% to 77.5%)

2/2

McCormick 1995

The North Carolina School Health and Tobacco Education Project (SHTEP)/ Skills Management
and Resistance Training (SMART)

Tobacco

Educational meetings, local consensus processes, educational materials

Minimal support control

Dichotomous:

% later implementation of curriculum for school district (1 measure)

Continuous:

Mean extent later implementation for school district (% of total curriculum activities taught) (1 measure)

Effect Size (95%CI)

16.7% (‐37.7% to 64.1%)

Mean differencea

0.56%

0/2

Mobley 2012

HEALTHY study

Nutritionc

Educational games, educational meetings, external funding, tailored intervention, educational materials, educational outreach, other, the use of information and communication technology

Usual practice or waiting‐list control

Dichotomous:

% schools meeting various nutrition goals (12 measures)

Median (range)

15.5% (0% to 88%)

Not reported

Nathan 2012

Good for Kids. Good for Life

Nutrition

Educational materials, educational meetings, local consensus processes, local opinion leaders, other, monitoring the performance of the delivery of the healthcare, tailored interventions

Minimal support control

Dichotomous:

% Schools implementing a vegetable and fruit break (1 measure)

Mean difference (95%CI)

16.2% (5.6% to 26.8%)

1/1

Nathan 2016

No trial name

Nutrition

Audit and feedback, continuous quality improvement, education materials, education meeting, local consensus process, local opinion leader, tailored intervention, other

Usual practice

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

35.5% (30.0% to 41.1%)

2/2

Naylor 2006

Action Schools! British Columbia (BC)

Physical activity

Educational materials, educational meetings, educational outreach meetings, local consensus process, other, tailored Interventions

Usual practice or waiting‐list control

Continuous:

Minutes per week of physical activity implemented in the classroom (1 measure 2 comparisons)

Median (range)

54.9 minutes (46.4 to 63.4)

2/2

Perry 1997

Child and Adolescent Trial for Cardiovascular Health (CATCH)

Nutrition and physical activityd

Educational materials, educational meetings, educational outreach visits, other

Usual practice or waiting‐list control

Continuous:

% of kilocalories from fat in school lunch (1 measure)

Mean milligrams of sodium in lunches (1 measure)

Cholesterol milligrams in lunches (1 measure)

Quality of PE lesson % of 7 activities observed (1 measure)

Effect size

Mean difference (95%CI)

‐4.3% (‐5.8% to ‐2.8%)

Mean difference (95%CI)

‐100.5 (‐167.6 to ‐33.4)

Mean difference (95%CI)

‐8.3 (‐16.7 to 0.1)

Mean difference (95%CI)

14.3% (11.6% to 17.0%)

3/4

Perry 2004

The Cafeteria Power Plus project

Nutrition

Educational meetings, educational outreach visits, educational materials,

local consensus processes, other

Usual practice or waiting‐list control

Continuous:

% of program implementation (2 measures)

Mean number of fruit and vegetables available (2 measures)

Median (range)

14% (‐2% to 30%)

Median (range)

0.64 (0.48 to 0.80)

2/4

Sallis 1997

Sports, Play, and Active Recreation for Kids (SPARK)

Physical activity

Educational materials, educational meetings, educational outreach visits, length of consultation, other

Usual practice or waiting‐list control

Continuous:

Duration (minutes) per week of physical education lessons (1 measure)

Frequency (per week) of physical education lessons (1 measure)

Mean difference (95%CI)

26.6 (15.3 to 37.9)

Mean difference (95%CI)

0.8 (0.3 to 1.3)

2/2

Saraf 2015

No trial name

Nutrition, physical activity and tobacco

Educational games, educational materials, educational meetings, local consensus processes, local opinion leaders, tailored Interventions, other

Usual practice

Dichotomous:

% implementing a variety of policies and practices (6 measures)

Median (range)

36.9% (‐5.3% to 79.5%)

5/6

Saunders 2006

Lifestyle Education for Activity Program (LEAP)

Physical activity

Educational materials, educational meetings, educational outreach visits, local consensus processes, local opinion leaders, other

Usual practice or waiting‐list control

Continuous:

School level policy and practice related to physical activity from the school administrators perspective (9 measures)

N/Ab

Not reported

Simons‐Morton 1988

Go for Health

Nutritionc

Educational materials, educational outreach visits, local consensus processes, local opinion leaders, managerial supervision, monitoring of performance, other

Usual practice

Continuous:

Macronutrient content of school meals (2 measures)

N/Ab

Not reported

Story 2000

5‐a‐Day Power Plus

Nutrition

Educational meetings, other

Usual practice

Continuous:

Mean number of fruit and vegetables available (2 measures)

% of guidelines implemented and % of promotions held (4 measures)

Median (range)

1.15 (1 to 1.3)

Median (range)

38.4% (28.5% to 43.8%)

6/6

Sutherland 2017

Supporting Children’s Outcomes using Rewards, Exercise and Skills (SCORES)

Physical activity

Audit and feedback, education materials, education meeting, education outreach visits, local opinion leader, other

Usual practice or waiting‐list control

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Continuous:

Physical education lesson quality score

(1 measures)

% of program implementation (4 measures)

Median (range)

19% (16% to 22%)

Mean difference

21.5a

Median (range)

‐8% (‐18% to 2%)

0/2

1/1

0/4

Whatley Blum 2007

No trial name

Nutrition

Clinical practice guidelines, educational materials, educational meetings, educational outreach visits, external funding, distribution of supplies, local consensus process, other

Usual practice or waiting‐list control

Continuous:

% of food and beverage items meeting guideline nutrient and portion criteria (6 measures)

Median (range)

42.95% (15.7% to 60.6%)

5/6

Wolfenden 2017

No trial name

Nutrition

Audit and feedback, continuous quality improvement, external funding, education materials, education meeting, education outreach visits, local consensus process, local opinion leader, tailored intervention, other

Usual practice

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

66.6% (60.5% to 72.6%)

2/2

Yoong 2016

CAFÉ

Nutrition

Audit and feedback, continuous quality improvement, education materials, tailored intervention

Usual practice

Dichotomous:

% implementing a variety of policies and practices (2 measures)

Median (range)

21.6% (15.6% to 27.5%)

0/2

Young 2008

Trial of Activity for Adolescent Girls (TAAG)

Physical activity

Education materials, education meetings, educational outreach visits, inter‐professional education, local consensus processes, local opinion leaders

Usual practice

Dichotomous:

% implementing a variety of policies and practices (7 measures)

Continuous:

Average number of physical activity programs taught (1 measure)

Median (range)

9.3% (‐6.8% to 55.5%)

Effect Size (95%CI)

5.1 (‐0.4 to10.6)

1/8

^a No measure of variability.

^b Did not report aggregate results by group.

^c Physical activity was also a targeted risk factor; however, this component did not meet our inclusion criteria (e.g. implementation outcomes unavailable, study staff implemented the physical activity component) and was therefore not considered in our review.

^d Tobacco use was also a targeted risk factor; however, this component did not meet our inclusion criteria (e.g. implementation outcomes unavailable) and was therefore not considered in our review.

Table 3. Summary of intervention, measures and absolute intervention effect size in included studies
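The "Median (range)" cells above summarise the per‐measure absolute effects within each trial. As a minimal illustrative sketch (not part of the review's analysis code): for the two‐measure studies the underlying values are exactly recoverable, e.g. Story 2000's reported median of 1.15 with range 1 to 1.3 implies the two fruit‐and‐vegetable measures were 1.0 and 1.3. The summary statistic can be computed as:

```python
from statistics import median

def summarise(effects):
    """Return (median, min, max) of per-measure absolute effect
    estimates, matching the table's 'Median (range)' cells."""
    return median(effects), min(effects), max(effects)

# Story 2000: the two fruit/vegetable availability measures
# implied by the reported median (1.15) and range (1 to 1.3).
med, lo, hi = summarise([1.0, 1.3])
print(f"{med} ({lo} to {hi})")  # 1.15 (1.0 to 1.3)
```

The same check applies to Wolfenden 2017: `summarise([60.5, 72.6])` gives a median of 66.55, which the table reports rounded as 66.6%.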