
In-person and computer-based interventions for preventing and reducing stress in workers


Background

Chronic exposure to stress has been linked to several negative physiological and psychological health outcomes. Among employees, stress and its associated effects can also result in productivity losses and higher healthcare costs. In-person (face-to-face) and computer-based (web- or mobile-delivered) stress management interventions have been shown to reduce stress in employees compared to no intervention. However, it is unclear whether one delivery method is more effective than the other. It is conceivable that computer-based interventions are more accessible, convenient, and cost-effective.

Objectives

To compare the effects of computer-based interventions versus in-person interventions for preventing and reducing stress in workers.

Search methods

We searched CENTRAL, MEDLINE, PubMed, Embase, PsycINFO, NIOSHTIC, NIOSHTIC-2, HSELINE, CISDOC, and two trial registers up to February 2017.

Selection criteria

We included randomised controlled studies that compared the effectiveness of a computer-based stress management intervention (using any technique) with a face-to-face intervention that had the same content. We included studies that measured stress or burnout as an outcome and whose participants were workers of any occupation.

Data collection and analysis

Three review authors independently screened the 3431 unique reports identified by the search and selected 75 unique studies for full-text review. We excluded 73 studies based on the full-text assessment and included two studies. Two review authors independently extracted stress outcome data from the two included studies. We contacted the study authors to collect additional data. We used standardised mean differences (SMDs) with 95% confidence intervals (CIs) to report study results. We did not perform meta-analyses due to variability in the primary outcome measures and considerable statistical heterogeneity. We used the GRADE approach to rate the quality of the evidence.

Main results

Two studies met the inclusion criteria, with a total of 159 participants in the included study arms (67 participants completed the computer-based interventions; 92 participants completed the in-person interventions). The workers were mostly white (Caucasian), middle-aged, and highly educated. Both studies provided education about stress, its causes, and stress-reduction strategies (e.g. relaxation or mindfulness), delivered via a computer in the computer-based arm and via small-group sessions in the in-person arm. Both studies measured stress, using different scales, at short-term follow-up only (less than one month). Due to considerable heterogeneity in the results, we could not pool the data, and we analysed the results of the studies separately. Stress levels in the computer-based intervention group were 0.81 standard deviations higher (95% CI 0.21 to 1.41) than in the in-person group in one study, and 0.35 standard deviations lower (95% CI -0.76 to 0.05) than in the in-person group in the other study. We judged both studies to have a high risk of bias.

Authors' conclusions

We found very low-quality evidence with conflicting results when comparing the effectiveness of computer-based versus in-person stress management interventions in employees. We could only include two studies with small sample sizes. We have very little confidence in the effect estimates. It is very likely that future studies will change these conclusions.

Computer-based versus in-person stress management programmes for workers

What is the aim of this review?

We wanted to find out whether workplace stress management programmes have a different effect when they are delivered via a computer compared with in person. We collected and analysed all relevant studies to answer this question, and found two studies that examined the effect of the delivery method on stress reduction in workers.

Key messages

The effects of the delivery method on stress reduction were unclear. More research is needed that directly compares equivalent stress management programmes delivered via a computer and in person. Any future studies are likely to affect the conclusions of this review.

What was studied in the review?

Many employers want to reduce stress in their employees and are willing to invest in stress management programmes. Workplace stress management programmes have been shown to reduce employee stress, whether delivered via a computer or mobile device or by a live person. However, it is not known whether the delivery method itself affects a programme's effectiveness. We therefore assessed the effect of the delivery method of the intervention (computer or in person) on reducing stress in workers.

What are the results of the review?

We found two studies, with 159 employees, that compared stress levels in workers who completed stress management programmes on a computer with stress levels in workers who received the same programme content from a live person. Both studies taught participants, individually or in small groups, how to recognise and reduce stress, but they had conflicting results.

How up to date is this review?

We searched for studies published up to February 2017.

Authors' conclusions

Implications for practice

We found very low‐quality evidence with conflicting results on the effectiveness of computer‐based stress management interventions compared to in‐person stress management interventions in employees. We could only include two studies with small sample sizes. We have very little confidence in the effect estimates. It is very likely that future studies will change these conclusions, and the true effect may be substantially different from the current estimate of effect.

Implications for research

More research is needed that directly compares computer‐based and face‐to‐face stress management programmes, so that the impact of the delivery method can be better understood. Such research should randomise employees into equivalent computer‐based and face‐to‐face interventions, as well as a control group. However, future studies must be cognisant of the risk of attrition in computer‐based interventions. In particular, efforts must be made to address attrition and to account for differences between intervention arms arising from dropouts (e.g. by using an intention‐to‐treat approach). Adherence in computer‐based health interventions is often around 50% (Kelders 2012; Ludden 2015). Therefore, efforts to monitor adherence (e.g. logging online activity or measuring frequency of practice) are also critical, since adherence seems to be a significant factor in determining outcomes. In addition, researchers should be aware of, and measure, possible differences in the characteristics of employees between groups, in particular their ability and propensity to use technology.

Assuming a difference between the two groups of two points and a standard deviation of six points on the commonly‐used Perceived Stress Scale, at 80% statistical power and a 5% significance level, the required minimum sample size to detect a meaningful difference would be 141 participants in each study arm. Assuming 50% attrition, we recommend recruiting and allocating at least 300 participants to each intervention arm to prevent future studies from being underpowered.
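This sample size arithmetic can be sketched with the standard normal-approximation formula for a two-sample comparison of means. The function name, rounding convention, and the exact per-arm figure are illustrative, not taken from the review:

```python
import math
from statistics import NormalDist

def n_per_arm(mean_diff, sd, power=0.80, alpha=0.05, attrition=0.0):
    """Approximate participants needed per arm to detect `mean_diff`
    with a two-sided test, via the normal approximation
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, where d = mean_diff / sd."""
    d = mean_diff / sd                             # standardised effect, ~0.33 here
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 / d ** 2
    return math.ceil(n / (1 - attrition))          # inflate for expected dropout

# Two-point difference, SD of six points on the Perceived Stress Scale:
print(n_per_arm(2, 6))                  # 142 (the review's 141 differs only by rounding)
print(n_per_arm(2, 6, attrition=0.5))   # 283, hence the recommendation of >= 300 per arm
```

The exact t-test-based calculation (as implemented in dedicated power-analysis software) would add one or two participants per arm to the normal approximation shown here.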

Furthermore, while research has examined the impact of support systems (both technical and clinical) among computer‐based systems, it has not compared them with in‐person delivery methods. The level of support may be a critical factor associated with adherence, and thus we recommend that future research that compares computer‐based and in‐person stress management interventions in employees also varies the level of support (e.g. completely self‐guided versus self‐guided with online support group). Creating this gradation of human support in separate study arms may provide insight on the direction of the effect size.

Summary of findings

Summary of findings for the main comparison. Computer‐based interventions compared to in‐person interventions for reducing stress in employees, less than 3 month follow‐up


Population: employees

Settings: any workplace

Intervention: computer‐based stress management intervention, less than 3 month follow‐up

Comparison: in‐person stress management intervention

Outcome: Stress (various measurement instruments)

Illustrative comparative risks* (95% CI): compared with the assumed risk under the in‐person stress management intervention, stress with the computer‐based stress management intervention was 0.81 standard deviations higher (0.21 higher to 1.41 higher) in one study and 0.35 standard deviations lower (0.76 lower to 0.05 higher) in the other study

Relative effect (95% CI): data not pooled¹

No of participants (studies): 159 (2 studies)

Quality of the evidence (GRADE): ⊕⊝⊝⊝ very low²

Comments: 0.2 standard deviations indicates a small effect; 0.5 standard deviations indicates a medium effect; 0.8 standard deviations and beyond indicates a large effect

Outcome: Burnout

None of the studies reported this outcome.

*The corresponding risk (and its 95% confidence interval) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI).
CI: confidence interval; SMD: standardised mean difference

GRADE Working Group grades of evidence
High quality: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate quality: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low quality: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low quality: We are very uncertain about the estimate.

1. Pooling of data not appropriate due to considerable heterogeneity (I² > 75%).

2. We downgraded the level of evidence once due to the small sample sizes of underpowered studies, once due to a high risk of bias from incomplete outcome data (high and unequal attrition between interventions), and once more for inconsistency, due to considerable heterogeneity precluding meta‐analysis.
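The I² statistic behind the heterogeneity threshold in footnote 1 is derived from Cochran's Q; a minimal sketch using the standard definition (function name illustrative):

```python
def i_squared(q, df):
    """I-squared: percentage of variability across studies attributable to
    heterogeneity rather than chance, defined as max(0, (Q - df) / Q) * 100,
    where Q is Cochran's Q and df = number of studies - 1."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

# With only 2 studies (df = 1), a Q of 8 already implies I² = 87.5%,
# above the 75% threshold for 'considerable' heterogeneity.
print(i_squared(8, 1))  # 87.5
```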

Background

Description of the condition

Stress can be defined as a relationship between a person and his or her environment that is perceived by the person as taxing, exceeding his or her resources, or endangering his or her well‐being (Lazarus 1984). Consequently, stress depends on environmental factors, an individual’s perception of the environment, and the interaction of the two. Stress elicits short‐term responses, including elevated blood pressure (Backé 2012), and unhealthy behaviours, such as smoking, as coping mechanisms (Conway 1981). Chronic exposure to stress is associated with both physiological and psychological adverse health outcomes, including cardiovascular problems (Brotman 2007; Kivimäki 2013), musculoskeletal problems (Van Rijn 2009), anxiety (Ding 2014; Acquadro 2015; Lee 2015), depression (Hammen 2005), alcoholism (Grunberg 1999), and increased mortality (Levi 1989; Nilsen 2016; Rueppell 2017).

Stress in employees can originate from workplace or work‐related stressors, such as excessive workload or an effort‐reward imbalance (French 1982; Siegrist 1996). It can also originate from individual sources, such as inadequate coping skills (Laranjeira 2012), or from the interaction of work‐related and individual stressors, such as work‐family conflict (Greenhaus 1985). This means that it can be difficult to distinguish stress that originates solely from work‐related stressors from that originating from individual factors or stressors outside the workplace. As such, work‐related or occupational stress can be considered a subset of all stress experienced by employees.

In any case, stress and its associated health effects in employees lead to direct and indirect costs for employers, including higher healthcare costs and reduced productivity from absenteeism and presenteeism (Moreau 2004; Cooper 2008; Ahola 2009). The British Labour Force Survey reported that 1510 per 100,000 workers experienced work‐related stress, depression, or anxiety, resulting in an average of 23.9 days lost per case (HSE 2016). An American survey cited anxiety and stress as the cause of 25 days of absence every year (U.S. Department of Labor 2001). Total costs to society from stress in employees, including related legal and insurance costs, are generally estimated to be around 1% to 3% of Gross Domestic Product (GDP; Rosch 2001; EU‐OSHA 2014).

Description of the intervention

Stress management interventions in the workplace are programmes organized by an employer to reduce the presence of stressors or reduce the negative impact of exposure to these stressors on employees (Ivancevich 1990). They can be categorised according to: 1) their level of focus, and 2) their target (De Jonge 2002; Richardson 2008; Bhui 2012).

First, stress management interventions in the workplace differ in their level of focus. Commonly, workplace stress management programmes are classified as organizational‐level, individual‐organizational‐level, or individual‐level interventions (DeFrank 1987; Jordan 2003). Organizational‐level stress management interventions can include changing the physical and environmental characteristics of the workplace or restructuring the job and its responsibilities (Bergerman 2009). Individual‐organizational‐level stress management interventions can include adding coworker support groups and implementing regular appraisals that try to match employee and employer expectations. Individual‐level stress management interventions can include relaxation, biofeedback, cognitive behavioural therapy (CBT), and time management techniques, among others (Richardson 2008).

Second, workplace stress management interventions differ in their target. Interventions are often classified as targeting primary factors (i.e. stressors), secondary factors (i.e. individual factors), or tertiary factors (i.e. symptoms) (Bergerman 2009).

Traditionally, stress management interventions have been delivered live by a trainer, therapist, or similar expert, who personally takes his or her client or clients through the programme. However, the proliferation of technology, and the focus on efficient use of resources, have introduced several computer‐based (web and mobile) alternatives that deliver the same interventions without a person physically present to lead the process (Andersson 2009; Carey 2009; Andrews 2010; Mohr 2010). Consequently, stress management interventions can also be classified in a third way: according to their delivery method.

Computer‐based stress management interventions in the workplace

The term 'computer‐based' is used here to refer to any delivery method that uses technology and is not delivered live and in‐person (face‐to‐face). As such, self‐taught methods using books or print material would not fall into either of the groups defined in this review. In general, any computer‐based intervention (e.g. to address health promotion or mental health) is more commonly text‐based, and aimed at education and goal setting, using modules with a fixed order. In some cases, the content has been developed earlier for an offline programme, and has been made available on an easily accessible electronic platform, such as a website or software (Ludden 2015). Computer‐based interventions can be delivered by or accessed via a computer, text message, email, mobile phone application, CD player, or web browser (Zetterqvist 2003; Griffiths 2006; Ruwaard 2007). They can also vary according to the media used (e.g. text, audio recording, video, or game), and degree of therapist involvement (e.g. from entirely self‐help to remote client‐therapist interaction (Proudfoot 2011)).

A key characteristic of computer‐based interventions is whether the intervention is guided or unguided (self‐guided). Guided interventions have some kind of human support, which can come in the form of email reminders, counsellor support, or peer support groups (Brouwer 2011; Baumeister 2014). Furthermore, the guidance can be classified by whether it is synchronous, by the qualifications of the person giving guidance (e.g. trained therapist or non‐clinical support), by the mode of guidance (e.g. email reminders or live chat), and by the dosage or frequency of guidance (Baumeister 2014).

Computer‐based stress management interventions are generally focused on the individual level. They can use a variety of theoretical bases, such as CBT, mindfulness, or physical activity. Unguided or self‐paced interventions are more common for computer‐based stress management interventions (Heber 2017). The duration of computer‐based stress management interventions in the workplace generally varies from two to 12 weeks (Heber 2017). Employees can access the intervention via work‐provided devices or their personal devices. Interventions can be either self‐paced or follow a regular schedule.

In‐person stress management interventions in the workplace

In‐person stress management interventions in the workplace can be delivered by a trained instructor, counsellor, practitioner, or teacher, and they can occur in small groups or in one‐on‐one sessions. The sessions can be as short as 15 minutes or as long as one day. Similar to computer‐based methods, they generally focus on the individual level. The duration of workplace in‐person stress management interventions is usually similar to computer‐based stress management interventions, ranging from two to 12 weeks (Richardson 2008).

How the intervention might work

In general, workplace stress management interventions aim to reduce stressors, improve reactions to stressors, or mitigate physiological or psychological effects from stress. Both computer‐based and in‐person delivery methods can use these mechanisms. However, only certain interventions, usually directly targeting workers, can be transposed into computer‐based ones (e.g. CBT, mindfulness, problem solving training).

Computer‐based workplace stress management interventions most commonly ‐ but not exclusively ‐ operate on the individual level and target secondary prevention. Secondary prevention stress management interventions aim to modify an employee's perception of, ability to cope with, or response to existing stressors. Often, the interventions use cognitive behavioural techniques, meditation or relaxation, exercise, or time management techniques. Cognitive behavioural techniques educate employees about the roles of their thoughts and emotions in stressful events to provide new ways to feel, think, and act in stressful situations. Meditation and relaxation divert attention away from stress. Exercise provides a physical release from tension, increases endorphins, or provides an outlet for anger. Time management techniques allow an employee to reduce the work‐demand imbalance.

While the theoretical mechanisms of stress management interventions' effect on stress should be dependent on the level (i.e. primary, secondary, tertiary) and technique, the delivery method may affect the outcome by facilitating or mitigating exposure and adherence. Computer‐based interventions are known to commonly suffer from higher attrition, reduced exposure, and less adherence, compared to in‐person interventions (Kelders 2012). While in‐person interventions can be more responsive to participants, computer‐based interventions are more prone to a mismatch between the goals of the intervention and the goals of the participants, and are less flexible in adjusting to situations and user characteristics, which have been associated with higher attrition (Kelders 2011; Postel 2011; Ludden 2015). Consequently, some computer‐based interventions add a guidance component (i.e. a form of human support) or adherence‐facilitating component, such as automated prompts (Baumeister 2014). Guided interventions have been shown to be significantly more effective at symptom reduction and increased intervention completion than unguided interventions (Baumeister 2014; Heber 2017). Guidance in the form of peer support, counsellor support, and email contact may increase exposure, which may increase adherence, and thus, the efficacy (Brouwer 2011; Baumeister 2014).

Why it is important to do this review

In‐person interventions have been shown to be effective in reducing stress in employees, when compared to a control (Van der Klink 2001; Richardson 2008; Bhui 2012; Ruotsalainen 2015). Computer‐based interventions have also been shown to be effective in reducing stress in employees, when compared to a control (Heber 2017). However, computer‐based interventions offer many advantages to employers: they are easily accessible at any time or place; greater anonymity is possible; workers can follow the course whenever and wherever they wish; they may reach workers earlier than traditional health services; they can cost less; and computer‐based interventions are easily scalable (Griffiths 2006; Ebert 2017).

However, a systematic review directly comparing computer‐based and in‐person stress management interventions among workers has not yet been completed. Such a review could provide the necessary evidence to help employers and occupational health services choose the best method for reducing stress in their employees. This Cochrane review aims to fill this gap in the evidence base.

Objectives

To compare the effects of computer‐based interventions to in‐person interventions for preventing and reducing stress in employees.

Methods

Criteria for considering studies for this review

Types of studies

We included only randomised controlled trials (RCTs).

Types of participants

We included studies in which participants were full‐time, part‐time, or self‐employed working individuals over 18 years of age.

Types of interventions

We considered for inclusion all studies assessing the effectiveness of any type of worker‐focused web‐based stress management intervention, aimed at preventing or reducing work‐related stress with techniques such as CBT, relaxation, time management, or problem‐solving skills training. These interventions had to be delivered via email, a website, or a stand‐alone computer programme, and they had to be compared to a face‐to‐face stress management intervention with the same content (e.g. web‐based CBT versus face‐to‐face CBT). Interventions could vary by the device providing access (e.g. computer, laptop, or mobile device), the type of multimedia used (e.g. text, graphics, animations, audio, video), and the degree of therapist involvement (from entirely self‐help to remote client‐therapist interaction).

The in‐person comparator interventions could be delivered by a trained instructor, counsellor, practitioner, or teacher, and they could occur in small groups or in one‐on‐one sessions.

Types of outcome measures

Primary outcomes

We included studies that measured the effect of the interventions on stress or burnout in employees. We included studies measuring:

  • stress with Perceived Stress Scale (PSS (Cohen 1983)), Occupational Stress Inventory (OSI (Osipow 1998)), or similar; or

  • burnout with Maslach Burnout Inventory (MBI (Maslach 1996)) or similar.

Secondary outcomes

  • Sick leave

  • Absenteeism

  • Return to work

We considered self‐reported stress and burnout scales to be subjective outcome measures, while sick leave, absenteeism, and return to work could be objective outcome measures, as long as the employer, rather than the worker, supplied the data.

Studies could measure time until partial RTW as:

  • number of days of sick leave until partial RTW;

  • total number of days of partial sick leave during follow‐up; or

  • rate of partial RTW at follow‐up measurements.

They could measure time until full RTW as:

  • number of days of sick leave until full RTW;

  • total number of days of full‐time sick leave during follow‐up; or

  • rate of full RTW at follow‐up measurements.

We did not include the reporting of one or more of the secondary outcomes listed here as an inclusion criterion for the review.

Search methods for identification of studies

Electronic searches

We conducted a systematic literature search to identify all published and unpublished trials eligible for inclusion. We adapted the search strategy developed for MEDLINE for use in the other electronic databases. We did not impose any limitation on the language of publication. In future updates, if we identify any potentially eligible papers in languages other than those spoken by the review team, we will either arrange for the translation of key sections prior to assessment, or arrange for their full assessment by people who are proficient in the publications' language(s).

We searched the following electronic databases from inception to 27 February 2017 to identify potential studies (i.e. no date restrictions):

  • Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 2) in the Cochrane Library (searched 27 February 2017; Appendix 1);

  • MEDLINE Ovid (1946 to 27 February 2017; Appendix 2);

  • PubMed (Appendix 3);

  • Embase Ovid (1974 to 27 February 2017; Appendix 4);

  • PsycINFO Ovid (1806 to 27 February 2017; Appendix 5);

  • NIOSHTIC, NIOSHTIC‐2, HSELINE, CISDOC (OSH‐UPDATE; Appendix 6).

Searching other resources

We checked the reference lists of all articles that we retrieved as full‐text articles, as well as related systematic and narrative reviews, in order to identify additional potentially eligible studies. We contacted other researchers, but they did not identify any unpublished studies.

Data collection and analysis

Selection of studies

Three review authors (AA, TD, YLT) independently screened the titles and abstracts of all the potentially eligible studies we identified during the search, and coded them as 'retrieve' (eligible, potentially eligible, or unclear) or 'do not retrieve'. We coded studies as 'do not retrieve' if the title and abstract provided sufficient information to decide that the study did not fulfil our inclusion criteria. We excluded studies in this phase only if the study clearly was not randomised or clearly had no computer‐based stress management intervention.

We retrieved the full‐text study reports or publications, and three review authors (AK, YLT, QDM) independently screened these for inclusion, also noting the reasons for excluding the ineligible studies in the 'Characteristics of excluded studies' table. From full‐text review, we were able to identify and exclude duplicates (e.g. study protocols or conference presentations of included studies) and multiple reports of the same study, so that each study, rather than each report, was the unit of interest in the review. We resolved any disagreement through discussion, or if required, we consulted one of the remaining two authors (AA, TD).

Data extraction and management

Three review authors (AK, YLT, QDM) independently extracted the following study characteristics, using a standard data collection form.

  1. Methods: study design, total duration of study, study location (country), study setting, and date of study.

  2. Participants: number of participants and allocation to intervention groups, method of analysis ('as‐treated' or 'intention‐to‐treat'), demographic data, inclusion criteria, and exclusion criteria.

  3. Interventions: description of intervention, comparison, duration, intensity, content of both intervention and control condition, and co‐interventions.

  4. Outcomes: description of primary and secondary outcomes specified and collected, and time points reported.

  5. Notes: funding for trial, notable conflicts of interest of trial authors, and other sources of information (e.g. communication with author or another publication of same study).

We noted in the 'Characteristics of included studies' table if outcome data were not reported in a usable way. We resolved disagreements by consensus or by involving a fourth review author (TD). One review author (AK) transferred data into the Review Manager 5 file (RevMan 2014). We double‐checked that data were entered correctly by comparing the data presented in the systematic review with the study reports.

Assessment of risk of bias in included studies

Two review authors (TD, AK) independently assessed risk of bias for each study using the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011). We resolved any disagreements by discussion or by involving a third author (QDM). We assessed the risk of bias according to the following domains:

  1. Random sequence generation.

  2. Allocation concealment.

  3. Blinding of participants and trial personnel.

  4. Blinding of outcome assessment.

  5. Incomplete outcome data.

  6. Selective outcome reporting.

  7. Other sources of bias.

In addition to evaluating risk of bias in these standard domains, we added two domains: the presence or absence of co‐interventions (and if included, their degree of similarity to the intervention), and treatment fidelity. The idea was that these two additional domains would shed light on theory failure or programme failure pertinent to the intervention being evaluated.

We graded each potential source of bias as high, low, or unclear and provided a justification in the 'Risk of bias' table in RevMan 5 (RevMan 2014). We considered random sequence generation, allocation concealment, selective outcome reporting, and incomplete outcome data to be key domains. We judged a study to have a high overall risk of bias when we judged one or more key domains to have a high risk of bias. For instance, a study with substantial variation in attrition between treatment and comparison groups, which had not been appropriately accounted for, warranted a high risk of bias assessment for incomplete outcome data, and as a result, for the overall study. Conversely, if a future study is identified in which we judge all key domains to be at low risk of bias, we will judge the study to have low overall risk of bias. We summarised the risk of bias judgements across different studies for each of the domains listed. Where information on risk of bias related to unpublished data or correspondence with a study author, we noted this in the 'Risk of bias' table.

Assessment of bias in conducting the systematic review

We conducted the review according to the methods reported in our published protocol (Kuster 2015). We reported all deviations from the published protocol in the Differences between protocol and review section of the review.

Measures of treatment effect

The included studies measured stress with self‐report instruments that yielded continuous data. We put means, standard deviations, and the number of participants for each arm of the study, from the latest available reporting time, into the data tables in RevMan 5 (RevMan 2014). Because the included studies used different instruments, we calculated the standardised mean difference (SMD) with a 95% confidence interval (CI) between groups as the summary effect measure. The included studies did not report any dichotomous outcomes.

We considered whether the computer‐based delivery mode was equivalent to the in‐person delivery mode. We defined equivalence to mean that the difference in effect size between the two interventions was 0.2 SMD or less. Since there is no generally accepted minimal clinically important difference for measures of stress (i.e. an amount of change that on average would be perceived as improvement by a participant), we believe this approach provides a reasonable approximation.
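For illustration, the SMD (in its small‐sample‐corrected form, Hedges' g) can be computed from arm summaries as follows. This is an approximate sketch, not the exact RevMan 5 computation; the inputs are the Wolever 2012 arm summaries reported later in Table 1.

```python
# Standardised mean difference (Hedges' g) between two arms with a 95% CI.
from math import sqrt

def smd_hedges_g(m1, sd1, n1, m2, sd2, n2):
    # pooled SD across the two arms
    sd_pooled = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample correction factor
    g = j * d
    se = j * sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Wolever 2012: computer-based 14.91 (SD 5.7, n = 57) vs in-person 16.94 (SD 5.7, n = 44)
g, (lo, hi) = smd_hedges_g(14.91, 5.7, 57, 16.94, 5.7, 44)
print(f"SMD = {g:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # close to the reported -0.35 (-0.76 to 0.05)
```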

Unit of analysis issues

The included studies' interventions aimed to achieve changes at the individual level (in thinking, feelings, behaviour, or all three) in order to reduce the level of stress. Hence, the unit of analysis was the individual. For studies that used a cluster‐randomised design and did not account for the design effect in their analyses, we had planned to calculate the design effect following the methods stated in the Cochrane Handbook for Systematic Reviews of Interventions, using a fairly large assumed intracluster correlation coefficient of 0.10 (Campbell 2001; Higgins 2011). However, we found no cluster‐randomised studies to include in this review, and thus did not need to consider design effects.
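The planned adjustment can be sketched as follows. This is an illustrative sketch only (the cluster size of 10 and the nominal sample of 100 are hypothetical); the intracluster correlation of 0.10 is the assumption stated above.

```python
# Design effect for cluster-randomised trials: DE = 1 + (m - 1) * ICC,
# where m is the average cluster size. Dividing the nominal sample size
# by DE gives the effective sample size used in the analysis.
def design_effect(avg_cluster_size, icc=0.10):
    return 1 + (avg_cluster_size - 1) * icc

de = design_effect(10)       # design effect for clusters of 10 workers
effective_n = 100 / de       # ~52.6 effective participants from a nominal 100
print(round(de, 2), round(effective_n, 1))
```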

Dealing with missing data

If the SDs were not presented in the publication, we contacted the authors with a request to provide these data. Whenever authors were unable or unwilling to provide this information, we calculated SDs from available information following the instructions of the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2011).
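Two of the Handbook's standard back‐calculations for recovering SDs can be sketched as follows (the numeric inputs are illustrative, not data from the included studies):

```python
# Recovering an SD for a single group mean from other reported statistics
# (Cochrane Handbook, section 7.7.3): from a standard error, or from a
# 95% confidence interval of the mean (large-sample normal approximation).
from math import sqrt

def sd_from_se(se, n):
    # SE of a mean = SD / sqrt(n), so SD = SE * sqrt(n)
    return se * sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    # width of a 95% CI of a mean = 2 * z * SE
    se = (upper - lower) / (2 * z)
    return se * sqrt(n)

print(sd_from_se(2.0, 25))                      # 10.0
print(round(sd_from_ci(10.0, 17.84, 25), 1))    # 10.0
```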

Assessment of heterogeneity

We assessed clinical heterogeneity based on the degree of similarity in the population, interventions, outcomes, and follow‐up periods. Due to the nature of the interventions and the narrowness of our inclusion criteria (i.e. adult workers), we did not expect to find study populations with significant differences, and we did not.

We considered follow‐up times of less than three months, three months to one year, and more than one year to be different. Our included studies only provided short‐term follow‐up data.

If we include sufficiently similar studies in future updates of this review to conduct meta‐analyses, we will assess heterogeneity by visual inspection of forest plots, and by using the I² statistic. We will then quantify the degree of heterogeneity as follows (Higgins 2011).

  • 0% to 40% might not be important.

  • 30% to 60% may represent moderate heterogeneity.

  • 50% to 90% may represent substantial heterogeneity.

  • 75% to 100% equals considerable heterogeneity.

In the presence of substantial heterogeneity and a sufficient number of studies, we will conduct subgroup analyses as described below in Subgroup analysis and investigation of heterogeneity.

When we found considerable heterogeneity (I² > 75%), we did not pool the data, and downgraded the quality of the evidence because of inconsistency, according to the GRADE system (see Data synthesis and Quality of the evidence for details).
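For reference, the I² statistic underlying these thresholds is derived from Cochran's Q and its degrees of freedom (Higgins 2011); a minimal sketch:

```python
# I² = max(0, (Q - df) / Q) * 100, the percentage of variability in effect
# estimates attributable to heterogeneity rather than sampling error.
def i_squared(q, num_studies):
    df = num_studies - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# e.g. Q = 10 across the two included studies corresponds to I² = 90%,
# the value reported for this review's stress outcome
print(i_squared(10, 2))
```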

Assessment of reporting biases

We tried to prevent location bias by searching across multiple databases, and language bias by including all eligible articles, regardless of publication language. When we detected multiple articles on the same study, we extracted data only once. If we can include a sufficient number of studies in future updates of this review, we will assess publication bias using funnel plots, and we will test for funnel plot asymmetry (Higgins 2011).

Data synthesis

We judged both included studies to be sufficiently clinically homogeneous to be reported in a single comparison. However, because the I² value of their pooled numerical results exceeded 75%, we refrained from reporting the pooled results, and we analysed the results of each study separately using Review Manager 5 software (RevMan 2014). If future updates of this review identify studies that are less statistically heterogeneous, such that results can be pooled in one or more meta‐analyses, we will use a random‐effects model and combine effect sizes using the generic inverse‐variance method, conducting a sensitivity check with a fixed‐effect model to compare the results. If heterogeneity turns out not to be important (I² ≤ 40%), we will use a fixed‐effect model. We reported a 95% confidence interval (CI) for the intervention effect of each study.
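A minimal sketch of such inverse‐variance pooling with a DerSimonian‐Laird random‐effects model follows; the input SMDs and standard errors are hypothetical values for three future studies, not data from this review.

```python
# Generic inverse-variance pooling with DerSimonian-Laird between-study
# variance (tau^2). Weights are 1 / (SE^2 + tau^2); tau^2 = 0 recovers
# the fixed-effect result.
from math import sqrt

def pool_random_effects(effects, ses):
    w = [1 / se**2 for se in ses]                    # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed)**2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se_pooled = sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# hypothetical SMDs and standard errors from three future studies:
est, ci = pool_random_effects([0.5, -0.3, 0.1], [0.15, 0.2, 0.1])
print(f"pooled SMD {est:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```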

When studies reported multiple trial arms, we only included the relevant arms. Had we needed to include two comparisons from one study (e.g. intervention A versus face‐to‐face intervention and intervention B versus the same face‐to‐face intervention) in the same meta‐analysis, we would have halved the number of participants in the control group to avoid double‐counting.

Quality of the evidence

We used the GRADE approach, as described in the Cochrane Handbook for Systematic Reviews of Interventions, to assess the quality of the body of evidence for the primary outcomes (Higgins 2011). The quality of a body of evidence for a specific outcome is based on five factors: 1) limitations of the study designs; 2) indirectness of evidence; 3) inconsistency of results; 4) imprecision of results; and 5) publication bias.

The GRADE approach specifies four levels of quality (high, moderate, low, and very low), incorporating the factors noted above. Quality of evidence by GRADE should be interpreted as follows:

  • High‐quality: We are very confident that the true effect lies close to that of the estimate of the effect;

  • Moderate‐quality: We are moderately confident in the effect estimate: The true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different;

  • Low‐quality: Our confidence in the effect estimate is limited: The true effect may be substantially different from the estimate of the effect;

  • Very low‐quality: We have very little confidence in the effect estimate: The true effect is likely to be substantially different from the estimate of effect.

Subgroup analysis and investigation of heterogeneity

Given the limited number of studies included in this review, we could not perform subgroup analyses. If there are sufficient data in future updates of this review, we will undertake subgroup analyses based on type of workers (e.g. salaried versus hourly, or blue‐ versus white‐collar workers), techniques used in the computer‐based intervention (e.g. CBT versus relaxation), and level of human support (e.g. guided versus unguided). Since our criteria for inclusion were limited to studies involving a face‐to‐face intervention comparator, we would not require further analyses by comparator arm.

If future updates of this review include studies that report a return‐to‐work outcome in multiple ways (e.g. full‐time, part‐time), we will conduct a subgroup analysis to see if there is a difference.

We will use the Chi² statistic to test for subgroup interactions in Review Manager 5 (RevMan 2014).

Sensitivity analysis

We explored, with a sensitivity analysis, the impact of including studies with missing data and multiple reports of the primary outcome, which could introduce bias. We used the sensitivity analysis (and the resulting estimates) to understand how the conclusions were affected by the choice of data used in the comparison.

We had also planned to perform a sensitivity analysis to test the robustness of our results by omitting studies with a high overall risk of bias. However, we could not conduct this sensitivity analysis, because we judged both included studies to have a high risk of bias.

'Summary of findings' table

We created a 'Summary of findings' table that reported on the primary outcomes, stress and burnout. The table omits secondary outcomes (sick leave, absenteeism, and return to work) because none of the included studies reported these outcomes. If future updates of this review identify studies that warrant multiple comparisons (e.g. different follow‐up times), we will add additional 'Summary of findings' tables. We used the five GRADE considerations (study limitations, consistency of effect, imprecision, indirectness, and publication bias) to assess the quality of the body of evidence for each outcome. We used the methods and recommendations described in Section 8.5 and Chapter 12 of the Cochrane Handbook for Systematic Reviews of Interventions, and used GRADEpro GDT software to develop the 'Summary of findings' table (Higgins 2011; GRADEpro GDT 2015). We justified all decisions to downgrade the quality of the evidence transparently in footnotes.

Results

Description of studies

Results of the search

We ran the original search in February 2016, which identified 2037 unique records for review. We updated the search strategy to use consistent keywords across databases and to be more sensitive, and then we re‐ran it in February 2017. Figure 1 displays a PRISMA study flow chart of the inclusion process from the updated February 2017 search, which identified 5004 records. After removing duplicates, we identified 3431 unique reports to assess against our inclusion and exclusion criteria. We assessed the titles and abstracts of these 3431 reports, and identified 89 reports to be read as full text. After identifying duplicates (e.g. study protocols or conference presentations of included studies) and multiple reports of the same study, we considered 75 unique studies for inclusion. We excluded 73 of those and included two studies in this Cochrane review.


PRISMA study flow diagram


Included studies

Study design

We included two randomised controlled trials (Eisen 2008; Wolever 2012). Both used a parallel group design with three arms. We only analysed the data from the two arms that compared computer‐based interventions to equivalent in‐person interventions. Details can be found in the 'Characteristics of included studies' table.

Country and time period

Both trials were set in the USA, between 2000 and 2010.

Type of settings and participants

In the Eisen 2008 study, the 63 included participants were a mix of hourly workers and salaried workers from three different manufacturing sites of a single corporation. In the Wolever 2012 study, the 96 included participants were almost entirely full‐time employees from two different sites of a single national insurance company. In both studies, the majority of participants were White (Caucasian), married, and had at least a college education.

Sample sizes

A total of 159 participants completed the interventions in the included arms of the included studies. A total of 92 participants completed the in‐person interventions, and a total of 67 participants completed the computer‐based interventions. The sample sizes were generally small. One study's computer‐based arm had fewer than 20 participants, while all other arms had between 20 and 60 participants.

Interventions

Both studies delivered education about stress and its causes, broken into eight to 12 modules, together with strategies to reduce stress or its causes, to employees via a computer. In the Eisen 2008 study, the education was delivered as prerecorded video via computer software, and the modules could be completed at any time. In the Wolever 2012 study, the education was delivered live on a computer via a virtual classroom with bi‐directional communication between teacher and participants. However, if participants missed the online class, they could watch a video recording of it. Thus, we judged the Eisen 2008 study to have provided an unguided intervention, and the Wolever 2012 study to have provided a guided intervention.

In Eisen 2008, the total education time was one and one‐half to two hours, completed over a two‐week period; in Wolever 2012, it was 14 hours completed over a 12‐week period.

Both studies compared the computer‐based intervention to an in‐person intervention with the same educational content. A teacher delivered the in‐person intervention to small groups of up to 28 people.

Outcomes

Both studies measured stress as an outcome. The Wolever 2012 study used the 10‐item Perceived Stress Scale (PSS). The Eisen 2008 study used two stress outcomes: Subjective Units of Distress Scale (SUDS), and a 12‐item Stress Survey (composite of stress‐related items from Johnson & Johnson Health Care System Insight combined with Health Risk Appraisal survey and Occupational Stress Inventory ‐ Revised Edition). We used the latest available outcome measurement for which sufficient information was available.

Neither of the included studies measured burnout as an outcome, which we had defined as our second primary outcome.

Neither of the included studies measured any of our secondary outcomes.

Follow‐Up

Both studies used short‐term follow‐up (less than one month). The Eisen 2008 study had a one‐month follow‐up, but the authors did not report data from it; thus, we used the post‐intervention data in our quantitative analysis. The Wolever 2012 study had a two‐week follow‐up.

Missing information

We sought and obtained additional information on study details and statistical data from the authors of both included studies. One author of the Eisen 2008 study provided a PhD thesis published in 2005 that reported SDs for outcomes at post‐intervention, and answered questions about study details for assessing risk of bias. However, the authors were unable to provide missing SDs for the Stress Survey instrument at one‐month follow‐up. One author of the Wolever 2012 study answered questions about study details for assessing risk of bias, but was unable to provide SDs. We reported all information obtained via correspondence with authors in the 'Characteristics of included studies' table.

Excluded studies

Of the 75 unique studies we considered at full text, we excluded 73 because they did not meet our inclusion criteria (see 'Characteristics of excluded studies' table). In some cases, studies could have been excluded for more than one reason; in those cases, we listed the highest priority reason for exclusion.

Most commonly, we excluded studies because they compared a computer‐based stress management intervention against a wait‐list control only (e.g. Ruwaard 2007; Billings 2008; Heber 2016; Hammer 2015), or they compared a computer‐based intervention to an active control group that contained different content than the computer‐based intervention (e.g. Cook 2007; Rose 2013; ACTRN12615000574549; Erdman 2015). In other cases, we excluded studies because the computer‐based intervention was not a stress management intervention (e.g. Kawakami 2005; Van Drongelen 2013), or the outcome of the study was not stress or burnout (e.g. Prochaska 2008; Schell 2008; Hasson 2010).

Other, less common reasons for exclusion were: the study was not a randomised controlled trial (e.g. Mackenzie 2014; Zarski 2016); the population was not employees (Wiegand 2010; Drozd 2013); the intervention was not web‐ or computer‐based (e.g. Bragard 2009; Noordik 2013; Baker 2015), or it was not actually a study at all (Hughes 2013).

Risk of bias in included studies

We judged both included studies as being at low risk of bias for random sequence generation and allocation concealment. On the other hand, we judged both included studies as being at high risk of bias due to incomplete outcome data, which was caused by attrition. We also judged the Eisen 2008 study as being at high risk of bias due to selective reporting.

We judged a study to have an overall high risk of bias when we assessed one or more of the key domains (random sequence generation, allocation concealment, selective outcome reporting, and incomplete outcome data) as being at high risk of bias. Consequently, we considered the overall risk of bias to be high in both included studies (Eisen 2008; Wolever 2012).

Figure 2 reports the 'Risk of bias' assessments for the two studies. Details for the judgments can be found in the 'Characteristics of included studies' tables.

Allocation

While the publications only reported that participants were 'randomised', personal communication with the authors provided evidence that both studies used adequate methods for random sequence generation and allocation concealment, warranting a judgement of low risk of bias in these domains.

Blinding

Both Eisen 2008 and Wolever 2012 measured stress by means of self‐reported questionnaires. Such outcome measures can be biased by participants' knowledge and expectations of the intervention, which could lead to an overestimation of the intervention effect. Neither study's authors mentioned blinding as a potential issue. We judged both studies as being at high risk of performance and detection bias due to their use of self‐reported outcome measures.

Incomplete outcome data

We judged both included studies as having a high risk of bias due to incomplete data caused by attrition. In both studies, attrition proportions were statistically significantly different between the two intervention arms analysed in this review (P < 0.05). Both studies had at least one study arm with more than 20% attrition. The study by Eisen 2008 suffered significant dropouts. In that study, 64% of participants from the in‐person group and 88% from the computer‐based group did not complete the intervention, with the majority of the dropouts occurring between randomisation and the start of the intervention. These high dropout rates reduced the effectiveness of randomisation and introduced potential confounding that may explain differences in the intervention effect more than the delivery method variable.

Selective reporting

The authors of the Wolever 2012 study had published a protocol, which they provided to us upon request; the outcomes reported in their study matched those in the protocol and their stated study objectives. Therefore, we judged the risk of bias from selective reporting as low. However, in the Eisen 2008 study, the authors collected data for two stress outcome measures, but we could not calculate an effect size for one of them, because the authors did not publish SDs, nor could they provide them when queried. The authors thus did not fully report the results that were not statistically significant, even though that outcome measure was possibly more appropriate than the statistically significant results they chose to publish. Our sensitivity analysis showed that conclusions differed depending on which outcome measure was selected. Consequently, we judged the study as being at high risk of bias due to selective reporting.

Other potential sources of bias

We judged the Wolever 2012 study as being at a high risk of bias because the study was funded by the owner of the software used in the intervention, which is a clear conflict of interest. We could not identify any other sources of bias in the Eisen 2008 study. Thus, we judged it as being at low risk of bias for this domain.

Presence of co‐interventions

Neither of the two included studies reported that their participants received any other interventions in addition to either the in‐person or computer‐based stress management intervention. Therefore, we judged both studies as being at low risk of bias for this domain.

Treatment fidelity

We judged the Wolever 2012 study as having an unclear risk of bias due to treatment fidelity, because we could not determine if sufficient measures were implemented to ensure the intervention was delivered as planned. We judged the Eisen 2008 study as being at high risk of bias due to problems in treatment fidelity that were caused by technological issues and inadequate monitoring of adherence in the computer‐delivered group.

Effects of interventions

See: Summary of findings for the main comparison Computer‐based interventions compared to in‐person interventions for reducing stress in employees, less than 3 month follow‐up

Both included studies reported stress outcomes such that a higher number indicated a higher amount of stress. To compare results produced with different scales, we used the standardised mean difference (SMD) with a 95% confidence interval (CI) between the intervention and comparison groups as the summary effect measure. Thus, a negative effect measure indicated a lower stress outcome in the intervention group (computer‐based stress management interventions) compared to the comparator group (in‐person stress management interventions with equivalent content) ‐ that is, favouring the computer‐based intervention.

1. Web‐based interventions versus in‐person interventions for reducing stress in workers

1.1 Any stress outcome, follow‐up less than three months

Two studies (159 participants) compared stress levels in employees after a stress management intervention delivered via computer‐based means to equivalent interventions delivered in‐person. Both studies had follow‐up times shorter than one month. Heterogeneity between the results obtained by the two studies was very high (I² = 90%) and the confidence intervals did not overlap. Thus, we did not pool the data in a meta‐analysis. Instead, we reported the results of the studies individually. Eisen 2008 found that stress levels were statistically significantly lower in the in‐person group (0.81 standard deviations, 95% CI 0.21 to 1.41) immediately after the intervention, while Wolever 2012 did not find a clear difference between the groups (‐0.35 standard deviations, 95% CI ‐0.76 to 0.05) one month after the intervention (Analysis 1.1). Table 1 reports the results of the two studies in their original scales.

Table 1. Effect estimates of included studies reported in original outcome scales

| Study, outcome scale | In‐person stress management intervention: mean (SD), n | Computer‐based stress management intervention: mean (SD), n | Effect estimate (95% CI) |
| --- | --- | --- | --- |
| Eisen 2008, SUDS | 21.1 (14.48), n = 48 | 33.3 (16.33), n = 15 | +12.2 (+2.98 to +21.4) |
| Wolever 2012, PSS‐10 | 16.94 (5.7), n = 44 | 14.91 (5.7), n = 57 | ‐2.03 (‐4.32 to +0.26) |

SD = standard deviation; CI = confidence interval; SUDS = Subjective Units of Distress Scale; PSS‐10 = 10‐item Perceived Stress Scale

1.2 Any burnout outcome

Neither of the included studies evaluated the effectiveness of their interventions using a burnout measure.

1.3 Sensitivity analysis: effect of selective outcome reporting and missing data

We conducted a sensitivity analysis to simultaneously assess the impact of two potential sources of bias: 1) the availability of more than one outcome measure for the primary outcome and, correspondingly, 2) selective reporting of data. Eisen 2008 reported the means for a second stress outcome measure (sum of Likert‐type items on the Stress Survey) without SDs. This measure of stress was more clinically similar to that of the Wolever 2012 study in content and follow‐up time, but without SDs it could not be used in a meta‐analysis. However, the authors did report that ANOVAs between the items were non‐significant (P > 0.05). Therefore, we assumed P values from 0.1 to 0.7 (see Analysis 1.2 for an illustration of the results assuming P = 0.50). Using the calculator tool provided in Review Manager 5, this yielded SD estimates ranging from 1.2 to 5.4 for each intervention arm (RevMan 2014; see Table 2). This changed the conclusion of the study from a 95% CI excluding no effect (as presented in Analysis 1.1) to a 95% CI including no effect. The results of our sensitivity analysis showed that the choice of outcome measure and missing data substantially affected the heterogeneity of results when pooled in meta‐analysis: taking the Stress Survey results from Eisen 2008, instead of the SUDS scores reported in our Analysis 1.1, and combining these data with the PSS data obtained from Wolever 2012, led to I² values ranging from 26% to 80%.
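The imputation can be reproduced approximately as follows, assuming an independent two‐sample t‐test with equal SDs (the assumption behind the RevMan calculator); the inputs (n1 = 11, n2 = 21, difference in means = 0.78) are those stated in the footnote to Table 2.

```python
# Back-calculating a pooled SD from an assumed two-sided P value of a
# between-group comparison: P -> critical t -> SE of the difference -> SD.
from math import sqrt
from scipy.stats import t as t_dist

def sd_from_p(p, n1, n2, mean_diff):
    df = n1 + n2 - 2
    t_val = t_dist.ppf(1 - p / 2, df)     # t statistic implied by the assumed P
    se = abs(mean_diff) / t_val           # standard error of the mean difference
    sd = se / sqrt(1 / n1 + 1 / n2)       # pooled SD implied by that SE
    return t_val, se, sd

# Eisen 2008 Stress Survey arms, sweeping the assumed P values from Table 2
for p in (0.1, 0.3, 0.5, 0.7):
    t_val, se, sd = sd_from_p(p, 11, 21, 0.78)
    print(f"P = {p}: t = {t_val:.2f}, SE = {se:.2f}, SD = {sd:.2f}")
```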

Table 2. Sensitivity analysis results: effect of selective outcome reporting by assuming P value to impute missing data

| Assumed P value | t value¹ | SE¹ | SD¹ | I² (meta‐analysis)² | SMD (95% CI), Eisen 2008² |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 1.7 | 0.46 | 1.23 | 80% | 0.62 (‐0.13 to 1.37) |
| 0.3 | 1.05 | 0.74 | 1.99 | 66% | 0.38 (‐0.35 to 1.12) |
| 0.5 | 0.68 | 1.15 | 3.08 | 49% | 0.25 (‐0.49 to 0.98) |
| 0.7 | 0.39 | 2.00 | 5.37 | 26% | 0.14 (‐0.59 to 0.87) |
| Comparison to meta‐analysis (post‐intervention SUDS), Analysis 1.1 | N/A | N/A | N/A | 90% | 0.81 (0.21 to 1.41) |

SD = standard deviation; SE = standard error; SMD = standardised mean difference; SUDS = Subjective Units of Distress Scale

1. Imputed using RevMan 5 for estimating SDs from P values, entering n1 = 11, n2 = 21, difference in means = 0.78

2. Computed using RevMan 5

Discussion

Summary of main results

We included two RCTs with a total of 159 participants in this Cochrane review. Both studies evaluated interventions consisting of education about stress and its causes, as well as strategies to reduce stress, delivered via a computer. Both studies compared computer‐based intervention delivery to interventions with the same content provided to another group of employees by a live instructor. The results were substantially heterogeneous and could not be pooled in a single meta‐analysis. Considering the studies independently, one study found no evidence of a difference between a computer‐based method and an in‐person method. The other study found that a computer‐based method was significantly less effective than an in‐person method. However, another, possibly more appropriate, measure of stress from that same study, which could not be used in this review due to missing data, would most likely also have found no evidence of a difference between a computer‐based method and an in‐person method. We judged both studies as being at high risk of bias overall. According to our judgment, the quality of the evidence produced by these two studies was very low.

Overall completeness and applicability of evidence

We found two studies comparing computer‐based and in‐person stress management interventions at the workplace. Both studies were conducted in the USA and they covered only a narrow range of occupations. Therefore, the generalisability of these findings to other countries and occupations is limited. While our inclusion criteria aimed to capture studies from any location or any occupation, research directly comparing computer‐based and in‐person stress management interventions has not been extensively conducted. In addition, the limited number of studies conducted to date was insufficient to adequately answer our review's objective.

We found studies that used cognitive behavioural and relaxation techniques. There are many other techniques for workplace stress management, such as exercise, goal‐setting, or journaling. The included studies used computers and video. However, there is now a broader range of delivery technologies, such as mobile devices. Therefore, it is unclear if any conclusions from this review would apply to other techniques and delivery technologies. The included studies were either completely unguided or guided. Other forms of guidance and adherence support exist, such as reminders or peer support groups. It is unclear from this review what difference the level of guidance makes when compared to in‐person versions.

Because we limited our review objective to comparing the effectiveness of computer‐based and in‐person interventions, we could not include organizational‐level interventions to reduce stress.

The follow‐up in both included studies was short, so it is uncertain what long‐term effects these interventions may have.

Quality of the evidence

We judged the overall quality of the evidence provided by the included studies to be very low. We downgraded the quality of evidence due to high risk of bias caused by study limitations (serious attrition bias), imprecision (limited sample size and wide confidence intervals), and substantial statistical heterogeneity. Due to this heterogeneity, we were unable to combine the results of the two studies in a meta‐analysis. Substantial statistical heterogeneity does not necessarily mean that the true intervention effects differ; it may instead arise from methodological diversity or differences in outcome assessment. One significant form of bias affecting the results of this review arose from incomplete data due to attrition and selective reporting (see 'Characteristics of included studies' table). Our sensitivity analysis also revealed that the choice of outcome measure and selective reporting in one study significantly influenced the overall estimated effect size.

Due to the small number of included studies, we could not assess publication bias.

The overall outcome of very low‐quality evidence means that there is substantial room for improvement in future studies.

Potential biases in the review process

We used a search strategy that was very broad and put very few restrictions on inclusion (e.g. any language, any workplace, any country, any stress intervention type, any date), which was reflected in the large number of reports identified by our search strategy. We also assessed similar systematic reviews for any mention of possible studies, and did not uncover any studies outside our search. Therefore, we believe it is unlikely we missed any published studies that would meet our inclusion criteria.

We could not obtain some relevant data requested from authors. One study did not publish enough data to compute an effect size for its most relevant outcome. The study authors' conclusion was very different from the one based on our sensitivity analysis in which we computed results from available data. With such a limited number of studies included, one study can have a major influence on the overall conclusions. The addition of future studies will allow us to have a clearer picture of intervention effects.

Finally, this Cochrane review evaluated the effectiveness of a range of interventions aiming to reduce stress in workers, rather than one specific intervention. While we believe this is appropriate for a complex and multifactorial outcome such as stress, it fails to differentiate between the many approaches to stress management in the workplace. We will conduct subgroup analyses in future updates of this review to consider differences in workers or approaches, provided that more data become available. However, any future categorization of techniques in subgroup analyses would be subject to bias from our categorization choices.

Agreements and disagreements with other studies or reviews

While this Cochrane review could not reasonably conclude that the delivery modes were equivalent, a conclusion of no clear differences between computer‐based and face‐to‐face methods in stress management in employees would have agreed with another review that examined internet‐based psychotherapeutic interventions to address any psychological problem in any population (Barak 2008). In that review, a subgroup analysis showed no significant difference in effect size when directly comparing equivalent Internet‐based and face‐to‐face therapies.

Other evidence shows that stress management interventions given via computer‐based technologies can be effective in employees (Heber 2017). Similarly, stress management interventions given in the traditional face‐to‐face way have been shown to be effective as well (Richardson 2008; Bhui 2012). However, it is still unclear how strongly the delivery method alone impacts the effectiveness of the programme. The body of evidence that currently directly compares these two delivery methods is weak. The very low‐quality evidence from this review suggests the differences in effects between the two may be small or non‐significant. Moreover, these differences may be due to other factors.

A key factor in the effectiveness of stress management interventions is the level of engagement and participation (i.e. adherence), which in turn depends on exposure. An effective stress management technique cannot induce change if participants do not engage with and practise it. Computer‐based stress management interventions offer greater accessibility by allowing more employees to access the programme remotely, and they also offer greater flexibility and convenience. However, that flexibility and convenience (e.g. a self‐paced programme without a fixed schedule) can reduce engagement and participation as employees put off the programme in favour of higher priorities with immediate deadlines (as reported in one included study, Eisen 2008). One key difference between the computer‐based interventions in the two included studies was that Wolever 2012 employed a predetermined schedule, while the intervention in Eisen 2008 was pre‐recorded and fully self‐paced. In addition, the Wolever 2012 intervention had a form of human support (i.e. it was guided), whereas the Eisen 2008 intervention was unguided. Attrition was lower and reductions in stress were greater in the Wolever 2012 version of a computer‐based stress management intervention. This result agrees with observations that adding support or guidance to an online stress management intervention improves participation, engagement, practice, and stress outcomes relative to the online intervention alone (Baumeister 2014; Allexandre 2016; Zarski 2016). A systematic review in the field of depression supports this conclusion as well (Richards 2012). Thus, the inclusion of guidance may be an explanatory factor in the differences between the two included studies, and should be investigated further when more research becomes available.

Figure 1. PRISMA study flow diagram

Figure 2. Comparison 1: Computer‐based interventions compared to in‐person interventions for reducing stress, Outcome 1: Any stress outcome.

Analysis 1.1. Comparison 1: Computer‐based interventions compared to in‐person interventions for reducing stress, Outcome 1: Any stress outcome.

Analysis 1.2. Comparison 1: Computer‐based interventions compared to in‐person interventions for reducing stress, Outcome 2: Sensitivity analysis: missing data.

Summary of findings for the main comparison. Computer‐based interventions compared to in‐person interventions for reducing stress in employees, less than 3 month follow‐up

Population: employees
Settings: any workplace
Intervention: computer‐based stress management intervention, less than 3 month follow‐up
Comparison: in‐person stress management intervention

Stress (various measurement instruments)
Illustrative comparative risks* (95% CI): compared with the in‐person intervention, stress with the computer‐based intervention was 0.81 standard deviations higher (0.21 to 1.41 higher) in one study and 0.35 standard deviations lower (0.76 lower to 0.05 higher) in the other. As a guide, 0.2 standard deviations indicates a small effect, 0.5 a medium effect, and 0.8 or more a large effect.
Relative effect (95% CI): data not pooled1
No. of participants (studies): 159 (2 studies)
Quality of the evidence (GRADE): ⊕⊝⊝⊝ very low2

Burnout
None of the studies reported this outcome.

*The corresponding risk (and its 95% confidence interval) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI).
CI: confidence interval; SMD: standardised mean difference

GRADE Working Group grades of evidence
High quality: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate quality: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low quality: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low quality: We are very uncertain about the estimate.

1. Pooling of data was not appropriate due to considerable heterogeneity (I² > 75%).

2. We downgraded the level of evidence once for small sample size and underpowered studies, once for high risk of bias due to incomplete outcome data (high and unequal attrition between interventions), and once more for inconsistency, given the considerable heterogeneity that precluded meta‐analysis.
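The heterogeneity behind footnote 1 can be reproduced from the two reported SMDs and their confidence intervals. The sketch below is a standard Cochran's Q and I² calculation with inverse‐variance weights, with standard errors back‐calculated from the 95% CIs; it is an illustration, not the RevMan implementation itself.

```python
import math

def i_squared(estimates):
    """Cochran's Q and I-squared from (SMD, ci_low, ci_high) tuples.

    Standard errors are back-calculated from the 95% CIs
    (SE = CI width / (2 * 1.96)); weights are inverse-variance.
    """
    smds = [e[0] for e in estimates]
    ses = [(hi - lo) / (2 * 1.96) for _, lo, hi in estimates]
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, smds))
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) * 100  # I-squared as a percentage

# SMDs reported in Analysis 1.1: Eisen 2008 and Wolever 2012
studies = [(0.81, 0.21, 1.41), (-0.35, -0.76, 0.05)]
print(round(i_squared(studies)))  # ≈ 90, i.e. considerable heterogeneity
```

An I² of roughly 90% is well above the 75% threshold cited in the footnote, which is why the data were not pooled.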

Table 1. Effect estimates of included studies reported in original outcome scales

Study, outcome scale    In‐person: mean (SD), n    Computer‐based: mean (SD), n    Effect estimate (95% CI)
Eisen 2008, SUDS        21.1 (14.48), n = 48       33.3 (16.33), n = 15            +12.2 (+2.98 to +21.4)
Wolever 2012, PSS‐10    16.94 (5.7), n = 44        14.91 (5.7), n = 57             ‐2.03 (‐4.32 to +0.26)

SD = standard deviation
CI = confidence interval
SUDS = Subjective Units of Distress Scale
PSS‐10 = Perceived Stress Scale
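The SMDs in Analysis 1.1 can be approximated from the raw means and SDs in Table 1. The sketch below uses Hedges' adjusted g with the usual approximate standard error; minor rounding differences from RevMan's internal formulas are expected.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (Hedges' adjusted g) with 95% CI.

    Group 1 = computer-based, group 2 = in-person; a positive g means
    more stress in the computer-based group.
    """
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction
    g = d * correction
    se = math.sqrt((n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2)))
    return g, g - 1.96 * se, g + 1.96 * se

# Eisen 2008, SUDS: computer-based 33.3 (16.33), n = 15 vs in-person 21.1 (14.48), n = 48
print(hedges_g(33.3, 16.33, 15, 21.1, 14.48, 48))
# close to the 0.81 (0.21 to 1.41) reported in Analysis 1.1

# Wolever 2012, PSS-10: computer-based 14.91 (5.7), n = 57 vs in-person 16.94 (5.7), n = 44
print(hedges_g(14.91, 5.7, 57, 16.94, 5.7, 44))
# close to the -0.35 (-0.76 to 0.05) reported in Analysis 1.1
```

Note the opposite signs of the two estimates, which is the heterogeneity that precluded pooling.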

Table 2. Sensitivity analysis results: effect of selective outcome reporting by assuming P value to impute missing data

Assumed P value   t value1   SE1    SD1    I² (meta‐analysis)2   SMD (95% CI), Eisen 20082
0.1               1.7        0.46   1.23   80%                   0.62 (‐0.13 to 1.37)
0.3               1.05       0.74   1.99   66%                   0.38 (‐0.35 to 1.12)
0.5               0.68       1.15   3.08   49%                   0.25 (‐0.49 to 0.98)
0.7               0.39       2.00   5.37   26%                   0.14 (‐0.59 to 0.87)

Comparison to meta‐analysis (post‐intervention SUDS), Analysis 1.1: I² = 90%, SMD 0.81 (0.21 to 1.41)

SD = standard deviation; SE = standard error; SMD = standardised mean difference; SUDS = Subjective Units of Distress Scale

1. Imputed using RevMan 5 to estimate SDs from P values, entering n1 = 11, n2 = 21, difference in means = 0.78

2. Computed using RevMan
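The imputation in Table 2 follows the standard chain described in footnote 1: a critical t value is derived from the assumed two‐sided P value, the standard error is the mean difference divided by t, and the standard deviation follows from the standard error and group sizes. The sketch below reproduces the first row; the t quantile is found by bisecting a numerically integrated t CDF so that only the standard library is needed (scipy.stats.t.ppf would give the same value).

```python
import math

def t_quantile(p_two_sided, df):
    """Critical t for a two-sided P value, by bisecting the t CDF."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))

    def density(x):
        return c * (1 + x * x / df) ** (-(df + 1) / 2)

    def cdf(x, steps=2000):  # 0.5 + Simpson integral of density from 0 to x
        h = x / steps
        s = density(0) + density(x)
        for i in range(1, steps):
            s += (4 if i % 2 else 2) * density(i * h)
        return 0.5 + s * h / 3

    target = 1 - p_two_sided / 2
    lo, hi = 0.0, 50.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def impute_sd(p, n1, n2, mean_diff):
    """RevMan-style imputation: t from P, SE = MD / t, SD from SE."""
    t = t_quantile(p, n1 + n2 - 2)
    se = mean_diff / t
    sd = se / math.sqrt(1 / n1 + 1 / n2)
    return t, se, sd

# Row 1 of Table 2: assumed P = 0.1, n1 = 11, n2 = 21, MD = 0.78
t, se, sd = impute_sd(0.1, 11, 21, 0.78)
print(round(t, 2), round(se, 2), round(sd, 2))  # ≈ 1.7, 0.46, 1.23
```

The remaining rows of the table follow by substituting the other assumed P values; small last‐digit differences can arise from intermediate rounding.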

Comparison 1. Computer‐based interventions compared to in‐person interventions for reducing stress

Outcome or subgroup title              No. of studies   No. of participants   Statistical method                          Effect size
1 Any stress outcome                   2                —                     Std. Mean Difference (IV, Random, 95% CI)   Totals not selected
2 Sensitivity analysis: missing data   2                128                   Std. Mean Difference (IV, Random, 95% CI)   ‐0.14 [‐0.70, 0.43]
