
Cochrane Database of Systematic Reviews Protocol - Intervention

Offline and computer‐based eLearning interventions for medical doctors' education



Abstract

This is a protocol for a Cochrane Review (Intervention). The objectives are as follows:

The primary objective of this systematic review is to evaluate the effectiveness of offline computer‐based eLearning educational interventions compared to traditional classroom learning and/or other types of eLearning or no intervention, for improving medical doctors’ knowledge, skills, attitudes, satisfaction and patient‐related outcomes. We will assess the cost and cost‐effectiveness of offline computer‐based eLearning, and adverse, unintended, or undesirable effects of the interventions as secondary objectives.

Background

Description of the condition

Many people in low‐ and middle‐income countries (LMICs) suffer from poor quality healthcare services, as a result of deficiencies in healthcare systems and infrastructure, lack of essential medical supplies and a marked shortage of medical doctors (Lindelow 2006; Chen 2010). It is estimated that 43 countries have fewer than 2 medical doctors per 10,000 people (WHO 2015). In 2013, the World Health Organization (WHO) estimated a shortage of approximately 7.2 million healthcare professionals worldwide, and this shortage is expected to reach 12.9 million by 2035 (Campbell 2013). In addition to the limited faculty and institutional resources that contribute to the suboptimal quality of health services in many countries, deficiencies in the knowledge and skills of medical staff (Berendes 2011) are worsened by the widening gap between innovations in the field and the dissemination of that information to the medical professionals who need it to update their knowledge and skills (Pakenham‐Walsh 2009).

Training programmes such as continuing professional development (CPD) and continuing medical education (CME) have been developed to improve the quality of healthcare services by updating the skills and knowledge of medical doctors (Nancy 2008). Evidence suggests that CPD and CME programmes are effective in improving the knowledge, skills and practice of healthcare professionals, as well as patient‐related outcomes (Lawton 2003; Marinopoulos 2007; O'Neil 2009; Thepwongsa 2014). Traditionally, CPD and CME programmes have been delivered through face‐to‐face teaching in the classroom. However, classroom‐based teaching is associated with many obstacles, such as costs and the inconveniences associated with time and distance. CPD and CME can also be delivered via distance learning that employs information and communication technology (ICT), commonly known as eLearning (Masic 2008). eLearning has recently emerged as a plausible and low‐cost solution for addressing the shortage of medical providers, as well as a tool to improve deficiencies in knowledge and skills, by providing convenient access to educational materials and flexibility in terms of pace, place and time (Lam‐Antoniades 2009).

This review is one of a series of systematic reviews assessing the scope for, and potential impact of, a range of eLearning modalities for pre‐ and post‐registration health professionals’ education and training. At the present time, other published protocols in this series are: Gentry 2016; Hervatis 2016; Kononowicz 2016; Paul 2016; and Saxena 2016. This review was originally commissioned by the WHO, Department of e‐Health, Knowledge Management and Sharing. The current and potential users of eLearning have different needs and there are different resources available to deliver eLearning. Therefore, we split our series into individual reviews by categorising them according to the resources required for the eLearning intervention to work, but also according to pedagogical aspects. eLearning, and the reviews in our series, include offline and online computer‐based eLearning, digital game‐based eLearning (DGBL), massive open online courses (MOOCs), virtual reality environments (VREs), virtual patient simulations (VPS), psychomotor skills trainers (PSTs) and mLearning, amongst other approaches (Rasmussen 2014). Each of these types of eLearning has its own specificities, and related advantages, limitations and challenges. Each of the Cochrane reviews in this eLearning series evaluates the efficacy of the type of eLearning in improving skills, knowledge and attitudes of pre‐ and post‐registration health professionals during formal education. The present systematic review will specifically focus on the effectiveness of offline computer‐based interventions compared with various control groups in improving medical doctors' knowledge and skills.

Description of the intervention

eLearning is gaining recognition as one of the key strategic platforms to build strong health education and training systems (Kogozi Kahiigi 2008). The United Nations and the WHO consider eLearning to be an effective means of providing educational development for medical doctors, especially in LMICs (Childs 2005). eLearning encompasses a broad spectrum of interventions, outlined above, which are characterised by their tools, contents, learning objectives, pedagogical approaches and delivery setting. Blended learning, which merges eLearning with traditional classroom learning, might be another suitable option for the training of medical doctors, especially when there is a need to combine practical skills–based training with self–directed learning (Rowe 2012; Duque 2013; Makhdoom 2013).

Whilst online eLearning is widely used for distance learning, this may not always be possible, especially in situations where there is an absence of internet or local area network (LAN) connections. In 2015, 3.2 billion people globally were using the internet (43% of the world's population), of whom 2 billion live in developing countries. However, 4 billion people in developing countries still remained offline, representing two‐thirds of the population residing in those countries (International Telecommunication Union 2015). Therefore, offline eLearning approaches may be the solution to overcoming the geographical, financial and temporal obstacles faced by learners (Greenhalgh 2001).

How the intervention might work

Distance learning, in the form of eLearning, has many advantages over classroom learning. It has the potential to reach a large number of learners at the same time and, in addition, caters to the pace and time constraints of different learners, whilst reducing the overhead costs of the learning process (Bouhnik 2006; Andreazzi 2011; Sezer 2016). eLearning has been employed effectively to establish and improve existing health services in LMICs (Andreazzi 2011) and to improve the diagnostic and therapeutic competencies of medical professionals (Paixão 2009).

As well as providing a solution for those learners faced with geographical, financial and temporal barriers to online eLearning, offline computer‐based eLearning has been found to improve the teaching and learning process and outcomes (Cook 2008). Furthermore, the educational materials can be customised to the objectives of the curriculum, which leads to consistent quality and relevant pedagogy (Greenhalgh 2001). Computers, which have become more affordable over time, can be used to deliver an interactive learning experience by combining text, images, audio and video material (Adams 2004), thereby making offline computer‐based eLearning suitable for delivering educational materials with strong visual, auditory and spatial components (Brandt 2006). Furthermore, offline computer‐based eLearning can be used for self‐assessment of knowledge gained, by practising exercises and undertaking tests (Greenhalgh 2001; Triola 2012).

Possible disadvantages and risks of the intervention

Despite the many aforementioned advantages of offline computer‐based eLearning (Rasmussen 2014), certain disadvantages can be associated with the intervention. As offline computer‐based interventions mostly rely on self‐directed learning, it is not always feasible to monitor the learner’s progress (Bouhnik 2006) or to maintain constant student‐teacher interaction (Cantoni 2004). Therefore, the learner is unable to consult the facilitator in real time when he or she encounters difficulties. Another disadvantage is the limited social interaction and group discussion among students (Cantoni 2004; Bouhnik 2006). In addition, the training materials for offline learning are limited to what is available on external storage devices, and updating this information consistently is often challenging when compared to online eLearning interventions.

Why it is important to do this review

Offline computer‐based eLearning has the potential to contribute to solving the global medical workforce problems described above. It is therefore important to evaluate the effectiveness of offline computer‐based eLearning using an evidence‐based approach, hence the importance of this systematic review.

Previous systematic reviews of offline computer‐based eLearning interventions have highlighted the need for further research on the topic, due to the limited scope of existing evaluations, in terms of duration (short‐term studies only), professional field (focus on dentistry students only) and income level (mostly high‐income countries) (Greenhalgh 2001; John 2013; George 2014). Two reviews that focused on the use of computer‐assisted learning/offline learning were published more than a decade ago (Greenhalgh 2001; Rosenberg 2003). Another review (Rasmussen 2014) that focused on the use of offline eLearning in undergraduate health professional education was a scoping review, and only covered evidence from the year 2000 onwards. In this review we are building on the work presented in Rasmussen 2014, by performing a thorough and systematic search of the relevant literature from 1990 onwards, synthesising all of the available evidence from different sources, and aiming to perform a more thorough analysis.

This review will contribute to addressing the existing evidence gaps by:

  • updating the fast‐growing body of evidence on the topic of eLearning through offline computer‐based educational interventions for medical doctors;

  • evaluating the impact of offline computer‐based educational interventions on the knowledge, skills, attitudes, and satisfaction of medical doctors;

  • evaluating the impact of offline computer‐based educational interventions on patient‐related outcomes;

  • including all relevant and available evidence from developed and developing countries; and

  • being integrated into a series of reviews and a final overview, which will provide a systematic approach to the multiple uses and applications of eLearning, in terms of channels (online and offline computers, simulated environments, games and blended learning) and training stages (pre‐registration, or postgraduate in CPD or CME).

Objectives

The primary objective of this systematic review is to evaluate the effectiveness of offline computer‐based eLearning educational interventions compared to traditional classroom learning and/or other types of eLearning or no intervention, for improving medical doctors’ knowledge, skills, attitudes, satisfaction and patient‐related outcomes. We will assess the cost and cost‐effectiveness of offline computer‐based eLearning, and adverse, unintended, or undesirable effects of the interventions as secondary objectives.

Methods

Criteria for considering studies for this review

Types of studies

We will include randomised controlled trials (RCTs), cluster RCTs (cRCTs) and quasi‐RCTs of offline eLearning for medical doctors, in which efficacy outcomes are expressed in quantitative terms. We will include RCTs with unclear or high risk of bias for sequence generation, but will exclude cross‐over trials due to the high likelihood of a carry‐over effect (Higgins 2011). We will include eligible trials reported in conference proceedings and abstracts. When considering cost‐effectiveness we will only use economic data collected as part of efficacy studies that meet our eligibility criteria.

Types of participants

We will include studies in which participants (learners) are medical doctors (including interns) enrolled in a post‐graduate medical education programme, namely, any type of study after a university qualification that is recognised by the relevant governmental or professional bodies and enables the qualification holder primary entry into, or continuation of work in, the healthcare workforce in a more independent or senior role. This includes learners from medicine, and excludes learners of nursing and midwifery, medical diagnostic and treatment technology, therapy and rehabilitation, pharmacy, and traditional, alternative and complementary medicine.

Participants will not be excluded on the basis of age, sex or any other socio‐demographic characteristic. We will exclude studies with mixed participant groups, such as medical doctors, nurses, pharmacists as well as pre‐ and post‐registration healthcare professionals, in which results are not presented separately.

Types of interventions

We will include studies in which offline computer‐based eLearning interventions were used to deliver learning content. CME and CPD programmes that involve the use of offline eLearning interventions will be included. CME will be defined as "all educational activities which serve to maintain, develop, or increase the knowledge, skills, and professional performance and relationships that a physician uses to provide services for patients, the public, or the profession" (AAMC 2015) and CPD as "a range of learning activities through which health and care professionals maintain and develop throughout their career to ensure that they retain their capacity to practice safely, effectively and legally within their evolving scope of practice" (HCPC 2015).

Offline computer‐based eLearning interventions refer to the use of personal computers (PCs) or laptops to assist in the delivery of stand‐alone multimedia materials, without the need for internet or LAN connections (Ward 2001; Rasmussen 2014). Trials of offline interventions where a computer was not explicitly used but could potentially be used for delivery (e.g. a videotape) will be included. Content can be delivered via videoconferencing, emails and audio‐visual learning material; and kept in either magnetic storage (floppy discs), optical storage (CD/DVD), flash memory, multimedia cards, external hard discs or information pre‐downloaded from a networked connection, as long as the learning activities do not rely on this connection (Rasmussen 2014). This includes studies where the offline eLearning methods are the sole means by which the interventions are delivered, or where the offline eLearning methods are part of a multi‐component intervention (i.e. blended learning), provided that this is the only method of eLearning used alongside classroom learning. If the eLearning intervention could fit into multiple categories (e.g. both offline and DGBL), the study will be excluded from this review and subsequently included in the more focused review categories.

We will only include studies that make one of the following intervention comparisons:

  • offline intervention versus traditional face‐to‐face learning;

  • offline intervention versus no intervention;

  • blended learning (with an offline component) versus traditional face‐to‐face learning;

  • offline intervention versus other types of eLearning interventions;

  • offline intervention versus another offline intervention; and

  • offline intervention versus blended learning.

Types of outcome measures

In this review we will investigate patient‐related outcomes, and intervention user outcomes, such as knowledge, comprehension, intellectual skill and its application, analysis, synthesis, evaluation and attitudes (cognitive and affective domains). Studies of offline eLearning which assess motor skills and skills‐based learning (psychomotor domain) will be excluded, as they are analysed in a separate review in the eLearning series (currently pre‐publication). However, we will include studies assessing intellectually‐based skills, including reading instructions, solving problems and a variety of other tasks that involve recalling and processing information (Clark 2002). We will include studies reporting at least one of the following primary or secondary outcomes.

Primary outcomes

As per previous systematic reviews of eLearning interventions (Rasmussen 2014), we will assess the effectiveness of offline computer‐based eLearning interventions based on the following primary outcomes.

  • Learners’ knowledge, assessed using any validated or non‐validated instrument to measure difference in pre‐ and post‐test scores. If several post‐test results are available, data as to when those tests were conducted will be recorded and the difference between the pre‐test and the first post‐test will be used for the analysis. Other tests will be used for the sensitivity analysis.

  • Learners’ skills, measured using any validated or non‐validated instrument (e.g. pre‐ and post‐test scores, time to perform a procedure, number of errors made whilst performing a procedure). Using Miller's classification of clinical competence (Miller 1990), the different types of tests for students' knowledge and skills will be grouped and analysed together. For example, multiple choice questions (MCQs) assessing knowledge (i.e. knows) will be analysed together, and essay questions assessing competence (i.e. knows how) will be analysed together. The focus will therefore be on the testing method rather than the delivery method (i.e. if skills were assessed by a knowledge test, the outcome would be categorised as knowledge).

  • Patient‐related outcomes of the eLearning intervention measured using validated or non‐validated instruments. Patient‐related outcomes will include:

    • clinical outcomes, such as improvement of health; and

    • service establishment, such as training to establish a vaccination program.

  • Learners' satisfaction with the eLearning intervention measured using validated or non‐validated instruments. Learners' satisfaction will include the satisfaction and attitudes towards the intervention they were exposed to. Learners' professional attitudes and satisfaction will only be assessed in a narrative form, as preliminary work conducted by the Global eHealth Unit suggests that there is a high level of heterogeneity in the operational definition of these outcomes across different studies (Campbell 2013; Rasmussen 2014).

Secondary outcomes

  • Cost and cost‐effectiveness of the intervention.

  • Adverse, unintended, or undesirable effects of the interventions. This may include, but is not limited to, feelings of depression and loneliness, dropout risk, and computer‐induced anxiety.

We will not impose a restriction on outcomes based on the time point at which they are measured; however, where the difference between pre‐ and post‐test scores is calculated and several post‐test results are available, the difference between the pre‐test and the first post‐test will be used.

Search methods for identification of studies

Electronic searches

We will search the following databases:

  • Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library)

  • MEDLINE (via Ovid)

  • Embase (via Ovid)

  • Web of Science

  • Educational Resource Information Centre (ERIC) (via Ovid)

  • PsycINFO (via Ovid)

  • CINAHL (via EBSCO)

  • ProQuest Dissertation and Theses Database

The MEDLINE search strategy is presented in Appendix 1, and will be adapted for the other databases. This strategy will be used to search for studies eligible for the whole series of reviews assessing the use of eLearning modalities for pre‐ and post‐registration health professionals’ education and training. Databases will be searched from, and including, the year 1990 to the present. We selected 1990 as the starting year because, prior to then, the use of computers and the internet was limited to very basic tasks. We will search in English, but will include papers in any language.

Searching other resources

We will search the reference lists of all included studies and relevant systematic reviews. We will also search the International Clinical Trials Registry Platform Search Portal and Current Controlled Trials metaRegister of Controlled Trials to identify unpublished trials, and contact the relevant investigators for further information.

Data collection and analysis

Selection of studies

We will implement the search strategy as described under the ‘Electronic searches’ section and import all references identified to EndNote X7.4. The search results from the different electronic databases will be combined in a single EndNote library and duplicate records of the same reports will be removed. Two review authors will independently screen titles and abstracts to identify studies that potentially meet the inclusion criteria. The full‐text versions of these articles will be retrieved and read in full. Finally, two review authors will independently assess these articles against the eligibility criteria. Any disagreements will be resolved through discussion between the two authors. If no agreement can be reached, a third author will act as arbiter. Studies that initially appeared to be relevant but are excluded at this stage will be listed in the table ‘Characteristics of excluded studies’, where a reason for exclusion will be noted. Two review authors will verify the final list of included studies.

Data extraction and management

Two review authors will independently extract and manage the data for each of the included studies using a structured data recording form. We will pilot the data extraction form and amend it according to the received feedback. In addition to the usual information on study design and participant demographics, we will extract data on relevant fields, including the type of device used, delivery channel/method (CD‐ROM, external hard disc, USB stick etc.), type of content (video, text, images, etc.), and mode of offline eLearning (active or passive, linear or dynamic). We plan to contact study authors in the case of any unclear or missing information. Disagreements between review authors will be resolved by discussion. A third review author will act as an arbiter in cases where disagreements cannot be resolved.

For economic analysis, we will separately extract estimates of resource use associated with interventions and comparators, and estimates of their unit costs. We will also collect information on price per year and the currency used to calculate estimates of costs and incremental costs. We will aim to collect measures of incremental resource use and costs at the individual student level. We will extract both a point estimate and a measure of uncertainty (e.g. standard error or confidence interval (CI)) for measures of incremental resource use, costs and cost‐effectiveness, if reported. Additionally, we will also collect details of any sensitivity analyses undertaken, and any information regarding the impact of varying assumptions on the magnitude and direction of results.

Assessment of risk of bias in included studies

Two review authors will independently assess the risk of bias of the included studies using the Cochrane Collaboration’s ‘Risk of bias’ tool (Higgins 2011). Studies will be assessed for risk of bias in the following domains: random sequence generation; allocation concealment; blinding (participants, personnel); blinding (outcome assessment); completeness of outcome data (attrition bias), selective outcome reporting (reporting bias); and other sources of bias (e.g. baseline imbalance, inappropriate administration of an intervention, and contamination). For cRCTs, we will also assess and report on the risk of bias associated with an additional domain: selective recruitment of cluster participants. Judgements concerning the risk of bias for each study will fall into three categories: high, low, or unclear risk of bias. We will incorporate the results of the 'Risk of bias' assessment into the review using 'Risk of bias' tables, graphs and summaries.

We will also critically appraise any included economic evidence using the British Medical Journal Checklist for authors and peer reviewers of economic submissions (Drummond 1996).

Measures of treatment effect

For continuous data, we will estimate mean differences and 95% CIs for each study. For dichotomous outcomes, we will calculate risk ratios (RR) and 95% CIs. Where possible, all data will be analysed on an intention‐to‐treat basis.
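As an illustration of these effect measures, the sketch below (in Python, with hypothetical function names) computes a risk ratio and a mean difference with 95% CIs from the standard inverse‐variance formulas; it is a minimal sketch of the calculations RevMan performs, not part of the protocol's analysis software:

```python
import math

Z = 1.96  # critical value for a 95% confidence interval

def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio with 95% CI from 2x2 summary data,
    using the standard error of log(RR)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    return rr, (math.exp(math.log(rr) - Z * se),
                math.exp(math.log(rr) + Z * se))

def mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Mean difference with 95% CI for a continuous outcome,
    from per-arm means, standard deviations and sample sizes."""
    md = m1 - m2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return md, (md - Z * se, md + Z * se)
```

For example, 10/100 events in the intervention arm versus 20/100 in the control arm gives a risk ratio of 0.5, with a CI that reflects the small event counts.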

Unit of analysis issues

When a study has more than one active intervention arm, we will label the study arms as ‘a’, ‘b’, ‘c’ and so on. If more than one intervention arm is relevant for a single comparison, we will compare the relevant eLearning arm with the least active control arm, so that double counting of data does not occur. We will list the arms that were not used for the comparison in the 'Notes' section in the table ‘Characteristics of included studies’.

For cRCTs, we will attempt to obtain data at the student level. Where the effects of clustering have already been adjusted for, we will extract the reported estimates and use them directly for our analyses. Otherwise, we will check for unit of analysis errors; and if such errors are found, and sufficient information is available, we will reanalyse the data using the appropriate unit of analysis, by taking account of the intra‐cluster correlation coefficients (ICCs). We will obtain estimates of ICCs by imputing them using estimates from external sources. If it is not possible to obtain sufficient information to reanalyse the data, we will report effect estimates and annotate ‘unit of analysis error' (Ukoumunne 1999).
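The approximate correction for unadjusted cluster trials can be sketched as follows: each arm's sample size is divided by the design effect, which depends on the average cluster size and the ICC. This is a minimal illustration of the standard formula, assuming both quantities are available (the function name is ours):

```python
def effective_sample_size(n, avg_cluster_size, icc):
    """Reduce a cluster-randomised arm's sample size by the
    design effect, DE = 1 + (m - 1) * ICC, where m is the
    average cluster size."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n / design_effect
```

For instance, with 300 participants in clusters of 30 and an imputed ICC of 0.05, the design effect is 2.45, so the arm contributes roughly 122 "effective" participants to the analysis.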

Dealing with missing data

We will conduct an intention‐to‐treat analysis, including all those who were randomised to either the intervention group or comparator, regardless of losses to follow‐up and withdrawals. Whenever possible, we will attempt to obtain missing data (e.g. number of participants in each group, outcomes and summary statistics) from the original author(s). For dichotomous outcomes, data imputed as ‘complete case analysis’ can be used to fill in the missing values (Higgins 2008). When standard deviations of continuous outcome data are missing, we will try to calculate the standard deviations from relevant statistics, such as 95% CIs, standard errors, or P values. If these are unavailable, then we will contact study authors or impute the standard deviations from other similar studies.
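The reconstruction of missing standard deviations from other reported statistics follows simple algebraic relations; the sketch below (with illustrative function names) covers the two most common cases, a reported standard error and a 95% CI around a mean:

```python
import math

def sd_from_se(se, n):
    """Standard deviation recovered from a reported
    standard error of the mean: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n):
    """Standard deviation recovered from a 95% CI around a
    mean: SE = (upper - lower) / (2 * 1.96), then SD = SE * sqrt(n)."""
    return math.sqrt(n) * (upper - lower) / (2 * 1.96)
```

Recovery from a P value proceeds analogously, by first converting the P value to a test statistic and then to a standard error.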

Assessment of heterogeneity

Where studies are judged homogeneous enough (in terms of their populations, interventions, comparator groups, outcomes and study designs) to draw meaningful conclusions when combined, we will consider pooling them together in a meta‐analysis. If such meta‐analyses are feasible, we will assess heterogeneity through a visual inspection of the overlap of forest plots and by calculating the Chi² test and the I² inconsistency statistic (Higgins 2011). If there is a moderate level of heterogeneity (I² = 50% to 75%), we will explore possible reasons for inconsistency by conducting subgroup analysis. Where substantial clinical, methodological or statistical heterogeneity (I² > 75%) is detected, we will not present pooled data, but will use a narrative approach to data synthesis instead. In this event, we will attempt to explore possible clinical or methodological reasons for this variation by grouping studies that are similar in terms of populations, intervention features, methodological features, or other factors to account for differences in intervention effects (described below).
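The heterogeneity statistics referred to above can be computed from per‐study effect estimates and variances; this is a minimal sketch of Cochran's Q and the derived I² statistic, using inverse‐variance weights (function name is ours):

```python
def i_squared(effects, variances):
    """Cochran's Q and the I-squared inconsistency statistic
    from per-study effect estimates and variances."""
    w = [1 / v for v in variances]                      # inverse-variance weights
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # as a percentage
    return q, i2
```

I² expresses the proportion of total variation across studies that is due to heterogeneity rather than chance, which is why the 50% and 75% thresholds above can be applied regardless of the effect measure used.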

Assessment of reporting biases

We will assess reporting bias qualitatively, based on the characteristics of the included studies (e.g. small studies with positive findings), and on whether information obtained from contacting experts and study authors suggests the existence of relevant unpublished studies. If we include at least 10 studies, we will assess reporting bias statistically, using a funnel plot and a regression weighted by the inverse of the pooled variance. A regression slope of zero will be interpreted as an absence of small‐study bias.
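The weighted regression described above amounts to a weighted least‐squares fit of effect estimates against a measure of study size or precision; this generic sketch (not tied to any one published small‐study test, and with an illustrative function name) shows the slope calculation:

```python
def weighted_slope(x, y, weights):
    """Weighted least-squares slope of study effect estimates (y)
    on a study-size/precision measure (x); a slope near zero is
    consistent with an absence of small-study bias."""
    sw = sum(weights)
    xbar = sum(w * xi for w, xi in zip(weights, x)) / sw
    ybar = sum(w * yi for w, yi in zip(weights, y)) / sw
    num = sum(w * (xi - xbar) * (yi - ybar)
              for w, xi, yi in zip(weights, x, y))
    den = sum(w * (xi - xbar) ** 2 for w, xi in zip(weights, x))
    return num / den
```

If the effect estimates do not vary systematically with study size, the fitted slope is zero, matching the interpretation stated above.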

Data synthesis

Data will be reported using Review Manager software (RevMan 2014). Extracted data will be entered into tables grouped by study design and type of intervention to create a descriptive synthesis. The results of individual RCTs and cRCTs will be reported as mean differences for continuous variables and risk ratios for dichotomous variables with a 95% CI, as stated above. Using Miller's classification of clinical competence (Miller 1990), the different types of tests for learners' knowledge and skills will be grouped and analysed together. For example, multiple choice questions (MCQs) assessing knowledge (i.e. knows) will be analysed together, and essay questions assessing competence (i.e. knows how) will be analysed together. The focus will therefore be on the testing method rather than the delivery method (i.e. if skills are assessed by a knowledge test, it will be categorised as knowledge).

For learners' attitudes, the different types of assessment will be grouped and analysed as cognitive attitudes, behavioural attitudes or affective attitudes as described by Martin et al (Martin 2002). Learners’ satisfaction will include the satisfaction with and attitudes towards the learning intervention to which they were exposed. Learners’ attitudes and satisfaction will only be assessed in a narrative fashion, as the preliminary work conducted by the Global eHealth Unit suggests that there is a high level of heterogeneity in the operational definition of these outcomes across different studies (George 2014; Rasmussen 2014).

If adverse, unintended, or undesirable effects are reported numerically, for example, the number of participants that dropped out due to social isolation, anxiety from using the computer and/or mean anxiety score, we will analyse the adverse effects using risk ratios for dichotomous outcomes; however for continuous variables we will use mean difference. In the case that adverse, unintended, or undesirable effects are not expressed numerically, we will narratively synthesise the evidence. We also intend to narratively synthesise economic evidence and to identify key determinants of resource use, costs and/or cost‐effectiveness. We will then analyse how these determinants may be distributed within and between different settings to allow readers to determine the implications of findings of presented data in their own settings and to inform their particular context‐specific decisions. We will follow the Cochrane Handbook guidance on incorporating economic evidence in systematic reviews (Higgins 2011).

Where studies report more than one measure for each outcome, we will use the primary measure as defined by the primary study authors in the analysis. Where no primary measure has been reported, a mean value of all the measures for the outcome will be calculated and used in the analysis. The choice of model used to pool data will depend on the level of heterogeneity observed (assessed as described in the Assessment of heterogeneity section) between studies to be included in the meta‐analysis. Where trials and outcomes appear to be homogeneous, we will use fixed‐effect models; however, if trials appear to be heterogeneous, then we will use random‐effects models, which provide a more conservative estimate of effect and can be used where there is moderate heterogeneity.
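The fixed‐effect and random‐effects pooling described above can be sketched in a few lines: both are inverse‐variance weighted averages, with the random‐effects model adding a between‐study variance (tau², here via the DerSimonian‐Laird method, which is the default in RevMan) to each study's variance before weighting. The function name is illustrative:

```python
def pool(effects, variances, model="fixed"):
    """Inverse-variance pooled estimate; model='random' adds the
    DerSimonian-Laird between-study variance tau^2 to the weights."""
    w = [1 / v for v in variances]
    if model == "random":
        pooled_fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
        q = sum(wi * (ei - pooled_fe) ** 2 for wi, ei in zip(w, effects))
        df = len(effects) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                 # truncated at zero
        w = [1 / (v + tau2) for v in variances]       # re-weight
    return sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
```

Because tau² inflates every study's variance, the random‐effects weights are more similar to one another, widening the CI and yielding the more conservative estimate noted above.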

Subgroup analysis and investigation of heterogeneity

We will conduct the following subgroup analyses in this review where possible.

  • By countries’ income (LMICs versus high‐income countries).

  • By sub‐speciality, using the International Standard Classification of Occupations (ISCO‐08).

  • By number of repeated interventions (one‐off versus repeated interventions).

  • Comparing quartiles of adherence/time spent on the intervention. We will recalculate and present the measure of adherence/time spent on the intervention as a percentage to account for the difference in intervention duration between studies.

We acknowledge that there are further subgroup analyses that could be performed, e.g. comparing interventions according to learning objectives and interactivity of interventions. In future reviews conducted after completion of our series of initial reviews, authors will be in the best position to look at these subgroup analyses because such comparisons would be most meaningful from the perspective of an educator if multiple methods of eLearning were to be compared.

Sensitivity analysis

Sensitivity analyses will be considered to explore the impact of the 'Risk of bias' dimensions on the outcomes of the review. We will remove studies from the analysis that are deemed to be at high risk of bias after examination of individual study characteristics. To examine the effect on the pooled effects of the intervention we will exclude studies according to the following filters.

  • High risk of bias studies (we will include RCTs with unclear or high risk of bias for sequence generation, however where meta‐analysis is feasible and appropriate we will conduct sensitivity analyses excluding these trials to examine the robustness of the meta‐analysis results to methodological limitations of the included studies).

  • Small studies (less than 30 participants per randomisation group).

  • Source of funding: industry sponsorship (solely industry funded) or mixed sponsorship (public and industry funded, including the free provision of study material only).

  • Time lapse between end of intervention and first post‐test (quartiles) as well as last post‐test.

  • If studies compared more than one offline eLearning intervention or blended learning intervention to traditional learning, we will perform a sensitivity analysis to assess the impact of successively replacing the results of each intervention group on the measure of effect. Additionally, we will average the mean scores for each intervention group and use this average in the meta‐analysis. We will then compare the difference between the two approaches.

'Summary of findings' table

We intend to prepare 'Summary of findings' tables to present results of the most relevant comparisons, as described in chapter 11 of the Cochrane Handbook (Schünemann 2011). We will present the results of meta‐analysis for each of the primary outcomes, as well as potential adverse effects, as defined in the Types of outcome measures section. We will provide a source and rationale for each assumed risk cited in the table(s). We will consider the following limitations: risk of bias, inconsistency of results, indirectness of the evidence, imprecision, and publication bias, and subsequently downgrade the quality of evidence contributing to outcomes where appropriate. Two authors will apply the GRADE criteria independently of one another to assess the quality of evidence using the GRADEprofiler (GRADEpro) software (Schünemann 2011). If meta‐analysis is not feasible, we will present results in a narrative ‘Summary of findings’ table format, as demonstrated by Chan 2011 (see also Cochrane 2014).