Literature reviews of educational technology research in low and middle-income countries: an audit of the field


List of figures
Figure 1. Number of records returned via Scopus based on the query "a systematic review", focussing upon journal papers (reviews and articles) within the Social Sciences (orange markers, left-hand axis). Note that 2019 is incomplete (search undertaken 10th October 2019). For comparison, the number of articles indexed in Scopus for Social Sciences as a whole (grey markers, right-hand axis).
Figure 2. Number of records returned via Scopus based on the query "systematic review" AND ( "educational technology" OR "edtech" OR "e-learning" OR "elearning" OR "technology enhanced learning" ), focussing upon journal papers (reviews and articles) within the Social Sciences. Note that 2019 is incomplete (search undertaken 10th October 2019).
Figure 3. Research databases and other data sources used within the sampled documents, arranged according to broader subject area. Number of occurrences in brackets.

List of tables
Table 2. Parameters for the inclusion and exclusion of literature. Numbers in parentheses indicate how many of the papers in the sample used the database.
Table 3. Research questions aligned with appropriate approaches to systematic reviews.
Table 4. Inclusion and exclusion criteria.
Table 5. Research databases and other data sources within the sample. Number of occurrences in brackets.
Table 6. Broad subject areas of literature sources.

Abstract
One of the overall objectives of the EdTech Hub is to conduct a series of literature reviews on the state of educational technology in primary and secondary school settings within low and middle-income countries (LMICs). Given the variety of approaches which can be considered as 'educational technology' and the range of settings which are LMICs, the scale of the task presents an initial challenge. Furthermore, it would be valuable to design the initial literature search in such a way that would subsequently support detailed, systematic reviews on particular themes or topics depending upon trends within the body of literature.
In order to learn from existing related studies and inform the practical direction of the literature review, a collection of documents was examined and analysed. The collection included seven methodological documents about conducting systematic reviews, and 15 recent systematic reviews, which addressed topics related to the focus of the Hub (including a range of EdTech-related topics or education for development, for example).
In this report we have two objectives : 1. Summarise methodologies for systematic literature reviews in the field of educational technology in LMICs. 2. Provide specific methodological recommendations on conducting a systematic literature review of the state of research on educational technology in LMICs.
To investigate systematic literature reviews in the field of interest (Objective 1), insights were drawn from an analysis of the sample of documents. The papers selected for inclusion were chosen either because they were existing literature reviews relevant to our theme of EdTech in LMICs, or because they were analyses of specific literature review methodologies. The papers were mapped onto a framework according to their methodological stance, approaches to data gathering, and data analysis.
This paper also discusses the implications of the analysis in relation to the work of the EdTech Hub, and how to translate the findings of the analysis into practical considerations for addressing the Hub's research questions through a systematic literature review (Objective 2). As such, this report also represents a case study in planning a literature review in this context, which may be a useful resource for others intending to undertake similar reviews in the future.

Introduction
The EdTech Hub is a recently instituted initiative to undertake the world's largest ever educational technology research and innovation programme. Run through a collaborative partnership of research institutions and management consultancies, it will explore how educational technology might be used to enhance educational outcomes in low and middle-income countries (LMICs) around the world. Within those countries, special focus will be given to the most marginalised groups: economically, socially and otherwise.
Whilst broad overarching goals and methodological approaches are already defined through the Hub's research questions and methodological stance, there remain more specific methodological questions about how we might begin answering our research questions. Among those unanswered questions is: what is the best way to synthesise and build upon existing information relevant to the Hub's aim and research questions? Our chosen approach is to conduct a rigorous and comprehensive literature review so that we have a foundation upon which to conduct further research on EdTech in LMICs.
Given the wide scope, a two-stage process will be used: initially, a large-scale scoping review will provide a breadth of understanding of the EdTech literature in LMICs, followed by focused systematic reviews of particular topics and themes. However, no protocol currently exists on how to conduct such a review. The current paper seeks to begin addressing that gap. In the following sections, we will present the process through which we arrived at a protocol for conducting a large-scale literature review and subsequent systematic reviews on EdTech in LMICs. The resultant protocol will be published as a separate document; however, it is hoped that the steps presented herein, on how we arrived at that protocol, may be useful for other researchers seeking to develop protocols in other fields where none currently exist, or indeed seeking to do a systematic review of EdTech in LMICs themselves.
The presentation of our process for arriving at a protocol begins in Section 1 with a presentation of the Hub's guiding research questions, followed by an analysis of the use of systematic reviews in the Social Sciences and more explanation of why we thought it necessary to develop our new protocol. Section 2 begins the examination of 22 relevant systematic reviews and systematic review methodologies that informed the development of our protocol. This examination is continued in Section 3 with an analysis of the methodological approaches used in those papers. The final section focuses on key takeaways from our analyses of those 22 reference documents.

Despite the trend towards conducting systematic literature reviews in EdTech, there is no agreed protocol for conducting them in the field, particularly within LMIC contexts. This contrasts with Health research and the Biomedical Sciences, from which systematic reviews originate, and which already have established, rigorous review protocols. As such, there was an immediate question as to how we should tailor and adapt our approach and protocol to the research context at hand, particularly as there are few existing systematic reviews in the same field for us to draw upon. To address the question of what we could learn from the field to inform our approach and protocol development, we performed an audit of 22 existing literature reviews and review methodologies with foci related to those of the Hub.
The present document therefore has two objectives: 1. Summarise methodologies for systematic literature reviews in the field of educational technology (EdTech) in low and middle-income countries; and 2. Provide specific methodological recommendations in conducting a systematic literature review of the state of research on educational technology (EdTech) in low and middle-income countries.
The remainder of the document is structured around three main sections. First, the sample of systematic reviews included in the analysis is introduced (Section 2). Trends which emerged from the analysis across the sample of papers are discussed (Section 3), and finally, some of the practical implications of the trends for designing the systematic review approach in the context of the work of the EdTech Hub are explored (Section 4). While the primary aim of the analysis is to inform our research design, by providing this as a 'worked example', the principles will also be valuable to others seeking to conduct similar analyses in this area.

Existing literature
To investigate systematic literature reviews relating to the topic of EdTech in LMICs, insights were drawn from an analysis of a sample of related documents. As few existing studies have focused explicitly on the combined fields of educational technology and LMICs, a well-defined corpus of studies to draw from was not available. Conversely, an exhaustive review of both bodies of literature relating separately to systematic reviews within educational technology and LMICs would have been too wide a scope.
A pragmatic solution was to draw upon the expertise present within the network of the EdTech Hub to construct a sample of documents which reported recent reviews undertaken on topics related to the scope of the EdTech Hub. These 22 documents (summarised in Table 1) were identified through a combined approach of discussions with EdTech Hub researchers, reviewing known publications from the EdTech Hub project proposal, and searching the British Journal of Educational Technology for relevant systematic reviews. The documents shown in Table 1 were examined in terms of their data sources and data analysis approaches. Additionally, a number of methodology-focused resources were also used, to examine the range of potentially appropriate literature review methodologies. Documents in the sample include:
• Mobile learning and student cognition: A systematic review of PK-12 research using Bloom's Taxonomy.
• ⇡Effective Public Health Practice Project (1998): Quality assessment tool for quantitative studies.
• ⇡Evans, & Popova (2015): What really works to improve learning in developing countries? An analysis of divergent findings in systematic reviews.
• School resources and educational outcomes in developing countries: A review of the literature from 1990 to 2010.
• ⇡Haßler, et al.: Tablet use in schools: A critical review of the evidence for learning outcomes.
• Guidelines for performing systematic literature reviews in software engineering (2007).
• What public media reveals about MOOCs: A systematic analysis of news reports.
• Games for enhancing basic reading and maths skills: A systematic review of educational game design in supporting learning by people with learning disabilities.
• The efficacy of learning analytics interventions in higher education: A systematic review.
• What have we learned after ten years of systematic reviews in international development?
• How to do a good systematic review of effects in international development: A tool kit.
• ⇡White, & Waddington (2012): Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews.
The documents were mapped onto a framework within a spreadsheet which comprised the following fields: • Methodology, i.e. the type of literature review approach used; • Data gathering, in relation to the strategies employed, the specific databases included in the searches, and the inclusion and exclusion criteria; and • Data analysis, including approaches and software used.
In drawing lessons and recommendations regarding methodological considerations from these papers, a saturation point was reached where findings were repeated ( ⇡Morse, 2004 ) and a set of key criteria were developed for planning a large-scale literature review.

Types of data analysis approaches
Four approaches to systematic reviews were identified from methods-based papers within the literature: meta-analysis, vote counting, narrative review, and meta-synthesis ( ⇡Evans, & Popova, 2015; ⇡Siddaway, et al., 2019 ). Note that while systematic reviews have their roots in the Biomedical Sciences, three of the approaches are qualitative in nature (meta-analysis being quantitative). While the systematic review component will take place after the initial scoping literature review, it is useful to be aware of the types of systematic reviews at this stage in order to ensure that the initial search is conducted in a way which supports subsequent systematic analysis.

Meta-analysis
A meta-analysis converts the results of all the included studies to standardised point estimates and then pools the estimates within a category of interventions to estimate the average effect of that category. ⇡Glewwe and colleagues present an example within the sampled documents ( ⇡Glewwe, et al., 2013 ).
Key strengths: • Incorporates the data that vote counting excludes (e.g., effect size); • Increases statistical power by pooling across smaller studies; • Allows controls for the quality of studies or other moderating factors; • Useful when bringing together many studies that have empirically tested the same hypothesis.
Areas of weakness: • Studies that fail to report certain elements of underlying data may be excluded, despite being of high quality; • Does not explore the mechanisms behind effective interventions; • Labour intensive.
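As a concrete illustration of the pooling step, the following is a minimal fixed-effect (inverse-variance) meta-analysis sketch; the effect sizes and standard errors are invented for illustration, not drawn from the sampled reviews.

```python
import math

# Fixed-effect (inverse-variance) pooling: the core arithmetic of a
# meta-analysis. The (effect size, standard error) pairs below are
# invented placeholders, not results from the reviewed studies.
studies = [(0.30, 0.10), (0.15, 0.08), (0.45, 0.20)]

weights = [1 / se ** 2 for _, se in studies]  # more precise studies weigh more
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f}, 95% CI half-width = {1.96 * pooled_se:.2f}")
```

With these illustrative numbers the pooled estimate is 0.23, and its standard error (about 0.06) is smaller than that of any single study, which is the statistical-power gain noted among the strengths above.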

Vote counting
Vote counting shows the pattern of significant and insignificant positive and negative impacts across categories of studies and draws inferences from that. This involves assigning one of three outcomes (positive, negative, no relationship) to each study in a review based on the statistical significance of a study's outcomes. The hypothesis is supported if a large proportion of studies find a statistically significant effect. For example, ⇡Karabulut-Ilgu and colleagues arranged reviewed studies according to whether their findings support hypotheses, as part of a wider review focused on trends in the literature ( ⇡Karabulut-Ilgu, et al., 2018 ).
Key strengths: • Effectively captures patterns of statistical significance; • Effectively captures the amount of evidence (i.e., number of studies) for a given class of interventions; • Can incorporate all relevant studies (not limited by the particular statistics reported); • Transparent.
Areas of weakness: • Ignores sample size and effect size, and so may overemphasise small significant effects at the expense of large effects that narrowly miss the significance cut-off; • Can yield misleading results if some studies are underpowered; • Performs poorly as the number of studies increases.
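The vote-counting logic described above reduces to a simple tally; the outcome labels below are invented for illustration.

```python
from collections import Counter

# Vote counting in miniature: each study is classified by the direction
# and significance of its result, then the classifications are tallied.
# These outcomes are hypothetical examples, not data from the sample.
outcomes = ["positive", "positive", "no relationship", "negative",
            "positive", "no relationship", "positive"]

votes = Counter(outcomes)
hypothesis_supported = votes["positive"] > len(outcomes) / 2  # simple majority rule
print(dict(votes), "supported:", hypothesis_supported)
```

Note that the tally never sees sample sizes or effect magnitudes, which is exactly the weakness listed above.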

Narrative review
A narrative review examines the evidence qualitatively, usually discussing study by study, and then infers conclusions.

Meta-synthesis
The aim of a meta-synthesis is to synthesise qualitative studies on a topic in order to locate key themes, concepts, or theories that provide novel or more powerful explanations for the phenomenon under review ( ⇡Noblit & Hare, 1988; ⇡Paterson et al., 2001; ⇡Thorne et al., 2004 ). Examples within the sample which align with meta-synthesis include ⇡Bozkurt, Koseoglu and Singh (2018).
Key strengths: • Interpretive rather than deductive/aggregative (meta-analysis being aggregative); • Suited to assessing qualitative research; • Helps to understand and explain phenomena.
Areas of weakness: • Difficult to determine which "quality" studies to include in the review; • Relatively new methodology, not yet widely accepted.

Sourcing data
In relation to sourcing data for literature reviews, three areas were addressed in the analysis: 1. The types of search strategies employed; 2. Specific databases included in the searches; 3. Inclusion and exclusion criteria.

Search strategies
Manual database searches were the most prevalent form of searching used in the sample (featuring in 15 of the papers). Automated database searches were also used, but less frequently (four papers); this may reflect the currently limited availability of Application Programming Interfaces (APIs), and automated searching is likely to become more prevalent in the future. Other strategies included consulting experts (three instances), snowballing or looking at reference lists (three instances), exploring grey literature (two instances), and pearl-growing techniques (one instance).

Databases
Databases and other data sources, such as academic journals and institutional repositories, were listed in the papers dealing specifically with literature reviews (19 papers). The databases and their frequency, grouped according to subject area, are shown in Figure 3 (note that the full data is shown in table form in the Annex). This is not an exhaustive list, but it does reflect the most commonly searched sources used by researchers investigating EdTech in LMICs. These sources were categorised as 'high' (more than five papers), 'medium' (between two and four papers), and 'low' (one paper) frequency, based on the frequency with which they were referenced in the 19 papers. The literature search sources referenced in the 22 literature review documents span a wide range of subject areas. Notably though, many of the high and medium frequency sources are broad (general) in the subject areas they cover. Two source subject areas closely match the overall envelope of the Hub's research focus ('education' and 'international development').

Steps for screening sourced literature
Inclusion and exclusion parameters must balance the need to cast a wide research net against the need to limit results to relevant literature. Given the wide scope of the initial literature review, even careful inclusion criteria will likely yield a large volume of results; identifying the most relevant of those results for the present project is a key reason for exploring what has been done in previous literature. Setting and clearly articulating the inclusion criteria is also central to the rigour of systematic reviews as a methodology, as it facilitates the reproducibility of approaches and results.
In the analyses of existing literature reviews under the theme EdTech in LMICs, common criteria were recorded for the inclusion and exclusion of literature. These parameters for inclusion and exclusion were categorised by frequency of use among the reviewed papers. Parameters were designated as "often considered" if used in six or more of the studies; the category "sometimes considered" included those used between two and five times; while those referenced once were categorised as "rarely considered". Note that these categorisations indicate frequency and are not a reflection of quality.
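The frequency bands described above amount to a simple categorisation rule, sketched here with hypothetical parameter counts (the parameter names and numbers are illustrative, not the audit's actual data):

```python
# Categorisation rule from the audit: six or more uses = "often considered",
# two to five = "sometimes considered", one = "rarely considered".
def frequency_band(n_uses: int) -> str:
    if n_uses >= 6:
        return "often considered"
    if n_uses >= 2:
        return "sometimes considered"
    return "rarely considered"

# Hypothetical counts of how many sampled papers used each parameter
params = {"date of publication": 9, "language": 4, "page length": 1}
bands = {p: frequency_band(n) for p, n in params.items()}
print(bands)
```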

Data analysis
In the review of research methodologies, common approaches to understanding relationships within the meta-data of eligible literature were identified. These approaches are presented and described below. Many of these approaches overlap in methodology, and it is the expectation that multiple approaches will be necessary to better understand emergent concepts and themes in the identified literature. The most frequently considered approaches to data analysis in the sample included content analysis, thematic analysis, descriptive statistics and related analyses, bibliometric maps, and text mining.

Thematic analysis:
Thematic analysis involves analysing codes assigned to literature for the identification of common themes. Themes can be developed inductively through analysis of the raw data or deductively from existing theory and prior research ( ⇡Nowell, et al., 2017 ). Thematic analysis was found to be a prevalent approach in the sampled papers. Examples included ⇡Jensen (2019) , and ⇡Muyoya, Brugha and Hollow (2016) .

Content analysis:
Similar to thematic analysis, content analysis also employs the mapping of codes assigned to relevant literature. However, the primary difference between the two approaches is that content analysis examines the frequency of occurrence of, and relationships between, coded elements. See, for example, ⇡Crompton, Burke and Lin (2019).
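The counting step that distinguishes content analysis from thematic analysis can be sketched as follows; the codes and papers are invented placeholders.

```python
from collections import Counter
from itertools import combinations

# Content analysis reduced to its counting step: tally how often each
# code occurs across the corpus, and how often pairs of codes co-occur
# within the same paper. Codes and papers below are hypothetical.
coded_papers = [
    {"mobile learning", "teacher training"},
    {"mobile learning", "assessment"},
    {"teacher training", "assessment"},
    {"mobile learning", "teacher training"},
]

code_counts = Counter(code for paper in coded_papers for code in paper)
pair_counts = Counter(
    pair
    for paper in coded_papers
    for pair in combinations(sorted(paper), 2)
)
print(code_counts.most_common())
print(pair_counts.most_common(1))
```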

Text mining:
Text mining is an automated process of deriving key information from textual documents to determine trends, patterns, or relationships. Text mining is a useful process for identifying key words, and is often a step in other approaches examining relationships between data.

Descriptive statistics:
Descriptive statistics is the analysis and summarisation of data variables found in the literature, such as meta-data (type of study, study size, publication date, etc.). Often this is used to determine patterns or to better understand relationships in the data. It is also useful in presenting a high-level summary of the types of literature found.
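As a small illustration of such descriptive summarisation of meta-data, assuming an invented set of screened records:

```python
import statistics
from collections import Counter

# Descriptive statistics over literature meta-data. The records are
# invented placeholders, not items from the reviewed sample.
records = [
    {"year": 2011, "type": "experimental"},
    {"year": 2014, "type": "observational"},
    {"year": 2016, "type": "experimental"},
    {"year": 2018, "type": "secondary"},
    {"year": 2018, "type": "experimental"},
]

years = [r["year"] for r in records]
summary = {
    "n": len(records),
    "median year": statistics.median(years),
    "types": dict(Counter(r["type"] for r in records)),
}
print(summary)
```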
Bibliometric maps:
Bibliometric mapping is a type of bibliometric analysis in which scientific maps are created based on bibliometric data. These maps are useful in understanding relationships between meta-data, such as authors and journals, and content data, such as keywords. The most common process for creating maps is multidimensional scaling. Examples can be found in ⇡Bozkurt, Koseoglu and Singh (2018).

Emergent principles and implications for the EdTech Hub literature search and reviews
Based on the analysis of the documents, three important principles emerged with implications of particular relevance for designing a large-scale literature review in the context of the EdTech Hub. These emergent implications reflect the need to: 1. Match the data analysis approach to the research question; 2. Use multiple sources (databases, etc) to locate information for a systematic review; 3. Screen literature in a way which balances rigour and inclusivity.

Match the data analysis approach to the research question
The selection of an appropriate analytical methodology is largely determined by the needs of the specific research question(s) of the given study. Considering the range of systematic review approaches identified and given the current set of research questions that the EdTech Hub seeks to address, a combination of approaches might be used in order to accommodate the breadth and scope of the different questions.
Some research questions, e.g. RQ3, 'What are the characteristics of EdTech interventions (systems perspective) that are effective, and in particular are able to reach 'at-scale' use? What are the barriers and enabling factors (including policy)?', are broad in scope and may require a review of literature spanning educational theories and empirical research design, necessitating a more narrative approach where conclusions are inferred. Other questions are narrow in scope, e.g. RQ1, 'From a systems perspective (6Ps), what interventions accelerate, spread and scale EdTech initiatives to deliver better learning outcomes for all children, including the most marginalised, in low-income countries?', and might require an examination of literature testing similar hypotheses, whereby conclusions are best drawn from a meta-analytic approach. Accordingly, each RQ must be reviewed and paired with relevant systematic review approaches, shown in Table 3. In some cases, multiple methodologies were potentially applicable, and the ultimate choice depends on the nature of the identified literature (i.e., testing the same hypothesis or reviewing broad phenomena). The pairings are as follows:
• RQ2: 'From a systems perspective (6Ps), which educational technology interventions present the greatest value for money and social return on investment?' Narrative review, or meta-analysis if criteria for measuring value and social return on investment are consistent between identified literature.
• RQ3: 'What are the characteristics of EdTech interventions (systems perspective) that are effective, and in particular are able to reach 'at-scale' use? What are the barriers and enabling factors (including policy)?' Narrative review.
• RQ4: 'What are the most rigorous, scalable, iterative, efficient research designs and methodologies for answering those questions? How can these methods be made accessible for use by EdTech researchers, leading to higher quality research?' Meta-synthesis, or vote counting (if a large number of studies find statistical significance for a specific design/methodology).
• RQ5: 'How can researchers utilise and build upon better research designs? How can a global Community of Practice effectively promote this?' Narrative review.
• RQ6: 'How can evidence-based insights about EdTech (including those generated under RQ1-RQ4) be used by a wide range of implementers and decision-makers, leading to better learning outcomes for all?' Narrative review.
• RQ7: 'From a systems perspective (6Ps), what is the most effective role of the programme, and of an empowered, cross-sector, global Community of Practice, in answering RQ1-RQ6 and in securing long-term impact across the sector?'

Use multiple sources of publications
Literature for the EdTech Hub systematic review will use a combination of sources, in order to compensate for the limitations of any one given source and to present the most comprehensive and balanced view of the field possible. The analysis identified the range of databases most commonly included in systematic reviews of the field (Figure 3); as a core source of data, the most frequently used sources will be included. Database searches will be carried out on several platforms, since any single platform search is inadequate because "no database contains the complete set of published materials" ( ⇡Xiao & Watson, 2017, p. 11 ).
Conversely, only using the mainstream platforms would also risk missing relevant literature from sources which are under-represented here ( ⇡Mongeon, & Paul-Hus, 2016 ), and may be of particular relevance to the Hub.
In addition to recommended databases, underrepresented literature will be sought through strategic collection via specialist databases and web resources related to the geographic range and topics at hand. For example, in order to ensure the adequate representation of publications by African researchers and institutions from LMICs, consulting the 'Mapping Education Research in Sub-Saharan Africa' database is recommended. The database currently contains around 3,000 selected entries with contributions by scientists and researchers based in Africa. Since policy-relevant research in educational research (including EdTech) is not easy to find, this will help increase the visibility and impact of African educational research.
In addition, specific research questions necessitate searching high-priority databases that align closely with the information being sought in the research question, but that may not have been included initially. For example, RQ2, 'From a systems perspective (6Ps), which education technology interventions present the greatest value for money and social return on investment?' , seeks to measure the economic efficiency of an EdTech intervention in terms of its social return on investment (SROI). In order to capture all relevant literature regarding this topic, it would be prudent to search databases relevant to social return on investment which may not have a specific focus on educational research, such as the Stanford Social Innovation Review.
As the EdTech Hub literature review is ambitious in scope, it is recommended that automated approaches to data collection are used wherever possible. However, manual queries will also be necessary as not all of the database and web resources provide Application Programming Interfaces (APIs). The scope also calls for the development of a comprehensive keyword strategy, accounting for the technological, pedagogical and geographical bounds of the project, which will be described in detail in the accompanying search protocol document.
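By way of illustration, a keyword strategy of this kind reduces to composing Boolean groups, in the style of the Scopus queries quoted earlier; the terms below are examples only, not the Hub's final keyword list.

```python
# Building a Boolean search string from keyword groups. Terms within a
# group are ORed together; groups are ANDed. The group contents here are
# illustrative assumptions, not the protocol's actual keywords.
def or_group(terms):
    return "( " + " OR ".join(f'"{t}"' for t in terms) + " )"

tech_terms = ["educational technology", "edtech", "e-learning",
              "technology enhanced learning"]
context_terms = ["low-income countries", "middle-income countries", "LMIC"]

query = " AND ".join(or_group(g) for g in (tech_terms, context_terms))
print(query)
```

Generating queries programmatically in this way keeps the manual and automated (API-based) searches consistent with one another.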
To safeguard against missing literature by focusing upon databases, data collection will draw upon the PRISMA framework (an approach utilised by four of the papers in the analysis: ⇡Crompton, et al., 2019; ⇡Lämsä, et al., 2018; ⇡Sangrá, et al., 2019; and ⇡Larrabee Sønderlund, et al., 2018) and supplement structured searches with opportunistic searches through experts and networks ( ⇡Moher, et al., 2009 ). Grey literature, defined as the "diverse and heterogeneous material that is not subject to the traditional academic peer review process" ( ⇡Adams, et al., 2017: 433 ), is at risk of being excluded by database searches, yet is particularly important in relation to the study of EdTech, where many research contributions are not recorded in journals. Blogs, presentations, informal publications and other communications play an important role in our work. As with formal literature, the identified grey literature must meet the basic review criteria.

Create criteria to screen sourced literature which balance quality and inclusivity
Our review of literature methodologies captured specific steps other researchers used in narrowing search results to the most relevant literature, corresponding with the 'Screening' category of the PRISMA framework ( ⇡Moher, et al., 2009 ). After careful consideration of the criteria used in other EdTech reviews, we developed a suggested list of inclusion and exclusion criteria (Table 4) that considered the strengths and weaknesses of criteria in previous studies as well as the specific research aims of the present study.

Time frame and date of publication
Papers published from 2008 onwards are included in this study. Given the fast pace at which technology advances, it was deemed necessary to explore research that is relatively recent (in the last 10 years).

Keywords/relevance
A comprehensive list of keywords was developed and trialled in order to identify and rank terms that sourced the most relevant literature to our study.

Language of publication
The review includes literature written in the official UN languages: English, French, Chinese, Russian, Spanish and Arabic. Portuguese and German were also tentatively selected, as they are dominant languages in which relevant research is published.

Geographical location and country/regional wealth
The research questions for this study focus on EdTech within LMICs.

Demographics
The research questions focus on pre-tertiary education; studies of higher education are therefore excluded. We make an exception to that rule where the higher education context in question is teacher education.

Length of publication
We consider only literature that is at least two pages long; this excludes records for which only an abstract or an extended abstract is available.
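Taken together, the criteria above can be sketched as a simple screening filter over candidate records. The record structure and field names here are illustrative assumptions, not part of the actual protocol; keyword relevance and LMIC classification checks are omitted for brevity.

```python
# Sketch: applying the inclusion criteria above to candidate records.
# The dict structure and field names are illustrative assumptions.

ALLOWED_LANGUAGES = {
    "English", "French", "Chinese", "Russian",
    "Spanish", "Arabic", "Portuguese", "German",
}

def passes_screening(record):
    """Return True if a record meets the basic inclusion criteria."""
    if record["year"] < 2008:                        # time frame
        return False
    if record["language"] not in ALLOWED_LANGUAGES:  # language of publication
        return False
    if record["pages"] < 2:                          # abstract-only records excluded
        return False
    # Higher education is excluded unless it concerns teacher education.
    if record["level"] == "higher" and not record.get("teacher_education"):
        return False
    return True

example = {"year": 2015, "language": "English", "pages": 12, "level": "primary"}
print(passes_screening(example))
```

Expressing the criteria as explicit checks also makes the exceptions discussed below easier to handle, since a record rejected by the filter can still be flagged for manual reconsideration.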
While commonly used parameters informed the methodology, and many have been replicated or adapted to fit the objectives and research focus of this study, the unique characteristics of the proposed study have also meant explicitly dismissing two commonly used criteria: publication form and research design.

For parameters around publication form (e.g., peer-reviewed journal, conference paper, evaluation), it is recommended that this study does not differentiate or limit the scope (i.e., literature from any source that meets the other inclusion criteria will be included). This is because many important contributions to the field of EdTech in LMICs are not documented in peer-reviewed journals. There is also a clear need to capture literature from under-represented and under-resourced areas, yet there is relatively less peer-reviewed research on LMICs than on high-income countries, and research from LMICs may be less likely to appear in popular databases.

For a similar reason, research will not be excluded on the basis of its design (e.g., observational, experimental, secondary research). A substantial amount of the available evidence on EdTech in LMICs is not based on empirical research, and introducing this filter therefore risks painting a false picture of the state of research and knowledge on EdTech in LMICs.
It should also be noted that, during the analysis of existing literature reviews of EdTech in LMICs, it became clear that authors often include specific literature that would otherwise be excluded by the defined parameters. Exceptions include:
• Papers that fall into a "low relevance" category may be considered if they address a topic that is under-researched or under-represented.
• Countries may be reclassified after finding that the socio-economic nuances of the country place it in a different category than the one assigned under recorded metrics. For example, Country A may be listed as a high-income country, but an analysis of Country A's Gini coefficient may reveal vast discrepancies in income equality, placing much of Country A's population within the low-income category.
• Research added from snowballing. As discussed in the previous section, snowballing is useful in collecting literature from under-researched areas, and involves the harvesting of references from existing research or the identification of research through referrals. This process often pulls in literature from outside of the inclusion/exclusion parameters.
The papers that 'pass' the initial screening process will be referred to as the 'long list'. The subsequent step follows the PRISMA stage of categorising and coding this long list in order to further narrow and classify relevant papers. The long list of papers is first classified on the basis of the available metadata; papers included at this stage will form the basis of the broader, initial scoping review. This is followed by a manual classification of papers based on relevance and quality. At this stage, the criteria for assessing quality will need to be developed in a way that balances academic rigour with including as wide a range of perspectives and types of evidence as possible. Papers deemed to have high relevance and quality will be moved onto a 'short list', then manually coded, and ultimately analysed thematically by a team of researchers. Themes of particular relevance and importance will be selected for systematic review, with the corresponding literature re-examined under stricter application of the criteria, particularly with respect to research quality.

Conclusion
In preparation for undertaking a large-scale scoping literature review for the EdTech Hub, and subsequent systematic reviews of specific topics, it has been useful to learn from the protocols used by recent reviews on related topics. In planning the practical aspects of our literature review, and in developing our own protocol, three areas were highlighted as particularly important. First, there is a range of models of systematic literature review. At this stage, it is difficult to be prescriptive about exactly which types of review will be used, as this depends both on our specific research questions and on the types of literature we find. Second, it is important not to restrict our literature searching to the most commonly used databases, as this would risk excluding relevant literature relating to LMICs. We will therefore supplement database searches with grey literature searches, manual searches of specialist research centres, and inputs from experts and the community. Third, some commonly used inclusion and exclusion criteria would compromise inclusivity (e.g., including only English-language articles), so these will also require careful consideration in developing the protocol, as will the practical steps for undertaking our review. The protocol we have developed for the EdTech Hub literature search will also be published in due course.