Typed examinations and handwritten examinations

Systematic literature review procedure for typed examinations and handwritten examinations

1. Literature search procedure

The literature review in this paper followed the approach of Chan and Chen (2021), which applies Petticrew and Roberts’ (2006) six-step review process for the social sciences (see Figure 1). This section explains the first four steps in detail: setting the scope of the literature search by formulating the research questions, searching the literature via research databases, screening the identified literature against the inclusion/exclusion criteria, and extracting data from the relevant literature. The next section details the fifth step, synthesis of the studies, while the last step corresponds to the writing-up of this paper.

Figure 1. The systematic review process

Step 1: Formulating the research question

This systematic review aims to answer three research questions on the subject of handwritten and typed assessments. The first question intends to identify the advantages and disadvantages of typing compared to handwriting in examinations, while the second question seeks to highlight the key issues and considerations in implementing typed examinations. The third question is related to students’, teachers’, and staff members’ perceptions towards this type of examination.

Step 2: Conducting the literature search

A literature search for this research was conducted using four online databases: ERIC, PsycInfo, Scopus, and Web of Science, chosen for the high volume of educational research journals they provide access to. Only articles written in English and published between 2001 and 2022 were selected.

To identify literature relevant to the research questions, the following three sets of search strings were applied to refine the literature search. The first set concerns the level of education: “university” OR “higher education” OR “college”. The second set concentrates on the specific thematic focus: “assessment” OR “examination” OR “evaluation” OR “test”. The final set identifies the targeted types of assessment: “typing” OR “typed” OR “handwriting” OR “handwritten” OR “keyboarding” OR “e-assessment” OR “word-processed” OR “computer-based” OR “digital” OR “digitized” OR “e-exam” OR “paper-based”.
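The way the three sets combine into a single database query can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: each set of terms is OR-joined within parentheses, searched in the abstract (AB) field, and the three sets are then AND-joined, yielding the query form shown in Table 1.

```python
# Illustrative sketch: assembling the three sets of search terms into one
# boolean query of the form AB(set1) AND AB(set2) AND AB(set3).

education = ["university", "higher education", "college"]
focus = ["assessment", "examination", "evaluation", "test"]
assessment_type = [
    "typing", "typed", "handwriting", "handwritten", "keyboarding",
    "e-assessment", "word-processed", "computer-based", "digital",
    "digitized", "e-exam", "paper-based",
]

def or_group(terms):
    """Join quoted terms with OR inside parentheses."""
    return "( " + " OR ".join(f'"{t}"' for t in terms) + " )"

# Each set is searched in the abstract (AB) field; the sets are ANDed.
query = " AND ".join(f"AB {or_group(s)}" for s in (education, focus, assessment_type))
print(query)
```

Running this prints the combined search string applied to each database.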

Using the above three sets of search strings, 18,394 publications were found across the four electronic databases. A further 30 articles were identified through snowballing, giving a total of 18,424 publications to be evaluated against the inclusion and exclusion criteria in the next step (see Table 1).

Search string (applied to all four databases):
AB ( “university” OR “higher education” OR “college” ) AND AB ( “assessment” OR “examination” OR “evaluation” OR “test” ) AND AB ( “typing” OR “typed” OR “handwriting” OR “handwritten” OR “keyboarding” OR “e-assessment” OR “word-processed” OR “computer-based” OR “digital” OR “digitized” OR “e-exam” OR “paper-based” )

Database          Fields searched                                   Articles identified
ERIC              Abstract                                          609
PsycInfo          Abstract                                          870
Scopus            Abstract                                          10,439
Web of Science    Topic field (title, abstract and keywords)        6,476
Total                                                               18,394

Table 1. Results of the literature search (breakdown by database)

Step 3: Screening and assessing the studies against the inclusion/exclusion criteria

In this step, the abstracts of the 18,424 publications were examined to determine whether they fulfilled the inclusion criteria and could therefore proceed to full-text assessment. Abstracts of all publications acquired in Step 2 were reviewed against two distinct sets of inclusion criteria, one for empirical papers and one for non-empirical papers. Non-empirical articles were included if they discussed the advantages and challenges of typed examinations in higher education. For an empirical study to be included, it had to:

  • Concern research in a tertiary education setting. Studies that were conducted in school contexts were omitted.
  • Implement typed examinations as assessments in a course, activity, or curriculum, with the purpose of investigating the benefits, issues, or perceptions and impacts on different stakeholders. Pure reflections or opinion pieces were excluded.

After the first screening, 51 publications fulfilled the criteria and were selected for the next stage of full-text assessment. In this second screening, empirical publications were excluded if they met any of the following criteria:

  • Insufficient information
  • Unclear methodology
  • Data sources not clearly indicated

After two rounds of screening, 48 publications (4 non-empirical) were selected for further analysis. The summary of the inclusion and exclusion of publications in this step is presented in Figure 2.
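The screening funnel described in Steps 2 and 3 can be summarized as a sequence of counts. This is a minimal illustrative sketch (not part of the original study) that tracks the number of publications surviving each stage:

```python
# Illustrative sketch of the screening funnel: publication counts at
# each stage of identification and screening.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    count: int

database_hits = 18_394   # ERIC + PsycInfo + Scopus + Web of Science
snowballed = 30          # additional articles found via snowballing

funnel = [
    Stage("identified", database_hits + snowballed),   # 18,424 in total
    Stage("after abstract screening", 51),             # first inclusion criteria
    Stage("after full-text assessment", 48),           # second round of screening
]

for stage in funnel:
    print(f"{stage.name}: {stage.count}")
```

The difference between the last two stages shows that three publications were excluded at full-text assessment.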

Figure 2. Number of articles in each step of the review process

Step 4: Data extraction and quality check

In this step, the full texts of the 48 selected publications were examined to extract data for answering the research questions. The articles were categorized according to the types of paper (empirical or non-empirical) and the types of study (qualitative, quantitative, or mixed-methods). Relevant information from the articles for further analysis, such as research design, methodology, participants, data analysis, and key findings, was also extracted and summarized in a table.
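The extraction table described above can be thought of as one record per publication. The sketch below is an assumed structure for illustration only; the field names and the example entry are hypothetical, not taken from the study:

```python
# Illustrative sketch (assumed structure): one record per publication,
# capturing the fields extracted in Step 4.

from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    citation: str
    paper_type: str        # "empirical" or "non-empirical"
    study_type: str        # "qualitative", "quantitative", or "mixed-methods"
    research_design: str = ""
    methodology: str = ""
    participants: str = ""
    data_analysis: str = ""
    key_findings: list[str] = field(default_factory=list)

# Hypothetical example entry (illustrative only).
record = ExtractionRecord(
    citation="Author (Year)",
    paper_type="empirical",
    study_type="mixed-methods",
    key_findings=["students typed longer answers than they handwrote"],
)
```

Categorizing by `paper_type` and `study_type` then amounts to grouping these records before synthesis.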

Echoing Petticrew and Roberts (2006), the data extraction process also included a quality check aimed at assessing the validity and credibility of the findings and determining the extent to which each study was influenced by bias. Accordingly, the Mixed Methods Appraisal Tool (MMAT), developed by Hong et al. (2018), was adopted as the quality-checking tool. The reasons for using the MMAT are, first, that it offers separate criteria for evaluating research quality in different types of studies, which aligns with the aim of this review, and, second, that it is supported by professionals in the field as a reliable tool for evaluating different study designs (Hong et al., 2019). With the help of the MMAT, the validity of the outcomes of the selected publications could be ensured before they were analyzed further in the next step.

2. Method of analysis

Step 5: Synthesis of the studies

Inductive thematic analysis was adopted to analyze the data extracted from the selected publications because it allowed the researchers to process research findings and translate them into specific themes that were qualitative in nature. The qualitative data obtained from the analysis were essential for answering the research questions: identifying the pros and cons of typed and handwritten assessments, as well as the key issues in implementing word-processed examinations.

In performing the analysis, topical codes were first assigned to each article based on the nature of its content and a summary of its research findings. These codes were then compared, amended, and summarized into broader categories. Findings could then be drawn from these categories and their sub-topics to answer the research questions.

References

  • Chan, C. K. Y., & Chen, S. W. (2021). Students’ perceptions on the recognition of holistic competency achievement: A systematic mixed review. Educational Research Review. https://doi.org/10.1016/j.edurev.2021.100431
  • Hong, Q. N., Pluye, P., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., … Vedel, I. (2018). Mixed methods appraisal tool (MMAT) version 2018: User guide. Retrieved from http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018criteria-manual_2018-08-01_ENG.pdf
  • Hong, Q. N., Pluye, P., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., … Vedel, I. (2019). Improving the content validity of the mixed methods appraisal tool: A modified e-Delphi study. Journal of Clinical Epidemiology, 111, 49–59. https://doi.org/10.1016/j.jclinepi.2019.03.008
  • Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Publishing.