Examination of preservice teachers’ electronic assessment anxiety
The increasing use of digital technologies in higher education has reshaped assessment practices, making electronic assessment (e-assessment) a key component of teacher preparation. While e-assessment offers flexibility and rapid feedback, it can also provoke anxiety among preservice teachers, often stemming from limited digital skills and concerns about technical or assessment-related problems. This study examined preservice teachers’ electronic assessment anxiety and explored whether anxiety levels differed by gender, field of study, and grade level. A quantitative survey design was employed with 513 preservice teachers. Data were collected using the Electronic Assessment Anxiety Scale and analyzed with independent-samples t-tests, one-way ANOVA, and Welch’s ANOVA, selected according to variance homogeneity. Effect sizes were calculated using eta-squared (η²) and omega-squared (ω²). The findings indicated that preservice teachers experienced moderate levels of electronic assessment anxiety. Significant but small differences were found between genders and across fields of study, with female students and preschool education students reporting higher anxiety. Grade-level differences were observed only in social anxiety. Overall, the results suggest that although demographic and academic variables explain only small portions of the variance, electronic assessment anxiety remains an important factor to consider in teacher education. Strengthening digital competence and providing structured e-assessment experiences may help reduce anxiety and improve preparedness for technology-supported assessment practices.
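As a concrete illustration of the analytic pipeline described in the abstract, the sketch below reproduces the same decision logic in Python: Levene’s test screens for variance homogeneity, the result determines whether a classic one-way ANOVA or Welch’s ANOVA is run (with an independent-samples t-test for two-group comparisons), and η² and ω² are computed from the usual sums of squares. This is a minimal sketch on invented data, not the study’s actual script; the anova_oneway call assumes a recent statsmodels release.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.oneway import anova_oneway  # assumption: statsmodels >= 0.12

def effect_sizes(groups):
    """Eta-squared and omega-squared for a one-way design:
    eta2 = SS_between / SS_total
    omega2 = (SS_between - df_between * MS_within) / (SS_total + MS_within)"""
    groups = [np.asarray(g, dtype=float) for g in groups]
    allx = np.concatenate(groups)
    grand = allx.mean()
    ss_total = ((allx - grand) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ms_within = (ss_total - ss_between) / (len(allx) - len(groups))
    eta2 = ss_between / ss_total
    omega2 = (ss_between - (len(groups) - 1) * ms_within) / (ss_total + ms_within)
    return eta2, omega2

def compare(groups, alpha=0.05):
    """Choose the test based on Levene's homogeneity-of-variance result."""
    _, p_levene = stats.levene(*groups)
    if len(groups) == 2:
        # Independent-samples t-test; Welch's variant when variances differ.
        stat, p = stats.ttest_ind(*groups, equal_var=p_levene >= alpha)
    elif p_levene >= alpha:
        stat, p = stats.f_oneway(*groups)  # classic one-way ANOVA
    else:
        res = anova_oneway(tuple(groups), use_var="unequal")  # Welch's ANOVA
        stat, p = res.statistic, res.pvalue
    return stat, p, effect_sizes(groups)

# Hypothetical anxiety scores for three fields of study (illustration only).
rng = np.random.default_rng(0)
fields = [rng.normal(3.1, 0.6, 120), rng.normal(3.4, 0.7, 90), rng.normal(3.0, 0.8, 150)]
stat, p, (eta2, omega2) = compare(fields)
print(f"stat={stat:.2f}, p={p:.3f}, eta2={eta2:.3f}, omega2={omega2:.3f}")
```

Reporting ω² alongside η² follows the practice cited in the study (Kirk, 1996): ω² corrects η²’s positive bias in small samples, which matters when, as here, observed effects are small.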

This work is licensed under a Creative Commons Attribution 4.0 International License.
References
Alruwais, N., Wills, G., & Wald, M. (2018). Advantages and challenges of using e-assessment. International Journal of Information and Education Technology, 8(1), 34–37. https://doi.org/10.18178/ijiet.2018.8.1.1008
Bennett, R. E. (2015). The changing nature of educational assessment. Review of Research in Education, 39(1), 370–407. https://doi.org/10.3102/0091732X14554179
Bryman, A. (2016). Social research methods (5th ed.). Oxford University Press.
Cassady, J. C., & Gridley, B. E. (2005). The effects of online formative assessment on anxiety and performance. Journal of Technology, Learning, and Assessment, 4(1), 1–30.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge.
Cohen, L., Manion, L., & Morrison, K. (2018). Research methods in education (8th ed.). Routledge. https://doi.org/10.4324/9781315456539
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE.
Dawson, P. (2021). Defending assessment security in a digital world: Preventing e-cheating and supporting academic integrity in higher education. Routledge. https://doi.org/10.4324/9780429324178
DeVellis, R. F. (2017). Scale development: Theory and applications (4th ed.). SAGE.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120. https://doi.org/10.3102/0034654312474350
Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). SAGE.
Fluck, A. (2019). An international review of e-exam technologies and impact. Computers & Education, 132, 1–15. https://doi.org/10.1016/j.compedu.2018.12.008
Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2019). How to design and evaluate research in education (10th ed.). McGraw-Hill.
Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57(4), 2333–2351. https://doi.org/10.1016/j.compedu.2011.06.004
Hillier, M. (2014). The very idea of e-exams: Student-centred approaches to digital assessment. In Proceedings of the 31st ASCILITE Conference (pp. 77–88). University of Otago. https://doi.org/10.14742/apubs.2014.1065
Hu, L.-T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
Ifenthaler, D., & Yau, J. Y.-K. (2020). Utilising learning analytics to support study success in higher education: A systematic review. Educational Technology Research and Development, 68(4), 1961–1990. https://doi.org/10.1007/s11423-020-09788-z
JISC. (2007). Effective practice with e-assessment. Joint Information Systems Committee.
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56(5), 746–759. https://doi.org/10.1177/0013164496056005002
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). Guilford Press.
Little, R. J. A., & Rubin, D. B. (2019). Statistical analysis with missing data (3rd ed.). Wiley. https://doi.org/10.1002/9781119482260
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Nomie-Sato, S., Condes Moreno, E., Villanueva, A. R., Chiarella, P., Tornero-Aguilera, J. F., Beltrán-Velasco, A. I., & Clemente-Suárez, V. J. (2022). Gender differences of university students in the online teaching quality and psychological profile during the COVID-19 pandemic. International Journal of Environmental Research and Public Health, 19(22), 14729. https://doi.org/10.3390/ijerph192214729
Redecker, C., & Johannessen, Ø. (2013). Changing assessment—Towards a new assessment paradigm using ICT. European Journal of Education, 48(1), 79–96. https://doi.org/10.1111/ejed.12018
Shermis, M. D., & Hamner, B. (2013). Contrasting state-of-the-art automated scoring of essays. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 313–346). Routledge. https://doi.org/10.4324/9780203122761
Spielberger, C. D., & Vagg, P. R. (1995). Test anxiety: Theory, assessment, and treatment. Taylor & Francis.
Tabachnick, B. G., & Fidell, L. S. (2019). Using multivariate statistics (7th ed.). Pearson.
Tat, E., & Kılıç, A. (2024). Development of the electronic assessment anxiety scale. Journal of Educational Measurement and Evaluation, 12(3), 45–62. https://doi.org/10.17718/tojde.1380131
Wainer, H., Dorans, N. J., Flaugher, R., Green, B. F., Mislevy, R. J., Steinberg, L., & Thissen, D. (2000). Computerized adaptive testing: A primer (2nd ed.). Routledge. https://doi.org/10.4324/9781410605931
Yeşilçınar, S. (2024). Preservice English teachers’ views on online language assessment in Turkey. Nevşehir Hacı Bektaş Veli University Journal of ISS, 14(3), 1727–1740. https://doi.org/10.30783/nevsosbilen.1524449