The ultimate aim of this project is to make available a language test system for four languages that is administered by computer, scored automatically and linked to the Common European Framework of Reference for Languages (CEFR). There is a growing need for such a system for reasons of academic management, but its development will also give rise to research in language testing, psychometrics and IT applications. In particular, the system will be:
- calibrated, i.e., items will be item-banked and Rasch-calibrated so that different test versions extracted from the item bank will be of strictly comparable difficulty and hence yield comparable results,
- computer-adaptive, i.e., test administration will be computer-driven, with each item selected by an algorithm according to the candidate’s performance on the preceding items, thereby reducing testing time considerably,
- CEFR-linked, i.e., the results will be interpretable in terms of the six levels of the CEFR.
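To illustrate how Rasch calibration and computer-adaptive administration fit together, the sketch below selects, at each step, the unanswered item that is most informative at the candidate's current ability estimate. It is only a conceptual illustration, not the project's actual algorithm: the logistic update rule, the step size and the difficulty values are all assumed for demonstration.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model,
    given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta:
    maximal when difficulty matches ability (p = 0.5)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def next_item(theta, remaining_difficulties):
    """Adaptive choice: pick the unanswered item whose difficulty
    yields the most information at the current ability estimate."""
    return max(remaining_difficulties,
               key=lambda b: item_information(theta, b))

def update_theta(theta, b, correct, step=0.5):
    """Crude ability update after a response (an assumed rule, not
    a full maximum-likelihood estimate): move the estimate up after
    a correct answer and down after an incorrect one."""
    p = rasch_prob(theta, b)
    return theta + step * ((1.0 if correct else 0.0) - p)
```

Because the most informative item is always the one closest in difficulty to the current ability estimate, far fewer items are needed than in a fixed-form test, which is what makes the reduction in testing time possible.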
Phase 1 of the implementation aims to (i) identify texts suitable for generating C-tests; (ii) select appropriate software enabling automatic generation of C-tests; (iii) examine and apply existing regression formulae for predicting C-test scores; (iv) plan the research design, with particular focus on correlating C-test batteries with the Oxford Placement Test for the purpose of CEFR linking. This will involve collecting data from at least 150 subjects, coding the test sheets and creating data files.
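Automatic C-test generation, as in steps (i) and (ii), can be sketched in a few lines. The deletion rule below follows the common C-test convention of blanking the second half of every second word after an intact lead-in; the software the project actually selects may apply a different or more refined rule, so this is an assumption for illustration only.

```python
def make_ctest(text, start=1):
    """Generate a simple C-test item from a text: beginning with the
    word at index `start`, blank the second half of every second word.
    (One common convention; real C-test tools may differ.)"""
    words = text.split()
    out = []
    for i, word in enumerate(words):
        if i >= start and (i - start) % 2 == 1 and len(word) > 1:
            keep = (len(word) + 1) // 2      # keep first half, rounding up
            out.append(word[:keep] + "_" * (len(word) - keep))
        else:
            out.append(word)
    return " ".join(out)
```

Running `make_ctest("The cat sat on the mat")` blanks every second word from the second word onward, producing `"The cat sa_ on th_ mat"`.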
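The CEFR-linking design in step (iv) rests on correlating candidates' C-test battery scores with their Oxford Placement Test scores. The basic statistic involved is a plain Pearson correlation, sketched below; the score lists in the usage note are hypothetical, not project data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    lists of scores (e.g., C-test totals vs. placement-test totals)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

With at least 150 subjects, `pearson_r(ctest_scores, opt_scores)` gives a first indication of how well the C-test batteries track the placement test; in practice the project would use standard statistical software rather than hand-rolled code.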
Project lead: Nikola Dobric