05 Sep 2017 Testing Times
I feel for test designers. They have an impossible task: to design a test of language proficiency that is considered valid and reliable by the various stakeholders involved in the testing process. This is particularly true of the test takers themselves, and it was in relation to these people that two stories recently caught my eye.
The first related to a native English-speaking Irish woman with two university degrees who had her visa application for Australia rejected after she failed a computerised English speaking test. Inexplicably, she managed to fail the oral fluency component of the Pearson Test of English (PTE) Academic. She scored 74 points – below the 79 points required – having scored 90 or more in all the other parts. The test involved writing, reading, and speaking, with the oral section scored by voice recognition technology. The test required her to read aloud a paragraph which appeared on the screen. The audio recording was then marked by a “scoring engine” that has been trained to identify acceptable and unacceptable answers to the questions.
Needless to say, she was quite taken aback at the result. Pearson, however, told the Australian Associated Press there were no problems with its system. Sasha Hampson, the head of English for Pearson Asia Pacific, said the immigration department set the bar very high for people seeking permanent residency. Very high indeed if even ‘native’ speakers don’t manage to make the grade.
The other story involved an experienced nurse from the Philippines who wanted to take up a nursing post in the National Health Service (NHS) in the United Kingdom. She first had to take the International English Language Testing System (IELTS) test, run by the British Council, and used by many employers around the world. The Nursing and Midwifery Council (NMC) uses IELTS to assess overseas applicants to work in the UK and requires a minimum score of seven.
Although she just about achieved the result she needed, she was surprised to find that in the reading section of the exam she had to demonstrate an understanding of the process of jam making, and she questioned its relevance to nursing.
Some NHS employers say the bar is set too high and that some of the knowledge and skills which test takers are expected to demonstrate are hindering the process of recruitment at a time when it is hard to fill vacancies. As such, the NMC is currently looking at ways to improve the testing of potential recruits.
Indeed, a quick perusal of the texts IELTS candidates are expected to read and analyse turns up the history of the steam engine, gangs in the United States prison system, and bagpipe finger positions. If you happen to be a bagpipe-playing convict with a passion for steam trains, then you might find these texts both relevant and relatively easy to analyse. However, for most of us this is not the case.
Both these stories highlight the problem facing testing bodies and the designers of their tests. How do you design tests that are universally perceived as being valid and reliable, whilst at the same time being cost effective in terms of the time and manpower needed to administer them?
Such questions are becoming more pressing as English continues to spread its tentacles around the globe. The British Council estimates that over one billion people are currently learning English worldwide, and predicts that this number will exceed 1.9 billion by 2020 – nearly double, in a little over five years. Add to this the increasing movement of people around the globe due to globalization, and the changing nature of work brought about by technology and digitalization, and the number of people required to take some type of test to demonstrate their proficiency in English can only grow dramatically.
As I said above, test designers have an almost impossible task. There is no such thing as the perfect test. However, what we can do is to try to diminish the perceived disconnect between the test itself and the language requirements of the stakeholders, whether they be the test takers themselves, immigration departments or public health bodies.