Editor's Note: This post is co-authored by Paul Gollash, founder and CEO of Voxy, and Katharine Nielson, chief education officer at Voxy.
English language learning is fraught with ineffective products and failed instructional approaches, complicated by disparate proficiency scales and non-standard interpretations of terms like “intermediate” and “advanced.” This leads to confusion about what results learners should expect after language study. It also contributes to unclear guidelines for stakeholders who evaluate learners’ proficiency, from university admissions offices to future employers.
Testing monopolies, such as ETS, are now unreliable. Learners have been caught cheating the system and paying for their results, and high scorers often cannot communicate adequately, while those with poor scores can easily accomplish tasks in English. Additionally, commonly used tests such as TOEFL, TOEIC, IELTS, and PTE rely on different proficiency scales, which makes it impossible to interpret a score on one in terms of another.
Unifying scales, such as Pearson’s Global Scale of English, are a step in the right direction; however, we need to move beyond standardized tests that measure “global proficiency” and build new tests that actually identify what learners can and cannot accomplish in their target languages.
We need to radically rethink how we test our learners.
Since the real stakeholders in language proficiency are the interlocutors with whom learners communicate, the tasks and conversations learners need to carry out with them should form the basis of a truly innovative and more meaningful proficiency assessment. These stakeholders, such as the employer who needs a new hire to speak effectively with customers, or the new friend who needs to make weekend dinner plans with a language learner, or the lover who needs to understand his partner’s feelings, possess the only real rubric that matters when it comes to measuring proficiency.
So what do these new assessments look like? Well, to start with, they need to be grounded in what a learner needs to do in the real world. In the immortal words of Ferris Bueller, “You can’t eat an irregular verb.” In other words, whether learners can identify the appropriate relative pronoun to fill in a blank likely has nothing to do with how well they will perform in a college admissions interview or when buying a plane ticket on their first hard-earned vacation to an English-speaking country.
The future of assessments, and in our opinion the future of all education, lies in project-based assessments. With project-based assessments, which simulate or otherwise model the actual experiences a language learner will have, we can evaluate a learner's true competence more closely and accurately. Learners' real-life performance on well-defined tasks will be the ultimate measure of their success, and a strong, clear signal of their proficiency.
Until now, this type of task-based assessment at scale has been elusive; by its very nature, task-based assessment is personalized and high-touch. Because each learner has unique needs, each will also require a unique assessment. And while some tasks, like completing online forms or opening online bank accounts, can be easily simulated and scored by computer, others, such as answering interview questions appropriately or giving a business presentation, require at least one (if not multiple) human raters to guarantee that the assessment is valid and scored reliably. However, recent technological advances have helped pave the way for personalized, adaptive instruction, and they are poised to do the same for personalized assessment. And when a language learner can offer a potential employer a certification that he or she can successfully negotiate a simulated deal, rather than a piece of paper with an arbitrary listening-proficiency score, the archaic field of language testing will finally live up to its potential.
Image credit: Walter Parenteau, some rights reserved.