
A single exam board might seem a tidy solution, but further rationalisation of exams provision should be avoided

As part of its ongoing inquiry into the administration of 15-19 examinations in England, the Education Select Committee took evidence in January on the strengths and weaknesses of the English system and how it compares with other countries. In the light of the Secretary of State's publicly stated concerns that competition between awarding bodies may be contributing to declining standards, discussion focused on his proposal that the present system should be consolidated into a single exam board, as in Singapore or Finland. The single board option is appealing because it provides a tidy bureaucratic solution to what appears to be a messy and profligate problem. Proponents (the Wellcome Trust, SCORE) argue that a single, nationalised awarding body would bring to an end competition on standards, making redundant the regulatory apparatus required to ensure comparability of different versions of the same qualifications. Consolidation, proponents argue, would concentrate expertise and investment in research and development in a single institution, be more conducive to the sharing of best practice, avoid the unnecessary replication of functions across multiple boards, and allow for greater economies of scale.

There are a number of flaws in this theory. To begin with, fears of competition on standards are not well-grounded. Recent Daily Telegraph reports of board officials apparently giving clues as to the content of forthcoming papers, and emphasising how easy they were making it for schools to coach their pupils to success, however alarming, do not constitute evidence of widespread abuse, nor do they supply justification for system overhaul. Such boasts are misleading in that they suggest a degree of insider knowledge on the part of examiners about what will come up in a given exam, how criteria will be applied, and where grade boundaries will fall, that they simply do not have.
The reality is that standards are set in England across committees and exam boards in a distributed fashion. It is neither in the individual nor the collective interest of exam boards to compete for custom on the basis of the accessibility of passes, as to do so would undermine the currency of their qualifications (Cresswell 1995; Malacova and Bell 2006). In framing the problem as one of competition on standards, then, proponents of the single exam board presuppose the solution, overstating in the process the degree to which comparing and maintaining marking standards across multiple boards is in fact an issue. On the contrary, numerous studies have found steady improvement and a high degree of marking accuracy and reliability in the English system over the past decade, attributable largely to innovations in the application of online technologies (Fowles 2005; Taylor 2007; Pinot de Moira 2009; Chamberlain & Taylor 2010), which have been the direct result of competition (see CERP Select Committee submission, para. 3.3; see also Cambridge Assessment, para. 30). As longitudinal student-level data has become available, it has increasingly informed standard-setting procedures too – both those adopted by the boards themselves and, post hoc, through the Joint Council for Qualifications' (JCQ) in-year checks and Ofqual's five-yearly reviews. One may wish to question, as SCORE did in its submission (paragraphs 4, 7 and 10), whether the content of specifications, and the way in which they are assessed, are stretching enough, but those are different questions, and ones which go significantly beyond exam boards' remit and power to address. Boards have to work within the constraints of the National Curriculum, and of a regulatory system that fears that variation in specifications and examination offerings might make the maintenance of a single standard more difficult.
Suffice it to say that whatever the complex drivers at work in rising pass rates, it is unlikely that marking inaccuracy contributes to any significant degree. Consolidation into a single board, then, contributes nothing to resolving the issues that matter, namely the challenges of comparability of standards between subjects (i.e. distinguishing between ‘hard’ and ‘soft’ subjects), the equivalence or otherwise of different qualifications designed for the same stage of education, and the standing of vocational qualifications. As Cambridge Assessment pointed out in its submission to the Inquiry, even if there existed a single board, or a system of subject franchises were in operation, a variety of syllabuses would still be needed to suit diverse student needs (para. 6). Above all, what is needed is a robust methodology for assessing comparability – not least how standards are upheld over time (perhaps the most difficult aspect to gauge). Other measures have already been set in train by the government to highlight the relative currency of different GCSEs and the inflationary effect of equivalencies designed to promote take-up of 14-16 vocational qualifications: consolidation into a single exam board would not make such measures any more straightforward to implement. It would also introduce new risks, and jeopardise many of the benefits of the current system. Experience demonstrates that whereas a system of multiple exam boards spreads risk, a single board, in centralising control, actually concentrates it. CERP cautions that ‘the experiences of Scotland (2000), New Zealand (2004) and the National Curriculum test crisis of 2008 in England all serve as a warning against centralisation, concentration of risk, and perceived political interference in assessment operations’ (para. 4.7; see also Cambridge Assessment’s submission, paras. 1-5). The more centralised the system becomes, the less likely it is to produce qualifications that gain and sustain the confidence of end-users.
(Note that many argue that GNVQs, NVQs and Diplomas have suffered from just this problem of perception.) Proponents argue, however, that the benefits of concentrating expertise in a single institution would outweigh the disadvantages. Some believe that a nationalised system would overcome the reluctance of professionals operating within proprietary frameworks to work collaboratively on research and development and to share best practice (SCORE, para. 7: innovation; Wellcome Trust). However, experience shows that in monopolistic situations prevailing practices are more likely to become normative than to be seriously challenged by alternative ways of doing things, which in turn restricts the opportunities for emerging talent to signal their developing expertise and to progress in their careers. The monopoly assumed by one board would thus invite stagnation – both in its own operational performance and innovative potential, and in its responsiveness to developments in teaching and learning, academia, and society more broadly. By contrast, competition stimulates innovation in service delivery and product offer. As previously mentioned, both CERP and Cambridge Assessment attribute the rapid advances in the application of information technology to assessment over the past decade directly to competition between the boards. Furthermore, in their submissions to the Select Committee, CERP, Cambridge Assessment and the British Academy all underscored the importance of the market in encouraging the development of innovative syllabuses. In recent decades (though less so in recent years), there have been convincing examples of local school- and university-led, ‘bottom-up’ curriculum projects being developed by exam boards into high-quality certified programmes of learning and assessment (e.g. the Nuffield and Salters’ science suites, Ridgeway History, MEI Maths and OCR’s Computer Literacy and Information Technology (CLAIT) qualification).
Though few in number, each of these developments effectively expressed end-user demand, helped keep required subject knowledge up to date with developments in their respective fields of learning, and ensured that learning remained relevant and engaging to students. Multiple bodies clearly offer greater potential for the successful identification of programmes of study and assessment that meet these requirements. It is difficult to envisage how a single body could enable the same level of access to opportunity for entrepreneurs (Cambridge Assessment, paras. 25 and 26). This is not, of course, to say that the present system cannot be improved. In a more open market one would expect to see a greater degree of innovation in product offer. As maintained by the Oxford/Queen’s/IoE team, ‘the regulator’s concern for the maintenance of standards has come to overshadow innovation’ – but the structure of the market (in effect a regulated oligopoly) is a problem too. With relatively few dominant players, the incentives are not as great as they might be, so that though ‘we have a large number of syllabuses in a given subject, the differences between them are small’. So long as the government determines the content of the curriculum and the means and objectives of assessment, the benefits that might ensue from competition between exam boards will be limited. In the days when the English examinations system was at its most dynamic, a plethora of regional providers, with strong links to universities and a direct interest in test outcomes, maintained qualification standards rigorously. Since then, the democratisation of access, facilitated by the transition from cohort- to criterion-referencing, has necessitated greater investment in standardisation and supplied the justification for the development of the ‘high stakes’ accountability framework now widely considered by practitioners to be the bane of British education.
The way to overcome that problem, while ensuring a diverse qualifications market, closely aligned with the requirements of higher education providers and employers, and appropriate to the needs of diverse learners, is not through a diminution of choice and further consolidation of government control over test setting and outcomes. Rather, it is through opening up supply to even greater competition; greater support from school leadership for subject enthusiasts wishing to collaborate with assessment professionals in qualification design; removing the selection and appointment of Ofqual’s key leadership from political control; lifting the nebulous requirement for it to promote public confidence; and focusing its remit on its fundamental task – the development and application of robust methodologies for ensuring that testing is valid and standards are reliably upheld within and between the boards.

About the author
James Croft is Director of the Centre for Market Reform of Education and the co-author of the CMRE discussion paper 'When Qualifications Fail: Reforming 14-19 assessment' (2012) and previously of 'Profit-making free schools' (2011).