Since it was launched in 2008 by then education minister Julia Gillard, NAPLAN (National Assessment Program – Literacy and Numeracy) has been administered across the country to students in Years 3, 5, 7 and 9 every May (with the exception of 2020, due to the COVID crisis). A controversial test that stirs strong feelings in the education community may not be the holy grail of assessment, but does it have a place? And if so, what is its value?
It comes down to what these types of tests are designed to do, the data and measurements they provide, and the insights that can be drawn from them.
NAPLAN can provide valuable information to policymakers. Because NAPLAN assesses all students rather than samples of students, it is possible to study changes in performance at a system or cohort level – including systems in which results have improved since 2008. This, in turn, introduces the possibility of identifying policies and practices that may have led to these improvements.
However, tying the improvements to a particular set of strategies or policies can be challenging. In the article “How well are we learning from NAPLAN?”, Geoff Masters presents the example of the very significant improvements in the reading and numeracy levels of Queensland Year 3 students between 2008 and 2016. These gains could be partly related to the introduction of a Preparatory year in Queensland from 2008, but much of the observed improvement occurred before the first Prep cohort entered Year 3 in 2011 and Year 5 in 2013. This means more analysis would be needed to work out what other interventions or strategies affected these results.
Standardised achievement tests can do a good job of supplying the evidence needed to make norm-referenced interpretations of the knowledge and/or skills of one defined cohort of students in relation to those of students nationally or within other cohorts. This evidence can serve as an accountability mechanism for systems and policymakers, showing how schools perform relative to benchmarks and allowing long-term trends to be tracked. However, standardised achievement tests should not be used as a measure of teaching quality or to identify individual students’ needs in order to target teacher practice, primarily because their results are only meaningful across very large samples.
Another point of concern, largely because NAPLAN data only makes sense for very large groups (such as boys versus girls, or one sector versus another), is the use of NAPLAN to compare individual schools. This creates ‘league tables’ that push some school leaders to chase better test results in an effort to compete for enrolments. In a survey conducted by the Whitlam Institute with more than 7,000 educators, 45% of participants strongly agreed that NAPLAN was used as a tool for scoring schools, and 93% of respondents believed that poor results strongly affect a school’s ability to attract and retain students. The problem is that improving NAPLAN scores (other than by ‘teaching to the test’) is not always within the control of a single school. We should place less emphasis on directly comparing schools and more emphasis on supporting learning through evidence-based practice.
Similarly, using NAPLAN as the ‘stick’ to hold teachers and leaders accountable for excellence tends to promote behaviours that run contrary to good teaching and learning. For example, teachers are more likely to ‘teach to the test’ if they believe they will be ‘punished’ when results are not as expected. Spending weeks on NAPLAN preparation adds to an already crowded curriculum and can also narrow the range of teaching strategies used in the classroom. Creating a high-stakes atmosphere also has a significant impact on how students feel about assessments and can add to the anxiety they feel about subjects such as maths, as well as increase stress levels for teachers. These kinds of high-stakes measures have also been shown to be ineffective in lifting schools’ performance.
Accountability is incredibly important, but NAPLAN does not provide credible evidence of student achievement at the level of an individual teacher – the kind of evidence required if the focus is on improving learning outcomes and giving teachers data that supports better practice.
This is where the limitations of tests like NAPLAN need to be recognised. These tests cannot provide the granularity (specificity) and regularity required to support differentiation. To personalise learning, teachers need formative assessments that can pinpoint with accuracy the concepts students have mastered and the gaps and misconceptions that remain. A ‘point-in-time’ snapshot of students’ performance against the learning outcomes expected for their year level can provide useful information for measuring progress, much like taking a Polaroid every birthday to see how much you have changed. Yet a once-a-year (or, in NAPLAN’s case, once-every-two-years) assessment cannot be used as a formative tool. Teachers need access to timely data to meet students’ needs effectively. The information collected as part of a formative assessment cycle supports better planning, such as identifying which concepts, and which students, warrant the limited time available for explicit instruction, or which students may benefit from targeted interventions.
Creating better learning opportunities and, in time, better learning outcomes is what most people in the education sector want to achieve, but intention and impact do not always match. Yes, NAPLAN has its place and can help shape policy and analyse the impact of particular education initiatives, but if we continue to use it to compare schools and judge teachers, we may be doing more harm than good.