TL;DR: It’s a shit-show.
As the daughter of a primary school teacher and principal, I’ve definitely listened in on a few tirades about the National Assessment Program – Literacy and Numeracy (NAPLAN). If you’re not an Aussie native, it’s basically a big test that everyone in grades 3, 5, 7 and 9 takes to measure how students are performing around Australia. The data is attached to schools and often has a massive influence over the amount of funding schools receive. My Principal father has the ridiculous job of making sure that the education environment he’s lovingly crafted within his school serves his students well in this annual test. If it fails, the school’s funding is cut. The impacts that this ludicrous test has had on students nationwide are ignored while local newspapers put the underperforming schools on blast on page 3. Basically, NAPLAN is the perfect storm to brew a dichotomy between the “successful” and “failing” schools, and it’s our kids who are paying the price.
NAPLAN isn’t done for the right reasons. Ultimately, there should be a strong focus on enriching the learning experiences of our kids, but NAPLAN does the opposite. One study says that, “The impact of this culture on the learning experiences of children may be defined as a shift from a focus on the needs of the child…to a focus on the needs of the system.” Yikes.
Negative Effects on Students
Aussie kids are super diverse, so it’s no surprise that the ways they learn are just as varied. With NAPLAN being a pretty high stakes test, there is a massive burden of accountability for failure that comes with it. Unfortunately, this accountability often falls unfairly on the shoulders of the kids taking the test. One study found that when kids undergo high stakes testing in childhood, they’re more likely to label themselves as failures at a very early stage of their learning journey.
In fact, the Queensland Studies Authority has expressed concern at the potential of full cohort testing to lower the self-esteem, self-image and long-term confidence of underperforming students, thus widening the gap between them and their higher-achieving peers.
But because NAPLAN is such a chaotic time of the year, the welfare of these kids is often not discussed at length. One study found that a pervasive silence exists around the rights of the child/student while undergoing high stakes testing. So, while kids are suffering under an unjustifiable amount of pressure, the media remains relatively silent about it. This doesn’t mean they don’t talk about NAPLAN, though.
It’s WAY Too Simplistic
Pretty much every study on NAPLAN agrees that human learning is far too complex to be accurately depicted by one annual test. The Journal of Education Policy found that it can only ever deliver a point-in-time snapshot of achievement across a limited body of knowledge.
When NAPLAN data is published, individual families get a sheet of paper telling them how their child compares to the rest of Aussie kids their age. The public gets data on every school’s performance and how it compares to the rest of Australia.
This kind of data gives us a really limited view of the actual academic performance of Aussie kids. The Melbourne School of Education argues that NAPLAN testing actually encourages teaching methods that promote shallow and superficial learning rather than deep conceptual understanding. They also found that high stakes testing has a tendency to disregard differences in the needs, talents and achievements of different students, and that kids from minorities and those with disabilities and special education needs are especially affected. Surely if kids are underperforming, that’s a pretty good indication that they actually need more resources and funding to improve their performance, not the other way around.
Some teachers, principals, parents and community members even agree that NAPLAN tests do not allow Indigenous students the opportunity to demonstrate their attributes, as the test is too Eurocentric and inaccessible for some Indigenous students. In fact, NAPLAN’s usefulness is pretty limited when it comes to informing parents and teachers on student performance. The data that NAPLAN gives us isn’t nuanced enough to provide any real impact on learning improvement.
The media is the bane of NAPLAN’s existence
When the media reports on NAPLAN, it isn’t to make sure that testing is carried out correctly with the least amount of negative psychological impact possible. In fact, most outlets that report on NAPLAN only cover results and rush to label high-performing schools as “good” and low-performing schools as “bad”.
SAGE Journals determined that the Sydney Morning Herald, the West Australian and the Herald Sun more often than not published comments from sources strongly opposed to NAPLAN, yet the reporting rarely questioned the validity of the testing results. In fact, the tests were presented as an important and accurate measure of student performance. The study also found that the tone of the coverage was predominantly negative, with a strong focus on inadequate educational standards and on blaming teachers for the bad results.
Some may argue that this is good because it helps parents decide where to send their kids to school. However, publishing data may lead to inaccurate and unreliable reporting on schools rather than providing any tangible school improvement. Basically, the data doesn’t really do anything to change the way a school operates based on one round of NAPLAN results. So, while parents are busy avoiding the under-performing schools, these schools may actually be a great fit for their kids regardless of their NAPLAN results.
How do we fix it?
Well, it’s complicated. We have to monitor the performance of schools and students somehow, but NAPLAN is clearly not the way to do it. The Journal of Education Policy recommends that we start with getting performance data from sample groups within schools instead of testing entire grades. That way, resources aren’t wasted on preparing an entire cohort for a multi-day test; instead, kids from different backgrounds are selected to give an intersectional picture of performance. It also recommends that this be complemented by teacher assessment so that we can accurately determine whether student performance is related to teacher performance. These two changes would give us a much broader scope when it comes to assessing the nation’s schools and would show us more clearly what areas need improvement.
Schools shouldn’t be defined by how well they do on a test. They should be defined by their education, and one test once a year should not determine the fate of our kids’ education.