Placing assessment for learning at the heart of researchers’ work in education overcomes the potential for learning goals and success criteria to be obscured by high-stakes testing and system-level evaluation (Heldsinger, 2012). Misconceptions and dichotomies emerge when controversy surrounds NAPLAN (Thompson, 2013; Wu, 2009), when students are disengaged (Black & Wiliam, 1998), and when global standardised testing practices are criticised (Fullan, 2005; Sahlberg, 2011).
Recommended methods of improving assessment practice do not always accord in the literature. For example, one source identifies co-construction of rubrics as playing a role in making learning visible (Wiliam, 2011), whereas another is more measured, claiming that
research shows why rubrics should be developed to be appropriate to specific tasks rather than from abstracted or more general notions of student development (Heldsinger, 2012, p. 252).
How formative assessment should be implemented remains open to interpretation, and evidence on applying formative practice effectively is still lacking (CRESST report 809, 2011). Moreover, policy-makers claim that
Despite the widespread enthusiasm for expanding formative assessment, there is still much uncertainty about this strategy (CRESST report 802, 2011),
thus complexity compounds confusion. The need for care in judgment is taken for granted in the literature, and testing hypotheses serves to clarify process, especially when other ways of looking at the data exist (Heldsinger, 2012).
Review of the literature shows that applying a measurement paradigm to the context of student assessment is crucial to overcoming dichotomies and misconceptions, and that the concept of a continuum is essential to that shift in understanding (Heldsinger, 2013). Measurement, then, is a process requiring continual refinement. Since notions of latent ability reveal that
Performance and competence are not in a perfect 1:1 relationship (Andrich, 2002, p. 38),
the most effective practice will synthesise system-wide expectations with individual responsibility and the demands of external accountability (Fullan et al., 2006).
While research illustrates that evaluation at the school and system level exists to monitor assessment for accountability, concerns emerge in relation to margins of error, which need to be acknowledged and minimised (Chappuis, 2009). By aligning methods for achieving best practice, research suggests that effective schools are those which develop strategies to become robust learning organisations, especially in times of fiscal constraint, when criticism of standardised testing is widespread.
Andrich, D. (2002). Implications and applications of modern test theory in the context of outcomes based education. Studies in Educational Evaluation, 28(4), 35-59.
Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Retrieved January 26, 2013 from Discovery Education website: http://blog.discoveryeducation.com/assessment/files/2009/02/blackbox_article.pdf
Chappuis, J. (2009). Where am I now? Effective feedback. In Seven strategies of assessment for learning. Moorabbin, Vic: Hawker Brownlow Education.
CRESST report 802. (2011). Knowing and doing: What teachers learn from formative assessment and how they use information. Retrieved January 26, 2013 from the CRESST website: http://www.cse.ucla.edu/products/reports/R802.pdf
CRESST report 809. (2011). Relationships between teacher knowledge, assessment practice, and learning: Chicken, egg or omelet. Retrieved January 26, 2013 from the CRESST website: http://www.cse.ucla.edu/products/reports/R809.pdf
Fullan, M. (2005). Leadership & sustainability: System thinkers in action. Thousand Oaks, CA: Corwin Press.
Fullan, M., Hill, P., & Crevola, C. (2006). Breakthrough. Thousand Oaks, CA: Corwin Press.
Heldsinger, S. (2012). Using a measurement paradigm to guide classroom assessment processes. In Webber, C. F. & Lupart, J. L. (Eds.), Leading student assessment. New York: Springer.
Heldsinger, S. (2013). Master of School Leadership: Leading assessment and accountability. Retrieved January 26, 2013 from UWA LMS: http://www.lms.uwa.edu.au/my/
Sahlberg, P. (2011). Finnish lessons: What can the world learn from educational change in Finland? New York: Teachers College Press.
Thompson, G. (2013). Submission to the House of Representatives standing committee on education and employment. Retrieved February 14, 2013 from the Effects of NAPLAN website: http://effectsofnaplan.edu.au/572/submission-to-the-house-of-representatives-standing-committee-on-education-and-employment/
Wiliam, D. (2011). Embedded formative assessment. Bloomington, IN: Solution Tree Press.
Wu, M. (2009). Interpreting NAPLAN results for the layperson. Retrieved January 26, 2013 from Educational Measurement Solutions website: http://www.edmeasurement.com.au/_publications/margaret/NAPLAN_for_lay_person.pdf