Introducing the TEDM Principle: Improving Written Feedback Workshop

McGuire, W. (2020) Introducing the TEDM Principle: Improving Written Feedback Workshop. Assessment and Feedback Symposium 2020: Assessment and Feedback in the Pandemic Era: a Time for Learning and Inclusion, 28 Oct 2020.

Slideshow: 225858.pdf (Presentation, 4MB)

Abstract

This workshop introduces the TEDM principle and allows participants to explore key sections from this CPD resource, which was designed to improve the quality of written feedback in HE. Its theoretical underpinnings were framed within a journal article which sought to evaluate the project, 'Showing; Not Telling: Modelling student feedback to improve satisfaction', and, in doing so, to address issues identified in relation to student satisfaction, as well as in assessment and feedback, at local, national, European and global levels. Broadly, the research questions are as follows: 1. What, if any, are the common issues that arise in communicating feedback to students? 2. What, if any, are the common problems students face in internalising feedback from their educators? 3. What can peer review models do to close the gap between feedback and student action on that feedback? 4. What are the foreseeable drawbacks of 'feedback as telling' peer review models, and how can a 'showing, not telling' model of peer review remedy them? Specifically, the aim is to re-shape the way we both construct and respond to extended written assignments, improving their effectiveness by blending formative and summative assessment modes to: 1. develop a positive learning and teaching culture that includes, but goes beyond, the issues identified in a range of NSS and PTES surveys, fostering inquisitive minds and collaborative effort; 2. achieve high levels of student satisfaction as well as timely, high-quality assessment and feedback.
This will enable students to improve a current assignment through the provision of feedback which addresses three areas of their formative work to improve their summative grade (and levels of satisfaction) by identifying: 1) positive trends at the formative stage; 2) areas requiring immediate action; and 3) the marker-modelling needed at the meso-level to demonstrate, explicitly, how improvements can be made to an assignment, thus bridging the gap between telling students what they should do to improve their work and actually showing them what that looks like in practice. Assessment feedback is often problematic in relation to student surveys, such as the NSS (2017-19) and PTES (2017-19), and, in addressing the key issues, we have grounded our work in extant theory: Brookhart (2017) identifies a range of key elements of effective feedback, including timing, content specificity and personalisation; Brookfield (2017) focuses on an enhanced form of reflective triangulation, using four lenses to ensure criticality (students, colleagues, personal experience, and theory and research); and, within the framework of Li, Liu and Steckelberg (2009), peer assessment becomes both a strategy for formative assessment and a tool for reflection by students (Cheng & Warren, 1999, cited in Li, Liu and Steckelberg, 2009). In policy terms, the European Programme Accreditation System (EPAS) cites student feedback processes as a criterion for meeting accreditation standards in Europe. This project builds on all of these works, national, European and global, by making use of the key strategies they identify within a coherent pattern. The project outcomes will benefit staff and students not only within Glasgow University, but also on the European and global stages, where similar issues have been identified with the quality of student feedback.
Additionally, in providing staff with research-based evidence generated from the project to enhance their practice, we will be able to move towards more effective procedures in assessment architecture and feedback for all forms of essay/assignment-based summative assessment. To address the objectives referred to above, six focus groups of four were established to capture the views of volunteers from two courses in different Schools: the MEd/MSc in TESOL and the MEd in Professional Practice in Education. The goal was also to expand the initiative beyond its starting point, the MEd (Professional Practice) and the School of Education. Braun and Clarke's (2006) model of thematic analysis was used as a paradigm followed by all four researchers throughout the process. This six-step model starts with familiarisation and codification, then shifts to the identification and extrapolation of themes, leading to eventual publication. A protocol for the application of this model was agreed by all researchers in advance of the data analysis. At the outset, one researcher identified the initial codes from one of the six data sets in order to guide the other researchers in their own analyses, while still encouraging individual analyses to capture the richness of the data. The initial codes were as follows: Confidence (28); Emotion (6); Opportunity (19); Motivation (13); Usefulness (42); Objectivity (9); Unexpected Outcomes (9); Engagement with Criteria (11); Engagement with Feedback (8). The bracketed numbers refer to the frequency of references in that single data set. Thereafter, each researcher analysed an allocated data set, chosen to be one with which they were unfamiliar, to encourage a freshness of approach. The approach taken can be characterised as a combination of inductive and deductive modes.
The primary driver of the research was to explore the effectiveness of an intervention designed to generate new and improved approaches to the deployment of feedback. In this regard, the main approach was inductive, to generate new theory, although a deductive element remained, to test the effectiveness of existing approaches. The researchers met regularly to discuss and thematise the findings from their analyses of the data sets, with the final product collated by one researcher following the approval of the team as a whole. Findings are summarised below in relation to the emergent themes identified above. Confidence can be sub-categorised into two forms, intra-personal and extra-personal, with impact being greater in relation to the former. Emotion relates to the defensiveness with which participants received feedback, which was, in turn, influenced by the tone and content of the feedback provided. Opportunity might be separated into two forms: personal and structural. In personal terms, participants viewed formative feedback very positively; in structural terms, many were keen to engage with the process of improving the model of feedback deployed. Motivation was improved by the initiative, particularly in terms of focus, which can be categorised as criteria focus and deadline focus. Usefulness was defined mainly in terms of specificity and the extent to which the feedback provided could be converted into improved practice. Objectivity was a key finding, in that it offset the purely subjective experience of working on the formative stages of an assignment in isolation. A key Unexpected Outcome was the finding that participants noted improvements in critical thinking. Engagement with Criteria and with Feedback both developed criticality, with participants valuing feedback that was 'short and to the point.'
An interpretation of the findings indicates that support is required for both staff and students to shift from an instructional to a descriptive model of feedback in order to fully realise its potential. To achieve this, we propose the creation of a co-constructed (staff and student) protocol to exemplify good practice in relation to both the deployment (by staff) and the use (by students) of feedback.

References

Bennett, R.E. 2011. Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice 18, no. 1: 5-25. DOI: 10.1080/0969594X.2010.513678
Black, P., and D. Wiliam. 1996. Meanings and consequences: a basis for distinguishing formative and summative functions of assessment? British Educational Research Journal 22, no. 5: 537-48. http://www.jstor.org/stable/1501668
Black, P., C. Harrison, and C. Lee. 2003. Assessment for Learning: Putting It into Practice. Maidenhead: Open University Press.
Black, P., and R. McCormick. 2010. Reflections and new directions. Assessment & Evaluation in Higher Education 35, no. 5: 493-499. DOI: 10.1080/02602938.2010.493696
Black, P., and D. Wiliam. 2009. Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability 21, no. 1: 5-31. DOI: 10.1007/s11092-008-9068-5
Boud, D., R. Cohen, and J. Sampson. 2001. Peer Learning in Higher Education: Learning From and With Each Other. London: Routledge.
Braun, V., and V. Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, no. 2: 77-101. DOI: 10.1191/1478088706qp063oa
Brookfield, S. 2017. Becoming a Critically Reflective Teacher. 2nd ed. San Francisco, CA: Jossey-Bass. ISBN 9781119050650
Brookhart, S.M. 2017. How to Give Effective Feedback to Your Students. 2nd ed. Alexandria, VA: ASCD. ISBN 9781416623090
Brown, G., J. Bull, and M. Pendlebury. 1997. Assessing Student Learning in Higher Education. London: Routledge.
Carless, D. 2006. Differing perceptions in the feedback process. Studies in Higher Education 31, no. 2: 219-33. DOI: 10.1080/03075070600572132
Cartney, P. 2010. Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used. Assessment & Evaluation in Higher Education 35, no. 5: 551-564. DOI: 10.1080/02602931003632381

Item Type: Conference or Workshop Item
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: McGuire, Professor Willie
Authors: McGuire, W.
College/School: College of Social Sciences > School of Education
College of Social Sciences > School of Education > Pedagogy, Praxis & Faith
Research Group: PPF
