Evaluation is a loaded term. Some see it solely from a judgement perspective and disdain the process. Others see it from a learning perspective and welcome it. Evaluation comes in all shapes and sizes, and our approach, perhaps shaped by our disciplinary perspectives, tends to determine how, and whether, we welcome evaluation of our teaching.
When individuals participate in an evaluation process that is collaborative and guided by dialogue and reflection, learning occurs not only at the individual level but also at the team and organization levels.
Preskill & Torres, 2000, p. 26
Improving the student experience
In our teaching practices, we are always looking for ways to improve the student experience. We may look at our student feedback data, where students have evaluated our teaching or our unit design. We may choose to ignore their comments, or we may decide to learn from them; this is pretty much what students do when we evaluate their learning and give them feedback. We can assess their evaluation of us for themes and ideas, compare it with our own self-evaluations and reflections on the semester, and decide whether or not to make any changes.
In a study carried out to understand academics’ conceptions of evaluation, I found a misalignment between evaluation theory and the practice of learning and teaching evaluation. I also found that academics’ perceptions of evaluation can inhibit this relationship (Huber & Harvey, 2016a).
Evaluating educational innovations
Suppose we do decide to change our teaching because of evaluative feedback from students or peers, or perhaps based on the education literature. In that case, it is always good to evaluate these changes: consistently welcoming evaluation leads to further learning and development. However, in my research into these scenarios (implementing changes to one’s teaching), I found that academics tend to leave evaluation to the end of a semester or project. They decide what and how they will evaluate the success of a new intervention or change only after it has been carried out. They may think that yes, they should evaluate, but in the maelstrom that can be the start of a semester or project, evaluation tends to get overlooked even when intentions are high.
Tools to help with evaluation
In previous research, I found that academics sometimes felt overwhelmed by evaluation (Huber & Harvey, 2016b). As a result, we identified a need for more targeted evaluation support mechanisms that are flexible, adaptable and timely.
There are many great practical evaluation resources (see, for example, the HEA toolkit or Better Evaluation) produced by leading researchers and evaluation practitioners. However, my research showed that something simpler was needed to assist academics in evaluating the small changes or innovations they may try. For example, how can they overcome their own bias and proximity to the data to make an objective evaluation of their ideas? Asking a critical friend for constructive feedback through peer observation is one way (Kember et al., 2007).
Another suggestion is to use a framework I have termed SPELT (Small Project Evaluation in Learning and Teaching) at the beginning of the period of change. It is a simple set of reflective prompts to help you plan your evaluation. The interactive tool can be accessed at http://tinyurl.com/evalplan. Feel free to use it and share it with colleagues.
Designing an evaluation plan is a work of art
Cronbach & Shapiro, 1982, p. 27
| Step | Reflective prompt |
| --- | --- |
| 1 | What is the purpose and scope of the evaluation? |
| 2 | Who are the stakeholders of the project or innovation? |
| 3 | What are the Key Evaluation Questions? |
| 4 | What data and evidence will be collected, and how will it be analysed? |
| 5 | What are the criteria for judgement? |
| 6 | What dissemination strategies will be used, and how will these help you? |
For me, step 5 is the most important. If we can develop and agree on the ‘success’ criteria from the start (for example, that a certain proportion of students engage with the new activity, or that results improve on the previous cohort’s), then when it comes to judgement time we can move beyond our natural tendency simply to say ‘this worked’ or ‘this didn’t work’, and understand why. In my next evaluation blog post, I’ll unpack this a little more.
Completing the cycle
Evaluation outcomes may lead us to reverse the change(s), or to continue with them in a different way. We may decide a change was successful and keep it as is for a few more deliveries, or decide to build on it iteratively. Having an evidence base on which to make these decisions is good academic practice.
How do you perceive evaluation? How do you evaluate your teaching innovations? What are your ‘success’ criteria? What have you learnt from evaluating?
References
Cronbach, L. J., & Shapiro, K. (1982). Designing evaluations of educational and social programs. Jossey-Bass.
Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. New Directions for Evaluation, 2000(88), 25–37.
About the author
Associate Professor Elaine Huber has been designing curriculum and teaching adults for over 20 years and is currently the Academic Director of the Business Co-Design team at the University of Sydney.