Evaluation in Practice: How Do We Know What Works?

In last week’s blog post, Elaine Huber outlined research on evaluation, prompting us to respond to her questions about how evaluation is conducted in practice and what we have learnt from the process.

At Business Co-Design (BCD), we have developed an evaluation framework to determine the success of educational developments, especially those made as part of the Connected Learning at Scale (CLaS) project. The evaluation of these developments is co-designed by the stakeholders involved, usually the unit coordinator (UC) together with an educational developer and an evaluation/research associate from the BCD team.

The purpose of our evaluation is to validate the effectiveness of developments made through the co-design process in supporting student learning and addressing issues of teaching at scale (Preskill and Torres, 2004). For example, we might want to test the effectiveness and usability of a peer feedback tool from both the student perspective and the administrative perspective, ensuring that any developments do not increase administrative burdens. While the main university-wide channel for evaluation, the Unit of Study Survey (USS), provides important data, it cannot be tailored to focus on specific interventions. In addition to the USS, the other channels for collecting evaluation data in the BCD framework are:

  • Student survey
  • Student focus group
  • UC interview
  • Tutor focus group
  • Learning analytics (e.g. Canvas, Vimeo, H5P, Genial.ly)

Kicking off the process

Our evaluation begins with co-designing an evaluation plan at the outset of a development phase, meaning we are thinking about how to evaluate an innovation from the moment we conceive of it. Developing this plan is a dynamic process, designed to be flexible and responsive to the specific needs of each unit. The plan provides a general direction for the project and follows an indicative timeline with built-in flexibility and the opportunity to learn from the implementation as we go. For instance, the student survey is conducted before the focus group so that the focus group can investigate the survey findings in more depth.

Initiating the design of the evaluation plan can be challenging because different stakeholders may hold varied conceptions of the purpose of evaluation. Clear communication is therefore imperative for sustaining collaboration among all stakeholders. In addition to the evaluation plan, we have created a visual representation to facilitate communication and discussion about evaluation between project stakeholders. This diagram (Figure 1) illustrates the timing of data collection during a semester, the sources of the data, and the overall process, including where flexibility is built in so the framework can be tailored to each unit of study’s context.

Figure 1 – Business Co-Design evaluation framework

Learning from insights

Evaluation data is fed back to the teaching team by the educational developer and evaluation team members, providing insights that inform future development cycles. For example, in evaluating the design of self-paced online modules for content delivery in a unit, we learned that students value the opportunity to check their understanding with small interactive H5P activities. However, they want more feedback from these interactions about why an answer is incorrect and what they should revise. In the next cycle of development, learning designers can therefore focus not on building more practice questions but on refining the feedback students receive, making the practice more meaningful.

For developments made as part of the CLaS project, the team has HREC approval, so this data can also be used for educational research purposes. The conditions of the HREC approval prescribe processes for collecting this data that mitigate any real or perceived coercion and ensure students are informed and consent to sharing their feedback. For those students who elect to participate, their feedback is valuable in advancing understanding in the field, especially as we aggregate findings and look across similar developments in different contexts to identify shareable patterns.

Challenges

The evaluation framework we introduce here is supported by resourcing through the CLaS project; designing, implementing and analysing such comprehensive data collection across multiple channels is resource-intensive. Other challenges include securing participation from students, particularly in the face of “survey fatigue”. Our evaluation framework seeks to mitigate this by carefully timing data collection and by communicating to students why their feedback is being sought and how it will be addressed. In one unit, we were able to “close the feedback loop” (Wisniewski, Zierer & Hattie, 2020) by reporting back to students a summary of what we learned through the student survey and the changes planned on the basis of these insights.

An important tool

While designing and executing evaluation plans involves challenges, such plans are crucial to innovating successfully in education. Just as we ask our students to provide evidence to support their arguments and conclusions, as educational developers we must be accountable: our innovations should be research-driven, and we should draw on evidence to evaluate them.

Banner photo by Pexels
