In this post, we share highlights from the annual conference on Learning at Scale. In addition to exploring keynote addresses by Professor Simon Buckingham-Shum and Professor Oleksandra Poquet and unveiling two collaborative work-in-progress papers presented by the authors, we offer insights into the emerging discussions and innovations that are shaping the future of learning at scale.
We look back at the recently concluded 10th annual Conference on Learning at Scale, held from July 20–22, 2023, in Copenhagen, Denmark. This year’s conference was a scholarly meet-up that explored the future of learning at scale and celebrated innovative thinking and interdisciplinary collaboration.
Organised by the Association for Computing Machinery (ACM), the Learning at Scale conference emerged from a need to address a new educational landscape. The rise of Massive Open Online Courses (MOOCs), hybrid learning, AI-supported learning, gamified experiences, and citizen science initiatives sparked a need for new methodologies to study how we perceive and approach education. Over the years, the Learning at Scale conference has become a symbol of excellence, providing a platform for research on transforming learning and teaching at scale.
The relationship between data and humans in education isn’t merely a technical aspect but a deeply systemic interaction that considers fairness, culture, trustworthiness, equity and more. These themes are now central to the discourse within the community.
The theme for this year’s conference, ‘Learning Futures@Scale’, was especially timely. The global pandemic elevated online learning into the mainstream, introducing both opportunities and challenges. The future demands careful thought and reflection from technological, social, organisational, cultural and responsibility angles.
Keynote speakers
Simon Buckingham-Shum, a Professor of Learning Analytics and Director of the Connected Intelligence Centre at the University of Technology Sydney, delivered the first keynote of the conference, titled Trust, Sustainability, and Learning@Scale. It was a sobering reminder that the planetary window for change is closing: whether we consider the situation terminal or something we will resolve, unless we begin to learn, the outlook is grave.
Simon spoke of a failure to learn in a multitude of ways: a failure to master technology so that it enables rather than undermines civic society; a failure to control our cognitive biases; and a failure to educate the next generation to break out of this cycle. He introduced us to Lilian Katz’s (1993) work on dispositions and stressed the importance of behavioural patterns that support lifelong learning. He also pointed to two resources that treat dispositions in meaningful ways: Iain McGilchrist’s book The Matter with Things (2021), which emphasises that our attention to the world is a profoundly moral act; and Henry Giroux, whose vision of critical pedagogy places education at the heart of politics, fostering a culture that cultivates the critical-thinking citizens essential for a deep-rooted democracy.
He left us with his recent reflective study, drawing on the past eight years, into trust in learning analytics, which needs to be built through conversations across four ‘rooms’: the boardroom, the classroom, the server room and the staff room (Buckingham Shum, 2023).
The conference’s second keynote was presented by Oleksandra Poquet, a Professor of Learning Analytics at the School of Social Sciences and Technology, Technical University of Munich. Her talk was titled ‘When many learners interact: Towards relational processes at scale’. Throughout her presentation, Oleksandra explored the crucial role of relational processes in large-scale education.
Oleksandra raised a thought-provoking question to the audience: how often do we integrate relationship formation into our courses’ learning objectives? Surprisingly, she noted, this is rarely the case. Discussing the challenges of researching relationship formation in education, she highlighted several significant obstacles, among them the limited number of studies that precisely define relationships, and the fact that most highly cited papers on relationship networks focus on a single course. While such studies are interesting, they may not tell a comprehensive story about relational processes.
Oleksandra proposed a nuanced approach to studying relational processes in education. This involves distinguishing between participation, communication and relationship processes. By doing so, she argued, we can align more closely with viewing data as events rather than static states. On a concluding note, she emphasised the critical yet often overlooked idea that both pedagogy and teacher behaviour can, at times, hinder these social processes.
Our WIP papers
Amid the conference’s many workshops, presentations, and poster showcases, we presented two work-in-progress papers on online assessment design. These papers are the result of collaborative research between the University of Sydney Business School in Australia and Cornell University in the United States.

Developing a Prototype to Scale up Digital Support for Online Assessment Design: https://doi.org/10.1145/3573051.3596194
With the rising importance of online education, particularly in the post-pandemic world, educators need tools that help them design robust online assessments. This paper presents an innovative automated support system designed to meet that demand, demonstrating the symbiotic relationship between data, technology, and the human touch in reshaping educational practices at scale.
Abstract: Educators rarely have access to just-in-time feedback and guiding heuristics when designing or updating assessments in higher education. This study describes the initial development process for an automated support system for designing high-quality online assessments. We identify key elements to embed in this digital artifact to offer just-in-time support for educators to design and evaluate their online assessments. We follow a design science approach in six stages, because it simultaneously generates knowledge about the method used to develop the artifact and the design of the artifact itself. Specifically, we focus on the early stages of problem identification, solution objectives, and initial conceptual design. After reviewing multiple assessment models and frameworks, we discuss a recent framework for evaluating and designing high-quality online assessments, consisting of ten design and contextual elements. This framework underpins the proposed solution which is a digital artifact that encourages consideration of alternate forms of assessment while retaining the flexibility to operate within individual educators’ design practices and contexts. We expect the proposed system to help educators and instructional designers to better understand the strengths and weaknesses of their assessments, consider alternate forms of assessment, and incorporate the system into their assessment design process.
Educator and Student Perspectives on the Impact of Generative AI on Assessments in Higher Education: https://doi.org/10.1145/3573051.3596191
As generative AI tools like ChatGPT spread and blur the lines of conventional educational practice, there is a need to understand their impact on university assessments. This exploratory research examines how both educators and students perceive generative AI in the context of assessments.
Abstract: The sudden popularity and availability of generative AI tools, such as ChatGPT that can write compelling essays on any topic, code in various programming languages, and ace standardized tests across domains, raises questions about the sustainability of traditional assessment practices. To seize this opportunity for innovation in assessment practice, we conducted a survey to understand both the educators’ and students’ perspectives on the issue. We measure and compare attitudes of both stakeholders across various assessment scenarios, building on an established framework for examining the quality of online assessments along six dimensions. Responses from 389 students and 36 educators across two universities indicate moderate usage of generative AI, consensus for which types of assessments are most impacted, and concerns about academic integrity. Educators prefer adapted assessments that assume AI will be used and encourage critical thinking, but students’ reaction is mixed, in part due to concerns about a loss of creativity. The findings show the importance of engaging educators and students in assessment reform efforts to focus on the process of learning over its outputs, higher-order thinking, and authentic applications.
A future shaped by collaboration
The future of learning at scale is not limited to technological advancements. It is about the convergence of educators, researchers, and technology. It’s about creating new methods, educational development techniques, productive insights, and infrastructural innovations. All of these require a community that embraces interdisciplinary collaboration.
Cover photo: Adobe Stock / KOTO