Avatars, digital humans and the new ‘faces’ of educational media

As facial recognition technologies, animation and 3D modelling software become more powerful and commonplace, we’re seeing breakthroughs in how—and who—presents media to camera. Developments in 2D and 3D avatars, dubbed ‘digital humans’ at the photoreal end of the spectrum, create possibilities to re-think the presenter when it comes to delivering educational videos.

Supported by recent research on the value of ‘flipped learning’ (Wagner, 2020; Perez et al., 2020), academics are re-imagining the form of the lecture. Experiments with ‘chunking’ content into short explainer videos allow students to learn core concepts at their own pace before actively applying their knowledge in class with peers.

The use of a character or avatar in these videos might offer a creative way to engage students with certain kinds of content, or it might align conceptually with the content itself. And some academics, wary of having their faces on camera all day, are keen to explore other ways of representing themselves to students. So, what are some options?

2D avatar pilot

To address this need, we’ve been piloting 2D avatars in educational videos in the Business School. A suite of six original characters has been developed by Business Co-Design that can be ‘inhabited’ by academics.

Left to right: Mayumi, Kaia, Brie, Clive, Al, and Ollie
How does it work?

Academics can come to our DIY media studio, which has been designed for semi-automated recording. A media producer supports the academic remotely; the academic activates the character menu with the click of a button and selects their 2D avatar. Then it’s simply a matter of presenting to camera as usual and cueing any visuals, such as slide decks. The face-tracking technology in Adobe Character Animator tracks the presenter’s facial movements, syncing mouth shapes to speech, and automatically animates the character so it appears to deliver the content, controlled like a puppet.

BCD team member Prudence Murphy experiments with controlling one of the 2D character avatars.

The excerpt below shows the pilot of the new 2D avatars in an educational video produced for a strategy, innovation and management unit. Watch to see Bernhard Resch morph into his avatar. While the process for this pilot was not entirely automated, the project gave us a chance to develop the workflow described above, which is now operational.

We caught up with Boyd Britton and Nicolette Axiak – two members of the team responsible for bringing these avatars to life.

Why was the project initiated?

BB: “One of the main drivers has been the issue of reusability of learning media. What we’ve found is that while the content of a particular video or video series might be relevant to a unit of study, if it happens to be presented by a different educator, it’s less likely to be reused. It’s much more likely it will need to be recreated, or at least modified. So we wondered: if we moved the representation of the educator from a specific person to something more generic, would that improve the relevance and shelf-life of certain media artefacts?

We were also curious about the effects on learning efficacy, given the default ‘piece to camera’ is not always the best format for certain types of learning content (Chaohua, 2019; Fiorella & Mayer, 2018). From an educator’s perspective, would these digital masks help make the process of presenting to camera less intimidating, and perhaps open new creative possibilities?”

What were some of the key objectives in terms of the design?

BB: “I think believability was a key consideration. Moving into a non-anthropomorphic space seemed interesting but too much of a jump, at least for this pilot. The characters needed to be relatable and sympathetic from the perspective of students.

The style also needed to be appropriate for adults, so we weren’t trivialising the content or the students’ learning experience. We also wanted to ensure there was diversity in the suite of characters – in terms of their individual characteristics – but also cohesion in terms of thinking about them as an ensemble.

One of the things that helped the design process was thinking about who these characters were as people – their idiosyncrasies, their relationships with the rest of the group. That was a lot of fun but also made the design decisions feel less arbitrary. The characters were rendered in 2D but they felt three-dimensional. It’s the same reason we avoided using pre-built templates. We wanted the avatars to be specific, even though they would be inhabited in different ways and by different people”.

Can you describe how the character designs were developed?

NA: “After working out the technical constraints for the design process, I started with a few rough sketches. This took a few iterations before we settled on a level of stylisation. I then illustrated each character in full colour and started splitting the layers for each asset to complete them. As an example, if I’d given a character a scarf, it would need to be on a layer totally separate from the shirt underneath, so that it could be programmed to drape and follow the motion of the character on its own. I worked collaboratively with the team to troubleshoot along the way, making sure that all stages of the design worked well with the Adobe Character Animator engine. To finish, we added the various mouth and eye poses so that the characters were able to talk, blink and emote”.

What were the main challenges encountered along the way?

NA: “There were some quite significant design constraints in the program, which restricted the way I could approach the character designs. Their features needed to remain symmetrical even as their heads rotated. With this technical requirement, the designs tended towards generic territory, so I needed to find ways to keep them interesting. It was also a challenge to develop a style to suit the target audience: Adobe Character Animator functions really well when character designs take a more cartoony aesthetic, but that’s not necessarily ideal for the nature of the academic content”.

What’s next for the project?

BB: “We are in the process of integrating body-tracking as part of the solution offered to educators. There are also refinements to the character design and rigging to make the movements more natural. More broadly, we are working on a system for scaling the process so that educators can record with their chosen avatar from the comfort of their home or office”.

3D avatars

We talked with Mike Seymour from the Business School, who is working with new kinds of ‘digital humans’ and 3D avatars in his research (Seymour et al., 2018).

Mike tells us about two new platforms. Meta Humans (see top image) provides a huge variety of highly photorealistic digital human personas that can be rigged from the same underlying structure and used as avatars. Synthesia allows users to upload a text or audio file and have the platform synthetically generate, or ‘infer’, a digital human to deliver that content.
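To make the ‘upload a script, get a video’ workflow concrete, here is a minimal sketch of a request builder for such a platform. All field names and the `draft` flag are hypothetical assumptions for illustration, not Synthesia’s actual API schema:

```python
# Illustrative sketch only: the payload shape and field names below are
# assumptions for the sake of the example, not a real platform's API schema.

def build_generation_request(script_text: str, presenter_id: str,
                             background: str = "office") -> dict:
    """Assemble a payload asking a text-to-video platform to 'infer'
    a digital human delivering the given script."""
    if not script_text.strip():
        raise ValueError("script_text must not be empty")
    return {
        "input": [{
            "scriptText": script_text,   # the content to be delivered
            "avatar": presenter_id,      # which digital human presents it
            "background": background,
        }],
        "draft": True,  # hypothetical flag: render a low-cost preview first
    }

payload = build_generation_request(
    "Welcome to week 3: platform strategy.", "presenter_01"
)
```

The key point is that the educator supplies only text (or audio); everything visual about the delivery is a parameter that the platform resolves at render time.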

MS: “One of the things that we can do when we start using digital humans and avatars with Meta Humans is make an experience very much relevant to the individual viewer. The digital human doesn’t have to just have the representative look of one speaker. If the underlying setup is identical, you can basically swap in and out different personas or ‘front ends’ or faces. Which means you can present that lecture or that content with any face, with any gender, with any sort of background that you want, as long as you’re respectful to the various cultures involved”. 
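Mike’s point about swapping ‘front ends’ over an identical underlying setup can be sketched as a simple data model. The class and attribute names here are hypothetical illustrations, not part of any MetaHuman tooling:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A 'front end': the face and look layered over the shared rig."""
    name: str
    style: str  # e.g. "photoreal"

@dataclass
class DigitalHuman:
    """The shared underlying rig. Because every persona targets the same
    skeleton and controls, the face can be swapped without re-recording
    any of the lecture content."""
    rig_id: str
    persona: Persona

    def swap_persona(self, new_persona: Persona) -> None:
        # Motion and audio are unchanged; only the visible persona differs.
        self.persona = new_persona

lecturer = DigitalHuman("standard_rig_v1", Persona("Asha", "photoreal"))
lecturer.swap_persona(Persona("Tomas", "photoreal"))
```

The design choice this models is the separation Mike describes: the rig (and the recorded delivery that drives it) is the stable asset; personas are interchangeable presentation layers over it.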

Mike explains how, along with personalisation, Meta Humans allows for interactivity, as it can “individualise what’s going on”, “moderate its delivery”, and “speed up or slow down depending on how somebody is grasping the material”.

So what uses might this kind of tech have in the education space?

MS: “One is to make sure our students are fully literate in this technology and understand what’s going on, because they’re entering a world where this is going to be a very real part of the media landscape. We serve our students best when we give them a forward-looking understanding of how to contextualise the technologies that they’re going to be coming across. Obviously the second one is being able to provide a more interactive and more personalised experience, which hopefully leads to higher engagement”.

So could these digital humans have the potential to replace, or act as a substitute for, real educators?

MS: “In no way, shape or form does a digital human replace an individual lecturer, because the educational experience with a real teacher is vastly more complex. We believe that you can have higher engagement by having a digital human as a new thing that augments those occasions when you would otherwise go to a video, but without the strain of making the video, and with the added advantage that it can be personalised and interactive”.

You can listen to Mike Seymour’s full interview here:

Mike Seymour’s full interview on Meta Humans and Synthesia.

What’s next?

We’re excited about the future of avatars, digital humans and AI-generated synthetic media. But, at the same time, we acknowledge how some implementations of related new tech such as ‘deep fakes’ raise critical issues around truth, ethics of representation and digital literacy, which are highly pertinent to education contexts.

If you’re in the Business School and you’d like to experiment with 2D avatars, please get in touch with the Business Co-Design media team by commenting below!
