Embodied Learning in VR for Mathematics

Keywords:

Embodied learning, embodied interaction, Virtual Reality, concreteness, mathematics

Introduction

Mathematics is a useful skill to learn, even for students who do not wish to become mathematicians. Indeed, mathematics is about understanding the patterns of the world, be it physical or digital, and, in the age of AI, mathematics is key to understanding the decisions that rule our lives.

However, most students find mathematics useless and disconnected from the real world, and several countries are witnessing a worrisome decline in mathematics ability. To address this, we should challenge our assumptions about how to teach mathematics: the way we currently teach it does not enable students to understand it well enough to transfer their skills to other classes, let alone beyond the classroom.

Recent research has identified that, as humans, we extensively rely on our bodies and environments (i.e. “sensorimotor resources”) to think and learn: We use our bodies to represent abstract concepts, to keep track of information, to communicate understanding, or even to regulate our emotions. As a consequence, when designing learning activities, we need to facilitate gesture production, and support sense-making of bodily actions.

Meanwhile, Virtual Reality (VR), a technology heavily focused on bodily movement and manipulation, has become more affordable and widespread. Using wireless Head-Mounted Displays, VR can immerse learners' sensory channels in a digitally manufactured world, and with hand-tracking technologies it seems particularly suitable for supporting embodied learning activities. But technology alone is not enough; a question remains:

How to design for embodied learning of mathematics in Virtual Reality?

Note: 🏅 This project received the ETH Medal for outstanding dissertation.

Our framework

To frame our exploration, we first defined the design space to consider when designing embodied learning interventions in Virtual Reality [4].

Figure 1. Design space for embodied learning interventions in VR [4]

Our framework builds on four main concepts.


Embodied cognition is a set of theories demonstrating that humans rely on their bodies and environments when thinking, for example, to represent abstract concepts, to keep track of information, to communicate understanding, or even to regulate their emotions.


In turn, embodied learning shows that integrating sensorimotor resources in learning interventions can help students, especially those who struggle.


Often, embodied learning leverages technologies (e.g. VR) to engage learners’ bodies. It is therefore important to understand the role of learners’ bodies when interacting with virtual environments. Here, work on embodied interaction stipulates that interaction is informed by and grounded in its physical and social context.

Finally, in Virtual Reality, all interaction happens through an avatar, i.e. a digital body animated according to the user’s movements. Avatar embodiment describes the relationship between users’ real bodies and their digital avatars’ bodies.


We articulate the resulting design space in Figure 1, and a simplified version in Figure 2.

Figure 2. Simplified representation of the design space.

Designing for learners

Learners are at the core of the learning process. To inform the design of embodied learning interventions, we conducted a series of studies observing how learners and teachers move when doing or explaining mathematics. Specifically, we explored both directed actions, i.e. actions triggered by the learning intervention, and spontaneous actions, i.e. actions that learners produce unprompted while reasoning or explaining [2].


We conducted two studies. First, a quantitative user study of directed bodily actions in Virtual Reality and on a tablet revealed that VR supports math-anxious and body-aware learners, and uncovered distinct movement patterns related to varying mathematical abilities. Second, a qualitative analysis identified key characteristics of spontaneous bodily actions, namely coarseness, muscle tension, repetitions, anchors, perspective, and metaphors.

Figure 3. Spontaneous (left) and directed (right) actions for learning, based on [2].

Derived from both studies, we propose design recommendations advocating for expanded embodied interaction design, consideration of embodied metaphors, coarse gesturing to identify deep features, support for sense-making anchors, and in-VR learning assessments.

🔗 Project page: Directed and spontaneous actions

Designing mathematical objects

While some embodied learning activities rely only on learners’ bodies, for example using gestures, many also include objects that learners can manipulate. We explored how to design such objects and how they influence learning outcomes. Importantly, we argue that embodied learning interventions often rely on concrete representations.


We first compared three representations to teach graph theory [5]. The most abstract one, on paper, uses a circles-and-lines representation. The second one, on a tablet, adds interactivity and feedback. The final one, in VR, also adds a level of familiarity, as it uses a water-flow metaphor to represent mathematical graphs. Our results show that this third representation not only increases students' motivation, but also makes them feel better prepared for the subsequent lecture, even though the lecture relied on symbolic representations.

Figure 4. Three representations of graphs [5].

This project highlighted the importance of defining concreteness clearly: Is a representation concrete because it is familiar? Because it is interactive? Because it is specific? Concreteness and abstraction are key in mathematics education research, both when discussing the nature of mathematics itself and when exploring how to learn and teach it. However, while the terms concrete and abstract are widely used in the field, they are not always used with the same meaning: concrete can mean specific, relatable, visual, or tangible, while abstract can mean general, rigorous, vague, or symbolic.

While several scholars have emphasized the need for a multidimensional and fine-grained framework to articulate these differences in meaning, we further argue that it is crucial to precisely define how the terms concrete and abstract are actually used by the research community. Towards this goal, we offer three contributions [1]. First, we empirically and systematically identify the various meanings of concrete and abstract used in the literature. Second, we offer a data-informed taxonomy to organize this semantic landscape and support research inquiry. Third, we offer templates for the design of future mathematics education interventions and studies.

Figure 5. Taxonomy on concreteness in mathematics education [1].

Designing interaction

In embodied learning interventions, the mathematical objects are usually interactive. Thus, it is not only about designing the objects, but also about designing the interaction with them to support learning. To explore this, we designed an embodied game to teach derivatives. We validated our design with a panel of experts, and then used this prototype to compare embodied interactions in terms of usability, sense of embodiment, and learning outcomes. In particular, we evaluated different degrees and different types of embodied interactions in Virtual Reality. We conclude with insights and recommendations for mathematics education with embodied interactions [6].

Figure 6. Different types of embodied interactions to grasp derivatives [6].

Designing digital avatars


Finally, a key and often overlooked ingredient of learning in VR is the digital avatar. Indeed, in VR, students interact with the content through an avatar. As such, avatars are often considered mere puppets that users manipulate to act on the digital world. We argue that they can do much more.

First, the shape and function of the human hand are intimately linked to our interaction with the physical world and set us apart from our evolutionary ancestors. In the digital age, our hands still represent our main form of interaction: we use them to operate keyboards, mice, trackpads, and game controllers. This modality, however, separates content display from interaction.

To challenge this, we developed DigiGlo (Digital Gloves), a system designed to evaluate the benefits of unifying hand display and interaction. We explore this symbiosis in the context of gaming, where users control games using hand gestures while the content is displayed on their bare hand. Building on established learning principles, we explore different hand gestures and other specially tailored interactions through three carefully designed activities and two user studies [7]. These show that this paradigm has the potential to bring more intuitive, enjoyable, and effective gaming and learning experiences, and we offer recommendations on how to better design such systems.

Figure 7. Three activities leveraging our Digital Glove interaction paradigm.

Second, we explored the concept of avatar modification for learning. While we leverage our bodies for learning, our bodies make certain concepts easier to grasp than others. For example, because our hands have ten fingers, it is easy for us to count in base 10, but more complicated to reason in base 8. So the question is: Can we deliberately design avatar morphologies to support learning of mathematical concepts?
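The base-10 versus base-8 contrast can be made concrete with a short sketch (not part of the project; a minimal, hypothetical Python illustration): the same count of ten fingers is written differently depending on the base used.

```python
def to_base(n: int, base: int) -> str:
    """Represent a non-negative integer count in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % base))  # least-significant digit first
        n //= base
    return "".join(reversed(digits))

# Ten fingers, written in the familiar base 10 vs. in base 8:
print(to_base(10, 10))  # → 10
print(to_base(10, 8))   # → 12  (one group of eight, plus two)
```

An avatar with eight fingers would make groups of eight, and hence representations like "12", the bodily natural way to count.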


First, we named this concept “Semantic avatars” and provided examples of such avatars [4].


🔗 Project page: Semantic Avatars in VR


Second, we designed, developed, and evaluated such avatars.

Figure 8. Three semantic avatars.

Team

These projects were led by Julia Chatain, under the supervision of Manu Kapur and Robert W. Sumner. Teams for each project are detailed in the respective project pages.

Selected publications

📄 = Full paper, 📋 = Extended abstract, 📔 = Dissertation


📄 [1] Chatain, Julia, Charlotte H. Müller, Keny Chatain, Leon Calabrese, Manu Kapur. "Concreteness and Abstraction in Mathematics Education: A Taxonomy of the Semantic Landscape". Educational Psychology Review. (2025). (link) (pdf)

📄 [2] Chatain, Julia, Venera Gashaj, Bibin Muttappillil, Robert W. Sumner, Manu Kapur. "Designing for Embodied Sense-making of Mathematics: Perspectives on Directed and Spontaneous Bodily Actions". In DIS ’24: Designing Interactive Systems Conference. (2024). (link) (pdf) 🏆

📔 [3] Chatain, Julia. "Embodied Interaction in Virtual Reality for Grounding Mathematics". Doctoral dissertation. Recipient of the ETH medal for outstanding dissertation. (link) (pdf)

📋 [4] Chatain, Julia, Manu Kapur, Robert W. Sumner. "Three Perspectives on Embodied Learning in Virtual Reality: Opportunities for Interaction Design". In CHI EA ’23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. (2023). (link) (pdf)

📄 [5] Chatain, Julia, Rudolf Varga, Violaine Fayolle, Manu Kapur, Robert W. Sumner. "Grounding Graph Theory in Embodied Concreteness with Virtual Reality". In TEI ’23: Proceedings of the Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction. (2023). (link) (pdf)

📄 [6] Chatain, Julia, Virginia Ramp, Venera Gashaj, Violaine Fayolle, Manu Kapur, Robert W. Sumner, Stéphane Magnenat. "Grasping Derivatives: Teaching Mathematics through Embodied Interactions using Tablets and Virtual Reality". In Interaction Design and Children (IDC ’22). (2022). (link) (pdf)

📄 [7] Chatain, Julia, Danielle M. Sisserman, Lea Reichardt, Violaine Fayolle, Manu Kapur, Robert W. Sumner, Fabio Zünd, Amit H. Bermano. "DigiGlo: Exploring the Palm as an Input and Display Mechanism through Digital Gloves". In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’20). ACM. (2020). (link) (pdf)
