Sensors, Vol. 26, Pages 604: Active Inference Modeling of Socially Shared Cognition in Virtual Reality
Sensors doi: 10.3390/s26020604
Authors: Yoshiko Arima, Mahiro Okada
This study proposes a process model for sharing ambiguous category concepts in virtual reality (VR) using an active inference framework. The model performs a dual-layer Bayesian update after observing both its own and its partner's actions, then predicts actions that minimize free energy. To incorporate agreement-seeking with others into active inference, we added disagreement in category judgments as a risk term in the free energy, weighted by gaze synchrony measured with Dynamic Time Warping (DTW), which is assumed to reflect joint attention. To validate the model, we created a VR object-classification task that included ambiguous items. The experiment was conducted first under a bot-avatar condition, in which ambiguous category judgments were always incorrect, and then under a human–human pair condition. This design allowed us to verify the collaborative learning process by which human pairs reached agreement starting from the same degree of ambiguity. Analysis of experimental data from 14 participants showed that the model achieved high prediction accuracy for the observed values as learning progressed. Introducing gaze-synchrony weighting further improved prediction accuracy, with the best performance at γ0 ≥ 0.5. This approach provides a new framework for modeling socially shared cognition using active inference in human–robot interaction contexts.
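The abstract gives no equations, so the sketch below is only one plausible reading of the pipeline it describes: compute gaze synchrony from two gaze traces with DTW, run a dual-layer Bayesian update (own observation, then the partner's judgment), and score candidate category judgments with an expected free energy that includes a disagreement risk term weighted by the synchrony score. The specific forms chosen here (the exp(-d) squashing of DTW distance, the -ln q(a) accuracy term, the weighting gamma = γ0 · sync) and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D gaze traces."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def gaze_synchrony(x, y):
    """Map a length-normalised DTW distance to a synchrony score in (0, 1]."""
    d = dtw_distance(x, y) / max(len(x), len(y))
    return float(np.exp(-d))  # squashing choice is an assumption, not from the paper

def bayes_update(prior, likelihood):
    """Posterior over categories: P(c | o) ∝ P(o | c) P(c)."""
    post = prior * likelihood
    return post / post.sum()

def expected_free_energy(belief, partner_model, gamma):
    """
    Score each candidate judgement a (one per category) as
        G(a) = -ln q(a)                    # acting against a weak belief is costly
             + gamma * (1 - p_partner(a))  # disagreement risk, gaze-weighted
    Lower G means the judgement is preferred.
    """
    return -np.log(belief + 1e-12) + gamma * (1.0 - partner_model)

# --- toy episode: one ambiguous item, two categories ("A", "B") --------------
rng = np.random.default_rng(0)

# Simulated gaze-angle traces for self and partner while inspecting the item.
self_gaze = np.sin(np.linspace(0.0, 4.0, 60)) + 0.05 * rng.standard_normal(60)
partner_gaze = np.sin(np.linspace(0.2, 4.2, 60)) + 0.05 * rng.standard_normal(60)
sync = gaze_synchrony(self_gaze, partner_gaze)

# Layer 1: update beliefs from the participant's own (ambiguous) observation.
prior = np.array([0.5, 0.5])
own_likelihood = np.array([0.55, 0.45])      # nearly uninformative cue
belief = bayes_update(prior, own_likelihood)

# Layer 2: update again after observing the partner's judgement ("B" here),
# using an assumed reliability for the partner.
partner_likelihood = np.array([0.3, 0.7])
belief = bayes_update(belief, partner_likelihood)

# Partner model: empirical frequency of the partner's recent judgements.
partner_model = np.array([0.2, 0.8])

gamma0 = 0.5  # the weighting threshold reported in the abstract
G = expected_free_energy(belief, partner_model, gamma0 * sync)
print("synchrony:", round(sync, 3),
      "G:", np.round(G, 3),
      "choose:", "AB"[int(G.argmin())])
```

In this toy run, higher gaze synchrony increases the effective weight on the disagreement term, pulling the predicted judgment toward the partner's choice; with low synchrony the agent relies mostly on its own updated belief. This is meant only to make the abstract's mechanism concrete, not to reproduce the paper's model.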