
VirtuWander: Enhancing Multi-modal Interaction for Virtual Tour Guidance through Large Language Models



By Zhan Wang and 4 other authors

Abstract: Tour guidance in virtual museums encourages multi-modal interactions to boost user experiences in terms of engagement, immersion, and spatial awareness. Nevertheless, achieving this goal is challenging due to the complexity of comprehending diverse user needs and accommodating personalized user preferences. Informed by a formative study that characterizes guidance-seeking contexts, we establish a multi-modal interaction design framework for virtual tour guidance. We then design VirtuWander, an innovative two-stage system that uses domain-oriented large language models to transform user inquiries into diverse guidance-seeking contexts and facilitate multi-modal interactions. The feasibility and versatility of VirtuWander are demonstrated with virtual guiding examples that encompass various touring scenarios and cater to personalized preferences. We further evaluate VirtuWander through a user study within an immersive simulated museum. The results suggest that our system enhances engaging virtual tour experiences through personalized communication and knowledgeable assistance, indicating its potential for expanding into real-world scenarios.

Submission history

From: Zhan Wang
[v1] Mon, 22 Jan 2024 13:10:23 UTC (42,530 KB)
[v2] Tue, 23 Jan 2024 06:55:08 UTC (42,531 KB)
