
Publication

LLMs Enable Context-Aware Augmented Reality in Surgical Navigation

Hamraz Javaheri; Omid Ghamarnejad; Paul Lukowicz; Gregor Alexander Stavrou; Jakob Karolus
In: Proceedings of the ACM Designing Interactive Systems Conference (DIS 2025), July 5-9, 2025. ACM.

Abstract

Wearable Augmented Reality (AR) technologies are gaining recognition for their potential to transform surgical navigation systems. As these technologies evolve, selecting the right interaction method for controlling the system becomes crucial. Our work introduces a voice user interface (VUI) for surgical AR assistance systems (ARAS), designed for pancreatic surgery, that integrates Large Language Models (LLMs). Employing a mixed-method research approach, we assessed the usability of our LLM-based design both in simulated surgical tasks and during pancreatic surgeries, comparing its performance against a conventional speech-command VUI for surgical ARAS. Our findings demonstrated the usability of the proposed LLM-based VUI, which yielded significantly lower task completion times and cognitive workload than speech commands. Qualitative insights from interviews with surgeons aligned with the quantitative data, revealing a strong preference for the LLM-based VUI. Surgeons emphasized its intuitiveness and highlighted the potential of LLM-based VUIs to expedite decision-making in surgical environments.
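The core idea, routing a surgeon's free-form utterance through an LLM that maps it to a structured ARAS control action rather than requiring a fixed command vocabulary, can be sketched as follows. The paper does not publish its prompt, model, or action schema, so everything here is assumed: the action names, the `ArasAction` type, and the `interpret_command` function are hypothetical, and the LLM call is stubbed with a keyword matcher so the sketch runs offline.

```python
# Hypothetical sketch of an LLM-based VUI front end for a surgical AR
# assistance system (ARAS). Names and the action schema are illustrative,
# not taken from the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ArasAction:
    action: str              # e.g. "toggle_overlay", "adjust_opacity", "reset_view"
    target: Optional[str]    # anatomical structure the utterance refers to, if any


# In a real system this prompt (plus the transcribed utterance) would be
# sent to an LLM, whose structured reply is parsed into an ArasAction.
SYSTEM_PROMPT = (
    "You control a surgical AR overlay. Map the surgeon's request to one "
    "action: toggle_overlay | adjust_opacity | reset_view, with an "
    "optional anatomical target."
)


def interpret_command(utterance: str) -> ArasAction:
    """Stand-in for the LLM call: a keyword matcher that mimics mapping
    free-form speech to a structured action, so the example runs offline."""
    text = utterance.lower()
    target = next(
        (t for t in ("tumor", "vessels", "pancreas") if t in text), None
    )
    if "show" in text or "hide" in text:
        return ArasAction("toggle_overlay", target)
    if "opacity" in text or "transparent" in text:
        return ArasAction("adjust_opacity", target)
    return ArasAction("reset_view", None)


print(interpret_command("Show me the tumor, please"))
```

The contrast with a conventional speech-command VUI is that the surgeon need not recall an exact phrase: "show me the tumor, please", "bring up the tumor overlay", and similar variants would all resolve to the same structured action via the model.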
