
Publication

NEEDLE: Nurse Education Enhanced by Vision-based Deep Learning Evaluation

Matthias Tschöpe; Stefan Gerd Fritsch; Vitor Fortes Rey; Niranjan Narendra Nandurkar; Sarah Trevenna; Eloise Monger; Paul Lukowicz
In: 2025 7th International Conference on Activity and Behavior Computing (ABC). International Conference on Activity and Behavior Computing (ABC), April 21-25, Abu Dhabi, United Arab Emirates, Pages 1-10, IEEE Xplore, 2025.

Abstract

Training nurses in procedures such as venipuncture and cannulation is time-consuming and requires a teacher to supervise and provide verbal feedback. Automating this process could allow students to practice independently, reducing the need for constant supervision. Recent advances in vision-based deep learning models offer the ability to classify and evaluate students’ performance in video recordings, while a Large Language Model can provide feedback. This work lays the foundations for such a system by comparing the performance of six state-of-the-art video classification models to classify key activities of venipuncture and cannulation sessions recorded in a teaching hospital. We also evaluate the zero-shot feasibility of the vision-language model Qwen2-VL (2B, 7B, 72B parameters). The performance is evaluated based on the macro F1-Score, VRAM utilization, and energy consumption. For cannulation, the Swin3D base model (88M parameters) achieves a macro F1-Score of 56.79%, while the largest Qwen2-VL model achieves only 27.67%. A similar trend is observed for venipuncture (43.71% vs. 23.23%). The Swin3D base model is also more energy-efficient, consuming 15 to 35 times less energy than Qwen2-VL.
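
The macro F1-Score used for evaluation averages the per-class F1 scores with equal weight, so rare activities count as much as frequent ones. The following is a minimal sketch of that computation using scikit-learn; the label and prediction values are illustrative placeholders, not data from the paper's evaluation.

```python
from sklearn.metrics import f1_score

# Hypothetical per-clip activity predictions for one recorded session;
# the class indices are illustrative, not the paper's actual label set.
y_true = [0, 1, 1, 2, 2, 2, 3, 0]
y_pred = [0, 1, 2, 2, 2, 1, 3, 3]

# Macro averaging computes F1 per class, then takes the unweighted mean,
# so minority classes contribute as much as majority ones.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"Macro F1-Score: {macro_f1:.2%}")
```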