🧑🏼‍💻 Research - May 15, 2026

AI-Based Markerless Computer Vision Framework for Open Surgery Skill Assessment: A Prototype Assessment Framework.

🌟 Stay Updated!
Join AI Health Hub to receive the latest insights in health and AI.

⚡ Quick Summary

This study introduces a markerless, AI-based computer vision framework for assessing surgical skill through analysis of hand motion during a knot-tying task. The framework's metrics correlated strongly with sensor-based measures and aligned with expert evaluations, indicating its potential to enhance surgical training and assessment.

🔍 Key Details

  • 👩‍⚕️ Participants: 16 medical students and 1 instructor
  • 📹 Method: One-handed knot-tying task recorded via smartphone camera
  • 🖥️ Technology: Deep learning algorithm tracking 21 hand joints
  • 📊 Scoring Domains: Economy of Motion (EM), Flow of Motion (FM), Spatial Organization (SO)
  • 🔗 Correlation Metrics: r = 0.79-0.88 with sensor-based measures
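The agreement between AI-derived and sensor-based measures is reported as a Pearson correlation coefficient. As a minimal sketch of how such a coefficient is computed (the study's actual statistical pipeline is not described here, and the paired measurements below are hypothetical), the calculation reduces to a ratio of covariance to the product of standard deviations:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired 1-D series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()          # center both series
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical paired measurements: AI-derived vs. sensor-derived
# wrist path length (cm) for five performances.
ai_path = [410.0, 388.5, 512.3, 460.1, 395.7]
sensor_path = [402.2, 380.9, 505.8, 471.0, 389.4]
r = pearson_r(ai_path, sensor_path)   # close agreement -> r near 1
```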

🔑 Key Takeaways

  • 🤖 AI technology can effectively track hand movements in surgical tasks.
  • 📈 AI metrics correlated strongly with sensor data and aligned with expert-rated performance.
  • 🏆 Economy of Motion metrics showed significant associations with product quality and technical performance.
  • 💡 Flow of Motion smoothness correlated with expert assessments, indicating its importance in skill evaluation.
  • 🌐 The framework provides cohort-normalized scores, enhancing the interpretability of surgical skill assessments.
  • 🔍 Further research is needed to establish the reliability and generalizability of the findings.

📚 Background

The integration of artificial intelligence in surgical training represents a significant advancement in medical education. Traditional methods of skill assessment often rely on subjective evaluations, which can vary widely among instructors. The development of objective, data-driven metrics through AI can potentially standardize assessments and improve training outcomes for future surgeons.

🗒️ Study

Conducted with a cohort of medical students, this study evaluated an AI-driven framework for assessing surgical skills. Participants performed a one-handed knot-tying task while wearing motion sensors beneath surgical gloves, and each performance was recorded with a smartphone camera. A deep learning algorithm analyzed the video to derive kinematic parameters, which were then grouped into three scoring domains.
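The paper does not publish its parameter definitions, but the quantities it names (path length, number of movements, task time, smoothness) can be sketched from a tracked wrist trajectory under simple assumptions: a fixed video frame rate, a speed threshold separating pauses from discrete movements, and negated RMS jerk as a smoothness proxy. All names and thresholds below are illustrative, not the authors' implementation:

```python
import numpy as np

def kinematic_params(wrist_xy, fps=30.0, pause_speed=1.0):
    """Derive simple kinematic parameters from a 2-D wrist trajectory.

    wrist_xy: (N, 2) array of per-frame wrist positions (e.g. cm).
    Returns path length, task time (s), number of discrete movements
    (speed excursions above `pause_speed`), and a jerk-based smoothness
    proxy (closer to zero = smoother).
    """
    p = np.asarray(wrist_xy, float)
    step = np.diff(p, axis=0)                 # displacement per frame
    dist = np.linalg.norm(step, axis=1)
    path_length = float(dist.sum())
    task_time = len(p) / fps
    speed = dist * fps                        # cm/s
    moving = speed > pause_speed
    # count rising edges of the moving mask (+1 if already moving at start)
    n_movements = int(np.sum(np.diff(moving.astype(int)) == 1) + moving[0])
    jerk = np.diff(speed, n=2) * fps * fps    # finite-difference proxy
    smoothness = -float(np.sqrt(np.mean(jerk ** 2)))  # negated RMS jerk
    return path_length, task_time, n_movements, smoothness
```

For a straight-line trajectory at constant speed, this yields one continuous movement and a smoothness proxy of zero, which is the sanity check one would expect of any such definition.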

📈 Results

The analysis of 19 performances revealed that the AI metrics exhibited strong correlations with sensor-based measures, with correlation coefficients ranging from r = 0.79 to 0.88 (P < 0.01). Notably, the Economy of Motion metrics, including path length and task time, were significantly associated with expert-rated product quality and technical performance, with correlation coefficients between r = 0.59 and 0.67 (P < 0.01). Smoothness within the Flow of Motion domain also correlated with both expert measures (r = 0.56-0.57, P < 0.01).
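The framework reports domain scores normalized across the cohort on a 0-10 scale. One plausible construction (an assumption for illustration, not confirmed by the paper) is min-max normalization of each metric over the cohort, inverted for metrics where lower raw values indicate better performance, such as path length or task time:

```python
import numpy as np

def cohort_score(values, higher_is_better=False):
    """Min-max normalize a metric across the cohort onto a 0-10 scale.

    By default, lower raw values (e.g. shorter path length, shorter
    task time) map to higher scores.
    """
    v = np.asarray(values, float)
    span = v.max() - v.min()
    if span == 0:
        return np.full_like(v, 5.0)           # degenerate cohort: mid-scale
    scaled = (v - v.min()) / span * 10.0
    return scaled if higher_is_better else 10.0 - scaled

# Hypothetical path lengths (cm) for five trainees:
# the shortest path earns the highest score, the longest earns 0.
scores = cohort_score([420.0, 380.0, 500.0, 460.0, 380.0])
```

A cohort-normalized score like this is interpretable only relative to the group assessed, which is consistent with the study framing these scores as preliminary rather than absolute benchmarks.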

🌍 Impact and Implications

The findings from this study suggest that AI-based frameworks can make surgical skill assessment more objective and interpretable. Training programs built on such metrics could align more closely with actual performance outcomes, improving surgical proficiency and, ultimately, patient safety. With further validation, the approach could extend to other open-surgery tasks and surgical disciplines.

🔮 Conclusion

This prototype AI framework demonstrates that video-based kinematic scoring can translate hand kinematics into meaningful performance metrics that align with expert assessment. The study lays the groundwork for future research aimed at refining these technologies; establishing their reliability and generalizability remains essential before broader adoption in surgical education.

💬 Your comments

What are your thoughts on the integration of AI in surgical training? We would love to hear your insights! 💬 Leave your comments below or connect with us on social media.


Abstract

Background
Artificial intelligence (AI) enables hand motion tracking from standard surgical video recordings; however, translating these data into meaningful performance metrics remains challenging. We evaluated the preliminary validity of a markerless, AI-driven system that generates interpretable technical skill scores from an open-surgery task.

Methods
Sixteen medical students and one instructor performed a one-handed knot-tying task recorded with a smartphone camera while wearing motion sensors beneath surgical gloves. A deep learning algorithm tracking 21 hand joints mapped wrist trajectories and generated visualization boundaries from which kinematic parameters were derived and grouped into three domains scored from 0 to 10: economy of motion (EM), flow of motion (FM), and spatial organization (SO). AI metrics were validated against sensor-based data. Parameters and domain scores were correlated with expert-rated product quality (PQ) and technical performance (TP) using validated checklists.

Results
Nineteen performances were analyzed. AI metrics demonstrated strong correlations with sensor-based measures (r = 0.79-0.88, P < 0.01). EM metrics (path length, number of movements, task time) were associated with PQ and TP (r = 0.59-0.67, P < 0.01). Smoothness within the FM domain correlated with PQ and TP (r = 0.56-0.57, P < 0.01), while the composite FM score moderately correlated with TP (r = 0.44, P = 0.057). Working area within the SO domain demonstrated a moderate association with TP (r = 0.41, P = 0.08).

Conclusion
This prototype AI framework translated hand kinematics into interpretable, cohort-normalized domain-level scores that aligned with expert assessment. The findings support the feasibility of video-based kinematic scoring and provide preliminary evidence of construct validity. Further studies are warranted to determine reliability and generalizability.

Authors: Zulbaran-Rojas A, Rouzi MD, Hansraj N, Erstad D, Bargas-Ochoa M, D'Silva E, Kirby RP, Salas N, Rojas Y, Najafi B

Journal: Surg Innov

Citation: Zulbaran-Rojas A, et al. AI-Based Markerless Computer Vision Framework for Open Surgery Skill Assessment: A Prototype Assessment Framework. Surg Innov. 2026; (unknown volume):15533506261451990. doi: 10.1177/15533506261451990

