Featured Achievements
activity The Unspoken Middle Ground: How Stigma Delays Recognition of AUD
activity “Treat Me Like a Human”: Defining Responsive Clinical Encounters in AUD Treatment
activity Development of an AI-Powered Platform for Object Detection and Spatial Quantification of Anatomical Structures during Robotic-Assisted Surgery
Abstract/Synopsis

Introduction: Critical operative steps in robotic surgery have not been quantified at the level of instrument/tissue relationships. Establishing and optimizing such metrics can improve surgical precision, teaching, and outcomes. In collaboration with Yale’s Departments of Computer Science and Colorectal Surgery, we developed an AI-powered object detection and spatial quantification model that measures geometric features of anatomical structures in robotic surgery. To demonstrate this technology, our model analyzed robotic colorectal surgery videos of inferior mesenteric artery (IMA) division. The IMA lies in a high-risk area that requires meticulous dissection and instrument positioning. Incomplete or poorly performed IMA division can lead to devastating consequences such as bleeding and tissue ischemia, yet there are no established guidelines for optimal retraction or relative tissue/instrument angles during IMA exposure and division.

Methods: The model was built in Python with a front-end interface using PyQt6. Videos of robotic low anterior resections (n = 13) and sigmoid colectomies (n = 11) from two surgeons at Yale were annotated for the moments before IMA division, with frames extracted at 30 frames per second. In the first 3-5 frames, the IMA and stapler were selected manually; zero-shot SAM2 segmentation then delineated and tracked both structures across the remaining hundreds of frames. Deep learning algorithms and 3D mesh modeling quantified intraoperative structures, including 3D and 2D angles of the stapler and IMA, IMA surface area and width, and the visualized area under the IMA during tissue exposure.
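The 2D angle measurements described in Methods can be approximated from per-frame binary segmentation masks such as those SAM2 produces. The sketch below is illustrative only, not the actual pipeline: it estimates each structure's principal axis via PCA of its mask pixel coordinates, then takes the angle between the two axes. The synthetic bar-shaped masks stand in for real IMA and stapler segmentations.

```python
import numpy as np

def principal_axis_angle(mask: np.ndarray) -> float:
    """Angle (degrees, in [0, 180)) of a binary mask's long axis relative
    to the image x-axis, from PCA of the mask's pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Leading eigenvector of the coordinate covariance = long axis.
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
    vx, vy = eigvecs[:, np.argmax(eigvals)]
    return np.degrees(np.arctan2(vy, vx)) % 180.0

def intersection_angle(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Angle between the principal axes of two masks (degrees, <= 90)."""
    diff = abs(principal_axis_angle(mask_a) - principal_axis_angle(mask_b))
    return min(diff, 180.0 - diff)

# Synthetic stand-ins: a horizontal "vessel" and a vertical "stapler".
ima = np.zeros((100, 100), dtype=bool)
ima[48:52, 10:90] = True
stapler = np.zeros((100, 100), dtype=bool)
stapler[10:90, 48:52] = True
print(round(intersection_angle(ima, stapler)))  # → 90
```

Pixel-space PCA gives only an in-plane (2D) angle; the 3D angles reported in Results would additionally require the mesh-model geometry.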
Results: The model successfully derived 2D angles in 81% of videos (mean 2D IMA angle 58.4° [95% CI 48.6–68.6°]; stapler angle 29.6° [95% CI 20.0–38.0°]; intersection angle 91.0° [95% CI 82.8–98.0°]), with measurements that remained consistent despite variations in operative fields and surgeon technique. The model also measured the 3D stapler angle, IMA tissue width, visualized IMA surface area, and exposure area under the IMA prior to division. An exploratory analysis applied a support vector machine and principal component analysis to the 3D vectors to define the spatial relationship between the stapler and IMA associated with higher bleeding rates after IMA division.

Conclusion: We demonstrate the feasibility of our AI-powered model for quantifying intraoperative anatomy during robotic surgery. This work provides the first AI-defined metrics for vessel exposure and division. After multicenter validation, the model has the potential to support optimization of intraoperative maneuvers, real-time teaching, retrospective review and surgeon credentialing, and complication prevention, improving patient outcomes with applicability to a wide range of robotic surgeries.
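The exploratory SVM/PCA step can be sketched as follows. This is a minimal illustration, not the study's analysis: the 3D stapler-to-IMA orientation vectors, the bleeding/no-bleeding labels, and the class separation are all synthetic assumptions, and scikit-learn's `PCA` and `SVC` stand in for whatever implementation the platform uses.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
# Hypothetical 3D orientation vectors per division event:
# class 0 = no bleeding, class 1 = bleeding, offset along one direction.
no_bleed = rng.normal(loc=[0.0, 0.0, 1.0], scale=0.15, size=(n, 3))
bleed = rng.normal(loc=[0.4, 0.0, 0.9], scale=0.15, size=(n, 3))
X = np.vstack([no_bleed, bleed])
y = np.array([0] * n + [1] * n)

# PCA decorrelates the vectors; a linear SVM then separates the classes,
# and its boundary characterizes the "higher-risk" spatial relationship.
clf = make_pipeline(PCA(n_components=2), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With real data, the fitted SVM's normal vector (in PCA space) would indicate which component of the stapler/IMA geometry most distinguishes divisions followed by bleeding.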
activity Understanding the Contexts, Factors, and Perspectives Influencing Smoking Behavior Among Low-Income Communities: A Qualitative Analysis
All Achievements
activity HAVEN Free Clinic
honor Phi Beta Kappa
honor Lewis P. Curtis Fellowship for Summer Research in the Humanities
honor Shana Alexander Research in Women’s, Gender, and Sexuality Studies Fellowship
honor Yale College Class of 2004 Summer Travel Fellowship
honor Richter Summer Fellowship
honor Gary Stein Memorial Summer Fellowship
honor Yale College Dean’s Research Fellowship in the Sciences