
Two complementary photoacoustic AI systems developed at Johns Hopkins can now track surgical tool tips in 3D with remarkable precision, promising safer procedures with fewer complications for patients, according to a study appearing in IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. One spots the tip much as a face-detection system spots faces, while the other maps the tool’s shape and adjusts for the changing speed of sound in different tissues.
This advance fills a critical need. Exact tool tracking is vital during interventions like liver biopsies and cardiac catheterizations, but current options are imperfect: CT and fluoroscopy expose patients to ionizing radiation, MRI is expensive, and conventional ultrasound struggles to produce quality images in complex tissue environments.
“By combining advanced theory with powerful machine learning tools, we’ve pushed the boundaries of what’s possible in real-time surgical imaging,” said co-author Muyinatu “Bisi” Bell, the John C. Malone Associate Professor in the Department of Electrical and Computer Engineering and director of the PULSE Lab. “This is a step toward more intelligent, adaptable, and accessible systems for global health care.”
Photoacoustics uses brief pulses of light to make tissue emit sound waves that an ultrasound probe records. Standard ultrasound listens for echoes bouncing back from tissue; the photoacoustic approach instead turns the tool tip into its own sound source, revealing it even in dense, cluttered tissue where standard ultrasound struggles.
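To see why that one-way trip matters, consider the basic time-of-flight arithmetic. The sketch below is a minimal illustration, not code from the study; the 1540 m/s figure is a standard soft-tissue assumption.

```python
# Minimal time-of-flight sketch (illustrative; not the study's code).
# A photoacoustic source emits sound once, so the distance from the tool tip
# to a probe element is r = c * t. Pulse-echo ultrasound makes a round trip,
# so the same arrival time corresponds to half the distance: r = c * t / 2.

SPEED_OF_SOUND = 1540.0  # m/s, a common soft-tissue assumption

def photoacoustic_distance(arrival_time_s: float) -> float:
    """One-way travel: the tool tip emits, a probe element receives."""
    return SPEED_OF_SOUND * arrival_time_s

def pulse_echo_distance(arrival_time_s: float) -> float:
    """Round trip: the probe transmits, tissue reflects, the probe receives."""
    return SPEED_OF_SOUND * arrival_time_s / 2.0

t = 26e-6  # a 26-microsecond arrival time
print(f"photoacoustic source depth: {photoacoustic_distance(t) * 1000:.1f} mm")  # ~40.0 mm
print(f"pulse-echo reflector depth: {pulse_echo_distance(t) * 1000:.1f} mm")     # ~20.0 mm
```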
In their paper, Bell and co-author and PhD student Mardava R. Gubbi introduced two systems: System A, which locates the tip using object detection, and System B, which precisely maps the shape and boundaries of tool tips using theoretical modeling. Both analyze raw photoacoustic channel data to identify “point sources,” essentially treating surgical tool tips as tiny sound emitters, a model sketched in the example below. Unlike earlier techniques limited to two dimensions, these systems detect a tool’s position along all three spatial axes.
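The point-source idea can be made concrete with a little geometry: every element of the probe records the tip’s pulse at a time set by its distance from the tip, so the arrival times trace a curved wavefront across the array, and that curvature encodes the tip’s position in all three dimensions. Below is a hedged sketch of this model; the 8 × 8 matrix-array geometry and 0.5 mm pitch are assumptions for illustration, not the probe used in the paper.

```python
import numpy as np

c = 1540.0  # assumed speed of sound in soft tissue, m/s

# Hypothetical 8 x 8 matrix-array probe with 0.5 mm pitch in the z = 0 plane
xs = (np.arange(8) - 3.5) * 0.5e-3
elements = np.array([(x, y, 0.0) for x in xs for y in xs])

tip = np.array([1.0e-3, -2.0e-3, 30.0e-3])  # example tip position in meters

# One-way arrival time at element i: t_i = ||element_i - tip|| / c.
# Across the array these times form the curved wavefront that both
# systems look for in the raw channel data.
arrival_times = np.linalg.norm(elements - tip, axis=1) / c

print(f"earliest arrival: {arrival_times.min() * 1e6:.2f} microseconds")
print(f"spread across the array: {np.ptp(arrival_times) * 1e6:.3f} microseconds")
```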
“What makes this work especially innovative is that System B not only localizes the tool tip in 3D, but also estimates the speed of sound in the surrounding tissue,” said Gubbi. “This dual capability allows for greater precision and adaptability across a range of procedures.”
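A rough sketch of why that joint estimate is possible: with many receiving elements, the measured arrival times over-determine the geometry, so the tip’s three coordinates and the speed of sound can be fit together. The least-squares example below is my own illustration under assumed conditions (a hypothetical 16 × 16 array and synthetic, noise-free data), not the authors’ algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 16 x 16 matrix array with 1 mm pitch in the z = 0 plane
xs = (np.arange(16) - 7.5) * 1.0e-3
elements = np.array([(x, y, 0.0) for x in xs for y in xs])

def residuals(params, elements, measured_times):
    """Mismatch between modeled and measured one-way arrival times."""
    tip, c = params[:3], params[3]
    modeled = np.linalg.norm(elements - tip, axis=1) / c
    return modeled - measured_times

# Synthetic "measurement": tip at (1, -2, 30) mm in a 1480 m/s medium
true_tip, true_c = np.array([1e-3, -2e-3, 30e-3]), 1480.0
times = np.linalg.norm(elements - true_tip, axis=1) / true_c

# Start from the array center and a generic 1540 m/s soft-tissue guess
fit = least_squares(residuals, x0=[0.0, 0.0, 20e-3, 1540.0],
                    args=(elements, times))
print("estimated tip (mm):", np.round(fit.x[:3] * 1e3, 2))  # ~[1.0, -2.0, 30.0]
print("estimated sound speed (m/s):", round(fit.x[3], 1))   # ~1480.0
```

Real channel data is, of course, far noisier than this idealized model, which is where the machine learning components of both systems come in.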
Across more than 6,900 tests spanning simulations, phantoms, and real tissue, both methods exceeded 90% accuracy and pinpointed tool tips to within about 1.5 mm. System B had fewer misses and was steadier across varied tissues. Related PULSE Lab work includes in vivo (live-subject) testing of similar methods, an important bridge from lab studies to eventual patient care.
Built for robotic or manual use, the system provides real-time 3D guidance and can recover tool tips that fall outside the field of view. Next, the team plans to fuse it with conventional ultrasound, apply its sound-speed estimates for on-the-fly image sharpening, and extend the approach to other biomedical imaging and optical diagnostics.
This work was supported by the National Institutes of Health and the National Science Foundation.