ABSTRACT
PURPOSE: Clinical needle insertion into tissue, commonly guided by 2D ultrasound imaging for real-time navigation, requires precise alignment of needle and probe to avoid out-of-plane movement. Recent studies combine 3D ultrasound imaging with deep learning to overcome this problem, focusing on high-resolution images to create optimal conditions for needle tip detection. However, high resolution also entails long acquisition and processing times, which limits real-time capability. We therefore aim to maximize the ultrasound (US) volume rate at the cost of low image resolution, and propose a deep learning approach that directly extracts the 3D needle tip position from sparsely sampled US volumes. METHODS: We design an experimental setup in which a robot inserts a needle into water and chicken liver tissue. Instead of manual annotation, we derive the needle tip position from the known robot pose. During insertion, we acquire a large data set of low-resolution volumes using a 16 × 16 element matrix transducer at a volume rate of 4 Hz. We compare the performance of our deep learning approach with conventional needle segmentation. RESULTS: Our experiments in water and liver show that deep learning outperforms the conventional approach while achieving sub-millimeter accuracy, with mean position errors of 0.54 mm in water and 1.54 mm in liver. CONCLUSION: Our study underlines the strength of deep learning in predicting 3D needle positions from low-resolution ultrasound volumes. This is an important milestone for real-time needle navigation, simplifying the alignment of needle and ultrasound probe and enabling 3D motion analysis.
Subjects
Chickens, Deep Learning, Three-Dimensional Imaging, Liver, Needles, Interventional Ultrasonography, Three-Dimensional Imaging/methods, Animals, Liver/diagnostic imaging, Interventional Ultrasonography/methods
ABSTRACT
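The first abstract reports mean 3D position errors (e.g., 0.54 mm in water) between predicted and robot-derived needle tip positions. The following is a minimal illustrative sketch, not the paper's method: a hypothetical threshold-based tip detector standing in for "conventional needle segmentation" on a synthetic sparsely sampled volume, plus the Euclidean position-error metric. All function names, voxel sizes, and the toy volume are assumptions for illustration.

```python
import numpy as np

def detect_tip_threshold(volume, voxel_size_mm, threshold=0.5):
    """Naive baseline (hypothetical): threshold the volume and take the
    deepest bright voxel along the insertion axis as the needle tip."""
    mask = volume > threshold
    if not mask.any():
        return None
    idx = np.argwhere(mask)
    # assume the needle is inserted along axis 0: tip = deepest bright voxel
    tip_voxel = idx[np.argmax(idx[:, 0])]
    return tip_voxel * voxel_size_mm

def position_error_mm(pred_mm, gt_mm):
    """Euclidean 3D position error in millimetres."""
    return float(np.linalg.norm(np.asarray(pred_mm) - np.asarray(gt_mm)))

# toy sparsely sampled volume: 16 x 16 lateral elements, coarse depth axis
rng = np.random.default_rng(0)
vol = rng.uniform(0.0, 0.3, size=(32, 16, 16))  # background speckle
gt_voxel = np.array([20, 8, 8])
vol[:21, 8, 8] = 1.0  # bright needle shaft ending at the ground-truth tip
voxel_mm = np.array([0.5, 1.0, 1.0])  # assumed anisotropic voxel size

tip = detect_tip_threshold(vol, voxel_mm)
err = position_error_mm(tip, gt_voxel * voxel_mm)  # exact on this clean toy volume
```

In practice the deep learning model in the abstract regresses the tip position directly from the low-resolution volume; this sketch only illustrates the baseline principle and the error metric used for comparison.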
PURPOSE: Commonly employed in polyp segmentation, single-image UNet architectures lack the temporal insight clinicians gain from video data when diagnosing polyps. To mirror clinical practice more faithfully, our proposed solution, PolypNextLSTM, leverages video-based deep learning, harnessing temporal information for superior segmentation performance with the least parameter overhead, making it potentially suitable for edge devices. METHODS: PolypNextLSTM employs a UNet-like structure with ConvNext-Tiny as its backbone, strategically omitting the last two layers to reduce parameter overhead. Our temporal fusion module, a Convolutional Long Short Term Memory (ConvLSTM), effectively exploits temporal features. Our primary novelty lies in PolypNextLSTM, which stands out as the leanest and fastest model while surpassing the performance of five state-of-the-art image-based and video-based deep learning models. The evaluation on the SUN-SEG dataset spans easy-to-detect and hard-to-detect polyp scenarios, along with videos containing challenging artefacts such as fast motion and occlusion. RESULTS: Comparison against 5 image-based and 5 video-based models demonstrates PolypNextLSTM's superiority, achieving a Dice score of 0.7898 on the hard-to-detect polyp test set, surpassing image-based PraNet (0.7519) and video-based PNS+ (0.7486). Notably, our model excels in videos featuring complex artefacts such as ghosting and occlusion. CONCLUSION: PolypNextLSTM, integrating a pruned ConvNext-Tiny with ConvLSTM for temporal fusion, not only exhibits superior segmentation performance but also maintains the highest frames-per-second rate among the evaluated models. Code can be found here: https://github.com/mtec-tuhh/PolypNextLSTM
Subjects
Deep Learning, Video Recording, Humans, Colonic Polyps/diagnostic imaging, Colonic Polyps/diagnosis, Computer-Assisted Image Interpretation/methods, Neural Networks (Computer)
ABSTRACT
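The second abstract compares models by Dice score (e.g., 0.7898 vs. 0.7519). As a small grounded sketch, the Dice overlap between a predicted and a reference binary mask can be computed as below; the function name, epsilon, and toy masks are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    eps guards against division by zero for two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# toy example: two partially overlapping square "polyp" masks
a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 15:35] = True
d = dice_score(a, b)  # overlap 15*15=225 of 400+400 -> 0.5625
```

A video model such as the one described would evaluate this per frame (or per video) and average; that aggregation detail is not specified in the abstract.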
OBJECTIVE: Optical coherence elastography (OCE) allows for high-resolution analysis of elastic tissue properties. However, due to the limited penetration of light into tissue, miniature probes are required to reach structures inside the body, e.g., vessel walls. Shear wave elastography relates shear wave velocities to quantitative estimates of elasticity. Generally, this is achieved by measuring the runtime of waves between two or more points. For miniature probes, optical fibers have been integrated and the runtime between the point of excitation and a single measurement point has been considered. This approach requires precise temporal synchronization and spatial calibration between excitation and imaging. METHODS: We present a miniaturized dual-fiber OCE probe of 1 mm diameter that allows for robust shear wave elastography. Shear wave velocity is estimated between two optics and is hence independent of wave propagation between excitation and imaging. We quantify the wave propagation by evaluating either one or two measurement points. In particular, we compare both approaches to ultrasound elastography. RESULTS: Our experimental results demonstrate that quantification of local tissue elasticities is feasible. For homogeneous soft tissue phantoms, we obtain mean deviations of 0.15 m/s and 0.02 m/s for single-fiber and dual-fiber OCE, respectively. In inhomogeneous phantoms, we measure mean deviations of up to 0.54 m/s and 0.03 m/s for single-fiber and dual-fiber OCE, respectively. CONCLUSION: We present a dual-fiber OCE approach that is much more robust in inhomogeneous tissues. Moreover, we demonstrate the feasibility of elasticity quantification in ex-vivo coronary arteries. SIGNIFICANCE: This study introduces an approach for robust elasticity quantification from within the tissue.
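The dual-fiber principle in the third abstract estimates shear wave speed from the arrival-time delay between two measurement points a known distance apart, removing the dependence on excitation timing. A minimal illustrative sketch of that idea, assuming synthetic signals and a cross-correlation delay estimator (the paper's actual signal processing is not specified in the abstract):

```python
import numpy as np

def wave_speed_dual_point(sig_a, sig_b, spacing_m, fs_hz):
    """Estimate wave speed v = spacing / delay, where the delay between the
    two measurement points is found via cross-correlation (illustrative)."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)  # samples by which sig_b trails sig_a
    delay_s = lag / fs_hz
    return spacing_m / delay_s

# synthetic wave passing two points 1 mm apart, sampled at 100 kHz (assumed values)
fs = 100_000.0
t = np.arange(0, 0.002, 1 / fs)

def pulse(t0):
    # Gaussian pulse centred at time t0
    return np.exp(-((t - t0) ** 2) / (2 * (2e-5) ** 2))

a = pulse(0.0005)            # wave arrives at fiber A at 0.5 ms
b = pulse(0.0005 + 2e-4)     # and at fiber B 0.2 ms later
v = wave_speed_dual_point(a, b, spacing_m=1e-3, fs_hz=fs)  # 1 mm / 0.2 ms = 5 m/s
```

Because only the difference in arrival times between the two fibers enters the estimate, any offset in excitation timing cancels out, which is the robustness argument made for the dual-fiber configuration.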