1.
Ultrasound Med Biol; 48(6): 1157-1162, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35300877

ABSTRACT

SlowflowHD is a new ultrasound Doppler imaging technology that allows visualization of flow within small blood vessels. In this mode, a proprietary algorithm differentiates between low-speed flow and signals attributed to tissue motion so that microvessel vasculature can be examined. Our objectives were to describe the principles of this low-velocity Doppler mode, to assess the bone thermal index (TIb) safety parameter in obstetric ultrasound scans, and to evaluate adherence to professional guidelines. To address the latter two objectives, we retrospectively reviewed prospectively collected ultrasound images and video clips from pregnancy ultrasound scans at >10 wk of gestation over a 4-mo period. We used custom-built optical character recognition-based software to automatically identify all images and video clips using this technology and extract the TIb. A total of 185 ultrasound scans performed by three fetal medicine physicians were included, of which 60, 54 and 71 were first-, second- and third-trimester scans, respectively. The mean (highest recorded) TIb values were 0.32 (0.70), 0.23 (0.70) and 0.32 (0.60) in the first, second and third trimesters, respectively. Thermal index values were within the recommended limits set by the World Federation for Ultrasound in Medicine and Biology, the American Institute of Ultrasound in Medicine and the British Medical Ultrasound Society in all scans.
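
The custom OCR software is not public; a minimal sketch of the idea, assuming pytesseract and a fixed on-screen location for the TIb readout (the region coordinates and the regex are illustrative assumptions):

```python
import re
import cv2  # OpenCV, for frame cropping and thresholding
import pytesseract

# Hypothetical screen region (x, y, w, h) where the TIb readout appears;
# actual coordinates depend on the ultrasound machine's display layout.
TI_REGION = (850, 40, 140, 30)

def extract_tib(frame):
    """Return the TIb value printed on an ultrasound frame, or None."""
    x, y, w, h = TI_REGION
    crop = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    # Threshold to isolate the bright on-screen text before OCR.
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    text = pytesseract.image_to_string(binary)
    match = re.search(r"TIb?\s*[:=]?\s*(\d+\.\d+)", text)
    return float(match.group(1)) if match else None
```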


Subject(s)
Obstetrics, Female, Humans, Pregnancy, Third Trimester of Pregnancy, Retrospective Studies, Doppler Ultrasonography, Prenatal Ultrasonography/methods, United States
2.
Med Image Comput Comput Assist Interv; 2022: 104-114, 2022 Sep 17.
Article in English | MEDLINE | ID: mdl-37223131

ABSTRACT

Ultrasound (US)-probe motion estimation is a fundamental problem in automated standard plane locating during obstetric US diagnosis. Most existing works employ a deep neural network (DNN) to regress the probe motion. However, these deep regression-based methods tend to overfit the specific training data and therefore generalize poorly in clinical application. In this paper, we return to generalized US feature learning rather than deep parameter regression. We propose a self-supervised learned local detector and descriptor, named USPoint, for US-probe motion estimation during the fine-adjustment phase of fetal plane acquisition. Specifically, a hybrid neural architecture is designed to simultaneously extract local features and estimate the probe motion. By embedding differentiable USPoint-based motion estimation inside the proposed network architecture, USPoint learns the keypoint detector, scores and descriptors from motion error alone, without requiring expensive human annotation of local features. The two tasks, local feature learning and motion estimation, are jointly learned in a unified framework so that each benefits the other. To the best of our knowledge, this is the first learned local detector and descriptor tailored to US images. Experimental evaluation on real clinical data demonstrates improved feature matching and motion estimation of potential clinical value. A video demo can be found online: https://youtu.be/JGzHuTQVlBs.
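
USPoint itself is a learned detector-descriptor, but the motion-estimation step it wraps can be sketched in closed form: mutual nearest-neighbour descriptor matching followed by a least-squares rigid fit. This is a stand-in for the paper's learned, differentiable matcher, not its method:

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Mutual nearest-neighbour matches between two descriptor sets."""
    dists = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=-1)
    ab = dists.argmin(axis=1)  # best match in B for each A
    ba = dists.argmin(axis=0)  # best match in A for each B
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

def rigid_motion(pts_a, pts_b):
    """Least-squares rotation R and translation t with pts_b ~ R @ pts_a + t
    (the Kabsch/Procrustes solution over matched keypoint coordinates)."""
    ca, cb = pts_a.mean(0), pts_b.mean(0)
    H = (pts_a - ca).T @ (pts_b - cb)  # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```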

3.
Sci Rep; 11(1): 14109, 2021 Jul 08.
Article in English | MEDLINE | ID: mdl-34238950

ABSTRACT

Ultrasound is the primary modality for obstetric imaging and is highly sonographer-dependent. Long training periods, insufficient recruitment and poor retention of sonographers are among the global challenges in the expansion of ultrasound use. For the past several decades, technical advancements in clinical obstetric ultrasound scanning have largely concerned improving image quality and processing speed. By contrast, the way sonographers acquire ultrasound images has changed little over that time. The PULSE (Perception Ultrasound by Learning Sonographer Experience) project is an interdisciplinary multi-modal imaging study aiming to offer clinical sonography insights and transform the process of obstetric ultrasound acquisition and image analysis by applying deep learning to large-scale multi-modal clinical data. A key novelty of the study is that we record full-length ultrasound video with concurrent tracking of the sonographer's eyes, voice and the transducer while routine obstetric scans are performed on pregnant women. We provide a detailed description of the novel acquisition system and illustrate how our data can be used to describe clinical ultrasound. Being able to measure different sonographer actions or model tasks will lead to a better understanding of several topics, including how to train new sonographers effectively, monitor learning progress, and enhance the scanning workflow of experts.
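
The abstract does not detail the acquisition software, but the bookkeeping it implies, aligning gaze and probe-motion streams to video frames by timestamp, can be sketched as follows (the stream format, lists of (timestamp, sample) pairs, is an assumption):

```python
import bisect

def nearest_sample(stream, t):
    """Return the sample in a time-sorted [(timestamp, value), ...] stream
    closest in time to t."""
    times = [ts for ts, _ in stream]
    i = bisect.bisect_left(times, t)
    candidates = stream[max(i - 1, 0):i + 1]  # neighbours on either side
    return min(candidates, key=lambda s: abs(s[0] - t))[1]

def align_to_frames(frame_times, gaze, probe_motion):
    """Pair every video frame with the temporally closest gaze and
    probe-motion samples."""
    return [(t, nearest_sample(gaze, t), nearest_sample(probe_motion, t))
            for t in frame_times]
```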

4.
Med Image Anal; 69: 101973, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33550004

ABSTRACT

Ultrasound is a widely used imaging modality, yet it is well known that scanning can be highly operator-dependent and difficult to perform, which limits its wider use in clinical practice. The literature on understanding what makes clinical sonography hard to learn, and how sonography varies in the field, is sparse, being restricted to small-scale studies on the effectiveness of ultrasound training schemes, the role of ultrasound simulation in training, and the effect of introducing scanning guidelines and standards on diagnostic image quality. The Big Data era, and the recent and rapid emergence of machine learning as a mainstream large-scale data analysis technique, presents a fresh opportunity to study sonography in the field at scale for the first time. Large-scale analysis of video recordings of full-length routine fetal ultrasound scans offers the potential to characterise differences between the scanning proficiency of experts and trainees that would be tedious and time-consuming to identify manually due to the vast amounts of data. Such research would inform a better understanding of operator clinical workflow when conducting ultrasound scans, to support skills training, optimise scan times and inform the design of better user-machine interfaces. This paper is, to our knowledge, the first to address sonography data science, which we consider in the context of second-trimester fetal sonography screening. Specifically, we present a fully automatic framework to analyse operator clinical workflow solely from full-length routine second-trimester fetal ultrasound scan videos. An ultrasound video dataset containing more than 200 hours of scan recordings was generated for this study. We developed an original deep learning method to temporally segment the ultrasound video into semantically meaningful segments (the video description). The resulting semantic annotation was then used to depict operator clinical workflow (the knowledge representation). Machine learning was applied to the knowledge representation to characterise operator skills and assess operator variability. For video description, our best-performing deep spatio-temporal network shows favourable results in cross-validation (accuracy: 91.7%), statistical analysis (correlation: 0.98, p < 0.05) and retrospective manual validation (accuracy: 76.4%). For knowledge representation of operator clinical workflow, a three-level abstraction scheme consisting of a Subject-specific Timeline Model (STM), Summary of Timeline Features (STF) and an Operator Graph Model (OGM) was introduced, leading to a significant decrease in dimensionality and computational complexity compared with raw video data. The workflow representations were then used to train models that discriminate between operator skill levels; a proposed convolutional neural network-based model showed the most promising performance (cross-validation accuracy: 98.5%, accuracy on unseen operators: 76.9%). They were further used to derive operator-specific scanning signatures and to characterise operator variability in terms of the type, order and time distribution of constituent tasks.
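
The STM and STF representations are only named in the abstract; as a hedged illustration of the kind of dimensionality reduction involved, per-frame semantic labels can be run-length encoded into a timeline and summarised as a per-task time distribution (a sketch, not the authors' exact feature set):

```python
from itertools import groupby
from collections import Counter

def timeline_model(frame_labels, fps=30):
    """Run-length encode per-frame labels into (label, start_s, duration_s)
    segments -- a minimal analogue of a subject-specific timeline."""
    segments, start = [], 0
    for label, run in groupby(frame_labels):
        n = len(list(run))
        segments.append((label, start / fps, n / fps))
        start += n
    return segments

def timeline_features(segments):
    """Summarise a timeline as the fraction of total scan time per task."""
    totals = Counter()
    for label, _, duration in segments:
        totals[label] += duration
    total_time = sum(totals.values())
    return {label: t / total_time for label, t in totals.items()}
```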


Subject(s)
Neural Networks (Computer), Prenatal Ultrasonography, Computer Simulation, Female, Humans, Pregnancy, Retrospective Studies, Workflow
5.
Med Image Comput Comput Assist Interv; 12908: 670-679, 2021 Sep 21.
Article in English | MEDLINE | ID: mdl-35373220

ABSTRACT

Automated ultrasound (US)-probe movement guidance is desirable to assist inexperienced human operators during obstetric US scanning. In this paper, we present a new visual-assisted probe movement technique using automated landmark retrieval for assistive obstetric US scanning. First, a set of landmarks is constructed uniformly around a virtual 3D fetal model. Then, during obstetric scanning, a deep neural network (DNN) model locates the nearest landmark through a descriptor search between the current observation and the landmarks. The global position cues are visualised in real time on a monitor to assist the human operator in probe movement. A Transformer-VLAD network is proposed to learn a global descriptor to represent each US image. This avoids deep parameter regression and thereby enhances the generalization ability of the network. To avoid prohibitively expensive human annotation, anchor-positive-negative US image pairs are automatically constructed through a KD-tree search of 3D probe positions, leading to an end-to-end network trained in a self-supervised way through contrastive learning.
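
A sketch of the automatic anchor-positive-negative construction described above, assuming SciPy's cKDTree over recorded 3D probe positions (the distance thresholds are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def mine_triplets(positions, pos_radius=5.0, neg_radius=25.0, rng=None):
    """For each anchor frame, pick a positive whose probe position lies
    within pos_radius of the anchor's, and a negative beyond neg_radius."""
    rng = rng or np.random.default_rng()
    tree = cKDTree(positions)
    triplets = []
    for anchor, p in enumerate(positions):
        near = [i for i in tree.query_ball_point(p, pos_radius) if i != anchor]
        if not near:
            continue
        positive = rng.choice(near)
        far = np.setdiff1d(np.arange(len(positions)),
                           tree.query_ball_point(p, neg_radius))
        if far.size == 0:
            continue
        negative = rng.choice(far)
        triplets.append((anchor, int(positive), int(negative)))
    return triplets
```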

6.
Med Image Comput Comput Assist Interv; 12263: 583-592, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33103163

ABSTRACT

We present the first system that provides real-time probe movement guidance for acquiring standard planes in routine freehand obstetric ultrasound scanning. Such a system can contribute to the worldwide deployment of obstetric ultrasound scanning by lowering the required level of operator expertise. The system employs an artificial neural network that receives the ultrasound video signal and the motion signal of an inertial measurement unit (IMU) attached to the probe, and predicts a guidance signal. The network, termed US-GuideNet, predicts either the movement towards the standard plane position (goal prediction) or the next movement that an expert sonographer would perform (action prediction). While existing models for other ultrasound applications are trained with simulations or phantoms, we train our model with real-world ultrasound video and probe motion data from 464 routine clinical scans by 17 accredited sonographers. Evaluations for 3 standard plane types show that the model provides a useful guidance signal, with an accuracy of 88.8% for goal prediction and 90.9% for action prediction.
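
US-GuideNet's architecture is not given in the abstract; the following PyTorch sketch shows one plausible shape for the described input fusion, per-frame image features plus IMU samples feeding a recurrent head (all layer sizes, the quaternion IMU format and the discretised-rotation output are assumptions):

```python
import torch
import torch.nn as nn

class GuidanceNet(nn.Module):
    """Sketch of a video+IMU guidance model: per-frame CNN features and
    IMU orientation samples are fused by a GRU to predict the next probe
    movement as a class over discretised rotation bins."""
    def __init__(self, n_rotation_bins=64, imu_dim=4, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(  # tiny stand-in image encoder
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.gru = nn.GRU(feat_dim + imu_dim, 128, batch_first=True)
        self.head = nn.Linear(128, n_rotation_bins)

    def forward(self, frames, imu):
        # frames: (B, T, 1, H, W); imu: (B, T, imu_dim), e.g. quaternions
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        fused, _ = self.gru(torch.cat([feats, imu], dim=-1))
        return self.head(fused[:, -1])  # guidance for the next move
```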

7.
Article in English | MEDLINE | ID: mdl-33103166

ABSTRACT

In this paper, we consider differentiating operator skill during fetal ultrasound scanning using probe motion tracking. We present a novel convolutional neural network-based deep learning framework that models ultrasound probe motion in order to classify operator skill levels, and that is invariant to operators' personal scanning styles. In this study, probe motion data during routine second-trimester fetal ultrasound scanning were acquired by operators of known experience levels (2 newly qualified operators and 10 expert operators). The results demonstrate that the proposed model can successfully learn underlying probe motion features that distinguish operator skill levels during routine fetal ultrasound with 95% accuracy.
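
The abstract does not specify the network; the following is a generic 1D convolutional classifier over probe-motion windows (the 6-DoF channel layout and window handling are assumptions, and the paper's style-invariance mechanism is not reproduced here):

```python
import torch.nn as nn

# Sketch: classify a fixed-length window of 6-DoF probe motion
# (3 translation + 3 rotation channels) into one of two skill levels.
skill_classifier = nn.Sequential(
    nn.Conv1d(6, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 2),  # two skill levels, as in the study
)
# Input shape: (batch, 6, window_length); output: logits over skill levels.
```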

8.
Med Image Anal; 65: 101762, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32623278

ABSTRACT

We present a novel multi-task neural network called Temporal SonoEyeNet (TSEN) whose primary task is to describe the visual navigation process of sonographers by learning to generate visual attention maps of ultrasound images around standard biometry planes of the fetal abdomen, head (trans-ventricular plane) and femur. TSEN has three components: a feature extractor, a temporal attention module (TAM) and an auxiliary video classification module (VCM). A soft dynamic time warping (sDTW) loss function is used to improve visual attention modelling. Variants of the model are trained on a dataset of 280 video clips, each containing one of the three biometry planes and lasting 3-7 seconds, with corresponding real-time gaze-tracking data recorded from an experienced sonographer. We report the performance of the different variants of TSEN for visual attention prediction at standard biometry plane detection. The best model performance is achieved using bi-directional convolutional long short-term memory (biCLSTM) in both TAM and VCM, and it outperforms a previous spatial model on all static and dynamic saliency metrics. As an auxiliary task to validate the clinical relevance of the visual attention modelling, the predicted visual attention maps were used to guide standard biometry plane detection in consecutive US video frames. All spatio-temporal TSEN models achieve higher scores than a spatial-only baseline; the best-performing TSEN model achieves F1-scores of 83.7%, 89.9% and 81.1% on these standard biometry planes, respectively.
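
The sDTW loss mentioned above follows the soft dynamic time warping formulation, which replaces DTW's hard minimum with a smooth soft-minimum so the alignment cost becomes differentiable. A minimal NumPy sketch of the recursion (illustrative only; training would use a differentiable framework implementation):

```python
import numpy as np

def soft_dtw(D, gamma=1.0):
    """Soft-DTW value for a pairwise cost matrix D (n x m)."""
    def softmin(*vals):
        # soft-min_gamma(a_1..a_k) = -gamma * log(sum_i exp(-a_i / gamma))
        vals = np.array(vals) / -gamma
        vmax = vals.max()
        return -gamma * (vmax + np.log(np.exp(vals - vmax).sum()))

    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Same recursion as classical DTW, with min replaced by softmin.
            R[i, j] = D[i - 1, j - 1] + softmin(R[i - 1, j],
                                                R[i, j - 1],
                                                R[i - 1, j - 1])
    return R[n, m]
```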


Subject(s)
Biometry, Neural Networks (Computer), Head, Humans, Ultrasonography
9.
Proc IEEE Int Symp Biomed Imaging; 2020: 1847-1850, 2020 Apr 03.
Article in English | MEDLINE | ID: mdl-32489519

ABSTRACT

Recent advances in deep learning have achieved promising performance for medical image analysis, although in most cases ground-truth annotations from human experts are necessary to train the deep model. In practice, such annotations are expensive to collect and can be scarce for medical imaging applications. There is therefore significant interest in learning representations from unlabelled raw data. In this paper, we propose a self-supervised learning approach to learn meaningful and transferable representations from medical imaging video without any human annotation. We assume that, to learn such a representation, the model should identify anatomical structures in the unlabelled data. We therefore force the model to address anatomy-aware tasks with free supervision from the data itself. Specifically, the model is designed to correct the order of a reshuffled video clip while simultaneously predicting the geometric transformation applied to the clip. Experiments on fetal ultrasound video show that the proposed approach can effectively learn meaningful and strong representations, which transfer well to downstream tasks such as standard plane detection and saliency prediction.
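
The two pretext tasks lend themselves to a compact sketch of label generation from an unlabelled clip (the particular transformation set and shuffling scheme are assumptions):

```python
import random
import numpy as np

def make_pretext_sample(clip):
    """Given a clip as a list of frames (H x W arrays), return a transformed,
    reshuffled clip plus free supervision: the permutation that restores
    frame order and the index of the applied geometric transform."""
    transforms = [
        lambda f: f,             # identity
        lambda f: np.fliplr(f),  # horizontal flip
        lambda f: np.rot90(f),   # 90-degree rotation
        lambda f: np.rot90(f, 2) # 180-degree rotation
    ]
    t_label = random.randrange(len(transforms))
    order = list(range(len(clip)))
    random.shuffle(order)
    shuffled = [transforms[t_label](clip[i]) for i in order]
    # 'order' maps shuffled positions back to original frame indices.
    return shuffled, order, t_label
```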

10.
Ultraschall Med; 41(2): 138-145, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32107757

ABSTRACT

PURPOSE: To analyze bioeffect safety indices and assess how often operators look at these indices during routine obstetric ultrasound. MATERIALS AND METHODS: Automated analysis of prospectively collected data, including video recordings of full-length ultrasound scans coupled with operator eye tracking, was performed. Using optical recognition, we extracted the Mechanical Index (MI), Thermal Index in soft tissue (TIs) and Thermal Index in bone (TIb) values and the ultrasound mode. This allowed us to report the bioeffect safety indices during routine obstetric scans and assess adherence to professional organization recommendations. Eye-tracking analysis allowed us to assess how often operators look at the displayed bioeffect safety indices. RESULTS: A total of 637 ultrasound scans performed by 17 operators were included, of which 178, 216 and 243 were first-, second- and third-trimester scans, respectively. During live scanning, the mean (range) was 0.14 (0.1 to 3.0) for TIb, 0.2 (0.1 to 1.2) for TIs and 0.9 (0.1 to 1.3) for MI. The mean and standard deviation of TIb were 0.15 ± 0.03, 0.23 ± 0.09 and 0.32 ± 0.24 in the first, second and third trimesters, respectively. For B-mode, the highest TIb was 0.8 in all trimesters. The highest TIb was recorded for pulsed-wave Doppler mode in all trimesters. The recommended exposure times were maintained in all scans. Eye-tracking analysis suggested that operators looked at the bioeffect safety indices in only 27 (4.2%) of the scans. CONCLUSION: In this study, recommended bioeffect indices were adhered to in all routine scans. However, eye tracking showed that operators rarely assessed safety indices during scanning.
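
The eye-tracking part of the analysis reduces to an area-of-interest test: counting gaze samples that land on the screen region showing the safety indices. A minimal sketch, with the AOI rectangle and dwell threshold as illustrative assumptions:

```python
# Hypothetical screen rectangle containing the MI/TIs/TIb readout,
# in the same pixel coordinates as the gaze samples.
SAFETY_AOI = (820, 20, 200, 60)  # x, y, width, height

def looked_at_safety_indices(gaze_points, min_hits=3):
    """Decide whether a scan's gaze trace ever dwelt on the safety indices:
    True once at least min_hits samples fall inside the AOI."""
    x0, y0, w, h = SAFETY_AOI
    hits = sum(1 for gx, gy in gaze_points
               if x0 <= gx <= x0 + w and y0 <= gy <= y0 + h)
    return hits >= min_hits
```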


Subject(s)
Patient Safety, Prenatal Ultrasonography, Female, Humans, Pregnancy, Ultrasonography
11.
Inf Process Med Imaging; 26: 592-604, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31992944

ABSTRACT

Image representations are commonly learned from class labels, which are a simplistic approximation of human image understanding. In this paper, we demonstrate that transferable representations of images can be learned without manual annotations by modeling human visual attention. The basis of our analyses is a unique gaze-tracking dataset of sonographers performing routine clinical fetal anomaly screenings. Models of sonographer visual attention are learned by training a convolutional neural network (CNN) to predict gaze on ultrasound video frames through visual saliency prediction or gaze-point regression. We evaluate the transferability of the learned representations to the task of ultrasound standard plane detection in two contexts. First, we perform transfer learning by fine-tuning the CNN with a limited number of labeled standard plane images. We find that fine-tuning the saliency predictor is superior to training from random initialization, with an average F1-score improvement of 9.6% overall and 15.3% for the cardiac planes. Second, we train a simple softmax regression on the feature activations of each CNN layer in order to evaluate the representations independently of transfer-learning hyper-parameters. We find that the attention models derive strong representations, approaching the precision of a fully supervised baseline model for all but the last layer.
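
The second evaluation is a standard linear probe; a PyTorch sketch, assuming a frozen backbone and a forward hook on the layer under study (the optimiser, learning rate and epoch count are assumptions):

```python
import torch
import torch.nn as nn

def linear_probe(backbone, layer, n_classes, loader, epochs=10):
    """Train a softmax regression on frozen activations from one layer,
    mirroring the layer-wise evaluation described above."""
    feats = {}
    layer.register_forward_hook(
        lambda m, i, o: feats.update(out=o.flatten(1).detach()))
    backbone.eval()

    probe, opt = None, None
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                backbone(images)       # forward pass populates feats['out']
            x = feats["out"]
            if probe is None:          # lazily size the probe to the layer
                probe = nn.Linear(x.shape[1], n_classes)
                opt = torch.optim.SGD(probe.parameters(), lr=0.01)
            loss = loss_fn(probe(x), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```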
