Results 1 - 8 of 8
1.
Med Image Comput Comput Assist Interv; 2022: 104-114, 2022 Sep 17.
Article in English | MEDLINE | ID: mdl-37223131

ABSTRACT

Ultrasound (US) probe motion estimation is a fundamental problem in automated standard plane localization during obstetric US diagnosis. Most recent works employ a deep neural network (DNN) to regress the probe motion directly. However, these deep regression-based methods encourage the DNN to overfit the specific training data, and therefore naturally lack the generalization ability required for clinical application. In this paper, we return to generalized US feature learning rather than deep parameter regression. We propose a self-supervised learned local detector and descriptor, named USPoint, for US-probe motion estimation during the fine-adjustment phase of fetal plane acquisition. Specifically, a hybrid neural architecture is designed to simultaneously extract local features and estimate the probe motion. By embedding a differentiable USPoint-based motion estimation inside the proposed network architecture, USPoint learns the keypoint detector, scores and descriptors from motion error alone, without requiring expensive human annotation of local features. The two tasks, local feature learning and motion estimation, are jointly learned in a unified framework so that each benefits the other. To the best of our knowledge, this is the first learned local detector and descriptor tailored to US images. Experimental evaluation on real clinical data demonstrates improved feature matching and motion estimation, indicating potential clinical value. A video demo can be found online: https://youtu.be/JGzHuTQVlBs.
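As an illustration of the idea described above, the following is a minimal PyTorch sketch, not the authors' USPoint implementation: a shared encoder feeds a keypoint-score head, a descriptor head and a motion head, so that in principle the whole network can be trained from motion error alone. The layer sizes, the assumed 6-DoF motion output and the omission of the differentiable keypoint matching step are all simplifications.

```python
# Minimal sketch only, not the USPoint code. The differentiable keypoint-based
# matching that couples the detector/descriptor to the motion loss is omitted
# for brevity; here the motion is regressed from pooled features of both frames.
import torch
import torch.nn as nn

class USPointSketch(nn.Module):
    def __init__(self, desc_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared feature encoder
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.score_head = nn.Conv2d(64, 1, 1)         # keypoint score map
        self.desc_head = nn.Conv2d(64, desc_dim, 1)   # dense local descriptors
        self.motion_head = nn.Sequential(             # assumed 6-DoF probe motion
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64 * 2, 6)
        )

    def forward(self, img_a, img_b):
        fa, fb = self.encoder(img_a), self.encoder(img_b)
        scores = torch.sigmoid(self.score_head(fa))
        desc = nn.functional.normalize(self.desc_head(fa), dim=1)
        motion = self.motion_head(torch.cat([fa, fb], dim=1))
        return scores, desc, motion

# Only the motion error is supervised; no keypoint annotations are used.
model = USPointSketch()
a, b = torch.rand(2, 1, 128, 128), torch.rand(2, 1, 128, 128)
scores, desc, motion = model(a, b)
loss = nn.functional.mse_loss(motion, torch.zeros_like(motion))  # placeholder ground-truth motion
loss.backward()
```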

2.
Proc IEEE Int Symp Biomed Imaging; 2021: 1646-1649, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34413933

ABSTRACT

This paper presents a novel multi-modal learning approach for automated skill characterization of obstetric ultrasound operators using heterogeneous spatio-temporal sensory cues, namely scan video, eye-tracking data, and pupillometric data, acquired in the clinical environment. We address pertinent challenges, such as combining heterogeneous, small-scale, and variable-length sequential datasets, to learn deep convolutional neural networks in real-world scenarios. We propose spatial encoding for multi-modal analysis using sonography standard plane images, spatial gaze maps, gaze trajectory images, and pupillary response images. We present and compare five multi-modal learning network architectures using late, intermediate, hybrid, and tensor fusion. We build models for the Heart and the Brain scanning tasks, and performance evaluation suggests that multi-modal learning networks outperform uni-modal networks, with the best-performing model achieving accuracies of 82.4% (Brain task) and 76.4% (Heart task) on the operator skill classification problem.
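To make the fusion idea concrete, here is a minimal late-fusion sketch in PyTorch; it is not the paper's architecture. One small CNN encoder per image-like modality (standard-plane image, spatial gaze map, pupillary response image) produces an embedding, and the concatenated embeddings are classified. The channel counts, embedding size and binary skill labels are assumptions.

```python
# Minimal late-fusion sketch (not the paper's architecture).
import torch
import torch.nn as nn

def encoder(in_ch, emb=32):
    # tiny per-modality CNN producing a fixed-size embedding
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb),
    )

class LateFusionSkillNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_plane = encoder(1)   # sonography standard-plane image
        self.enc_gaze = encoder(1)    # spatial gaze map
        self.enc_pupil = encoder(1)   # pupillary response image
        self.classifier = nn.Linear(32 * 3, n_classes)

    def forward(self, plane, gaze, pupil):
        z = torch.cat([self.enc_plane(plane), self.enc_gaze(gaze),
                       self.enc_pupil(pupil)], dim=1)   # late fusion by concatenation
        return self.classifier(z)

net = LateFusionSkillNet()
logits = net(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

Intermediate, hybrid and tensor fusion variants differ mainly in where and how the per-modality features are combined, rather than in the per-modality encoders themselves.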

3.
Simpl Med Ultrasound (2021); 12967: 129-138, 2021 Sep 27.
Article in English | MEDLINE | ID: mdl-35368447

ABSTRACT

We present a method for classifying tasks in fetal ultrasound scans using the eye-tracking data of sonographers. The visual attention of a sonographer, captured by eye-tracking data over time, is defined by a scanpath. In routine fetal ultrasound, the captured standard imaging planes are visually inconsistent due to fetal position, movements, and sonographer scanning experience. To address this challenge, we propose a scale- and position-invariant task classification method using normalised visual scanpaths. We describe a normalisation method that uses bounding boxes to provide the gaze with a reference to the position and scale of the imaging plane, and we use the normalised scanpath sequences to train machine learning models for discriminating between ultrasound tasks. We compare the proposed method to existing work that uses raw eye-tracking data. The best-performing model achieves an F1-score of 84% and outperforms existing models.
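A minimal sketch of the bounding-box normalisation step, under assumed conventions (gaze points in pixels, a box given as x, y, width, height); the function name is hypothetical and this is not the paper's code.

```python
# Map raw gaze points into coordinates relative to the imaging-plane bounding
# box, so scanpaths from differently positioned/scaled planes become comparable.
import numpy as np

def normalise_scanpath(gaze_xy, box):
    """gaze_xy: (N, 2) raw gaze points in pixels; box: (x, y, w, h) of the imaging plane."""
    x, y, w, h = box
    norm = (np.asarray(gaze_xy, dtype=float) - [x, y]) / [w, h]
    return np.clip(norm, 0.0, 1.0)   # scale/position invariant, in [0, 1]^2

scanpath = normalise_scanpath([[420, 310], [510, 290]], box=(300, 200, 400, 300))
print(scanpath)  # [[0.3   0.367], [0.525 0.3  ]]
```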

4.
Sci Rep; 10(1): 5251, 2020 Mar 23.
Article in English | MEDLINE | ID: mdl-32251309

ABSTRACT

We studied neurodevelopmental outcomes and behaviours in healthy 2-year-old children (N = 1306) from Brazil, India, Italy, Kenya and the UK participating in the INTERGROWTH-21st Project. There was a positive, independent relationship of duration of exclusive breastfeeding (EBF) and age at weaning with gross motor development, vision and autonomic physical activities, most evident if children were exclusively breastfed for ≥7 months or weaned at ≥7 months. There was no association with cognition, language or behaviour. After adjusting for confounding factors, children exclusively breastfed from birth to <5 months or weaned at >6 months had, in a dose-effect pattern, higher scores for "emotional reactivity". The positive effect of EBF and age at weaning on gross motor, running and climbing scores was strongest among children with the highest scores on maternal closeness proxy indicators. EBF, late weaning and maternal closeness, associated with advanced motor and vision maturation, independently influence autonomous behaviours in healthy children.


Subjects
Child Development, Mothers, Psychological Reinforcement, Weaning, Brazil, Breast Feeding, Preschool Child, Female, Humans, India, Infant, Newborn Infant, Italy, Kenya, Language Development, Male, Motor Skills
5.
Phys Med Biol; 64(18): 185010, 2019 Sep 17.
Article in English | MEDLINE | ID: mdl-31408850

ABSTRACT

The first trimester fetal ultrasound scan is important to confirm fetal viability, to estimate the gestational age of the fetus, and to detect fetal anomalies early in pregnancy. First trimester ultrasound images have a different appearance from second trimester images, reflecting the different stage of fetal development. There is limited literature on automation of image-based assessment for this earlier trimester, and most of it focuses on one specific fetal anatomy. In this paper, we consider automation to support first trimester fetal assessment of multiple fetal anatomies, including both visualization and measurements, from a single 3D ultrasound scan. We present a deep learning and image processing solution (i) to perform semantic segmentation of the whole fetus, (ii) to estimate plane orientation for standard biometry views, (iii) to localize and automatically estimate biometry, and (iv) to detect fetal limbs from a 3D first trimester volume. Computational analysis methods were built using a real-world dataset (n = 44 volumes). An evaluation on a further independent clinical dataset (n = 21 volumes) showed that the automated methods approached human expert assessment of a 3D volume.
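A hypothetical skeleton of how stages (i)-(iii) of such a pipeline could be chained over a 3D volume; every function here is a crude placeholder (intensity thresholding, least-variance plane fit, bounding-box extent), not the paper's deep learning solution.

```python
# Placeholder pipeline skeleton only, not the paper's implementation.
import numpy as np

def segment_fetus(volume):
    """(i) Whole-fetus segmentation: stand-in intensity threshold."""
    return volume > volume.mean()

def estimate_plane(mask):
    """(ii) Standard-plane orientation: least-variance direction of fetal voxels."""
    pts = np.argwhere(mask).astype(float)
    centroid = pts.mean(axis=0)
    normal = np.linalg.svd(pts - centroid, full_matrices=False)[2][-1]
    return centroid, normal

def measure_biometry(mask):
    """(iii) Crude length proxy: diagonal extent of the segmented region (voxels)."""
    pts = np.argwhere(mask)
    return float(np.linalg.norm(pts.max(axis=0) - pts.min(axis=0)))

volume = np.random.rand(48, 48, 48)        # stand-in for a 3D first-trimester US volume
mask = segment_fetus(volume)
centroid, normal = estimate_plane(mask)
print(centroid, normal, measure_biometry(mask))
```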


Subjects
Fetal Development, Fetus/diagnostic imaging, Gestational Age, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, Prenatal Ultrasonography/methods, Abdomen/diagnostic imaging, Algorithms, Female, Head/diagnostic imaging, Humans, Pregnancy, First Pregnancy Trimester
6.
Med Image Anal; 47: 127-139, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29715691

ABSTRACT

Three-dimensional (3D) fetal neurosonography is used clinically to detect cerebral abnormalities and to assess growth in the developing brain. However, manual identification of key brain structures in 3D ultrasound images requires expertise and, even then, is tedious. Inspired by how sonographers view and interact with volumes during real-time clinical scanning, we propose an efficient automatic method to simultaneously localize multiple brain structures in 3D fetal neurosonography. The proposed View-based Projection Networks (VP-Nets) use three view-based Convolutional Neural Networks (CNNs) to simplify 3D localization by directly predicting 2D projections of the key structures onto three anatomical views. While designed for efficient use of data and GPU memory, the proposed VP-Nets allow full-resolution 3D prediction. We investigated parameters that influence the performance of VP-Nets, e.g. depth and number of feature channels. Moreover, by visualizing the trained VP-Nets, we demonstrate that the model can pinpoint the structures in 3D space, despite only 2D supervision being provided for a single stream during training. For comparison, we implemented two other baseline solutions based on Random Forests and 3D U-Nets. In the reported experiments, VP-Nets consistently outperformed the other methods on localization. To test the importance of the loss function, two identical models were trained with binary cross-entropy and Dice coefficient loss, respectively. Our best VP-Net model achieved a prediction center deviation of 1.8 ± 1.4 mm, a size difference of 1.9 ± 1.5 mm, and a 3D Intersection over Union (IoU) of 63.2 ± 14.7% when compared to the ground truth. To make the whole pipeline intervention-free, we also implemented a skull-stripping tool using a 3D CNN, which achieves high segmentation accuracy. As a result, the proposed processing pipeline takes a raw ultrasound brain image as input and outputs a skull-stripped image with five detected key brain structures.
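The projection idea can be illustrated with a minimal NumPy sketch, not the VP-Nets code: the 3D volume is reduced to three orthogonal 2D projections, a structure location is found per view (here simply the brightest pixel, standing in for a CNN heatmap peak), and the three 2D peaks are combined back into a 3D coordinate.

```python
# Minimal sketch of view-based projection localization (not the VP-Nets code).
import numpy as np

def project(volume):
    # max-intensity projections along each of the three axes
    return [volume.max(axis=a) for a in range(3)]

def peak_2d(heatmap):
    # stand-in for a per-view CNN prediction: take the brightest pixel
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def localise_3d(volume):
    (y0, z0), (x1, z1), (x2, y2) = [peak_2d(v) for v in project(volume)]
    # each axis is observed by two of the three views; average the two estimates
    return ((x1 + x2) / 2, (y0 + y2) / 2, (z0 + z1) / 2)

vol = np.zeros((32, 32, 32))
vol[10, 20, 5] = 1.0             # a single bright "structure"
print(localise_3d(vol))          # (10.0, 20.0, 5.0)
```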


Subjects
Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, Neural Networks (Computer), Neuroimaging/methods, Prenatal Ultrasonography/methods, Algorithms, Female, Humans, Pregnancy
7.
Med Image Anal; 33: 33-37, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27503078

ABSTRACT

Ultrasound (US) image analysis has advanced considerably over the past twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research, owing to the real-time acquisition capability of ultrasound, and this has remained true over the two decades. But in quantitative ultrasound image analysis, which takes US images and turns them into more meaningful clinical information, thinking has perhaps changed more fundamentally. From its roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and were thus better suited to the earlier eras of medical image analysis dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and by growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may have a high impact on healthcare delivery worldwide in the future, but may also, perhaps, take the subject further away from CT and MR image analysis research over time.


Subjects
Ultrasonography/history, Ultrasonography/trends, 20th Century History, 21st Century History, Humans, Three-Dimensional Imaging, Machine Learning, Ultrasonography/economics, Ultrasonography/instrumentation
8.
Med Image Anal; 21(1): 72-86, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25624045

ABSTRACT

We propose an automated framework for predicting gestational age (GA) and neurodevelopmental maturation of a fetus based on 3D ultrasound (US) brain image appearance. Our method capitalizes on age-related sonographic image patterns in conjunction with clinical measurements to develop, for the first time, a predictive age model which improves on the GA-prediction potential of US images. The framework benefits from a manifold surface representation of the fetal head which delineates the inner skull boundary and serves as a common coordinate system based on cranial position. This allows for fast and efficient sampling of anatomically corresponding brain regions to achieve like-for-like structural comparison of different developmental stages. We develop bespoke features which capture neurosonographic patterns in 3D images and, using a regression forest classifier, we characterize structural brain development both spatially and temporally to capture the natural variation existing in a healthy population (N = 447) over an age range of active brain maturation (18-34 weeks). On a routine clinical dataset (N = 187), our age prediction results strongly correlate with true GA (r = 0.98, accurate to within ±6.10 days), confirming the link between maturational progression and neurosonographic activity observable across gestation. Our model also outperforms current clinical methods by ±4.57 days in the third trimester, a period complicated by biological variations in the fetal population. Through feature selection, the model successfully identified the most age-discriminating anatomies over this age range as the Sylvian fissure and the cingulate and callosal sulci.
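A minimal sketch of the age-regression idea, using scikit-learn's random forest regressor as a stand-in for the paper's regression forest; the synthetic, age-correlated features and their dimensionality are invented purely for illustration.

```python
# Minimal sketch of GA prediction from per-volume image features (not the paper's model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
ga_weeks = rng.uniform(18, 34, size=447)                      # healthy training population
features = np.column_stack([ga_weeks + rng.normal(0, 1.5, 447)
                            for _ in range(16)])              # 16 synthetic age-correlated features

X_tr, X_te, y_tr, y_te = train_test_split(features, ga_weeks, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = forest.predict(X_te)
print(f"correlation r = {np.corrcoef(pred, y_te)[0, 1]:.2f}")
print(f"mean abs. error = {np.mean(np.abs(pred - y_te)) * 7:.1f} days")
```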


Subjects
Artificial Intelligence, Brain/embryology, Echoencephalography/methods, Gestational Age, Computer-Assisted Image Interpretation/methods, Prenatal Ultrasonography/methods, Algorithms, Crown-Rump Length, Female, Humans, Image Enhancement/methods, Male, Automated Pattern Recognition/methods, Reproducibility of Results, Sensitivity and Specificity