1.
Ultrasound Med Biol ; 50(6): 805-816, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38467521

ABSTRACT

OBJECTIVE: Automated medical image analysis solutions should closely mimic complete human actions to be useful in clinical practice. However, an automated image analysis solution often represents only part of a human task, which restricts its practical utility. In the case of ultrasound-based fetal biometry, an automated solution should ideally recognize key fetal structures in freehand video guidance, select a standard plane from a video stream and perform biometry. A complete automated solution should automate all three sub-actions. METHODS: In this article, we consider how to automate the complete human action of first-trimester biometry measurement from real-world freehand ultrasound. In the proposed hybrid convolutional neural network (CNN) architecture, a classification-regression-based guidance model detects and tracks fetal anatomical structures (using visual cues) in the ultrasound video. Several high-quality standard planes containing the mid-sagittal view of the fetus are sampled at multiple time stamps (using a custom-designed confident-frame detector), based on the estimated probability values associated with the predicted anatomical structures that define the biometry plane. Automated semantic segmentation is performed on the selected frames to extract fetal anatomical landmarks. A crown-rump length (CRL) estimate is calculated as the mean CRL over these frames. RESULTS: Our fully automated method has a high correlation with clinical expert CRL measurement (Pearson's r = 0.92, R² = 0.84) and a low mean absolute error of 0.834 weeks for fetal age estimation on a test data set of 42 videos. CONCLUSION: The proposed standard-plane detection algorithm employs a quality-detection mechanism defined by clinical standards, ensuring precise biometric measurements.
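
The confident-frame selection and CRL-averaging step described in this abstract can be illustrated with a minimal sketch; the function name, confidence threshold and pixel spacing below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def mean_crl_from_video(frame_probs, landmarks, conf_thresh=0.9, px_to_mm=0.1):
    """Illustrative sketch: average the crown-rump length (CRL) over
    confidently detected mid-sagittal frames.

    frame_probs : (N,) per-frame probabilities that the frame contains the
                  anatomical structures defining the CRL plane (output of a
                  guidance / confident-frame model).
    landmarks   : (N, 2, 2) per-frame (crown, rump) pixel coordinates from
                  the segmentation model.
    conf_thresh and px_to_mm are hypothetical values, not from the paper.
    """
    confident = np.where(frame_probs >= conf_thresh)[0]
    if confident.size == 0:
        return None  # no standard plane found in this clip
    # Euclidean crown-to-rump distance per confident frame, converted to mm.
    crowns, rumps = landmarks[confident, 0], landmarks[confident, 1]
    crl_mm = np.linalg.norm(crowns - rumps, axis=1) * px_to_mm
    # The final estimate is the mean CRL over the selected frames.
    return float(crl_mm.mean())
```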


Subject(s)
Biometry , Pregnancy Trimester, First , Ultrasonography, Prenatal , Humans , Ultrasonography, Prenatal/methods , Female , Pregnancy , Biometry/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Fetus/diagnostic imaging , Fetus/anatomy & histology
2.
Med Image Anal ; 90: 102977, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37778101

ABSTRACT

In obstetric sonography, the quality of ultrasound scan video acquisition is crucial for accurate (manual or automated) biometric measurement and fetal health assessment. However, fetal ultrasound involves free-hand probe manipulation, which can make it challenging to capture high-quality videos for fetal biometry, especially for the less experienced sonographer. Manually checking the quality of acquired videos is time-consuming, subjective and requires a comprehensive understanding of fetal anatomy. It would therefore be advantageous to develop an automatic quality assessment method to support video standardization and improve the diagnostic accuracy of video-based analysis. In this paper, we propose a general and purely data-driven video-based quality assessment framework which directly learns a distinguishable feature representation from high-quality ultrasound videos alone, without anatomical annotations. Our solution effectively utilizes both the spatial and the temporal information of ultrasound videos. The spatio-temporal representation is learned by a bi-directional reconstruction between the video space and the feature space, enhanced by a key-query memory module operating in the feature space. To further improve performance, two additional modalities are introduced during training: the sonographer gaze and optical flow derived from the video. Two clinical quality assessment tasks in fetal ultrasound are considered in our experiments, namely measurement of the fetal head circumference and of the cerebellar diameter; in both, low-quality videos are detected by a large reconstruction error in the feature space. Extensive experimental evaluation demonstrates the merits of our approach.
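
The core idea of flagging low-quality videos by their reconstruction error can be illustrated with a minimal sketch; it omits the key-query memory module and the gaze/optical-flow modalities, and the encoder/decoder interfaces and threshold are assumptions, not the authors' implementation:

```python
import torch

@torch.no_grad()
def flag_low_quality(clips, encoder, decoder, threshold):
    """Illustrative sketch of reconstruction-error-based quality scoring.

    clips   : (B, C, T, H, W) tensor of ultrasound video clips.
    encoder : maps video space -> feature space.
    decoder : maps feature space -> video space.
    threshold is a hypothetical value chosen on a validation set.
    Trained only on high-quality videos, the encoder/decoder pair
    reconstructs such videos well; a large error flags low quality.
    """
    feats = encoder(clips)          # video -> feature space
    recon = decoder(feats)          # feature -> video space
    re_encoded = encoder(recon)     # re-embed the reconstruction
    # Feature-space reconstruction error per clip.
    err = (feats - re_encoded).flatten(1).pow(2).mean(dim=1)
    return err > threshold          # True = likely low quality
```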


Subject(s)
Fetus , Ultrasonography, Prenatal , Pregnancy , Female , Humans , Ultrasonography, Prenatal/methods , Fetus/diagnostic imaging , Ultrasonography
3.
IEEE Trans Med Imaging ; 42(5): 1301-1313, 2023 May.
Article in English | MEDLINE | ID: mdl-36455084

ABSTRACT

Obstetric ultrasound assessment of fetal anatomy in the first trimester of pregnancy is one of the less explored fields in obstetric sonography, because of the paucity of both anatomical screening guidelines and available data. This paper, for the first time, examines imaging proficiency and practices in first-trimester ultrasound scanning through analysis of full-length ultrasound video scans. Findings from this study provide insights to inform the development of more effective user-machine interfaces and targeted assistive technologies, as well as improvements in workflow protocols for first-trimester scanning. Specifically, this paper presents an automated framework to model operator clinical workflow from full-length routine first-trimester fetal ultrasound scan videos. The 2D+t convolutional neural network-based architecture proposed for video annotation incorporates transfer learning and spatio-temporal (2D+t) modelling to automatically partition an ultrasound video into semantically meaningful temporal segments based on the fetal anatomy detected in the video. The model achieves a cross-validation A1 accuracy of 96.10%, F1 = 0.95, precision = 0.94 and recall = 0.95. Automated semantic partitioning of unlabelled video scans (n = 250) achieves a high correlation with expert annotations (ρ = 0.95, p = 0.06). Clinical workflow patterns, operator skill and its variability can be derived from the resulting representation using the detected anatomy labels, their order and their distribution. It is shown that nuchal translucency (NT) is the most difficult standard plane to acquire, and most operators struggle to localize high-quality frames. Furthermore, newly qualified operators spend 25.56% more time on key biometry tasks than experienced operators.
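
Once per-frame anatomy predictions are available, partitioning a scan into contiguous temporal segments is essentially a run-length grouping, as in this minimal sketch; the label names are hypothetical and this is not the authors' 2D+t architecture itself:

```python
from itertools import groupby

def partition_scan(frame_labels):
    """Illustrative sketch: turn per-frame anatomy predictions into
    contiguous temporal segments (label, start_frame, end_frame).
    frame_labels is a list such as ["background", "NT", "NT", "heart", ...]
    produced by a per-frame (or 2D+t) classifier.
    """
    segments, start = [], 0
    for label, run in groupby(frame_labels):
        length = sum(1 for _ in run)
        segments.append((label, start, start + length - 1))
        start += length
    return segments

# Example: partition_scan(["bg", "bg", "NT", "NT", "NT", "heart"])
# -> [("bg", 0, 1), ("NT", 2, 4), ("heart", 5, 5)]
```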


Subject(s)
Nuchal Translucency Measurement , Ultrasonography, Prenatal , Pregnancy , Female , Humans , Pregnancy Trimester, First , Workflow , Ultrasonography, Prenatal/methods , Nuchal Translucency Measurement/methods , Machine Learning
4.
Diagnostics (Basel) ; 13(1), 2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36611396

ABSTRACT

Medical image analysis methods for mammograms, ultrasound and magnetic resonance imaging (MRI) cannot provide the underlying features at the cellular level needed to understand the cancer microenvironment, which makes them unsuitable for breast cancer subtype classification studies. In this paper, we propose a convolutional neural network (CNN)-based breast cancer classification method for hematoxylin and eosin (H&E) whole slide images (WSIs). The proposed method incorporates fused mobile inverted bottleneck convolutions (FMB-Conv) and mobile inverted bottleneck convolutions (MBConv) with a dual squeeze-and-excitation (DSE) network to accurately classify breast cancer tissue into binary (benign and malignant) classes and eight subtypes using histopathology images. To this end, a pre-trained EfficientNetV2 network is used as a backbone with a modified DSE block that combines spatial and channel-wise squeeze-and-excitation layers to highlight important low-level and high-level abstract features. Our method outperformed the ResNet101, InceptionResNetV2 and EfficientNetV2 networks on the publicly available BreakHis dataset for binary and multi-class breast cancer classification in terms of precision, recall and F1-score at multiple magnification levels.
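
A block that combines channel-wise and spatial squeeze-and-excitation, in the spirit of the DSE block described above, can be sketched as follows; layer sizes, the reduction ratio and the additive combination are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class DualSqueezeExcite(nn.Module):
    """Illustrative sketch of a dual squeeze-and-excitation (DSE) block:
    channel-wise SE and spatial SE are applied to the same feature map
    and their recalibrated outputs are combined."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel-wise squeeze-and-excitation.
        self.channel_se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.SiLU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial squeeze-and-excitation (1x1 conv to a single attention map).
        self.spatial_se = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        # Recalibrate per channel and per spatial location, then combine.
        return x * self.channel_se(x) + x * self.spatial_se(x)
```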

5.
Comput Vis ECCV ; 2022: 422-436, 2022 Oct.
Article in English | MEDLINE | ID: mdl-37250853

ABSTRACT

Self-supervised contrastive representation learning offers the advantage of learning meaningful visual representations from unlabeled medical datasets for transfer learning. However, applying current contrastive learning approaches to medical data without considering its domain-specific anatomical characteristics may lead to visual representations that are inconsistent in appearance and semantics. In this paper, we propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL), which incorporates anatomy information to augment positive/negative pair sampling in a contrastive learning manner. The proposed approach is demonstrated for automated fetal ultrasound imaging tasks, enabling anatomically similar positive pairs from the same or different ultrasound scans to be pulled together, thus improving representation learning. We empirically investigate the effect of including anatomy information at coarse and fine granularity for contrastive learning, and find that learning with fine-grained anatomy information, which preserves intra-class differences, is more effective than its coarse-grained counterpart. We also analyze the impact of the anatomy ratio on our AWCL framework and find that using more distinct but anatomically similar samples to compose positive pairs results in better-quality representations. Extensive experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective for learning representations that transfer well to three clinical downstream tasks, and achieves superior performance compared with ImageNet-supervised pre-training and current state-of-the-art contrastive learning methods. In particular, AWCL outperforms the ImageNet-supervised method by 13.8% and a state-of-the-art contrastive-based method by 7.1% on a cross-domain segmentation task. The code is available at https://github.com/JianboJiao/AWCL.
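
The idea of using anatomy labels to decide which samples count as positives can be illustrated with a generic supervised-contrastive-style loss; this is a sketch in the spirit of AWCL rather than the authors' exact objective, and the temperature value is a hypothetical choice:

```python
import torch
import torch.nn.functional as F

def anatomy_aware_contrastive_loss(embeddings, anatomy_labels, temperature=0.1):
    """Illustrative sketch: frames sharing an anatomy label (within or
    across scans) are treated as positives and pulled together.

    embeddings     : (N, D) feature vectors.
    anatomy_labels : (N,) integer anatomy categories (tensor).
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature  # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    # Positives: same anatomy label, excluding the anchor itself.
    pos_mask = (anatomy_labels[:, None] == anatomy_labels[None, :]) & ~self_mask
    # Log-softmax over all non-self candidates for each anchor.
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, -1e9),
                                     dim=1, keepdim=True)
    # Average log-probability over each anchor's anatomy-matched positives.
    pos_counts = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask.float()).sum(1).div(pos_counts).mean()
```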

6.
Article in English | MEDLINE | ID: mdl-36643819

ABSTRACT

This study presents a novel approach to automatic detection and segmentation of the Crown-Rump Length (CRL) and Nuchal Translucency (NT), two essential measurements in the first-trimester ultrasound scan. The proposed method automatically localises a standard plane within a video clip, as defined by the UK Fetal Anomaly Screening Programme. A Nested Hourglass (NHG)-based network performs semantic pixel-wise segmentation to extract the NT and CRL structures. Our results show that the NHG network is faster (requiring 19.52% fewer GFLOPs than FCN32) and offers high pixel-level agreement with expert manual annotations (mean IoU = 80.74).
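
The mean-IoU figure quoted above is a standard segmentation agreement metric; a minimal sketch of how it is typically computed is given below (class identifiers and handling of absent classes are assumptions, not taken from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Illustrative sketch of mean IoU between a predicted label map and an
    expert annotation. `pred` and `target` are integer label maps of the
    same shape (e.g. background / NT / CRL classes)."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```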

7.
Med Image Comput Comput Assist Interv ; 13434: 228-237, 2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36649384

ABSTRACT

Video quality assurance is an important topic in obstetric ultrasound imaging, to ensure that captured videos are suitable for biometry and fetal health assessment. Previously, one successful objective approach to automated ultrasound image quality assurance has considered it a supervised learning task of detecting anatomical structures defined by a clinical protocol. In this paper, we propose an alternative and purely data-driven approach that makes effective use of both spatial and temporal information, in which the model learns from high-quality videos without any anatomy-specific annotations. This makes it attractive for potentially scalable generalisation. In the proposed model, a 3D encoder and decoder pair bi-directionally learns a spatio-temporal representation between the video space and the feature space. A zoom-in module is introduced to encourage the model to focus on the main object in a frame. A further design novelty is the introduction of two additional modalities in model training: sonographer gaze and optical flow derived from the video. Finally, our approach is applied to identify high-quality videos for fetal head circumference measurement in freehand second-trimester ultrasound scans. Extensive experiments are conducted, and the results demonstrate the effectiveness of our approach, with an AUC of 0.911.
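
One way to realise a "zoom-in" step that keeps the model focused on the main object is to crop each clip around its most salient region; the intensity-based heuristic and crop fraction below are assumptions for illustration and are not the paper's module:

```python
import torch
import torch.nn.functional as F

def zoom_in(frames, keep_frac=0.6):
    """Illustrative sketch: crop each video clip around the window with the
    highest mean intensity, so downstream layers see the main object.

    frames : (B, C, T, H, W) tensor of video clips.
    """
    B, C, T, H, W = frames.shape
    h, w = int(H * keep_frac), int(W * keep_frac)
    saliency = frames.mean(dim=(1, 2))  # (B, H, W), averaged over C and T
    # Average-pool with a window the size of the crop, then take the argmax
    # location as the crop's top-left corner.
    pooled = F.avg_pool2d(saliency[:, None], (h, w), stride=1)
    flat_idx = pooled.flatten(1).argmax(dim=1)
    crops = []
    for b in range(B):
        y = int(flat_idx[b]) // pooled.shape[-1]
        x = int(flat_idx[b]) % pooled.shape[-1]
        crops.append(frames[b, :, :, y:y + h, x:x + w])
    return torch.stack(crops)
```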

8.
Gigascience ; 8(11), 2019 Nov 1.
Article in English | MEDLINE | ID: mdl-31702012

ABSTRACT

BACKGROUND: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses, such as high temperature and drought, on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images present a significant computer vision challenge. Root images contain complicated structures and exhibit variation in size, background, occlusion, clutter and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first-order and second-order root tips to drive a search algorithm that seeks optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture that explicitly combines local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images. CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
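
The path-search step described above, which connects detected root tips back to the seed over the network's output, can be illustrated as a minimum-cost path over a probability map; the cost heuristic below is an assumption for illustration and is not RootNav 2.0's own search algorithm:

```python
import numpy as np
from skimage.graph import route_through_array

def trace_root(prob_map, tip_rc, seed_rc, eps=1e-6):
    """Illustrative sketch: trace one root from a detected tip to the seed
    as a minimum-cost path over the network's root-probability map.

    prob_map : (H, W) array of per-pixel root probabilities in [0, 1].
    tip_rc, seed_rc : (row, col) coordinates of a root tip and the seed.
    """
    cost = 1.0 / (prob_map + eps)  # high root probability -> low traversal cost
    path, total_cost = route_through_array(
        cost, start=tip_rc, end=seed_rc, fully_connected=True)
    return np.asarray(path), total_cost
```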


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Plant Roots/anatomy & histology , Plant Roots/growth & development