Results 1 - 20 of 29
1.
Front Digit Health ; 6: 1324511, 2024.
Article in English | MEDLINE | ID: mdl-38384738

ABSTRACT

In recent years the healthcare industry has had increasing difficulty seeing all low-risk patients, including but not limited to suspected osteoarthritis (OA) patients. To help address the growing waiting lists and staff shortages, we propose a novel method of automated biomarker identification and quantification for monitoring treatment or disease progression through the analysis of clinical motion data captured from a standard RGB video camera. The proposed method allows the measurement of biomechanics information and analysis of its clinical significance, offering a cheap and sensitive alternative to traditional motion capture techniques. These methods and results validate the capabilities of standard RGB cameras in clinical environments to capture clinically relevant motion data. Our method focuses on generating 3D human shape and pose from 2D video data via adversarial training in a deep neural network with a self-attention mechanism to encode both spatial and temporal information. Biomarker identification using Principal Component Analysis (PCA) produces representative features from the motion data and uses these to generate a clinical report automatically. These new biomarkers can then be used to assess the success of treatment, track the progress of rehabilitation, or monitor the progression of the disease. The methods were validated in a small clinical study in which a local anaesthetic was administered to a small population with knee pain, allowing the new representative biomarkers to be confirmed as statistically significant (p < 0.05). These significant biomarkers include the cumulative acceleration of elbow flexion/extension in a sit-to-stand, as well as the smoothness of knee and elbow flexion/extension in both a squat and a sit-to-stand.
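
A minimal sketch of the PCA-based biomarker screening the abstract describes, not the authors' implementation: per-trial motion descriptors are projected onto principal components, and a paired test flags components that shift significantly after the intervention. The feature matrix, cohort size, and pre/post split below are synthetic stand-ins.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
pre = rng.normal(size=(12, 40))              # hypothetical pre-anaesthetic features
post = pre + rng.normal(0.5, 1, pre.shape)   # hypothetical post-anaesthetic shift

features = np.vstack([pre, post])
scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(features))

# Paired test per component: a low p-value flags the component as a
# candidate biomarker that responded to the intervention.
for k in range(scores.shape[1]):
    t, p = stats.ttest_rel(scores[:12, k], scores[12:, k])
    print(f"PC{k + 1}: p = {p:.4f}{'  <- significant' if p < 0.05 else ''}")
```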

2.
Sensors (Basel) ; 23(23)2023 Nov 30.
Article in English | MEDLINE | ID: mdl-38067907

ABSTRACT

This paper presents a spatiotemporal deep learning approach for mouse behavioral classification in the home-cage. Using a series of dual-stream architectures with assorted modifications for optimal performance, we introduce a novel feature sharing approach that jointly processes the streams at regular intervals throughout the network. The dataset in focus is an annotated, publicly available dataset of a singly housed mouse. We achieved even better classification accuracy by ensembling the best-performing models: an Inception-based network and an attention-based network, both of which utilize this feature sharing attribute. Furthermore, we demonstrate through ablation studies that, for all models, the feature sharing architectures consistently outperform the conventional dual-stream design with standalone streams. In particular, the Inception-based architectures showed the largest gains from feature sharing, with accuracy increases of between 6.59% and 15.19%. The best-performing models were also further evaluated on other mouse behavioral datasets.


Subject(s)
Deep Learning , Animals , Mice
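
The feature-sharing idea described above lends itself to a compact sketch: two convolutional streams (e.g. frames and optical flow) exchange features at regular depths instead of remaining independent until a late fusion. This is a generic PyTorch illustration with made-up layer sizes, not either of the paper's architectures.

```python
import torch
import torch.nn as nn

class SharedDualStream(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.BatchNorm2d(c_out), nn.ReLU(), nn.MaxPool2d(2))
        self.a = nn.ModuleList([block(3, 32), block(32, 64), block(64, 128)])
        self.b = nn.ModuleList([block(2, 32), block(32, 64), block(64, 128)])
        # 1x1 convs mix the concatenated streams back to per-stream width
        self.share = nn.ModuleList([nn.Conv2d(64, 32, 1),
                                    nn.Conv2d(128, 64, 1),
                                    nn.Conv2d(256, 128, 1)])
        self.head = nn.Linear(256, n_classes)

    def forward(self, rgb, flow):
        x, y = rgb, flow
        for blk_a, blk_b, mix in zip(self.a, self.b, self.share):
            x, y = blk_a(x), blk_b(y)
            fused = mix(torch.cat([x, y], dim=1))  # joint processing step
            x, y = x + fused, y + fused            # share features across streams
        z = torch.cat([x.mean((2, 3)), y.mean((2, 3))], dim=1)
        return self.head(z)

logits = SharedDualStream()(torch.randn(2, 3, 64, 64), torch.randn(2, 2, 64, 64))
```
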
3.
J Med Imaging (Bellingham) ; 8(3): 034002, 2021 May.
Article in English | MEDLINE | ID: mdl-34179218

ABSTRACT

Purpose: Echocardiography is the most commonly used modality for assessing the heart in clinical practice. In an echocardiographic exam, an ultrasound probe samples the heart from different orientations and positions, thereby creating different viewpoints for assessing the cardiac function. The determination of the probe viewpoint forms an essential step in automatic echocardiographic image analysis. Approach: In this study, convolutional neural networks are used for the automated identification of 14 different anatomical echocardiographic views (more than any previous study) in a dataset of 8732 videos acquired from 374 patients. A differentiable architecture search approach was utilized to design small neural network architectures for rapid inference while maintaining high accuracy. The impact of the image quality and resolution, the size of the training dataset, and the number of echocardiographic view classes on the efficacy of the models was also investigated. Results: In contrast to the deeper classification architectures, the proposed models had a significantly lower number of trainable parameters (up to 99.9% reduction), achieved comparable classification performance (accuracy 88.4% to 96%, precision 87.8% to 95.2%, recall 87.1% to 95.1%) and delivered real-time performance with an inference time per image of 3.6 to 12.6 ms. Conclusion: Compared with the standard classification neural network architectures, the proposed models are faster and achieve comparable classification performance. They also require less training data. Such models can be used for real-time detection of the standard views.
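
To make the parameter-reduction claim concrete, a script like the following compares trainable-parameter counts between a stock deep classifier and a compact network. The compact model here is a hypothetical stand-in, not the searched architecture; torchvision's ResNet-50 serves only as a reference point.

```python
import torch
import torchvision.models as models

def n_params(m):
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

resnet = models.resnet50(num_classes=14)  # 14 echocardiographic views
print(f"ResNet-50:   {n_params(resnet):,} trainable parameters")

# A hypothetical compact network for comparison (not the searched model):
tiny = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, stride=2), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(32, 14))
print(f"Compact CNN: {n_params(tiny):,} trainable parameters")
```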

4.
IEEE J Biomed Health Inform ; 25(1): 131-142, 2021 01.
Article in English | MEDLINE | ID: mdl-32750901

ABSTRACT

Esophageal cancer is a disease with a high mortality rate. Early detection of esophageal abnormalities (i.e. precancerous and early cancerous) can improve the survival rate of patients. Deep learning-based methods have recently been proposed for detecting selected types of esophageal abnormality in endoscopic images. However, no methods have been introduced in the literature that cover detection from endoscopic videos, detection from challenging frames, or detection of more than one type of esophageal abnormality. In this paper, we present an efficient method to automatically detect different types of esophageal abnormalities from endoscopic videos. We propose a novel 3D Sequential DenseConvLstm network that extracts spatiotemporal features from the input video. Our network incorporates a 3D Convolutional Neural Network (3DCNN) and a Convolutional LSTM (ConvLSTM) to efficiently learn short- and long-term spatiotemporal features. The generated feature map is utilized by a region proposal network and an ROI pooling layer to produce bounding boxes that detect abnormality regions in each frame throughout the video. Finally, we investigate a post-processing method named Frame Search Conditional Random Field (FS-CRF) that improves the overall performance of the model by recovering missing regions in neighboring frames within the same clip. We extensively validated our model on an endoscopic video dataset that includes a variety of esophageal abnormalities. Our model achieved high performance across evaluation metrics: 93.7% recall, 92.7% precision, and 93.2% F-measure. Moreover, as no results have been reported in the literature for esophageal abnormality detection from endoscopic videos, we tested the robustness of our model on a publicly available colonoscopy video dataset, achieving polyp detection performance of 81.18% recall, 96.45% precision, and 88.16% F-measure, compared with state-of-the-art results of 78.84% recall, 90.51% precision and 84.27% F-measure on the same dataset. This demonstrates that the proposed method can be adapted to different gastrointestinal endoscopic video applications with promising performance.


Subject(s)
Early Detection of Cancer , Neural Networks, Computer , Colonoscopy , Humans , Surgical Instruments
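
A minimal ConvLSTM cell in the standard formulation, illustrating how spatiotemporal features can be accumulated over video frames before a detection head. This is a generic cell sketch, not the paper's 3D Sequential DenseConvLstm network; shapes are illustrative.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, c_in, c_hidden, k=3):
        super().__init__()
        # one convolution produces all four gates at once
        self.gates = nn.Conv2d(c_in + c_hidden, 4 * c_hidden, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g        # update cell memory
        h = o * c.tanh()         # new hidden state (a spatial feature map)
        return h, c

cell = ConvLSTMCell(64, 64)
h = c = torch.zeros(1, 64, 32, 32)
for frame_feats in torch.randn(8, 1, 64, 32, 32):  # 8 frames of CNN features
    h, c = cell(frame_feats, (h, c))
# `h` now carries short- and long-term motion context for a detector head.
```
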
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2019-2022, 2020 07.
Article in English | MEDLINE | ID: mdl-33018400

ABSTRACT

Echocardiography is the modality of choice for the assessment of left ventricle function. The left ventricle is responsible for pumping oxygen-rich blood to all parts of the body. Segmentation of this chamber from echocardiographic images is a challenging task, due to the ambiguous boundary and inhomogeneous intensity distribution. In this paper, we propose a novel deep learning model named ResDUnet. The model is based on U-Net combined with dilated convolution, where residual blocks are employed instead of the basic U-Net units to ease the training process. Each block is enriched with a squeeze-and-excitation unit for channel-wise attention and adaptive feature re-calibration. To tackle the problem of left ventricle shape and size variability, we enrich the feature concatenation in U-Net by integrating feature maps generated by cascaded dilation. Cascaded dilation broadens the receptive field in comparison with traditional convolution, which allows the generation of multi-scale information and in turn results in a more robust segmentation. Performance was evaluated on a publicly available dataset of 500 patients with large variability in terms of image quality and patient pathology. The proposed model shows a Dice similarity increase of 8.4% when compared to DeepLabv3 and 1.2% when compared to the basic U-Net architecture. Experimental results demonstrate its potential use in the clinical domain.


Subject(s)
Echocardiography , Heart Ventricles , Heart Ventricles/diagnostic imaging , Humans , Specimen Handling
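
A sketch of the building block the abstract describes: a residual block with a squeeze-and-excitation unit and optional dilation, standing in for the plain U-Net units. Channel counts and the reduction ratio are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    def __init__(self, ch, reduction=8, dilation=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(ch))
        # squeeze-and-excitation: global pool -> bottleneck -> channel gates
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.body(x)
        y = y * self.se(y)          # channel-wise feature re-calibration
        return torch.relu(x + y)    # residual connection eases training

out = SEResBlock(32, dilation=2)(torch.randn(1, 32, 64, 64))
```
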
6.
Med Biol Eng Comput ; 58(6): 1309-1323, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32253607

ABSTRACT

Speckle tracking is the most prominent technique used to estimate the regional movement of the heart from echocardiograms. In this study, we propose an optimisation-based block matching algorithm that performs speckle tracking iteratively. The proposed technique was evaluated using a publicly available synthetic echocardiographic dataset with known ground truth, covering several major vendors and healthy/ischaemic cases. The results were compared with those from classic (standard) two-dimensional block matching. The proposed method presented an average displacement error of 0.57 pixels, while classic block matching produced an average error of 1.15 pixels. When estimating the segmental/regional longitudinal strain in healthy cases, the proposed method, with an average of 0.32 ± 0.53, outperformed the classic counterpart, with an average of 3.43 ± 2.84. A similar superior performance was observed in ischaemic cases. The method does not require any additional ad hoc filtering, so it can potentially help to reduce the variability in strain measurements caused by the various post-processing techniques applied by different implementations of speckle tracking. Graphical Abstract: standard block matching versus the proposed iterative block matching approach.


Subject(s)
Diagnosis, Computer-Assisted/methods , Echocardiography/methods , Myocardial Ischemia/diagnosis , Algorithms , Databases, Factual , Humans , Image Processing, Computer-Assisted/methods
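
A bare-bones 2D block-matching step (sum of squared differences over a search window), the baseline that the paper's iterative scheme refines; it is not the authors' algorithm. Block size and search range are illustrative, and the synthetic shift gives a sanity check.

```python
import numpy as np

def match_block(prev, curr, cy, cx, half=8, search=6):
    """Return the (dy, dx) displacement of the block centred at (cy, cx)."""
    ref = prev[cy - half:cy + half, cx - half:cx + half]
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[cy + dy - half:cy + dy + half,
                        cx + dx - half:cx + dx + half]
            ssd = np.sum((ref - cand) ** 2)   # dissimilarity of candidate block
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

rng = np.random.default_rng(1)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, (2, -1), axis=(0, 1))  # known shift for a sanity check
print(match_block(frame0, frame1, 64, 64))      # expect (2, -1)
```
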
7.
J Med Imaging (Bellingham) ; 6(2): 024001, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31001568

ABSTRACT

Segmentation of skin lesions is an important step in computer-aided diagnosis of melanoma; it is also a very challenging task due to fuzzy lesion boundaries and heterogeneous lesion textures. We present a fully automatic method for skin lesion segmentation based on deep fully convolutional networks (FCNs). We investigate a shallow encoding network to model clinically valuable prior knowledge, in which spatial filters simulating the function of simple-cell receptive fields in the primary visual cortex (V1) are considered. An effective fusing strategy using skip connections and convolution operators is then leveraged to couple the prior knowledge encoded via the shallow network with the hierarchical data-driven features learned by the FCNs for detailed segmentation of the skin lesions. To the best of our knowledge, this is the first time domain-specific hand-crafted features have been built into a deep network trained in an end-to-end manner for skin lesion segmentation. The method has been evaluated on both the ISBI 2016 and ISBI 2017 skin lesion challenge datasets. We provide comparative evidence that our newly designed network gains segmentation accuracy by coupling the prior knowledge encoded by the shallow network with the deep FCNs. Our method is robust without the need for data augmentation or comprehensive parameter tuning, and the experimental results show great promise, with effective model generalization compared to other state-of-the-art methods.
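
A sketch of the "V1-like" prior: a small bank of Gabor filters whose responses form a feature stack that a shallow encoding branch could fuse with FCN features via skip connections. All filter parameters are illustrative, not those used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """A real-valued Gabor kernel: Gaussian envelope times a cosine carrier."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

image = np.random.default_rng(0).random((128, 128))
# responses at four orientations -> a (4, H, W) prior-feature stack
bank = [gabor(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
prior_features = np.stack([convolve(image, k) for k in bank])
print(prior_features.shape)  # (4, 128, 128)
```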

8.
Sensors (Basel) ; 19(5)2019 Mar 08.
Article in English | MEDLINE | ID: mdl-30857169

ABSTRACT

Individual pig detection and tracking is an important requirement in many video-based pig monitoring applications. However, it remains a challenging task in complex scenes, due to light fluctuation, the similar appearance of pigs, shape deformations, and occlusions. To tackle these problems, we propose a robust online multiple pig detection and tracking method that does not require manual marking or physical identification of the pigs and works under both daylight and infrared (nighttime) light conditions. Our method couples a CNN-based detector and a correlation filter-based tracker via a novel hierarchical data association algorithm. The detector gains the best accuracy/speed trade-off by using features derived from multiple layers at different scales in a one-stage prediction network. We define a tag-box for each pig as the tracking target, from which features with a more local scope are extracted for learning, and multiple object tracking is conducted in a key-point tracking manner using learned correlation filters. Under challenging conditions, tracking failures are modelled based on the relations between the responses of the detector and tracker, and the data association algorithm allows the detection hypotheses to be refined; meanwhile, drifted tracks can be corrected by probing the tracking failures, followed by re-initialization of tracking. As a result, optimal tracklets can sequentially grow with online-refined detections, and tracking fragments are correctly integrated into their respective tracks while keeping the original identities. Experiments with a dataset captured on a commercial farm show that our method can robustly detect and track multiple pigs under challenging conditions. The promising performance also demonstrates the feasibility of long-term individual pig tracking in a complex environment, and thus suggests commercial potential.


Subject(s)
Algorithms , Farms , Animals , Artificial Intelligence , Image Processing, Computer-Assisted , Swine , Video Recording
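
One ingredient of coupling a detector with per-target trackers is assigning new detections to existing tracks. A common way to do this, sketched below with made-up boxes, is optimal assignment by IoU via the Hungarian algorithm; the paper's hierarchical association adds failure modelling and re-initialisation on top of a step like this.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):  # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

tracks = [(10, 10, 50, 50), (80, 80, 120, 130)]        # last known positions
detections = [(82, 78, 121, 128), (12, 9, 51, 52)]     # current-frame detections
cost = np.array([[1 - iou(t, d) for d in detections] for t in tracks])
rows, cols = linear_sum_assignment(cost)               # minimise total cost
for r, c in zip(rows, cols):
    if 1 - cost[r, c] > 0.3:  # IoU gate: otherwise treat as lost/new target
        print(f"track {r} <- detection {c} (IoU {1 - cost[r, c]:.2f})")
```
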
9.
J Med Imaging (Bellingham) ; 6(1): 014502, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30840732

ABSTRACT

Barrett's esophagus (BE) is a premalignant condition that carries an increased risk of progressing to esophageal adenocarcinoma. Classification and staging of the different changes (BE in particular) in the esophageal mucosa are challenging since they have a very similar appearance. Confocal laser endomicroscopy (CLE) is one of the newest endoscopy tools commonly used to identify the pathology type of a suspected area of the esophageal mucosa; however, it requires a well-trained physician to classify the images it produces. Here, an automatic stage classification of the esophageal mucosa is presented. The proposed model enhances the internal features of CLE images using an image filter that combines fractional integration with differentiation. Various features are then extracted at multiple scales to classify the mucosal tissue into one of four types: normal squamous (NS), gastric metaplasia (GM), intestinal metaplasia (IM or BE), and neoplasia. These feature sets are used to train two conventional classifiers: a support vector machine (SVM) and a random forest. The proposed method was evaluated on a dataset of 96 patients with 557 images of different histopathology types. The SVM classifier achieved the best performance, with 96.05% accuracy based on leave-one-patient-out cross-validation. Additionally, when the dataset was divided into 60% training and 40% testing, the model achieved an accuracy of 93.72% on the test data using the SVM. The presented model showed superior performance when compared with four state-of-the-art methods. Accurate classification is essential for the intestinal metaplasia grade, which is the most likely to develop into esophageal cancer. Not only does our method aid physicians in making a more accurate diagnosis by acting as a second opinion, but it also serves as a training aid for junior physicians who need practice in using CLE. Consequently, this work contributes to an automatic classification that facilitates early intervention and decreases the number of required biopsy samples.
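
A sketch of the leave-one-patient-out protocol reported above, using scikit-learn's grouped cross-validation with an SVM. The features, labels, and patient IDs are synthetic stand-ins for the multiscale CLE descriptors.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))            # hypothetical per-image features
y = rng.integers(0, 4, 200)               # NS / GM / IM (BE) / neoplasia
patients = rng.integers(0, 20, 200)       # patient ID for each image

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# Grouping by patient keeps images of one patient out of both the training
# and test folds simultaneously, which would otherwise inflate accuracy.
acc = cross_val_score(clf, X, y, groups=patients, cv=LeaveOneGroupOut())
print(f"mean accuracy: {acc.mean():.3f}")
```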

10.
Endosc Int Open ; 7(1): E9-E14, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30648134

ABSTRACT

Background and study aims: Video-colonoscopy, despite being the gold standard for diagnosis of colorectal lesions, has limitations including patient discomfort and risk of complications. This study assessed training characteristics and acceptability among operators of a new robotic colonoscope (RC). Materials and methods: Participants (n = 9) with varying degrees of skill and background knowledge in colonoscopy performed colonoscopies with an RC on a simulation-based training model. Quantitative procedure-related and qualitative operator-related parameters were recorded. Results: Polyp detection rate was highest in the novice group (91.67%), followed by experts (86.11%), then, equally, trainees and video gamers (79.17%). Four participants repeated the procedure at a follow-up session. Each participant improved their cecal intubation time and had the same or a higher polyp detection rate. A potential role for the RC was identified in out-of-hospital environments and as a novel diagnostic tool. Conclusions: Results from this pilot suggest that operators at all skill levels found the RC acceptable and potentially useful as a diagnostic tool. Acquisition of skills with the RC appears to progress rapidly to a clinically relevant level with simulation-based training.

11.
Neuroimage Clin ; 21: 101648, 2019.
Article in English | MEDLINE | ID: mdl-30630760

ABSTRACT

PURPOSE: To develop a statistical method of combining multimodal MRI (mMRI) of adult glial brain tumours to generate tissue heterogeneity maps that indicate tumour grade and infiltration margins. MATERIALS AND METHODS: We performed a retrospective analysis of mMRI from patients with a histological diagnosis of glioma (n = 25). 1H Magnetic Resonance Spectroscopic Imaging (MRSI) was used to label regions of "pure" low- or high-grade tumour across image types. Normal brain and oedema characteristics were defined from healthy controls (n = 10) and brain metastasis patients (n = 10), respectively. Probability density distributions (PDD) for each tissue type were extracted from intensity-normalised proton density and T2-weighted images, and from p and q diffusion maps. Superpixel segmentation and Bayesian inference were used to produce whole-brain tissue-type maps. RESULTS: Total lesion volumes derived automatically from the tissue-type maps correlated with those from manual delineation (p < 0.001, r = 0.87). Large high-grade volumes were found in all grade III & IV tumours (n = 16), in grade II gemistocyte-rich astrocytomas (n = 3) and in one astrocytoma with a histological diagnosis of grade II. For patients with known outcome (n = 20), those with survival time < 2 years (3 grade II, 2 grade III and 10 grade IV) had a high-grade volume significantly greater than zero (Wilcoxon signed rank, p < 0.0001), and also a significantly greater high-grade volume than the 5 grade II patients with survival > 2 years (Mann-Whitney, p = 0.0001). Regions classified from mMRI as oedema had non-tumour-like 1H MRS characteristics. CONCLUSIONS: 1H MRSI can label tumour tissue types, enabling the development of an mMRI tissue-type mapping algorithm with the potential to aid management of patients with glial tumours.


Subject(s)
Brain Neoplasms/pathology , Brain/pathology , Glioma/pathology , Oligodendroglioma/pathology , Adult , Aged , Algorithms , Bayes Theorem , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Magnetic Resonance Spectroscopy/methods , Male , Middle Aged , Neoplasm Grading/methods , Retrospective Studies
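
A sketch of the per-tissue probability-density idea: fit a density to samples from each labelled tissue class, then assign each region the class with the highest posterior. Gaussian KDE and one-dimensional intensities stand in for the paper's multimodal PDDs, and the flat priors are an assumption of this sketch.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# hypothetical 1D intensity samples per labelled tissue class
train = {"normal": rng.normal(0.3, 0.05, 300),
         "oedema": rng.normal(0.5, 0.07, 300),
         "tumour": rng.normal(0.7, 0.06, 300)}
priors = {k: 1 / 3 for k in train}               # flat priors for the sketch
pdfs = {k: gaussian_kde(v) for k, v in train.items()}

def classify(x):
    # posterior up to a constant: likelihood x prior; pick the argmax
    post = {k: pdfs[k](x)[0] * priors[k] for k in pdfs}
    return max(post, key=post.get)

print([classify(v) for v in (0.28, 0.52, 0.74)])
```
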
12.
Int J Comput Assist Radiol Surg ; 14(4): 611-621, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30666547

ABSTRACT

PURPOSE: This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods for automatically identifying esophageal adenocarcinoma (EAC) regions in high-definition white light endoscopy (HD-WLE) images. METHOD: Several state-of-the-art object detection methods based on Convolutional Neural Networks (CNNs) were adapted to automatically detect abnormal regions in esophageal HD-WLE images, utilizing VGG-16 as the backbone architecture for feature extraction. These methods are the Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN and the Single-Shot Multibox Detector (SSD). For the evaluation of the different methods, 100 images from 39 patients, manually annotated by five experienced clinicians as ground truth, were used for testing. RESULTS: Experimental results illustrate that the SSD and Faster R-CNN networks show promising results, with the SSD outperforming the other methods by achieving a sensitivity of 0.96, specificity of 0.92 and F-measure of 0.94. Additionally, the Average Recall Rate of the Faster R-CNN in accurately locating the EAC region is 0.83. CONCLUSION: In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation of the methods proved their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may help early detection and treatment of EAC, and can also improve automatic tumor segmentation to monitor its growth and treatment outcome.


Subject(s)
Adenocarcinoma/diagnosis , Deep Learning , Early Diagnosis , Esophageal Neoplasms/diagnosis , Neural Networks, Computer , Humans , Reproducibility of Results
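
For API shape, a sketch of running an off-the-shelf Faster R-CNN from torchvision, the same detector family the study adapts. The study retrained on annotated HD-WLE images; this uses stock COCO weights purely as a placeholder (loading them requires a network download), and the random tensor stands in for an endoscopy frame.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 512, 512)            # stand-in frame, values in [0, 1]
with torch.no_grad():
    out = model([image])[0]                # dict of boxes, labels, scores
keep = out["scores"] > 0.5                 # simple confidence threshold
print(out["boxes"][keep], out["labels"][keep])
```
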
13.
IEEE Trans Med Imaging ; 37(6): 1310-1321, 2018 06.
Article in English | MEDLINE | ID: mdl-29870361

ABSTRACT

Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. It can not only reduce scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Unlike parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MR images from far less raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, bridging a substantial gap between conventional non-learning methods, which work only on data from a single image, and prior knowledge from large training datasets. In particular, a novel conditional Generative Adversarial Network-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net-based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we couple the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.


Subject(s)
Data Compression/methods , Deep Learning , Magnetic Resonance Imaging/methods , Algorithms , Humans
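
A sketch of the kind of composite generator loss the abstract describes: an image-domain content term, a frequency-domain consistency term, and an adversarial term. The weights and tensors here are illustrative stand-ins, not DAGAN's published configuration.

```python
import torch
import torch.nn.functional as F

def generator_loss(recon, target, disc_fake_logits,
                   w_img=15.0, w_freq=0.1, w_adv=1.0):  # illustrative weights
    img_loss = F.l1_loss(recon, target)                 # image-domain content term
    freq_loss = F.l1_loss(torch.fft.fft2(recon).abs(),  # enforce similarity in
                          torch.fft.fft2(target).abs()) # the frequency domain too
    adv_loss = F.binary_cross_entropy_with_logits(      # push the generator to
        disc_fake_logits, torch.ones_like(disc_fake_logits))  # fool the critic
    return w_img * img_loss + w_freq * freq_loss + w_adv * adv_loss

loss = generator_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64),
                      torch.randn(2, 1))
```
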
14.
Comput Methods Programs Biomed ; 157: 69-84, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29477436

ABSTRACT

BACKGROUND: Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. METHODS: We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using information across the multimodal MRI dataset. For each supervoxel, a variety of features are extracted, including histograms of texton descriptors calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistical features. These features are fed into a random forest (RF) classifier to classify each supervoxel as tumour core, oedema or healthy brain tissue. RESULTS: The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal images of patients and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity for tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results on the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. CONCLUSION: The method demonstrates promising results for the segmentation of brain tumour. Adding features from multimodal MRI images can substantially increase segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.


Subject(s)
Brain Neoplasms/diagnostic imaging , Diffusion Tensor Imaging/methods , Magnetic Resonance Imaging/methods , Multimodal Imaging/methods , Supervised Machine Learning , Algorithms , Brain Neoplasms/pathology , Datasets as Topic , Humans , Neoplasm Grading
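
A sketch of the supervoxel-level classification stage: each supervoxel is represented by a feature vector (texton histograms plus intensity statistics in the paper) and labelled core, oedema or healthy. Feature dimensions and data below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 48))        # hypothetical per-supervoxel features
y = rng.integers(0, 3, 3000)           # 0 healthy, 1 oedema, 2 tumour core

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {rf.score(Xte, yte):.3f}")
# Mapping each supervoxel's predicted label back onto its voxels yields
# the final segmentation.
```
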
15.
Med Phys ; 45(4): 1562-1576, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29480931

ABSTRACT

PURPOSE: Atrial fibrillation (AF) is the most common heart rhythm disorder and causes considerable morbidity and mortality, resulting in a large public health burden that is increasing as the population ages. It is associated with atrial fibrosis, the amount and distribution of which can be used to stratify patients and to guide subsequent electrophysiology ablation treatment. Atrial fibrosis may be assessed noninvasively using late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI) where scar tissue is visualized as a region of signal enhancement. However, manual segmentation of the heart chambers and of the atrial scar tissue is time consuming and subject to interoperator variability, particularly as image quality in AF is often poor. In this study, we propose a novel fully automatic pipeline to achieve accurate and objective segmentation of the heart (from MRI Roadmap data) and of scar tissue within the heart (from LGE MRI data) acquired in patients with AF. METHODS: Our fully automatic pipeline uniquely combines: (a) a multiatlas-based whole heart segmentation (MA-WHS) to determine the cardiac anatomy from an MRI Roadmap acquisition which is then mapped to LGE MRI, and (b) a super-pixel and supervised learning based approach to delineate the distribution and extent of atrial scarring in LGE MRI. We compared the accuracy of the automatic analysis to manual ground truth segmentations in 37 patients with persistent long-standing AF. RESULTS: Both our MA-WHS and atrial scarring segmentations showed accurate delineations of cardiac anatomy (mean Dice = 89%) and atrial scarring (mean Dice = 79%), respectively, compared to the established ground truth from manual segmentation. In addition, compared to the ground truth, we obtained 88% segmentation accuracy, with 90% sensitivity and 79% specificity. Receiver operating characteristic analysis achieved an average area under the curve of 0.91. CONCLUSION: Compared with previously studied methods with manual interventions, our innovative pipeline demonstrated comparable results, but was computed fully automatically. The proposed segmentation methods allow LGE MRI to be used as an objective assessment tool for localization, visualization, and quantitation of atrial scarring and to guide ablation treatment.


Subject(s)
Atrial Fibrillation/pathology , Cicatrix/diagnostic imaging , Contrast Media , Gadolinium , Heart Atria/pathology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Atrial Fibrillation/diagnostic imaging , Automation , Heart Atria/diagnostic imaging , Humans
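
The Dice coefficient quoted for both the anatomy and scar segmentations above has a one-line definition; a minimal implementation over binary masks, with made-up example masks:

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((64, 64)); gt[20:40, 20:40] = 1      # manual ground truth
pred = np.zeros((64, 64)); pred[22:42, 21:41] = 1  # automatic segmentation
print(f"Dice = {dice(pred, gt):.2f}")
```
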
16.
J Med Imaging (Bellingham) ; 4(2): 024001, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28439522

ABSTRACT

Owing to the inconsistent image quality of routine obstetric ultrasound (US) scans, which leads to large intraobserver and interobserver variability, the aim of this study is to develop a quality-assured, fully automated US fetal head measurement system. A texton-based fetal head segmentation is used as a prerequisite step to obtain the head region. Textons are calculated using a filter bank designed specifically for the US fetal head structure. Both shape- and anatomy-based features calculated from the segmented head region are then fed into a random forest classifier to determine the quality of the image (e.g., whether the image is acquired from a correct imaging plane), from which the fetal head measurements [biparietal diameter (BPD), occipital-frontal diameter (OFD), and head circumference (HC)] are derived. The experimental results show a good performance of our method for US quality assessment and fetal head measurement. The overall precision for automatic image quality assessment is 95.24% with 87.5% sensitivity and 100% specificity, while the segmentation performance shows 99.27% ([Formula: see text]) accuracy, 97.07% ([Formula: see text]) sensitivity, a maximum symmetric contour distance of 2.23 mm ([Formula: see text]), and an average symmetric contour distance of 0.84 mm ([Formula: see text]). Statistical analysis using a paired t-test and Bland-Altman plots indicates that the 95% limits of agreement for interobserver variability between the automated measurements and the senior expert's measurements are 2.7 mm for BPD, 5.8 mm for OFD, and 10.4 mm for HC, whereas the mean differences are [Formula: see text], [Formula: see text], and [Formula: see text], respectively. These narrow 95% limits of agreement indicate a good level of consistency between the automated and the senior expert's measurements.
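
Once a head ellipse has been fitted, the three measurements follow directly: BPD and OFD map to the minor and major axes, and HC is the ellipse perimeter, commonly computed with Ramanujan's approximation. A sketch with illustrative axis values (this maps axes to measurements generically; it is not the paper's calibration):

```python
import math

def head_measurements(major_mm, minor_mm):
    a, b = major_mm / 2, minor_mm / 2            # semi-axes
    h = ((a - b) ** 2) / ((a + b) ** 2)
    # Ramanujan's perimeter approximation for an ellipse
    hc = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    return {"BPD": minor_mm, "OFD": major_mm, "HC": hc}

print(head_measurements(major_mm=95.0, minor_mm=78.0))
```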

17.
Int J Comput Assist Radiol Surg ; 12(2): 183-203, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27651330

ABSTRACT

PURPOSE: We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI). METHODS: The method is based on a superpixel technique and classification of each superpixel. A number of novel image features, including intensity-based features, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel as tumour or non-tumour. RESULTS: The proposed method is evaluated on two datasets: (1) our own clinical dataset of 19 FLAIR MRI images of patients with gliomas of grade II to IV, and (2) the BRATS 2012 dataset of 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively, while for the BRATS dataset the corresponding results are 88.09%, 6% and 0.88, respectively. CONCLUSIONS: The method provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.


Subject(s)
Brain Neoplasms/diagnostic imaging , Glioma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Adult , Aged , Brain/diagnostic imaging , Female , Humans , Male , Middle Aged , Reproducibility of Results , Support Vector Machine , Young Adult
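
The superpixel decomposition that precedes the per-region classification can be sketched with SLIC from recent scikit-image (the abstract does not name the specific superpixel algorithm, so SLIC is an assumption here, and the parameters are illustrative):

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.default_rng(0).random((128, 128))  # stand-in FLAIR slice
# channel_axis=None tells SLIC the image is single-channel (grayscale)
labels = slic(image, n_segments=200, compactness=0.1, channel_axis=None)
print(len(np.unique(labels)), "superpixels")  # features are extracted per label
```
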
18.
Phys Med Biol ; 61(3): 1095-115, 2016 Feb 07.
Article in English | MEDLINE | ID: mdl-26758386

ABSTRACT

This paper presents a supervised texton-based approach for the accurate segmentation and measurement of the ultrasound fetal head (BPD, OFD, HC) and femur (FL). The method consists of several steps. First, a non-linear diffusion technique is used to reduce speckle noise. Then, based on the assumption that cross-sectional intensity profiles of the skull and femur can be approximated by Gaussian-like curves, a multi-scale, multi-orientation filter bank is designed to extract texton features specific to ultrasound fetal anatomical structure. The extracted texton cues, together with multi-scale local brightness, are then built into a unified framework for boundary detection of the ultrasound fetal head and femur. Finally, for the fetal head, a direct least-squares ellipse fitting method is used to construct a closed head contour, whilst for the fetal femur a closed contour is produced by connecting the detected femur boundaries. The presented method shows promise for clinical application. Overall, the fetal head segmentation and measurement results of our method are comparable with the inter-observer difference of experts, with a best average precision of 96.85%, a maximum symmetric contour distance (MSD) of 1.46 mm, and an average symmetric contour distance (ASD) of 0.53 mm; for the fetal femur, the overall performance of our method is better than the inter-observer difference of experts, with an average precision of 84.37%, an MSD of 2.72 mm and an ASD of 0.31 mm.


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Ultrasonography, Prenatal/methods , Female , Femur/diagnostic imaging , Head/diagnostic imaging , Humans , Pregnancy , Signal-To-Noise Ratio
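
A sketch of the final contour step, least-squares ellipse fitting to detected skull-boundary points; scikit-image's EllipseModel provides a stable fit and stands in for the direct least-squares method named in the abstract. The boundary points below are synthetic.

```python
import numpy as np
from skimage.measure import EllipseModel

t = np.linspace(0, 2 * np.pi, 60)
pts = np.column_stack([100 + 50 * np.cos(t), 80 + 35 * np.sin(t)])
pts += np.random.default_rng(0).normal(0, 1.0, pts.shape)  # boundary noise

model = EllipseModel()
assert model.estimate(pts)              # fit returns False on failure
xc, yc, a, b, theta = model.params      # centre, semi-axes, orientation
print(f"centre ({xc:.1f}, {yc:.1f}), semi-axes ({a:.1f}, {b:.1f})")
```
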
19.
IEEE Trans Biomed Eng ; 62(3): 948-59, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25438299

ABSTRACT

Accurate and automatic colon segmentation from CT images is a crucial step in many clinical applications of CT colonography, including computer-aided detection (CAD) of colon polyps, 3-D virtual flythrough of the colon, and prone/supine registration. However, the existence of adjacent air-filled organs such as the lung, stomach, and small intestine, and the collapse of the colon due to poor insufflation, make accurate segmentation of the colon difficult. Extra-colonic components can be categorized into two types based on their 3-D connection to the colon: detached and attached extra-colonic components (DEC and AEC, respectively). In this paper, we propose graph inference methods to remove extra-colonic components and achieve a high-quality segmentation. We first decompose each 3-D air-filled object into a set of 3-D regions. A classifier trained with region-level features can then identify colon regions among non-colon regions. After removing obvious DEC, we remove the remaining DEC by modeling the global anatomic structure with an a priori topological constraint and solving a graph inference problem using semantic information provided by a multiclass classifier. Finally, we remove AEC by modeling the regions within each 3-D object with a hierarchical conditional random field, solved by graph cut. Experimental results demonstrate that our method outperforms a purely discriminative learning method in detecting true colon regions, while decreasing extra-colonic components in challenging clinical data that include collapsed cases.


Subject(s)
Colonography, Computed Tomographic/methods , Imaging, Three-Dimensional/methods , Algorithms , Colon/diagnostic imaging , Humans , Semantics
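
The first decomposition step, labelling each 3-D air-filled object as a connected component before region-level classification and graph inference prune non-colonic ones, can be sketched with scipy's connected-component labelling. The volume below is a synthetic stand-in for a thresholded CT scan.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
volume = rng.random((40, 64, 64)) < 0.02        # sparse stand-in "air" voxels
air_labels, n = ndimage.label(volume)           # 3-D connected components
sizes = ndimage.sum(volume, air_labels, index=range(1, n + 1))  # voxel counts
print(n, "components; largest:", int(max(sizes)), "voxels")
# Region-level features would be computed per component for classification.
```
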
20.
IEEE Trans Image Process ; 22(3): 884-97, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23008258

ABSTRACT

Blind motion deblurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. Although significant progress has been made on this problem, existing methods, when applied to highly diverse natural images, are still far from stable. This paper focuses on the robustness of blind motion deblurring methods toward image diversity, a critical problem that has been neglected for years. We classify the existing methods into two schemes and analyze their robustness using an image set consisting of 1.2 million natural images. The first scheme is edge-specific, as it relies on the detection and prediction of large-scale step edges; it is sensitive to the diversity of image edges in natural images. The second scheme is nonedge-specific and explores various image statistics, such as prior distributions; it is sensitive to statistical variation across images. Based on this analysis, we address robustness by proposing a novel nonedge-specific adaptive scheme (NEAS), which features a new prior that is adaptive to the variety of textures in natural images. By comparing the performance of NEAS against existing methods on a very large image set, we demonstrate its advance beyond the state of the art.


Subject(s)
Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Motion , Reproducibility of Results , Sensitivity and Specificity