Results 1 - 20 of 22
1.
Med Phys ; 51(8): 5457-5467, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38477634

ABSTRACT

BACKGROUND: Accurate measurement of ureteral diameters plays a pivotal role in diagnosing and monitoring urinary tract obstruction (UTO). While three-dimensional magnetic resonance urography (3D MRU) represents a significant advancement in imaging, traditional manual methods for assessing ureteral diameters are labor-intensive and inherently variable. Deep learning has led to a paradigm shift in medical image analysis, yet a comprehensive automated tool for the precise segmentation and measurement of ureters in MR images remains an unaddressed challenge. PURPOSE: To quantitatively measure the ureter on 3D MRU images using a deep learning model. METHODS: A retrospective cohort of 445 3D MRU scans (443 patients, 52 ± 18 years; 217 female patients) was collected and split into training, validation, and internal testing cohorts. A 3D V-Net model was trained for urinary tract segmentation, and a post-processing algorithm was developed for ureteral measurements. Segmentation accuracy was evaluated using the Dice similarity coefficient (DSC) and the volume intraclass correlation coefficient (ICC), with ground-truth segmentations provided by experienced radiologists. The external cohort comprised 50 scans (50 patients, 55 ± 21 years; 30 female patients), and the model-predicted ureteral diameter measurements were compared with manual measurements to assess system performance. The ureteral diameter parameters obtained by the different measurement methods (ground truth, auto-segmentation with automatic diameter extraction, and manual segmentation with automatic diameter extraction) were compared with Friedman tests and post hoc Dunn tests. The effectiveness of UTO diagnosis was assessed by comparing receiver operating characteristic (ROC) curves and their respective areas under the curve (AUC) between methods.
RESULTS: In both the internal test and external cohorts, the mean DSC values for the bilateral ureters exceeded 0.70. The ICCs for bilateral ureter volume obtained by comparing model and manual segmentation were all greater than 0.96 (p < 0.05), except for the right ureter in the internal test cohort, for which the ICC was 0.773 (p < 0.05). The mean DSCs for interobserver and intraobserver reliability were all above 0.97. The maximum diameter of the ureter exhibited no statistically significant differences in either the dilated (p = 0.08) or the non-dilated (p = 0.32) ureters across the three measurement methods. The AUCs of ground truth, auto-segmentation with automatic diameter extraction, and manual segmentation with automatic diameter extraction in diagnosing UTO were 0.988 (95% CI: 0.934, 1.000), 0.961 (95% CI: 0.893, 0.991), and 0.979 (95% CI: 0.919, 0.998), respectively. There was no statistically significant difference between the AUCs of the different methods (p > 0.05). CONCLUSION: The proposed deep learning model and post-processing algorithm provide an effective means for the quantitative evaluation of urinary diseases on 3D MRU images.
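The Dice similarity coefficient (DSC) used throughout these results is a standard overlap metric between binary masks; a minimal sketch of its computation (illustrative only, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# toy 3D example: two overlapping slabs of a 4x4x4 volume
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[:2] = 1   # 32 voxels
b[1:3] = 1  # 32 voxels, 16 of them shared with a
print(dice_coefficient(a, b))  # 2*16 / (32+32) = 0.5
```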


Subject(s)
Deep Learning , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Ureter , Urography , Humans , Ureter/diagnostic imaging , Female , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Middle Aged , Male , Urography/methods , Retrospective Studies , Adult , Aged
2.
J Appl Clin Med Phys ; 25(6): e14331, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38478388

ABSTRACT

BACKGROUND: Accurate segmentation of lung nodules can help doctors obtain more accurate results and protocols in early lung cancer diagnosis and treatment planning, so that patients can be detected and treated at an early stage and lung cancer mortality can be reduced. PURPOSE: Improvement of lung nodule segmentation accuracy has been limited by the heterogeneous appearance of nodules in the lungs, the imbalance between segmentation targets and background pixels, and other factors. We propose a new 2.5D network model for lung nodule segmentation. The model improves the extraction of nodule edge information and fuses intra-slice and inter-slice features, making good use of the three-dimensional structural information of lung nodules to improve segmentation accuracy. METHODS: Our approach builds on a typical encoder-decoder network structure. The improved model captures the features of nodules in both 3D and 2D CT images; it complements the segmentation target's features and enhances the texture features at the edges of the nodules through a dual-branch feature fusion module (DFFM) and a reverse attention context module (RACM); and it employs central pooling instead of max pooling to preserve features around the target and suppress edge-irrelevant features, further improving segmentation performance. RESULTS: We evaluated this method on 1186 nodules from the LUNA16 dataset. Averaged over ten-fold cross-validation, the proposed method achieved a mean Dice similarity coefficient (mDSC) of 84.57%, a mean overlapping error (mOE) of 18.73%, and an average processing time of about 2.07 s per case.
Moreover, our results were compared with inter-radiologist agreement on the LUNA16 dataset, and the average difference was 0.74%. CONCLUSION: The experimental results show that our method improves the accuracy of pulmonary nodule segmentation while taking less time than most 3D segmentation methods.
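A common way to realise a 2.5D input of the kind this model uses is to stack each CT slice with its neighbouring slices along a channel axis, giving the network inter-slice context at close to 2D cost. The helper below is a hypothetical sketch of that general idea, not the paper's DFFM/RACM implementation:

```python
import numpy as np

def make_25d_input(volume, index, context=1):
    """Stack a slice with its neighbours along a channel axis.

    volume: (D, H, W) CT volume; index: target slice; context: number of
    slices taken on each side. Edge indices are clamped so the output
    shape is fixed at (2*context+1, H, W).
    """
    depth = volume.shape[0]
    ids = [min(max(index + o, 0), depth - 1)
           for o in range(-context, context + 1)]
    return np.stack([volume[i] for i in ids], axis=0)

vol = np.arange(5 * 2 * 2).reshape(5, 2, 2).astype(np.float32)
x = make_25d_input(vol, index=0, context=1)
print(x.shape)  # (3, 2, 2); slice 0 is repeated at the clamped edge
```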


Subject(s)
Algorithms , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Imaging, Three-Dimensional/methods , Solitary Pulmonary Nodule/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods
3.
J Magn Reson Imaging ; 60(3): 1165-1175, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38149750

ABSTRACT

BACKGROUND: Cerebral microbleeds (CMB) are indicators of severe cerebral small vessel disease (CSVD) that can be identified through hemosiderin-sensitive sequences in MRI. Here, quantitative susceptibility mapping (QSM) and deep learning were applied to detect CMBs in MRI. PURPOSE: To propose a two-stage deep learning pipeline for automatic CMB detection on QSM. STUDY TYPE: Retrospective. SUBJECTS: A total of 1843 CMBs from 393 patients (69 ± 12 years) with cerebral small vessel disease were included in this study. Seventy-eight subjects (70 ± 13 years) were used as an external testing cohort. FIELD STRENGTH/SEQUENCE: 3 T/QSM. ASSESSMENT: The proposed pipeline consisted of two stages. In stage I, a 2.5D fast radial symmetry transform (FRST) algorithm along with a one-layer convolutional network was used to identify CMB candidate regions in QSM images. In stage II, a V-Net was utilized to reduce false positives. The V-Net was trained using CMB and non-CMB labels, which allowed for high-level feature extraction and differentiation between CMBs and CMB mimics such as vessels. The location of each CMB was assessed according to the microbleeds anatomical rating scale (MARS) system. STATISTICAL TESTS: The sensitivity and positive predictive value (PPV) were reported to evaluate the performance of the model. The number of false positives per subject was also presented. RESULTS: Our pipeline demonstrated high sensitivities of up to 94.9% at stage I and 93.5% at stage II. The overall sensitivity was 88.9%, and the false positive rate per subject was 2.87. With respect to MARS, sensitivities above 85% were observed for nine different brain regions. DATA CONCLUSION: We have presented a deep learning pipeline for detecting CMB in a CSVD cohort, along with a semi-automated MARS scoring system using the proposed method. Our results demonstrate the successful application of deep learning for CMB detection on QSM, outperforming previous handcrafted methods.
LEVEL OF EVIDENCE: 2 TECHNICAL EFFICACY: Stage 2.
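The sensitivity and PPV reported for the detection pipeline reduce to simple ratios of true and false detections; the sketch below uses illustrative counts, not the study's raw data:

```python
def detection_metrics(tp, fp, fn, n_subjects):
    """Sensitivity, positive predictive value, and false positives
    per subject from raw detection counts."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    fp_per_subject = fp / n_subjects
    return sensitivity, ppv, fp_per_subject

# illustrative counts only (not the paper's raw numbers)
sens, ppv, fp_rate = detection_metrics(tp=160, fp=20, fn=20, n_subjects=78)
print(round(sens, 3), round(ppv, 3))  # 0.889 0.889
```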


Subject(s)
Cerebral Hemorrhage , Cerebral Small Vessel Diseases , Deep Learning , Magnetic Resonance Imaging , Humans , Cerebral Small Vessel Diseases/diagnostic imaging , Male , Female , Magnetic Resonance Imaging/methods , Aged , Retrospective Studies , Cerebral Hemorrhage/diagnostic imaging , Middle Aged , Algorithms , Brain/diagnostic imaging , Sensitivity and Specificity , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
4.
Acta Radiol ; 64(12): 3015-3023, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37787110

ABSTRACT

BACKGROUND: Automatic segmentation has emerged as a promising technique for the diagnosis of spinal conditions. PURPOSE: To design and evaluate a deep convolutional network for segmenting the intervertebral disc, spinal canal, facet joint, and herniated disc on magnetic resonance imaging (MRI) scans. MATERIAL AND METHODS: MRI scans of 70 patients with disc herniation were gathered and manually annotated by radiologists. A novel deep neural network was developed, comprising 3D squeeze-and-excitation blocks and multi-scale feature extraction blocks for automated segmentation of spinal structures and lesions. To address the issue of class imbalance, a weighted cross-entropy loss was introduced for training. In addition, semi-supervised segmentation was employed to reduce annotation labor costs. RESULTS: The proposed model achieved 77.67% mean intersection over union, with gains of 9.56% and 11.11% over the typical V-Net and U-Net, respectively, outperforming the other models in ablation experiments. In addition, the semi-supervised segmentation method was shown to be effective. CONCLUSION: The 3D multi-scale feature extraction and recalibration network achieved excellent segmentation performance for the intervertebral disc, spinal canal, facet joint, and herniated disc, outperforming typical encoder-decoder networks.
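Weighted cross-entropy of the kind used here counters class imbalance by up-weighting rare foreground classes in the loss. A minimal NumPy sketch (the paper does not specify its exact weighting scheme, so the weights below are purely illustrative):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean weighted cross-entropy over voxels.

    probs: (N, C) softmax outputs; labels: (N,) integer class ids;
    class_weights: (C,) weights, larger for rare classes.
    """
    eps = 1e-12
    n = labels.shape[0]
    picked = probs[np.arange(n), labels]  # probability of the true class
    w = class_weights[labels]             # per-voxel weight
    return float(np.mean(-w * np.log(picked + eps)))

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
labels = np.array([0, 1, 0])
# up-weight the rare lesion class (index 1)
loss = weighted_cross_entropy(probs, labels, np.array([1.0, 5.0]))
print(round(loss, 4))  # ≈ 0.5259
```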


Subject(s)
Intervertebral Disc Displacement , Spinal Diseases , Humans , Intervertebral Disc Displacement/diagnostic imaging , Radiologists , Weight Loss , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
5.
Magn Reson Imaging ; 103: 145-155, 2023 11.
Article in English | MEDLINE | ID: mdl-37406744

ABSTRACT

RATIONALE AND OBJECTIVES: Quantification of 129Xe MRI relies on accurate segmentation of the thoracic cavity, typically performed manually using a combination of 1H and 129Xe scans. This can be accelerated by using convolutional neural networks (CNNs) that segment only the 129Xe scan. However, this task is complicated by peripheral ventilation defects, which requires training CNNs with large, diverse datasets. Here, we accelerate the creation of training data by synthesizing 129Xe images with a variety of defects, and use these data to train a 3D model that provides thoracic cavity segmentation from 129Xe ventilation MRI alone. MATERIALS AND METHODS: Training and testing data consisted of 22 and 33 3D 129Xe ventilation images, respectively. Training data were expanded to 484 images using template-based augmentation, while an additional 298 images were synthesized using the Pix2Pix model. These data were used to train both a 2D U-Net and a 3D V-Net-based segmentation model using a combination of Dice-Focal and anatomical-constraint loss functions. Segmentation performance was compared using Dice coefficients calculated over the entire lung and within ventilation defects. RESULTS: The performance of both the U-Net and the 3D model was improved by including synthetic training data. The 3D models performed significantly better than the U-Net, and the 3D model trained with synthetic 129Xe images exhibited the highest overall Dice score of 0.929. Moreover, the addition of synthetic training data improved the Dice score in ventilation defect regions from 0.545 to 0.588 for the U-Net and from 0.739 to 0.765 for the 3D model. CONCLUSION: It is feasible to obtain high-quality segmentations from the 129Xe scan alone using 3D models trained with additional synthetic images.


Subject(s)
Protons , Thoracic Cavity , Neural Networks, Computer , Magnetic Resonance Imaging , Lung/diagnostic imaging , Image Processing, Computer-Assisted/methods
6.
Comput Biol Med ; 160: 106954, 2023 06.
Article in English | MEDLINE | ID: mdl-37130501

ABSTRACT

Accurate segmentation of the left ventricle (LV) is crucial for evaluating myocardial perfusion SPECT (MPS) and assessing LV function. In this study, a novel method combining deep learning with shape priors was developed and validated to extract the LV myocardium and automatically measure LV functional parameters. The method integrates a three-dimensional (3D) V-Net with a shape deformation module that incorporates shape priors generated by a dynamic programming (DP) algorithm to guide its output during training. A retrospective analysis was performed on an MPS dataset comprising 31 subjects with no or mild ischemia, 32 subjects with moderate ischemia, and 12 subjects with severe ischemia. Myocardial contours were manually annotated as the ground truth. A 5-fold stratified cross-validation was used to train and validate the models. Clinical performance was evaluated by measuring LV end-systolic volume (ESV), end-diastolic volume (EDV), left ventricular ejection fraction (LVEF), and scar burden from the extracted myocardial contours. There was excellent agreement between the segmentation results of our proposed model and the ground truth, with Dice similarity coefficients (DSC) of 0.9573 ± 0.0244, 0.9821 ± 0.0137, and 0.9903 ± 0.0041, and Hausdorff distances (HD) of 6.7529 ± 2.7334 mm, 7.2507 ± 3.1952 mm, and 7.6121 ± 3.0134 mm in extracting the LV endocardium, myocardium, and epicardium, respectively. Furthermore, the correlation coefficients between LVEF, ESV, EDV, stress scar burden, and rest scar burden measured from our model results and the ground truth were 0.92, 0.958, 0.952, 0.972, and 0.958, respectively. The proposed method achieved high accuracy in extracting LV myocardial contours and assessing LV function.
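Once ESV and EDV are measured from the extracted contours, the LV functional parameters follow from a simple formula: LVEF = (EDV − ESV) / EDV × 100. A sketch with illustrative volumes (not values from the study):

```python
def lv_function_parameters(edv_ml, esv_ml):
    """Stroke volume (mL) and ejection fraction (%) from
    end-diastolic and end-systolic volumes in mL."""
    stroke_volume = edv_ml - esv_ml
    lvef_percent = 100.0 * stroke_volume / edv_ml
    return stroke_volume, lvef_percent

sv, lvef = lv_function_parameters(edv_ml=120.0, esv_ml=50.0)
print(sv, round(lvef, 1))  # 70.0 58.3
```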


Subject(s)
Deep Learning , Heart Ventricles , Humans , Stroke Volume , Retrospective Studies , Heart Ventricles/diagnostic imaging , Heart Ventricles/pathology , Cicatrix , Ventricular Function, Left , Ischemia , Tomography, Emission-Computed, Single-Photon/methods , Perfusion
7.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 40(2): 226-233, 2023 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-37139752

ABSTRACT

Magnetic resonance (MR) imaging is an important tool for prostate cancer diagnosis, and accurate segmentation of MR prostate regions by computer-aided diagnostic techniques is important for the diagnosis of prostate cancer. In this paper, we propose an improved end-to-end three-dimensional image segmentation network, based on the traditional V-Net, in order to provide more accurate image segmentation results. Firstly, we fused a soft attention mechanism into the traditional V-Net's skip connections and combined short skip connections with small convolution kernels to further improve the network's segmentation accuracy. The prostate region was then segmented using the Prostate MR Image Segmentation 2012 (PROMISE 12) challenge dataset, and the model was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The DSC and HD values of the model reached 0.903 and 3.912 mm, respectively. The experimental results show that the proposed algorithm provides more accurate three-dimensional segmentation results, can accurately and efficiently segment prostate MR images, and provides a reliable basis for clinical diagnosis and treatment.
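A soft attention mechanism on a skip connection, in its simplest form, multiplies the encoder features by sigmoid-derived weights before they reach the decoder. The sketch below is a deliberately simplified, parameter-free version of that idea (real attention gates use learned convolutions to produce the weights), not the authors' exact module:

```python
import numpy as np

def soft_attention_gate(skip, gate):
    """Soft attention on a skip connection (simplified, no learned params).

    skip: (C, H, W) encoder features; gate: (C, H, W) decoder features.
    A sigmoid of their sum yields per-voxel weights in (0, 1) that
    rescale the skip features before concatenation.
    """
    attn = 1.0 / (1.0 + np.exp(-(skip + gate)))  # element-wise sigmoid
    return skip * attn

skip = np.ones((1, 2, 2))
gate = np.zeros((1, 2, 2))
out = soft_attention_gate(skip, gate)
print(out[0, 0, 0])  # 1 * sigmoid(1) ≈ 0.731
```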


Subject(s)
Magnetic Resonance Imaging , Prostatic Diseases , Humans , Male , Magnetic Resonance Imaging/methods , Prostatic Diseases/diagnostic imaging
8.
Diagnostics (Basel) ; 13(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36832138

ABSTRACT

Brain tumors have been the subject of research for many years. They are typically classified into two main groups, benign and malignant, and the most common malignant brain tumor type is glioma. Different imaging technologies can be used in the diagnosis of glioma; among them, MRI is the most preferred due to its high-resolution image data. However, detecting gliomas in a huge set of MRI data can be challenging for practitioners. To address this, many Deep Learning (DL) models based on Convolutional Neural Networks (CNNs) have been proposed for glioma detection. However, which CNN architecture works efficiently under various conditions, including the development environment and programming aspects as well as performance, has not been studied so far. In this research work, therefore, the purpose is to investigate the impact of two major programming environments (namely, MATLAB and Python) on the accuracy of CNN-based glioma detection from Magnetic Resonance Imaging (MRI) images. To this end, experiments on the Brain Tumor Segmentation (BraTS) dataset (2016 and 2017), consisting of multiparametric MRI images, are performed by implementing two popular CNN architectures, the three-dimensional (3D) U-Net and the V-Net, in both programming environments. From the results, it is concluded that the use of Python with Google Colaboratory (Colab) can be highly useful in the implementation of CNN-based models for glioma detection. Moreover, the 3D U-Net model is found to perform better, attaining a high accuracy on the dataset. The authors believe that these results provide useful information to the research community for the appropriate implementation of DL approaches for brain tumor detection.

9.
Adv Exp Med Biol ; 1395: 165-170, 2022.
Article in English | MEDLINE | ID: mdl-36527632

ABSTRACT

Near-infrared optical tomography (NIROT), a promising imaging modality for early detection of oxygenation in the brain of preterm infants, requires data acquisition at the tissue surface and thus an image reconstruction adaptable to cephalometric variations and surface topologies. Widely used model-based reconstruction methods come with the drawback of huge computational cost. Neural networks move this computational load to an offline training phase, allowing much faster reconstruction. Our aim is a data-driven volumetric image reconstruction that generalises well to different surfaces and increases reconstruction speed, localisation accuracy and image quality. We propose a hybrid convolutional neural network (hCNN) based on the well-known V-Net architecture to learn inclusion localisation and absorption coefficients of heterogeneous arbitrary shapes via a joint cost function. We achieved an average reconstruction time of 30.45 s, a time reduction of 89%, and inclusion detection with an average Dice score of 0.61. The CNN is flexible to surface topologies and compares well in quantitative metrics with the traditional model-based (MB) approach and state-of-the-art neural networks for NIROT. The proposed hCNN was successfully trained, validated and tested on in-silico data, outperforms MB methods in localisation accuracy, and provides a remarkable increase in reconstruction speed.


Subject(s)
Image Processing, Computer-Assisted , Tomography, Optical , Infant, Newborn , Humans , Image Processing, Computer-Assisted/methods , Infant, Premature , Neural Networks, Computer , Algorithms
10.
Magn Reson Med ; 88(6): 2694-2708, 2022 12.
Article in English | MEDLINE | ID: mdl-35942977

ABSTRACT

PURPOSE: To introduce a dual-domain reconstruction network with V-Net and K-Net for accurate MR image reconstruction from undersampled k-space data. METHODS: Most state-of-the-art reconstruction methods apply U-Net or cascaded U-Nets in the image domain and/or k-space domain. Nevertheless, these methods have the following problems: (1) directly applying U-Net in the k-space domain is not optimal for extracting features; (2) the classical image-domain-oriented U-Net is heavyweight and hence inefficient when cascaded many times to yield good reconstruction accuracy; (3) the classical image-domain-oriented U-Net does not make full use of the information in the encoder network when extracting features in the decoder network; and (4) existing methods are ineffective in simultaneously extracting and fusing features in the image domain and its dual k-space domain. To tackle these problems, we present three components: (1) V-Net, an image-domain encoder-decoder subnetwork that is more lightweight for cascading and effective in fully utilizing encoder features for decoding; (2) K-Net, a k-space domain subnetwork that is more suitable for extracting hierarchical features in the k-space domain; and (3) KV-Net, a dual-domain reconstruction network in which V-Nets and K-Nets are effectively combined and cascaded. RESULTS: Extensive experimental results on the fastMRI dataset demonstrate that the proposed KV-Net can reconstruct high-quality images and outperform state-of-the-art approaches with fewer parameters. CONCLUSIONS: To reconstruct images effectively and efficiently from incomplete k-space data, we have presented a dual-domain KV-Net that combines K-Nets and V-Nets. The KV-Net achieves better results with only 9% and 5% of the parameters of comparable methods (XPD-Net and i-RIM, respectively).
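The undersampled k-space data such networks reconstruct from can be simulated retrospectively by masking a fully sampled k-space; the zero-filled inverse FFT then gives the degraded baseline that a network like KV-Net would improve on. An illustrative NumPy sketch (the sampling pattern is arbitrary, not the fastMRI protocol):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))  # stand-in for a real-valued image

kspace = np.fft.fftshift(np.fft.fft2(image))  # fully sampled k-space
mask = np.zeros((64, 64))
mask[:, ::4] = 1     # keep every 4th phase-encode column
mask[:, 24:40] = 1   # plus a fully sampled low-frequency centre
undersampled = kspace * mask

# zero-filled reconstruction: inverse FFT of the masked k-space
zero_filled = np.fft.ifft2(np.fft.ifftshift(undersampled)).real
error = np.abs(zero_filled - image).mean()
print(mask.mean(), error > 0)  # ~44% of k-space kept; recon error is non-zero
```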


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
11.
Comput Biol Med ; 144: 105340, 2022 05.
Article in English | MEDLINE | ID: mdl-35305504

ABSTRACT

The outbreak of COVID-19 has caused a severe shortage of healthcare resources. Ground glass opacity (GGO) and consolidation on chest CT scans have been an essential basis for imaging diagnosis since 2020. The similarity of imaging features between COVID-19 and other pneumonias makes them challenging to distinguish and affects radiologists' diagnoses. Recent deep learning work on COVID-19 has mainly been divided into disease classification and lesion segmentation, yet little work has focused on the feature correlation between the two tasks. To address these issues, in this study we propose MultiR-Net, a 3D deep learning model for combined COVID-19 classification and lesion segmentation that achieves real-time and interpretable COVID-19 chest CT diagnosis. Specifically, the proposed network consists of two subnets: a multi-scale feature fusion UNet-like subnet for lesion segmentation and a classification subnet for disease diagnosis. Features between the two subnets are fused by a reverse attention mechanism and an iterative training strategy, and we propose a loss function to enhance the interaction between the two subnets. Since individual metrics cannot wholly reflect network effectiveness, we quantify the segmentation results with various evaluation metrics, such as average surface distance and volume Dice. We employ a dataset containing 275 3D CT scans for classifying COVID-19, community-acquired pneumonia (CAP), and healthy people, with segmented lesions in pneumonia patients, split into 70% for training and 30% for testing. Extensive experiments showed that our multi-task framework obtained an average recall of 93.323% and an average precision of 94.005% on the classification test set, and a 69.95% volume Dice score on the segmentation test set.


Subject(s)
COVID-19 , Pneumonia , COVID-19/diagnostic imaging , Humans , Tomography, X-Ray Computed/methods
12.
Animals (Basel) ; 12(4)2022 Feb 20.
Article in English | MEDLINE | ID: mdl-35203228

ABSTRACT

Monitoring the reproductive outputs of sea turtles is difficult, as it requires a large number of observers patrolling extended beaches every night throughout the breeding season, with the risk of missing nesting individuals. We introduce the first automatic method to remotely record the reproductive outputs of green turtles (Chelonia mydas) using accelerometers. First, we trained a fully convolutional neural network, the V-Net, to automatically identify the six behaviors shown during nesting. With an accuracy of 0.95, the V-Net succeeded in detecting the Egg laying process with a precision of 0.97. We then estimated the number of laid eggs from the predicted Egg laying sequence and obtained outputs with a mean relative error of 7% compared to the numbers observed in the field. Based on the deployment of non-invasive, miniature loggers, the proposed method should help researchers monitor nesting sea turtle populations. Furthermore, it can be coupled with the deployment of accelerometers at sea during the intra-nesting period, from which behaviors can also be estimated. Knowledge of the behavior of sea turtles on land and at sea during the entire reproduction period is essential to improve our understanding of this threatened species.

13.
Ultrasound Med Biol ; 48(3): 469-479, 2022 03.
Article in English | MEDLINE | ID: mdl-34872788

ABSTRACT

Ultrasound imaging has been established as an effective method for measuring the thickness of the intima-media, the thickening of which, along with carotid plaque, is an indicator of cerebrovascular diseases. Here, a 2-D V-Net model that can automatically segment the intima-media in carotid artery ultrasound images is proposed. Moreover, a plaque recognition algorithm that automatically identifies plaque-affected areas is described. Performance tests to determine the average accuracy of the intima-media segmentation yielded the following results (expressed as lumen-intima boundary/media-adventitia boundary): intersection over union (IOU) of 0.752/0.813, pixel accuracy of 0.813/0.885 and Dice loss of 0.858/0.897. Finally, average IOU of 0.785, pixel accuracy of 0.825 and Dice loss of 0.866 were obtained for plaque recognition. These results satisfy the threshold for clinical application and indicate that the proposed model can assist doctors in making more efficient and accurate diagnoses.


Subject(s)
Carotid Artery Diseases , Plaque, Atherosclerotic , Algorithms , Carotid Arteries/diagnostic imaging , Carotid Artery Diseases/diagnostic imaging , Carotid Intima-Media Thickness , Humans , Plaque, Atherosclerotic/diagnostic imaging , Ultrasonography/methods , Ultrasonography, Doppler
14.
Front Oncol ; 11: 700210, 2021.
Article in English | MEDLINE | ID: mdl-34604036

ABSTRACT

OBJECTIVE: To develop a deep learning-based model using esophageal thickness to detect esophageal cancer on unenhanced chest CT images. METHODS: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, a further 52 false-negative cases and 48 normal cases were collected as a second dataset. The average performance of three radiologists alone and of the same radiologists aided by the model was compared. RESULTS: The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, on the validation dataset. On the 52 missed esophageal cancer cases and the 48 normal cases, the sensitivity, specificity, and accuracy of the deep learning model were 69%, 61%, and 65%, respectively. The independent results of the radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, the radiologists' results improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively. CONCLUSIONS: The deep learning-based model can effectively detect esophageal cancer on unenhanced chest CT scans, improving the incidental detection of esophageal cancer.

15.
Math Biosci Eng ; 18(4): 4327-4340, 2021 05 18.
Article in English | MEDLINE | ID: mdl-34198439

ABSTRACT

Segmentation and visualization of liver vessels is a key task in preoperative planning and computer-aided diagnosis of liver diseases. Due to the irregular structure of liver vessels, accurate liver vessel segmentation is difficult. This paper proposes a liver vessel segmentation method based on an improved V-Net network. Firstly, dilated convolution is introduced so that the network can enlarge its receptive field without additional down-sampling, preserving detailed spatial information. Secondly, a 3D deep supervision mechanism is introduced to speed up the convergence of the network and help it learn semantic features better. Finally, inter-scale dense connections are designed in the decoder of the network to prevent the loss of high-level semantic information during decoding and to effectively integrate multi-scale feature information. The public 3Dircadb dataset was used for the liver vessel segmentation experiments. The average Dice and sensitivity of the proposed method reached 71.6% and 75.4%, respectively, which are higher than those of the original network. The experimental results show that the improved V-Net network can automatically and accurately segment labeled, and even unlabeled, liver vessels from CT images.
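The receptive-field benefit of dilated convolution mentioned above is easy to quantify: with stride 1, each layer adds (kernel − 1) × dilation to the receptive field. A small sketch comparing plain and dilated stacks:

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions, given a list
    of (kernel_size, dilation) pairs: rf grows by (k - 1) * d per layer."""
    rf = 1
    for kernel, dilation in layers:
        rf += (kernel - 1) * dilation
    return rf

# three 3x3 layers: plain vs dilated (rates 1, 2, 4)
print(receptive_field([(3, 1)] * 3))              # 7
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

The dilated stack more than doubles the receptive field with the same parameter count and no down-sampling, which is exactly the trade-off the abstract describes.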


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Liver/diagnostic imaging
16.
Pattern Recognit ; 119: 108071, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34092815

ABSTRACT

This paper aims to develop an automatic method to segment pulmonary parenchyma in chest CT images and analyze texture features from the segmented pulmonary parenchyma regions to assist radiologists in COVID-19 diagnosis. A new segmentation method, which integrates a three-dimensional (3D) V-Net with a shape deformation module implemented using a spatial transform network (STN), was proposed to segment pulmonary parenchyma in chest CT images. The 3D V-Net was adopted to perform an end-to-end lung extraction while the deformation module was utilized to refine the V-Net output according to the prior shape knowledge. The proposed segmentation method was validated against the manual annotation generated by experienced operators. The radiomic features measured from our segmentation results were further analyzed by sophisticated statistical models with high interpretability to discover significant independent features and detect COVID-19 infection. Experimental results demonstrated that compared with the manual annotation, the proposed segmentation method achieved a Dice similarity coefficient of 0.9796, a sensitivity of 0.9840, a specificity of 0.9954, and a mean surface distance error of 0.0318 mm. Furthermore, our COVID-19 classification model achieved an area under curve (AUC) of 0.9470, a sensitivity of 0.9670, and a specificity of 0.9270 when discriminating lung infection with COVID-19 from community-acquired pneumonia and healthy controls using statistically significant radiomic features. The significant features measured from our segmentation results agreed well with those from the manual annotation. Our approach has great promise for clinical use in facilitating automatic diagnosis of COVID-19 infection on chest CT images.

17.
J Med Eng Technol ; 45(5): 337-343, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33843414

ABSTRACT

Lung segmentation of chest CT scans is utilised to identify lung cancer, and this step is also critical in other diagnostic pathways. Powerful algorithms for this segmentation task are therefore highly needed in the medical imaging domain, where tumours must be segmented together with the lung parenchyma, and the parenchyma must in turn be separated from tumour regions that are often confused with lung tissue. Semantic segmentation based on fully convolutional networks (FCNs), which assigns each pixel in the image to a predefined class, is well suited to this problem. In this paper, CT cancer scans from the Task06_Lung database were applied to an FCN inspired by the V-Net architecture to efficiently select a region of interest (ROI) using 3D segmentation. This lung database is segregated into 64 training images and 32 testing images. The proposed system comprises three steps: data preprocessing, data augmentation, and a neural network based on the V-Net model. It was evaluated using the Dice similarity coefficient (DSC), which measures the overlap between the segmented image and the ground-truth image. The proposed system outperformed previous schemes for 3D lung segmentation, with an average DSC of 80% for the ROI and 98% for the surrounding lung tissues. Moreover, the system demonstrated that 3D views of lung tumours in CT images support precise tumour estimation and robust lung segmentation.
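The data augmentation step mentioned above can be sketched with simple label-preserving geometric transforms (an illustrative NumPy sketch; the paper does not specify its exact augmentation pipeline, so the flips and rotation here are assumptions):

```python
import numpy as np

def augment_volume(volume, rng):
    """Random axis flips plus a random axial 90-degree rotation of a 3D volume.

    Geometric augmentations like these are a common way to stretch small
    3D datasets (only 64 training scans here); the same transform would be
    applied to the label mask to keep image and annotation aligned.
    """
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    k = int(rng.integers(0, 4))  # rotate 0, 90, 180 or 270 degrees
    return np.rot90(volume, k=k, axes=(1, 2)).copy()

rng = np.random.default_rng(0)
vol = np.arange(27, dtype=np.float32).reshape(3, 3, 3)
aug = augment_volume(vol, rng)
```

Because every transform is a permutation of voxels, the augmented volume keeps the same shape and intensity histogram as the original.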


Subject(s)
Imaging, Three-Dimensional; Lung Neoplasms; Humans; Image Processing, Computer-Assisted; Lung; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
18.
BMC Cancer ; 21(1): 243, 2021 Mar 08.
Article in English | MEDLINE | ID: mdl-33685404

ABSTRACT

BACKGROUND: Accurately delineating the clinical target volume (CTV) on the patient's three-dimensional CT image is very important in the radiotherapy process. Limited by the scarcity of clinical samples and the difficulty of automatic delineation, research on automatic CTV delineation for new cervical cancer patients based on CT images has progressed slowly. This study aimed to assess the value of the Dense Fully Connected Convolution Network (Dense V-Net) in predicting CTV pre-delineation in cervical cancer patients for radiotherapy. METHODS: We used Dense V-Net, a dense and fully connected convolutional network suited to feature learning from small samples, to automatically pre-delineate the CTV of cervical cancer patients based on computed tomography (CT) images, and then assessed the outcome. CT data from 133 patients with stage IB and IIA postoperative cervical cancer with a comparable delineation scope were enrolled in this study. One hundred and thirteen patients were randomly designated as the training set to adjust the model parameters, and 20 cases were used as the test set to assess network performance. The 8 most representative parameters were used to assess pre-delineation accuracy from 3 aspects: delineation similarity, delineation offset, and delineation volume difference. RESULTS: The DSC, DC/mm, HD/cm, MAD/mm, ∆V, SI, IncI, and JD of the CTV were 0.82 ± 0.03, 4.28 ± 2.35, 1.86 ± 0.48, 2.52 ± 0.40, 0.09 ± 0.05, 0.84 ± 0.04, 0.80 ± 0.05, and 0.30 ± 0.04, respectively, all better than the results obtained with a single network. CONCLUSIONS: Dense V-Net can correctly predict CTV pre-delineation of cervical cancer patients and, after simple modifications, can be applied in clinical practice.
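Among the boundary metrics reported above, the Hausdorff distance (HD) can be sketched directly from two contour point sets (a NumPy illustration, not the study's actual measurement tool):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 3) and b (M, 3).

    For each point, take the distance to its nearest neighbour in the
    other set; the HD is the worst of those best matches, so a single
    stray voxel on either contour can dominate the score.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
# hausdorff_distance(a, b) is driven by the unmatched point at x = 4
```

This brute-force pairwise version is O(N*M) in memory; clinical tools typically use surface meshes and spatial indexing, but the definition is the same.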


Subject(s)
Cervix Uteri/diagnostic imaging; Imaging, Three-Dimensional; Neural Networks, Computer; Radiotherapy Planning, Computer-Assisted/methods; Uterine Cervical Neoplasms/therapy; Cervix Uteri/pathology; Cervix Uteri/surgery; Female; Humans; Neoplasm Staging; Radiotherapy, Adjuvant/methods; Tomography, X-Ray Computed; Uterine Cervical Neoplasms/diagnosis; Uterine Cervical Neoplasms/pathology
19.
IEEE Access ; 9: 60396-60408, 2021.
Article in English | MEDLINE | ID: mdl-35024261

ABSTRACT

Advances in three-dimensional microscopy and tissue clearing are enabling whole-organ imaging with single-cell resolution. Fast and reliable image processing tools are needed to analyze the resulting image volumes, including automated cell detection, cell counting, and cell analytics. Deep learning approaches have shown promising results in two- and three-dimensional nuclei detection tasks; however, detecting overlapping or non-spherical nuclei of different sizes and shapes in the presence of a blurring point spread function remains challenging and often leads to incorrect nuclei merging and splitting. Here we present a new regression-based fully convolutional network that locates a thousand nuclei centroids with high accuracy in under a minute when combined with V-Net, a popular three-dimensional semantic segmentation architecture. High nuclei detection F1-scores of 95.3% and 92.5% were obtained in two different whole quail embryonic hearts, a tissue type difficult to segment because of its high cell density and its heterogeneous, elliptical nuclei. Similarly high scores were obtained in the mouse brain stem, demonstrating that this approach transfers well to nuclei of different shapes and intensities. Finally, spatial statistics were performed on the resulting centroids. The spatial distribution of nuclei obtained by our approach most closely resembles that of manually identified nuclei, indicating that this approach could serve in future spatial analyses of cell organization.
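Extracting centroids from a regressed heatmap can be sketched as non-maximum suppression over the 26-neighbourhood (a pure-NumPy stand-in for the paper's post-processing; the function name and the 0.5 threshold are assumptions, not taken from the paper):

```python
import numpy as np

def detect_centroids(heatmap, threshold=0.5):
    """Return voxel coordinates of local maxima above `threshold`.

    A voxel counts as a detection if it exceeds the threshold and is
    at least as large as all 26 of its neighbours, which separates
    nearby nuclei without merging their peaks.
    """
    d, h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    shifted = [
        padded[1 + dz:1 + dz + d, 1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dz, dy, dx) != (0, 0, 0)
    ]
    neighbour_max = np.max(np.stack(shifted), axis=0)
    peaks = (heatmap > threshold) & (heatmap >= neighbour_max)
    return np.argwhere(peaks)  # (K, 3) array of z, y, x indices
```

Because every voxel is compared against a shifted view of the same padded array, the whole operation is vectorized and runs in a single pass over the volume.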

20.
Int J Comput Assist Radiol Surg ; 15(9): 1457-1465, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32676871

ABSTRACT

PURPOSE: Analysis of the maxillary sinus (MS) informs many clinical diagnoses, so accurate CT image segmentation of the MS is essential. However, segmentation is commonly performed manually by experienced doctors, which suffers from low efficiency and precision, while existing automatic methods require initial seed points and the adjustment of various parameters, which also limits efficiency. An accurate, efficient, and automatic MS segmentation method is therefore critical for clinical application. METHODS: This paper proposes an automatic CT image segmentation method for the MS based on a VGG network and an improved V-Net. The VGG network classifies CT slices, avoiding failed segmentation of slices that do not contain the MS. We then propose an improved V-Net based on edge supervision to segment MS regions more effectively. An edge loss is integrated into the loss of the improved V-Net, which reduces region misjudgment and improves automatic segmentation performance. RESULTS: For classifying CT slices with and without the MS, the VGG network achieved an accuracy of 97.04 ± 2.03%. In segmentation, our method obtained better results: the Dice reached 94.40 ± 2.07%, the IoU (intersection over union) was 90.05 ± 3.26%, and the precision was 94.72 ± 2.64%. Compared with U-Net and V-Net, it reduced region misjudgment significantly and improved segmentation accuracy. Analysis of the 3D reconstruction error map showed that errors were mainly distributed within ± 1 mm, demonstrating that our result is quite close to the ground truth. CONCLUSION: Our method segments the MS efficiently, accurately, and automatically. It not only produces better segmentation results but also improves doctors' work efficiency, which will have a significant impact on future clinical applications.
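The edge-supervision idea, combining a region loss with a boundary loss, can be sketched as follows (an illustrative NumPy version; the paper's exact loss weighting and edge extractor are not given here, so the difference-based edge map and the 0.5 weight are assumptions):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - Dice overlap for soft (probability) or binary volumes."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def edge_map(x):
    """Crude boundary map from forward differences along each axis."""
    gz = np.abs(np.diff(x, axis=0, append=x[-1:]))
    gy = np.abs(np.diff(x, axis=1, append=x[:, -1:]))
    gx = np.abs(np.diff(x, axis=2, append=x[:, :, -1:]))
    return np.clip(gz + gy + gx, 0.0, 1.0)

def edge_supervised_loss(pred, target, lam=0.5):
    """Region Dice loss plus a weighted Dice loss on the extracted boundaries."""
    return soft_dice_loss(pred, target) + lam * soft_dice_loss(
        edge_map(pred), edge_map(target)
    )
```

The boundary term penalizes predictions whose edges drift even when the region overlap is already high, which is how an edge loss of this kind can reduce the region misjudgment the abstract describes.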


Subject(s)
Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Maxillary Sinus/diagnostic imaging; Neural Networks, Computer; Pattern Recognition, Automated; Tomography, X-Ray Computed; Algorithms; False Positive Reactions; Humans; Imaging, Three-Dimensional/methods; Models, Statistical; Reproducibility of Results; Software