Results 1 - 11 of 11
1.
Heliyon ; 10(14): e34583, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39130473

ABSTRACT

Background: Three-dimensional cephalometric analysis is crucial in craniomaxillofacial assessment, with landmark detection in craniomaxillofacial (CMF) CT scans being a key component. However, creating robust deep learning models for this task typically requires extensive CMF CT datasets annotated by experienced medical professionals, a process that is time-consuming and labor-intensive. Conversely, acquiring large volumes of unlabeled CMF CT data is relatively straightforward. Thus, semi-supervised learning (SSL), which leverages limited labeled data supplemented by a sufficient amount of unlabeled data, could be a viable solution to this challenge. Method: We developed an SSL model, named CephaloMatch, based on a strong-weak perturbation consistency framework. The proposed SSL model incorporates a head-position rectification technique, applied through coarse detection, to enhance consistency between the labeled and unlabeled datasets, and a multilayer perturbation method to expand the perturbation space. The proposed SSL model was assessed using 362 CMF CT scans, divided into a training set (60 scans), a validation set (14 scans), and an unlabeled set (288 scans). Results: The proposed SSL model attained a detection error of 1.60 ± 0.87 mm, significantly surpassing the performance of a conventional fully supervised learning model (1.94 ± 1.12 mm). Notably, the proposed SSL model achieved equivalent detection accuracy (1.91 ± 1.00 mm) with only half the labeled dataset, compared to the fully supervised learning model. Conclusions: The proposed SSL model demonstrated exceptional performance in landmark detection using a limited labeled CMF CT dataset, significantly reducing the workload of medical professionals and enhancing the accuracy of 3D cephalometric analysis.
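
As a rough illustration of the strong-weak perturbation consistency idea described above (not the authors' implementation), the following PyTorch sketch combines a supervised heatmap loss on labeled scans with a consistency loss that pushes predictions on a strongly perturbed unlabeled view toward predictions on a weakly perturbed view of the same scan; the model, augmentation callables, and loss weight are illustrative placeholders.

import torch
import torch.nn.functional as F

def ssl_step(model, labeled_img, labeled_heatmaps, unlabeled_img,
             weak_aug, strong_aug, lambda_u=1.0):
    # Supervised term on the small labeled set (landmark heatmap regression)
    sup_loss = F.mse_loss(model(labeled_img), labeled_heatmaps)

    # Consistency term: the weakly perturbed view provides the pseudo-target
    with torch.no_grad():
        pseudo = model(weak_aug(unlabeled_img))
    cons_loss = F.mse_loss(model(strong_aug(unlabeled_img)), pseudo)

    return sup_loss + lambda_u * cons_loss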

2.
Hum Pathol ; 131: 26-37, 2023 01.
Article in English | MEDLINE | ID: mdl-36481204

ABSTRACT

Lymphovascular invasion, specifically lymph-blood vessel invasion (LBVI), is a risk factor for metastases in invasive ductal carcinoma (IDC) of the breast and is routinely screened for in hematoxylin-eosin histopathological images. However, routine reports only state whether LBVI is present and do not provide other potentially prognostic information about it. This study aims to evaluate the clinical significance of LBVI in 685 IDC cases and to explore the added predictive value of LBVI for lymph node metastases (LNM) via supervised deep learning (DL), using an expert-experience-embedded knowledge transfer learning (EEKT) model, in 40 cases reported as LBVI-positive in routine sign-out. Multivariate logistic regression and propensity score matching analysis demonstrated that LBVI (OR 4.203, 95% CI 2.809-6.290, P < 0.001) was a significant risk factor for LNM. The EEKT model, trained on 5780 image patches, automatically segmented LBVI with a patch-wise Dice similarity coefficient of 0.930 in the test set and output the counts, locations, and morphometric features of the LBVIs. Some morphometric features were useful for further stratification within the 40 LBVI-positive cases. The results showed that LBVI in cases with LNM had a higher short-to-long side ratio of the minimum rectangle (MR) (0.686 vs. 0.480, P = 0.001), LBVI-to-MR area ratio (0.774 vs. 0.702, P = 0.002), and solidity (0.983 vs. 0.934, P = 0.029) compared to LBVI in cases without LNM. These results highlight the potential of DL to assist pathologists in quantifying LBVI and, more importantly, in extracting added prognostic information from LBVI.
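
A minimal sketch of how the three reported morphometric features could be computed from a binary LBVI mask with OpenCV; the function name and the mask input are illustrative assumptions, not the EEKT model's actual post-processing.

import cv2
import numpy as np

def lbvi_morphometrics(mask: np.ndarray) -> dict:
    """Illustrative morphometric features for a single binary LBVI mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(cnt)
    # Minimum-area (rotated) rectangle enclosing the lesion
    (_, _), (w, h), _ = cv2.minAreaRect(cnt)
    short_side, long_side = sorted((w, h))
    hull_area = cv2.contourArea(cv2.convexHull(cnt))
    return {
        "short_to_long_side_ratio": short_side / long_side if long_side > 0 else 0.0,
        "lbvi_to_mr_area_ratio": area / (w * h) if w * h > 0 else 0.0,
        "solidity": area / hull_area if hull_area > 0 else 0.0,
    }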


Subject(s)
Breast Neoplasms, Deep Learning, Lymphoma, Humans, Female, Lymphatic Metastasis/pathology, Breast Neoplasms/pathology, Breast, Prognosis, Lymphoma/pathology, Lymph Nodes/pathology, Retrospective Studies
3.
Med Phys ; 49(11): 7222-7236, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35689486

ABSTRACT

PURPOSE: Many deep learning methods have been developed for pulmonary lesion detection in chest computed tomography (CT) images. However, these methods generally target one particular lesion type, namely pulmonary nodules. In this work, we develop and evaluate a novel deep learning method for a more challenging task: detecting various benign and malignant mediastinal lesions with wide variations in size, shape, intensity, and location in chest CT images. METHODS: Our method for mediastinal lesion detection contains two main stages: (a) size-adaptive lesion candidate detection, followed by (b) false-positive (FP) reduction and benign-malignant classification. For candidate detection, an anchor-free, one-stage detector, namely 3D-CenterNet, is designed to locate suspicious regions (i.e., candidates of various sizes) within the mediastinum. A 3D-SEResNet-based classifier is then used to differentiate FPs, benign lesions, and malignant lesions among the candidates. RESULTS: We evaluate the proposed method by conducting five-fold cross-validation on a relatively large-scale dataset consisting of data collected from 1136 patients at a grade A tertiary hospital. The method achieves sensitivities of 84.3% ± 1.9%, 90.2% ± 1.4%, 93.2% ± 0.8%, and 93.9% ± 1.1% in finding all benign and malignant lesions at 1/8, 1/4, 1/2, and 1 FPs per scan, respectively, and the accuracy of benign-malignant classification reaches 78.7% ± 2.5%. CONCLUSIONS: The proposed method can effectively detect mediastinal lesions of various sizes, shapes, and locations in chest CT images. It can be integrated into most existing pulmonary lesion detection systems to promote their clinical application, and it can be readily extended to other similar 3D lesion detection tasks.
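
For context on the second-stage classifier, a 3D squeeze-and-excitation block of the kind used in SE-ResNet-style networks might look as follows in PyTorch; this is a generic sketch of the building block, not the paper's 3D-SEResNet.

import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation block for 3D feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, *_ = x.shape
        # Global context -> per-channel weights -> re-scale the feature map
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w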


Asunto(s)
Aprendizaje Profundo , Humanos , Proyectos de Investigación , Tomografía , Tomografía Computarizada por Rayos X
4.
Biomed Opt Express ; 13(4): 2018-2034, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35519267

ABSTRACT

Convolutional neural networks (CNNs) are commonly used in glaucoma detection. However, due to various data distribution shifts, a well-performing model may suffer a sharp drop in performance when deployed in a new environment, and the most straightforward remedy, collecting more data, is costly and often unrealistic in practice. To address these challenges, we propose a new method named data augmentation-based (DA) feature alignment (DAFA) to improve out-of-distribution (OOD) generalization with a single dataset, based on the principle of feature alignment: learning invariant features to eliminate the effect of data distribution shifts. DAFA creates two views of a sample by data augmentation and aligns the features of the two augmented views through latent feature recalibration and semantic representation alignment. Latent feature recalibration normalizes the intermediate features to the same distribution via instance normalization (IN) layers. Semantic representation alignment is conducted by minimizing the Top-k NT-Xent loss and the maximum mean discrepancy (MMD), which maximizes semantic agreement across augmented views at the individual and population levels. Furthermore, a benchmark is established with seven glaucoma detection datasets, together with a new metric named mean clean area under the curve (mcAUC), for a comprehensive evaluation of model performance. Five-fold cross-validation results demonstrate that DAFA consistently and significantly improves out-of-distribution generalization (by up to +16.3% mcAUC) regardless of the training data, network architecture, and augmentation policy, and outperforms many state-of-the-art methods.
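
One of the alignment terms mentioned above, the maximum mean discrepancy between feature batches from the two augmented views, can be estimated with a Gaussian kernel as in the sketch below; the fixed bandwidth and the biased estimator are illustrative choices, not necessarily those used in DAFA.

import torch

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD estimate with an RBF kernel between two feature batches (n, d) and (m, d)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2            # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()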

5.
Transl Lung Cancer Res ; 11(3): 393-403, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35399565

ABSTRACT

Background: Percutaneous transthoracic lung biopsy is customarily conducted under computed tomography (CT) guidance, which depends primarily on the operator's experience and inevitably entails long procedural durations and radiation exposure. Novel techniques facilitating lung biopsy are therefore in demand. Methods: Based on the anatomical information reconstructed from CT scans, a three-dimensionally printed navigational template was customized to guide fine-needle aspiration (FNA). The needle insertion site and angle are indicated by the template after proper placement according to the reference landmarks. From June 2020 to August 2020, patients with peripheral indeterminate lung lesions ≥30 mm in diameter were enrolled in a pilot trial. Cases were considered successful when the virtual line indicated by the template in the first CT scan pointed at the target, and the rate of success was recorded. Insertion deviation, procedural duration, radiation exposure, biopsy-related complications, and diagnostic yield were documented as well. Results: A total of 20 patients consented to participate, and 2 withdrew. The remaining 18 participants, 11 men and 7 women with a median age of 63 [inter-quartile range (IQR), 50-68] years and a median body mass index (BMI) of 23.5 (IQR, 20.8-25.8) kg/m2, received template-guided FNA. The median nodule size was 41.2 (IQR, 36.2-51.9) mm, and 17 lesions were successfully targeted (success rate, 94.4%). One lesion was not reached through the designed trajectory because of an unanticipated change in the lesion's location caused by pleural effusion. The median deviation between the actual position of the needle tip and the designed route was 9.4 (IQR, 6.8-11.7) mm. The median procedural duration was 10.7 (IQR, 9.7-11.8) min, and the median radiation exposure was 220.9 (IQR, 198.6-249.5) mGy×cm. No major biopsy-related complications were encountered. A definitive diagnosis of malignancy was reached in 13 of the 17 (76.5%) participants. Conclusions: The feasibility and safety of navigational template-guided FNA were preliminarily validated in a lung biopsy cohort. However, patients with pleural effusion are not recommended to undergo FNA guided by this technique. Trial Registration: This study was registered with ClinicalTrials.gov (identifier: NCT03325907).

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2769-2772, 2021 11.
Article in English | MEDLINE | ID: mdl-34891823

ABSTRACT

Karyotyping is an important process for finding chromosome abnormalities that can cause genetic disorders. It first requires cytogeneticists to arrange each chromosome from the metaphase image to generate the karyogram. In this process, chromosome segmentation plays an important role and directly determines whether karyotyping can be achieved. The key to accurate chromosome segmentation is to effectively segment the multiple touching and overlapping chromosomes while at the same time identifying the isolated ones. This paper proposes a method named Enhanced Rotated Mask R-CNN for automatic chromosome segmentation and classification. The method not only accurately segments and classifies the isolated chromosomes in metaphase images but also effectively alleviates the problem of inaccurate segmentation of touching and overlapping chromosomes. Experiments show that the proposed approach achieves competitive performance, with 49.52 AP on multi-class evaluation and 69.96 AP on binary-class evaluation for chromosome segmentation.
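
Evaluating rotated instance predictions, as in the AP scores reported above, requires an IoU between rotated boxes; a minimal sketch using shapely polygons is given below purely to illustrate the metric, and is not the authors' evaluation code.

from shapely.geometry import Polygon

def rotated_iou(box_a, box_b) -> float:
    """IoU between two rotated boxes, each given as four (x, y) corner points in order."""
    pa, pb = Polygon(box_a), Polygon(box_b)
    inter = pa.intersection(pb).area
    union = pa.area + pb.area - inter
    return inter / union if union > 0 else 0.0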


Asunto(s)
Cromosomas , Procesamiento de Imagen Asistido por Computador , Aberraciones Cromosómicas , Humanos , Cariotipificación , Metafase
7.
IEEE J Biomed Health Inform ; 25(8): 3240-3251, 2021 08.
Article in English | MEDLINE | ID: mdl-33630742

ABSTRACT

Karyotyping is the gold standard for the detection of chromosomal abnormalities. To facilitate the diagnostic process, this paper proposes a method for chromosome classification and straightening based on an interleaved, multi-task network. The method consists of three stages. In the first stage, multi-scale features are learned via an interleaved network. In the second stage, high-resolution features from the first stage are fed to a convolutional neural subnetwork for chromosome joint detection, while the other features are fused and fed to two multi-layer perceptron subnetworks for chromosome type and polarity classification. In the third stage, the bent chromosome is straightened with the help of the detected joints in two steps: first, the chromosome is separated, rotated, and re-assembled according to the detected joints; then, the areas around the bending points are recovered by filling the gaps formed in the first step with intensities sampled from the bent chromosome. The classification of type and polarity can expedite the production of karyograms, an important step in clinical chromosome diagnosis, and straightening makes the banding information of the chromosome easier to read. In 5-fold cross-validation on our dataset of 32 810 chromosomes, the method achieves an average accuracy of 98.1% for type classification and 99.8% for polarity classification. The straightening results show consistency in the intensity and length of the chromosome before and after straightening.
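
The "separate, rotate, and re-assemble" step can be pictured with a small OpenCV sketch that rotates one chromosome segment about a detected joint; the joint coordinates and angle would come from the joint-detection subnetwork, and this is only a schematic of the geometric operation rather than the paper's straightening pipeline.

import cv2
import numpy as np

def rotate_about_joint(segment: np.ndarray, joint_xy: tuple, angle_deg: float) -> np.ndarray:
    """Rotate one chromosome segment about a detected joint so the arms can be aligned along a straight axis."""
    h, w = segment.shape[:2]
    M = cv2.getRotationMatrix2D(joint_xy, angle_deg, 1.0)   # rotation centered on the joint
    return cv2.warpAffine(segment, M, (w, h), flags=cv2.INTER_LINEAR, borderValue=0)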


Asunto(s)
Algoritmos , Cromosomas , Cariotipificación , Redes Neurales de la Computación
8.
Med Phys ; 48(12): 7913-7929, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34674280

ABSTRACT

PURPOSE: Feature maps created from deep convolutional neural networks (DCNNs) have been widely used for visual explanation of DCNN-based classification tasks. However, many clinical applications, such as benign-malignant classification of lung nodules, require quantitative and objective interpretability rather than visualization alone. In this paper, we propose a novel interpretable multi-task attention learning network, named IMAL-Net, for early invasive adenocarcinoma screening in chest computed tomography images, which takes advantage of a segmentation prior to assist interpretable classification. METHODS: Two sub-ResNets are first integrated via a prior-attention mechanism for simultaneous nodule segmentation and invasiveness classification. Numerous radiomic features derived from the segmentation results are then concatenated with high-level semantic features from the classification subnetwork via fully connected (FC) layers to achieve superior performance. Meanwhile, an end-to-end feature selection mechanism (FSM) is designed to quantify the crucial radiomic features that most strongly affect the prediction for each sample, thereby providing clinically applicable interpretability for the prediction result. RESULTS: Nodule samples from a total of 1626 patients were collected from two grade-A hospitals for large-scale verification. Five-fold cross-validation demonstrated that the proposed IMAL-Net achieves an AUC of 93.8% ± 1.1% and a recall of 93.8% ± 2.8% for the identification of invasive lung adenocarcinoma. CONCLUSIONS: Fusing semantic and radiomic features yields clear improvements in the invasiveness classification task. Moreover, by learning finer-grained semantic features and highlighting the most important radiomic features, the proposed attention and FSM mechanisms not only further improve performance but also support both visual explanation and objective analysis of the classification results.
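
A schematic of fusing radiomic features with high-level semantic features through fully connected layers, in PyTorch; the layer sizes and class count are placeholders rather than the IMAL-Net configuration.

import torch
import torch.nn as nn

class FeatureFusionHead(nn.Module):
    """Concatenate deep semantic features with radiomic features and classify via FC layers."""
    def __init__(self, n_semantic: int, n_radiomic: int, n_classes: int = 2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_semantic + n_radiomic, 128), nn.ReLU(inplace=True),
            nn.Linear(128, n_classes),
        )

    def forward(self, semantic: torch.Tensor, radiomic: torch.Tensor) -> torch.Tensor:
        # Both inputs are per-sample feature vectors; fusion is a simple concatenation
        return self.fc(torch.cat([semantic, radiomic], dim=1))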


Asunto(s)
Adenocarcinoma del Pulmón , Adenocarcinoma , Neoplasias Pulmonares , Adenocarcinoma/diagnóstico por imagen , Adenocarcinoma del Pulmón/diagnóstico por imagen , Humanos , Neoplasias Pulmonares/diagnóstico por imagen , Redes Neurales de la Computación , Tomografía Computarizada por Rayos X
9.
NPJ Digit Med ; 4(1): 124, 2021 Aug 16.
Article in English | MEDLINE | ID: mdl-34400751

ABSTRACT

Most prior studies have focused on developing models for severity or mortality prediction in COVID-19 patients, whereas effective models for recovery-time prediction are still lacking. Here, we present a deep learning solution named iCOVID that can successfully predict the recovery time of COVID-19 patients based on predefined treatment schemes and heterogeneous multimodal patient information collected within 48 hours of admission. Meanwhile, an interpretable mechanism termed FSR is integrated into iCOVID to reveal the features that most strongly affect each patient's prediction. Data from a total of 3008 patients were collected from three hospitals in Wuhan, China, for large-scale verification. The experiments demonstrate that iCOVID achieves a time-dependent concordance index of 74.9% (95% CI: 73.6-76.3%) and an average day error of 4.4 days (95% CI: 4.2-4.6 days). Our study reveals that treatment schemes, age, symptoms, comorbidities, and biomarkers are highly related to recovery-time prediction.
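
For reference, a simplified concordance index over predicted versus observed recovery times can be computed as below; this is a plain Harrell-style estimate assuming no censoring, not the time-dependent variant reported in the study.

import numpy as np

def concordance_index(times: np.ndarray, preds: np.ndarray) -> float:
    """Fraction of comparable pairs whose predicted ordering matches the observed ordering (uncensored)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue                      # tied observed times are not comparable here
            comparable += 1
            if preds[i] == preds[j]:
                concordant += 0.5             # tied predictions count as half
            elif (preds[i] < preds[j]) == (times[i] < times[j]):
                concordant += 1
    return concordant / comparable if comparable else float("nan")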

10.
IEEE Trans Med Imaging ; 39(8): 2572-2583, 2020 08.
Article in English | MEDLINE | ID: mdl-32730210

ABSTRACT

We propose a conceptually simple framework for fast COVID-19 screening in 3D chest CT images. The framework can efficiently predict whether or not a CT scan contains pneumonia while simultaneously distinguishing COVID-19 pneumonia from interstitial lung disease (ILD) caused by other viruses. In the proposed method, two 3D-ResNets are coupled into a single model for these two tasks via a novel prior-attention strategy. We extend residual learning with the proposed prior-attention mechanism and design a new prior-attention residual learning (PARL) block. The model can be easily built by stacking PARL blocks and trained end-to-end with multi-task losses. More specifically, one 3D-ResNet branch is trained as a binary classifier using lung images with and without pneumonia so that it can highlight the lesion areas within the lungs. Simultaneously, inside the PARL blocks, prior-attention maps generated from this branch are used to guide the other branch to learn more discriminative representations for pneumonia-type classification. Experimental results demonstrate that the proposed framework significantly improves the performance of COVID-19 screening, achieving state-of-the-art results compared with other methods. Moreover, the proposed method can be readily extended to other similar clinical applications, such as computer-aided detection and diagnosis of pulmonary nodules in CT images, glaucoma lesions in retinal fundus images, and so on.
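
The prior-attention idea, in which a lesion-attention map from the screening branch re-weights the features of the pneumonia-type branch, might be sketched in PyTorch as follows; this is only a schematic fusion step under assumed tensor shapes, not the full PARL block.

import torch
import torch.nn as nn

class PriorAttentionFusion(nn.Module):
    """Illustrative prior-attention step between two 3D feature branches."""
    def __init__(self, channels: int):
        super().__init__()
        self.to_attention = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, screening_feat: torch.Tensor, typing_feat: torch.Tensor) -> torch.Tensor:
        attn = self.to_attention(screening_feat)   # (B, 1, D, H, W) prior-attention map
        return typing_feat * (1.0 + attn)          # residual-style re-weighting of the second branch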


Asunto(s)
Infecciones por Coronavirus/diagnóstico por imagen , Aprendizaje Profundo , Neumonía Viral/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Adulto , Betacoronavirus , COVID-19 , Humanos , Imagenología Tridimensional , Pulmón/diagnóstico por imagen , Persona de Mediana Edad , Pandemias , Radiografía Torácica , SARS-CoV-2
11.
Front Aging Neurosci ; 11: 167, 2019.
Article in English | MEDLINE | ID: mdl-31379555

ABSTRACT

Introduction: The loss of nigrosome-1, also referred to as loss of the swallow tail sign (STS) in T2*-weighted iron-sensitive magnetic resonance imaging (MRI), has recently emerged as a new biomarker for idiopathic Parkinson's disease (IPD). However, consistent recognition of the STS is difficult because of individual variation and differing imaging parameters. Radiomics might have the potential to overcome these shortcomings. We therefore explored whether radiomic features of the nigrosome-1 region of the substantia nigra (SN), based on quantitative susceptibility mapping (QSM), could help differentiate IPD patients from healthy controls (HCs). Methods: Three-dimensional multi-echo gradient-recalled echo images (0.86 × 0.86 × 1.00 mm3) were obtained with 3.0-T MRI for QSM in 87 IPD patients and 77 HCs. Regions of interest (ROIs) of the SN below the red nucleus were manually drawn on both sides, and volumes of interest (VOIs) were subsequently segmented (these ROIs spanned four 1-mm slices). Then, 105 radiomic features (18 first-order, 13 shape, and 74 texture features) of the bilateral VOIs were extracted in the two groups. Forty features were selected using an ensemble feature selection method that combined analysis of variance, random forest, and recursive feature elimination. The selected features were then used to distinguish IPD patients from HCs with a support vector machine (SVM) classifier over 10 rounds of 3-fold cross-validation. Finally, the representative features were analyzed using unpaired t-tests with Bonferroni correction and correlated with UPDRS-III scores. Results: The SVM classification results were as follows: area under the curve (AUC), 0.96 ± 0.02; accuracy, 0.88 ± 0.03; sensitivity, 0.89 ± 0.06; and specificity, 0.87 ± 0.07. Five representative features were selected to illustrate the differences between IPD patients and HCs: the 10th percentile and median were higher in IPD patients than in HCs (all p < 0.00125), whereas Gray Level Run Length Matrix (GLRLM) Long Run Low Gray Level Emphasis, Gray Level Size Zone Matrix (GLSZM) Gray Level Non-Uniformity, and volume were lower in IPD patients than in HCs (all p < 0.00125). The 10th percentile was positively correlated with the UPDRS-III score (r = 0.35, p = 0.001). Conclusion: Radiomic features of the nigrosome-1 region of the SN based on QSM could be useful in the diagnosis of IPD and could serve as a surrogate marker for the STS.
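
A rough scikit-learn sketch of the analysis pipeline described above (ANOVA filter, random-forest importance, recursive feature elimination down to 40 features, then an SVM with repeated 3-fold cross-validation); the intermediate cutoffs and the synthetic input arrays are assumptions made purely for illustration, not the study's data or exact settings.

import numpy as np
from sklearn.feature_selection import f_classif, RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Placeholder data: 164 subjects x 105 radiomic features, labels 0 = HC, 1 = IPD
rng = np.random.default_rng(0)
X, y = rng.normal(size=(164, 105)), rng.integers(0, 2, size=164)

# Stage 1: ANOVA F-test filter (keep top 80), Stage 2: random-forest importance (keep top 60)
anova_idx = np.argsort(f_classif(X, y)[0])[-80:]
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, anova_idx], y)
rf_idx = anova_idx[np.argsort(rf.feature_importances_)[-60:]]

# Stage 3: recursive feature elimination with a linear SVM down to 40 features
rfe = RFE(SVC(kernel="linear"), n_features_to_select=40).fit(X[:, rf_idx], y)
selected = rf_idx[rfe.support_]

# SVM classifier evaluated with 10 rounds of 3-fold cross-validation
scores = [cross_val_score(SVC(), X[:, selected], y,
                          cv=StratifiedKFold(3, shuffle=True, random_state=r),
                          scoring="roc_auc").mean() for r in range(10)]
print(f"AUC {np.mean(scores):.2f} ± {np.std(scores):.2f}")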
