Results 1 - 20 of 293
1.
Hum Brain Mapp ; 45(11): e26803, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39119860

ABSTRACT

Accurate segmentation of chronic stroke lesions from mono-spectral magnetic resonance imaging scans (e.g., T1-weighted images) is a difficult task due to the arbitrary shape, complex texture, variable size and intensities, and varied locations of the lesions. Due to this inherent spatial heterogeneity, existing machine learning methods have shown moderate performance for chronic lesion delineation. In this study, we introduced: (1) a method that integrates transformers' deformable feature attention mechanism with convolutional deep learning architecture to improve the accuracy and generalizability of stroke lesion segmentation, and (2) an ecological data augmentation technique based on inserting real lesions into intact brain regions. Our combination of these two approaches resulted in a significant increase in segmentation performance, with a Dice index of 0.82 (±0.39), outperforming the existing methods trained and tested on the same Anatomical Tracings of Lesions After Stroke (ATLAS) 2022 dataset. Our method performed relatively well even for cases with small stroke lesions. We validated the robustness of our method through an ablation study and by testing it on new unseen brain scans from the Ischemic Stroke Lesion Segmentation (ISLES) 2015 dataset. Overall, our proposed approach of transformers with ecological data augmentation offers a robust way to delineate chronic stroke lesions with clinically relevant accuracy. Our method can be extended to other challenging tasks that require automated detection and segmentation of diverse brain abnormalities from clinical scans.


Subjects
Deep Learning; Magnetic Resonance Imaging; Stroke; Humans; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Stroke/diagnostic imaging; Stroke/pathology; Neuroimaging/methods; Neuroimaging/standards; Ischemic Stroke/diagnostic imaging; Image Processing, Computer-Assisted/methods; Aged; Brain/diagnostic imaging; Brain/pathology
2.
NMR Biomed ; : e5235, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39086258

ABSTRACT

The purpose of this study is to demonstrate that T2-weighted imaging with very long echo time (TE > 300 ms) can provide relevant information in neurodegenerative/inflammatory disorders. Twenty patients affected by relapsing-remitting multiple sclerosis with a stable disease course underwent 1.5 T 3D FLAIR, 3D T1-weighted, and a multi-echo sequence with 32 echoes (TE = 10-320 ms). Focal lesions (FL) were identified on FLAIR. T1 images were processed to segment deep gray matter (dGM), white matter (WM), FL sub-volumes with T1 hypo-intensity (T1FL), and dGM volumes (atrophy). Clinical-radiological parameters included the Expanded Disability Status Scale (EDSS), disease duration, patient age, T1FL, and dGM atrophy. Correlation analysis was performed between the mean signal intensity (SI) computed on the non-lesional dGM and WM at different TE and the clinical-radiological parameters. Multivariable linear regressions were fitted to the data to assess the association between the dependent variable EDSS and the independent variables obtained from the T1FL lesion load and the mean SI of dGM and WM at the different TE. A clear trend is observed, with a systematic strengthening of the correlation at longer TE for all the relationships with the clinical-radiological parameters, becoming significant (p < 0.05) for EDSS, T1FL volumes, and dGM atrophy. Multivariable linear regressions show that at shorter TE the SI of the T2-weighted sequences is not relevant for describing the EDSS variability while the T1FL volumes are, whereas at very long TE (around 300 ms) the SI of the T2-weighted sequences significantly (p < 0.05) describes the EDSS variability. At very long TE, the SI primarily originates from water with a T2 longer than 250 ms and/or free water, which may arise from the perivascular space (PVS). Very-long-TE T2-weighting might detect dilated PVS and represents an unexplored MR approach for neurofluid imaging in neurodegenerative/inflammatory diseases.
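For orientation only, the sketch below illustrates the kind of per-TE correlation analysis described in this abstract: mean SI values in non-lesional tissue are correlated with EDSS at every echo time. Array names, the random placeholder data, and the choice of Spearman correlation are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: correlating mean signal intensity (SI) at each echo time (TE)
# with EDSS across patients. Placeholder data and the Spearman choice are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.stats import spearmanr

n_patients, n_echoes = 20, 32
te_ms = np.arange(10, 330, 10)                    # TE = 10-320 ms, 32 echoes
mean_si = np.random.rand(n_patients, n_echoes)    # mean SI in non-lesional dGM/WM per TE
edss = np.random.uniform(0, 6.5, n_patients)      # clinical disability scores

# Correlation of SI vs. EDSS at every TE: the abstract reports that the
# association strengthens and becomes significant (p < 0.05) at long TEs.
for te, si_at_te in zip(te_ms, mean_si.T):
    rho, p = spearmanr(si_at_te, edss)
    print(f"TE={te:3d} ms  rho={rho:+.2f}  p={p:.3f}")
```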

3.
Skin Res Technol ; 30(8): e13783, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39113617

ABSTRACT

BACKGROUND: In recent years, the increasing prevalence of skin cancers, particularly malignant melanoma, has become a major concern for public health. The development of accurate automated segmentation techniques for skin lesions holds immense potential in alleviating the burden on medical professionals and is of substantial clinical importance for the early identification of and intervention in skin cancer. Nevertheless, the irregular shape, uneven color, and noise interference of skin lesions present significant challenges to precise segmentation. It is therefore crucial to develop a high-precision, intelligent skin lesion segmentation framework for clinical treatment. METHODS: A precision-driven segmentation model for skin cancer images is proposed based on the Transformer U-Net, called BiADATU-Net, which integrates the deformable attention Transformer and bidirectional attention blocks into the U-Net. The encoder part utilizes a deformable attention Transformer with a dual attention block, allowing adaptive learning of global and local features. The decoder part incorporates specifically tailored scSE attention modules within the skip-connection layers to capture image-specific context information for strong feature fusion. Additionally, deformable convolution is aggregated into two different attention blocks to learn irregular lesion features for high-precision prediction. RESULTS: A series of experiments was conducted on four skin cancer image datasets (i.e., ISIC2016, ISIC2017, ISIC2018, and PH2). The findings show that our model exhibits satisfactory segmentation performance, achieving an accuracy of over 96% on all datasets. CONCLUSION: Our experimental results validate that the proposed BiADATU-Net achieves competitive performance compared with state-of-the-art methods and holds potential and value in the field of skin lesion segmentation.


Subjects
Melanoma; Skin Neoplasms; Humans; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Melanoma/diagnostic imaging; Melanoma/pathology; Algorithms; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Dermoscopy/methods; Deep Learning
4.
J Med Internet Res ; 26: e59711, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39255472

ABSTRACT

BACKGROUND: Stroke is a leading cause of death and disability worldwide. Rapid and accurate diagnosis is crucial for minimizing brain damage and optimizing treatment plans. OBJECTIVE: This review aims to summarize the methods of artificial intelligence (AI)-assisted stroke diagnosis over the past 25 years, providing an overview of performance metrics and algorithm development trends. It also delves into existing issues and future prospects, intending to offer a comprehensive reference for clinical practice. METHODS: A total of 50 representative articles published between 1999 and 2024 on using AI technology for stroke prevention and diagnosis were systematically selected and analyzed in detail. RESULTS: AI-assisted stroke diagnosis has made significant advances in stroke lesion segmentation and classification, stroke risk prediction, and stroke prognosis. Before 2012, research mainly focused on segmentation using traditional thresholding and heuristic techniques. From 2012 to 2016, the focus shifted to machine learning (ML)-based approaches. After 2016, the emphasis moved to deep learning (DL), which brought significant improvements in accuracy. In stroke lesion segmentation and classification as well as stroke risk prediction, DL has shown superiority over ML. In stroke prognosis, both DL and ML have shown good performance. CONCLUSIONS: Over the past 25 years, AI technology has shown promising performance in stroke diagnosis.


Subjects
Artificial Intelligence; Stroke; Humans; Artificial Intelligence/history; Machine Learning; Prognosis; Stroke/diagnosis; Stroke/prevention & control; History, 20th Century; History, 21st Century
5.
BMC Med Inform Decis Mak ; 24(1): 265, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39334181

ABSTRACT

BACKGROUND: Segmentation of skin lesions remains essential in histological diagnosis and skin cancer surveillance. Recent advances in deep learning have paved the way for greater improvements in medical imaging. The Hybrid Residual Network (ResUNet) model, supplemented with Ant Colony Optimization (ACO), represents the synergy of these improvements aimed at improving the efficiency and effectiveness of skin lesion diagnosis. OBJECTIVE: This paper seeks to evaluate the effectiveness of the Hybrid ResUNet model for skin lesion classification and to assess the impact of ACO-based optimization in bridging the gap between computational efficiency and clinical utility. METHODS: The study used a deep learning design on a complex dataset that included a variety of skin lesions. The method includes training a Hybrid ResUNet model with standard parameters and fine-tuning using ACO for hyperparameter optimization. Performance was evaluated using traditional metrics such as accuracy, Dice coefficient, and Jaccard index, compared with existing models such as the residual network (ResNet) and U-Net. RESULTS: The proposed Hybrid ResUNet model exhibited excellent classification accuracy, reflected in the noticeable improvement in all evaluated metrics. Its ability to delineate complex lesions was particularly outstanding, improving diagnostic accuracy. Our experimental results demonstrate that the proposed Hybrid ResUNet model outperforms existing state-of-the-art methods, achieving an accuracy of 95.8%, a Dice coefficient of 93.1%, and a Jaccard index of 87.5. CONCLUSION: The integration of ACO into the proposed Hybrid ResUNet model significantly improves the classification of skin lesions. This integration goes beyond traditional paradigms and demonstrates a viable strategy for deploying AI-powered tools in clinical settings. FUTURE WORK: Future investigations will focus on extending the model's capabilities by using multi-modal imaging information, experimenting with alternative optimization algorithms, and evaluating real-world clinical applicability. There is also promising scope for enhancing computational performance and exploring the model's interpretability for broader clinical adoption.
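For reference, a minimal sketch of the Dice coefficient and Jaccard index reported above, computed from binary masks; function and variable names are illustrative, not taken from the paper.

```python
# Hedged sketch: the Dice coefficient and Jaccard index used to report
# segmentation quality. Names and the epsilon smoothing are illustrative.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A intersect B| / |A union B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```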


Subjects
Deep Learning; Skin Neoplasms; Humans; Skin Neoplasms/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Algorithms; Skin Diseases/diagnostic imaging
6.
Sensors (Basel) ; 24(16)2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39205066

ABSTRACT

Automated segmentation algorithms for dermoscopic images serve as effective tools that assist dermatologists in clinical diagnosis. While existing deep learning-based skin lesion segmentation algorithms have achieved certain success, challenges remain in accurately delineating the boundaries of lesion regions in dermoscopic images with irregular shapes, blurry edges, and occlusions by artifacts. To address these issues, a multi-attention codec network with selective and dynamic fusion (MASDF-Net) is proposed for skin lesion segmentation in this study. In this network, we use the pyramid vision transformer as the encoder to model the long-range dependencies between features, and we innovatively designed three modules to further enhance the performance of the network. Specifically, the multi-attention fusion (MAF) module allows for attention to be focused on high-level features from various perspectives, thereby capturing more global contextual information. The selective information gathering (SIG) module improves the existing skip-connection structure by eliminating the redundant information in low-level features. The multi-scale cascade fusion (MSCF) module dynamically fuses features from different levels of the decoder part, further refining the segmentation boundaries. We conducted comprehensive experiments on the ISIC 2016, ISIC 2017, ISIC 2018, and PH2 datasets. The experimental results demonstrate the superiority of our approach over existing state-of-the-art methods.


Subjects
Algorithms; Neural Networks, Computer; Humans; Deep Learning; Dermoscopy/methods; Image Processing, Computer-Assisted/methods; Skin/diagnostic imaging; Skin/pathology; Image Interpretation, Computer-Assisted/methods
7.
Sensors (Basel) ; 24(13)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39001081

ABSTRACT

In clinical settings limited by equipment, attaining lightweight skin lesion segmentation is pivotal, as it facilitates the integration of the model into diverse medical devices and thereby enhances operational efficiency. However, a lightweight model design may suffer accuracy degradation, especially when dealing with complex images such as skin lesion images with irregular regions, blurred boundaries, and oversized boundaries. To address these challenges, we propose an efficient lightweight attention network (ELANet) for the skin lesion segmentation task. In ELANet, the two different attention mechanisms of the bilateral residual module (BRM) provide complementary information, enhancing sensitivity to features in the spatial and channel dimensions, respectively; multiple BRMs are then stacked for efficient feature extraction from the input. In addition, the network acquires global information and improves segmentation accuracy by passing feature maps of different scales through multi-scale attention fusion (MAF) operations. Finally, we evaluate the performance of ELANet on three publicly available datasets, ISIC2016, ISIC2017, and ISIC2018. The experimental results show that our algorithm achieves mIoU values of 89.87%, 81.85%, and 82.87% on the three datasets with only 0.459 M parameters, which is an excellent balance between accuracy and model size and is superior to many existing segmentation methods.
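As a side note on the reported model size, a common way to obtain a parameter count in millions ("M") for a PyTorch model is sketched below; the tiny placeholder network is an assumption and is unrelated to ELANet.

```python
# Hedged sketch: counting trainable parameters in millions ("M"), as typically
# used to report model lightness. The toy network below is NOT ELANet.
import torch.nn as nn

def count_params_m(model: nn.Module) -> float:
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

toy = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))
print(f"{count_params_m(toy):.3f} M parameters")
```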


Subjects
Algorithms; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Skin/diagnostic imaging; Skin/pathology
8.
Sensors (Basel) ; 24(6)2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38544244

ABSTRACT

Heavily imbalanced datasets are common in lesion segmentation. Specifically, the lesions usually comprise less than 5% of the whole image volume when dealing with brain MRI. A common solution when training with a limited dataset is the use of specific loss functions that rebalance the effect of background and foreground voxels. These approaches are usually evaluated by running a single cross-validation split, without taking into account other random aspects that might affect the true improvement of the final metric (e.g., random weight initialisation or random shuffling). Furthermore, the evolution of the effect of the loss on the heavily imbalanced class is usually not analysed during the training phase. In this work, we present an analysis of different common loss metrics during training for brain lesion segmentation on heavily imbalanced public datasets. In order to limit the effect of hyperparameter tuning and architecture, we chose a 3D U-Net architecture due to its ability to provide good performance on different segmentation applications. We evaluated this framework on two public datasets and observed that the weighted losses have similar performance on average, even though heavily weighting the gradient of the foreground class gives better performance in terms of true-positive segmentation.
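A minimal sketch of the kind of foreground-weighted loss discussed above is given below, assuming a PyTorch setup; the weight value and tensor shapes are illustrative assumptions, not values from the study.

```python
# Hedged sketch: a foreground-weighted binary cross-entropy of the kind used
# to rebalance background vs. lesion voxels. The weight value and shapes are
# illustrative and not taken from the paper.
import torch
import torch.nn.functional as F

def weighted_bce(logits: torch.Tensor, target: torch.Tensor, fg_weight: float = 20.0) -> torch.Tensor:
    """Up-weights the gradient contribution of the (rare) foreground voxels."""
    pos_weight = torch.tensor([fg_weight], device=logits.device)
    return F.binary_cross_entropy_with_logits(logits, target, pos_weight=pos_weight)

logits = torch.randn(2, 1, 32, 32, 32)                   # raw network outputs for a 3D patch
target = (torch.rand(2, 1, 32, 32, 32) > 0.97).float()   # ~3% foreground, heavily imbalanced
print(weighted_bce(logits, target).item())
```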


Subjects
Magnetic Resonance Imaging; Neural Networks, Computer; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Image Processing, Computer-Assisted/methods
9.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 237-245, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686403

ABSTRACT

PET/CT imaging, which combines positron emission tomography (PET) and computed tomography (CT), is among the most advanced imaging examination methods currently available and is mainly used for tumor screening, differential diagnosis of benign and malignant tumors, and staging and grading. This paper proposes a method for breast cancer lesion segmentation based on PET/CT bimodal images and designs a dual-path U-Net framework, which mainly includes three modules: an encoder module, a feature fusion module, and a decoder module. The encoder module uses traditional convolution for feature extraction from each single-modality image; the feature fusion module adopts collaborative-learning feature fusion and uses a Transformer to extract the global features of the fused image; and the decoder module mainly uses a multi-layer perceptron to achieve lesion segmentation. The experiment uses actual clinical PET/CT data to evaluate the effectiveness of the algorithm. The experimental results show that the precision, recall, and accuracy of breast cancer lesion segmentation are 95.67%, 97.58%, and 96.16%, respectively, outperforming the baseline algorithm. These results support the rationale of combining convolution and Transformer for single- and bimodal feature extraction in this study's design and provide a reference for feature extraction in tasks such as multimodal medical image segmentation or classification.


Subjects
Algorithms; Breast Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Breast Neoplasms/diagnostic imaging; Female; Positron Emission Tomography Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Machine Learning; Image Interpretation, Computer-Assisted/methods
10.
Neuroimage ; 271: 120041, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36933626

ABSTRACT

Brain lesion segmentation provides a valuable tool for clinical diagnosis and research, and convolutional neural networks (CNNs) have achieved unprecedented success in the segmentation task. Data augmentation is a widely used strategy to improve the training of CNNs. In particular, data augmentation approaches that mix pairs of annotated training images have been developed. These methods are easy to implement and have achieved promising results in various image processing tasks. However, existing data augmentation approaches based on image mixing are not designed for brain lesions and may not perform well for brain lesion segmentation. Thus, the design of this type of simple data augmentation method for brain lesion segmentation is still an open problem. In this work, we propose a simple yet effective data augmentation approach, dubbed as CarveMix, for CNN-based brain lesion segmentation. Like other mixing-based methods, CarveMix stochastically combines two existing annotated images (annotated for brain lesions only) to obtain new labeled samples. To make our method more suitable for brain lesion segmentation, CarveMix is lesion-aware, where the image combination is performed with a focus on the lesions and preserves the lesion information. Specifically, from one annotated image we carve a region of interest (ROI) according to the lesion location and geometry with a variable ROI size. The carved ROI then replaces the corresponding voxels in a second annotated image to synthesize new labeled images for network training, and additional harmonization steps are applied for heterogeneous data where the two annotated images can originate from different sources. Besides, we further propose to model the mass effect that is unique to whole brain tumor segmentation during image mixing. To evaluate the proposed method, experiments were performed on multiple publicly available or private datasets, and the results show that our method improves the accuracy of brain lesion segmentation. The code of the proposed method is available at https://github.com/ZhangxinruBIT/CarveMix.git.
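A simplified, hedged sketch of lesion-aware image mixing in the spirit of CarveMix is shown below: a region around the lesion in one annotated scan is carved out and pasted, together with its label, into a second scan. The actual method uses lesion-geometry-aware, variable-size ROIs plus harmonization and mass-effect modeling, which this sketch omits; the linked repository is the authoritative implementation.

```python
# Hedged, simplified sketch of lesion-aware mixing in the spirit of CarveMix:
# carve a box around the lesion in one annotated scan and paste both image and
# label into a second scan. Not the authors' implementation; see their repo.
import numpy as np

def simple_carve_mix(img_a, lbl_a, img_b, lbl_b, margin=2):
    """Paste a lesion bounding-box region from (img_a, lbl_a) into (img_b, lbl_b).
    Assumes lbl_a contains at least one lesion voxel and matching array shapes."""
    coords = np.argwhere(lbl_a > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, np.array(img_a.shape))
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))

    new_img, new_lbl = img_b.copy(), lbl_b.copy()
    new_img[sl] = img_a[sl]   # replace voxels with the carved region
    new_lbl[sl] = lbl_a[sl]   # carry the lesion annotation along
    return new_img, new_lbl
```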


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Brain
11.
Hum Brain Mapp ; 44(14): 4893-4913, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37530598

ABSTRACT

In this work we present BIANCA-MS, a novel tool for brain white matter lesion segmentation in multiple sclerosis (MS), able to generalize across both the wide spectrum of MRI acquisition protocols and the heterogeneity of manually labeled data. BIANCA-MS is based on the original version of BIANCA and implements two innovative elements: a harmonized setting, tested under different MRI protocols, which avoids the need to further tune algorithm parameters to each dataset; and a cleaning step developed to improve consistency between automated and manual segmentations, thus reducing unwanted variability in output segmentations and validation data. BIANCA-MS was tested on three datasets acquired with different MRI protocols. First, we compared BIANCA-MS to other widely used tools. Second, we tested how BIANCA-MS performs on the separate datasets. Finally, we evaluated BIANCA-MS performance on a pooled dataset where all MRI data were merged. We calculated the overlap using the Dice spatial similarity index (SI) as well as the number of false positive/negative clusters (nFPC/nFNC) in comparison to the manual masks processed with the cleaning step. BIANCA-MS clearly outperformed other available tools in both high- and low-resolution images and provided comparable performance across different scanning protocols, sets of modalities, and image resolutions. BIANCA-MS performance on the pooled dataset (SI: 0.72 ± 0.25, nFPC: 13 ± 11, nFNC: 4 ± 8) was comparable to that achieved on each individual dataset (median across datasets SI: 0.72 ± 0.28, nFPC: 14 ± 11, nFNC: 4 ± 8). Our findings suggest that BIANCA-MS is a robust and accurate approach for automated MS lesion segmentation.


Subjects
Multiple Sclerosis; White Matter; Humans; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; Magnetic Resonance Imaging/methods; White Matter/diagnostic imaging; White Matter/pathology; Algorithms
12.
Eur J Nucl Med Mol Imaging ; 50(8): 2441-2452, 2023 07.
Article in English | MEDLINE | ID: mdl-36933075

ABSTRACT

PURPOSE: The aim of this study was to develop a convolutional neural network (CNN) for the automatic detection and segmentation of gliomas using [18F]fluoroethyl-L-tyrosine ([18F]FET) PET. METHODS: Ninety-three patients (84 in-house/7 external) who underwent a 20-40-min static [18F]FET PET scan were retrospectively included. Lesions and background regions were defined by two nuclear medicine physicians using the MIM software, such that delineations by one expert reader served as ground truth for training and testing the CNN model, while delineations by the second expert reader were used to evaluate inter-reader agreement. A multi-label CNN was developed to segment the lesion and background region while a single-label CNN was implemented for a lesion-only segmentation. Lesion detectability was evaluated by classifying [18F]FET PET scans as negative when no tumor was segmented and vice versa, while segmentation performance was assessed using the dice similarity coefficient (DSC) and segmented tumor volume. The quantitative accuracy was evaluated using the maximal and mean tumor to mean background uptake ratio (TBRmax/TBRmean). CNN models were trained and tested by a threefold cross-validation (CV) using the in-house data, while the external data was used for an independent evaluation to assess the generalizability of the two CNN models. RESULTS: Based on the threefold CV, the multi-label CNN model achieved 88.9% sensitivity and 96.5% precision for discriminating between positive and negative [18F]FET PET scans compared to a 35.3% sensitivity and 83.1% precision obtained with the single-label CNN model. In addition, the multi-label CNN allowed an accurate estimation of the maximal/mean lesion and mean background uptake, resulting in an accurate TBRmax/TBRmean estimation compared to a semi-automatic approach. In terms of lesion segmentation, the multi-label CNN model (DSC = 74.6 ± 23.1%) demonstrated equal performance as the single-label CNN model (DSC = 73.7 ± 23.2%) with tumor volumes estimated by the single-label and multi-label model (22.9 ± 23.6 ml and 23.1 ± 24.3 ml, respectively) closely approximating the tumor volumes estimated by the expert reader (24.1 ± 24.4 ml). DSCs of both CNN models were in line with the DSCs by the second expert reader compared with the lesion segmentations by the first expert reader, while detection and segmentation performance of both CNN models as determined with the in-house data were confirmed by the independent evaluation using external data. CONCLUSION: The proposed multi-label CNN model detected positive [18F]FET PET scans with high sensitivity and precision. Once detected, an accurate tumor segmentation and estimation of background activity was achieved resulting in an automatic and accurate TBRmax/TBRmean estimation, such that user interaction and potential inter-reader variability can be minimized.
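For reference, the tumor-to-background ratios reported above can be computed from a PET volume and the lesion/background masks as in the sketch below; array names are assumptions.

```python
# Hedged sketch: TBRmax and TBRmean from a multi-label segmentation, i.e. the
# maximal/mean lesion uptake over the mean background uptake, as described in
# the abstract. Array names are assumptions.
import numpy as np

def tbr_metrics(pet: np.ndarray, lesion_mask: np.ndarray, background_mask: np.ndarray):
    bg_mean = pet[background_mask > 0].mean()
    tbr_max = pet[lesion_mask > 0].max() / bg_mean
    tbr_mean = pet[lesion_mask > 0].mean() / bg_mean
    return tbr_max, tbr_mean
```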


Subjects
Glioma; Humans; Retrospective Studies; Glioma/diagnostic imaging; Glioma/pathology; Positron-Emission Tomography/methods; Tyrosine; Neural Networks, Computer
13.
J Magn Reson Imaging ; 58(3): 864-876, 2023 09.
Article in English | MEDLINE | ID: mdl-36708267

ABSTRACT

BACKGROUND: Detecting new and enlarged lesions in multiple sclerosis (MS) patients is needed to determine their disease activity. LeMan-PV is a software embedded in the scanner reconstruction system of one vendor, which automatically assesses new and enlarged white matter lesions (NELs) in the follow-up of MS patients; however, multicenter validation studies are lacking. PURPOSE: To assess the accuracy of LeMan-PV for the longitudinal detection of NEL white-matter MS lesions in a multicenter clinical setting. STUDY TYPE: Retrospective, longitudinal. SUBJECTS: A total of 206 patients with a definitive MS diagnosis and at least two follow-up MRI studies from five centers participating in the Swiss Multiple Sclerosis Cohort study. Mean age at first follow-up = 45.2 years (range: 36.9-52.8 years); 70 males. FIELD STRENGTH/SEQUENCE: Fluid-attenuated inversion recovery (FLAIR) and T1-weighted magnetization-prepared rapid gradient echo (T1-MPRAGE) sequences at 1.5 T and 3 T. ASSESSMENT: The study included 313 MRI pairs of datasets. Data were analyzed with LeMan-PV and compared with a manual "reference standard" provided by a neuroradiologist. A second rater (neurologist) performed the same analysis in a subset of MRI pairs to evaluate the rating accuracy. Sensitivity (Se), specificity (Sp), accuracy (Acc), F1-score, lesion-wise false-positive rate (aFPR), and other measures were used to assess LeMan-PV performance for the detection of NEL at 1.5 T and 3 T. The performance was also evaluated in the subgroup of 123 MRI pairs at 3 T. STATISTICAL TESTS: The intraclass correlation coefficient (ICC) and Cohen's kappa (CK) were used to evaluate the agreement between readers. RESULTS: The inter-reader agreement was high for detecting new lesions (ICC = 0.97, P value < 10^-20, CK = 0.82, P value = 0) and good (ICC = 0.75, P value < 10^-12, CK = 0.68, P value = 0) for detecting enlarged lesions. Across all centers, scanner field strengths (1.5 T, 3 T), and for NEL, LeMan-PV achieved: Acc = 61%, Se = 65%, Sp = 60%, F1-score = 0.44, aFPR = 1.31. When both follow-ups were acquired at 3 T, LeMan-PV accuracy was higher (Acc = 66%, Se = 66%, Sp = 66%, F1-score = 0.28, aFPR = 3.03). DATA CONCLUSION: In this multicenter study using clinical data acquired at 1.5 T and 3 T with variations in MRI protocols, LeMan-PV showed similar sensitivity in detecting NEL with respect to other recent 3 T multicentric studies based on neural networks. While LeMan-PV performance is not optimal, its main advantage is that it provides automated clinical decision support integrated into the routine radiological workflow. EVIDENCE LEVEL: 4 TECHNICAL EFFICACY: Stage 2.
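The detection metrics reported above follow the standard confusion-matrix definitions, sketched below; the lesion-wise counts in the example call are placeholders, not values from the study.

```python
# Hedged sketch: sensitivity, specificity, accuracy and F1 from lesion-wise
# true/false positive/negative counts. The example counts are placeholders.
def detection_metrics(tp: int, fp: int, tn: int, fn: int):
    se = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                      # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)    # accuracy
    precision = tp / (tp + fp)
    f1 = 2 * precision * se / (precision + se)
    return {"Se": se, "Sp": sp, "Acc": acc, "F1": f1}

print(detection_metrics(tp=65, fp=40, tn=60, fn=35))
```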


Subjects
Multiple Sclerosis; White Matter; Male; Humans; Adult; Middle Aged; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; White Matter/diagnostic imaging; White Matter/pathology; Cohort Studies; Retrospective Studies; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Brain/pathology
14.
Methods ; 205: 200-209, 2022 09.
Article in English | MEDLINE | ID: mdl-35817338

ABSTRACT

BACKGROUND: Lesion segmentation is a critical step in medical image analysis, and methods to identify pathology without time-intensive manual labeling of data are of utmost importance during a pandemic and in resource-constrained healthcare settings. Here, we describe a method for fully automated segmentation and quantification of pathological COVID-19 lung tissue on chest Computed Tomography (CT) scans without the need for manually segmented training data. METHODS: We trained a cycle-consistent generative adversarial network (CycleGAN) to convert images of COVID-19 scans into their generated healthy equivalents. Subtraction of the generated healthy images from their corresponding original CT scans yielded maps of pathological tissue, without background lung parenchyma, fissures, airways, or vessels. We then used these maps to construct three-dimensional lesion segmentations. Using a validation dataset, Dice scores were computed for our lesion segmentations and other published segmentation networks using ground truth segmentations reviewed by radiologists. RESULTS: The COVID-to-Healthy generator eliminated high Hounsfield unit (HU) voxels within pulmonary lesions and replaced them with lower HU voxels. The generator did not distort normal anatomy such as vessels, airways, or fissures. The generated healthy images had higher gas content (2.45 ± 0.93 vs 3.01 ± 0.84 L, P < 0.001) and lower tissue density (1.27 ± 0.40 vs 0.73 ± 0.29 Kg, P < 0.001) than their corresponding original COVID-19 images, and they were not significantly different from those of the healthy images (P < 0.001). Using the validation dataset, lesion segmentations scored an average Dice score of 55.9, comparable to other weakly supervised networks that do require manual segmentations. CONCLUSION: Our CycleGAN model successfully segmented pulmonary lesions in mild and severe COVID-19 cases. Our model's performance was comparable to other published models; however, our model is unique in its ability to segment lesions without the need for manual segmentations.
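A minimal sketch of the subtraction step described above (generated healthy CT subtracted from the original scan, then thresholded into a 3D lesion mask) is shown below; the trained generator is assumed to exist and the HU threshold is an illustrative value, not one from the paper.

```python
# Hedged sketch of the post-GAN step in the abstract: original COVID-19 CT
# minus the generated healthy CT yields a pathology map, which is thresholded
# into a 3D lesion mask. The HU threshold is an assumption; the CycleGAN
# generator itself is not shown here.
import numpy as np

def lesion_map_from_generated(original_ct: np.ndarray,
                              generated_healthy_ct: np.ndarray,
                              hu_threshold: float = 100.0) -> np.ndarray:
    """Binary 3D lesion mask where the original exceeds the generated healthy scan."""
    difference = original_ct - generated_healthy_ct   # pathological tissue raises HU
    return (difference > hu_threshold).astype(np.uint8)
```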


Subjects
COVID-19; Image Processing, Computer-Assisted; COVID-19/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods
15.
Methods ; 202: 88-102, 2022 06.
Article in English | MEDLINE | ID: mdl-33610692

ABSTRACT

Skin cancer is one of the most common and dangerous cancers worldwide. Malignant melanoma, one of the most dangerous skin cancer types, has a high mortality rate. An estimated 196,060 melanoma cases were expected to be diagnosed in the USA in 2020. Many computerized techniques have been presented in the past to diagnose skin lesions, but they still fail to achieve sufficient accuracy. To improve on the existing accuracy, we proposed a hierarchical framework based on two-dimensional superpixels and deep learning. First, we enhance the contrast of the original dermoscopy images by fusing local and global enhanced images. The enhanced images are then used to segment skin lesions using a three-step superpixel lesion segmentation. The segmented lesions are mapped onto the enhanced dermoscopy images to obtain segmented color images only. Then, a deep learning model (ResNet-50) is applied to these mapped images and features are learned through transfer learning. The extracted features are further optimized using an improved grasshopper optimization algorithm and then classified with a Naïve Bayes classifier. The proposed hierarchical method has been evaluated on three datasets (PH2, ISBI2016, and HAM10000), consisting of three, two, and seven skin cancer classes, respectively. On these datasets, our method achieved accuracies of 95.40%, 91.1%, and 85.50%, respectively. The results show that this method can be helpful for the classification of skin cancer with improved accuracy.


Subjects
Deep Learning; Melanoma; Skin Diseases; Skin Neoplasms; Algorithms; Bayes Theorem; Dermoscopy/methods; Humans; Melanoma/diagnostic imaging; Melanoma/pathology; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology
16.
BMC Med Inform Decis Mak ; 23(1): 192, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37752508

ABSTRACT

BACKGROUND: Accurate segmentation of stroke lesions on MRI images is very important for neurologists when planning post-stroke care. Segmentation helps clinicians better diagnose and evaluate the risks of any treatment. However, manual segmentation of brain lesions relies on the experience of neurologists and is also a very tedious and time-consuming process. Therefore, in this study, we proposed a novel deep convolutional neural network (CNN-Res) that automatically performs the segmentation of ischemic stroke lesions from multimodal MRIs. METHODS: CNN-Res uses a U-shaped structure, so the network has encoder and decoder paths. Residual units are embedded in the encoder path; in this model, the residual units were used to mitigate gradient degradation, and multimodal MRI data were applied to extract more complex information from the images. In the link between the encoder and decoder subnets, a bottleneck strategy was used, which reduced the number of parameters and the training time compared with similar research. RESULTS: CNN-Res was evaluated on two distinct datasets. First, it was examined on a dataset collected from the Neuroscience Center of Tabriz University of Medical Sciences, where the average Dice coefficient was 85.43%. Then, to compare the efficiency and performance of the model with other similar works, CNN-Res was evaluated on the popular SPES 2015 competition dataset, where the average Dice coefficient was 79.23%. CONCLUSION: This study presented a new and accurate method for the segmentation of medical MRI images using a deep convolutional neural network called CNN-Res, which directly predicts segmentation maps from raw input pixels.


Subjects
Deep Learning; Ischemic Stroke; Stroke; Humans; Stroke/diagnostic imaging; Magnetic Resonance Imaging; Neural Networks, Computer
17.
J Appl Clin Med Phys ; 24(4): e13927, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36800255

ABSTRACT

Lesion segmentation is critical for clinicians to accurately stage the disease and determine treatment strategy. Deep learning-based automatic segmentation can improve both segmentation efficiency and accuracy. However, training a robust deep learning segmentation model requires a sufficient number of training examples with adequate diversity in lesion location and lesion size. This study develops a deep learning framework for the generation of synthetic lesions with various locations and sizes that can be included in the training dataset to enhance lesion segmentation performance. The lesion synthesis network is a modified generative adversarial network (GAN). Specifically, we introduced a partial convolution strategy to construct a U-Net-like generator. The discriminator is designed using Wasserstein GAN with gradient penalty and spectral normalization. A mask generation method based on principal component analysis (PCA) was developed to model various lesion shapes. The generated masks are then converted into liver lesions through a lesion synthesis network. The lesion synthesis framework was evaluated for lesion textures, and the synthetic lesions were used to train a lesion segmentation network to further validate the effectiveness of the lesion synthesis framework. All the networks are trained and tested on the LITS public dataset. Our experiments demonstrate that the synthetic lesions generated by our approach have very similar distributions to real lesions for the two texture parameters, GLCM-energy and GLCM-correlation. Including the synthetic lesions in the segmentation network improved the segmentation Dice performance from 67.3% to 71.4%. Meanwhile, the precision and sensitivity for lesion segmentation were improved from 74.6% to 76.0% and from 66.1% to 70.9%, respectively. The proposed lesion synthesis approach outperforms the other two existing approaches. Including the synthetic lesion data in the training dataset significantly improves the segmentation performance.
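For reference, the two texture descriptors used above (GLCM-energy and GLCM-correlation) can be computed with scikit-image as sketched below; distances, angles, grey-level quantization, and the placeholder patch are assumptions, and the function names follow recent scikit-image releases.

```python
# Hedged sketch: GLCM energy and correlation with scikit-image, e.g. to
# compare real and synthetic lesion patches. Distances, angles and grey-level
# quantization are illustrative choices, not the paper's settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_energy_correlation(patch: np.ndarray, levels: int = 64):
    """patch: 2D integer array with grey values already quantized to [0, levels)."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    energy = graycoprops(glcm, "energy").mean()
    correlation = graycoprops(glcm, "correlation").mean()
    return energy, correlation

patch = np.random.randint(0, 64, size=(48, 48), dtype=np.uint8)  # placeholder lesion patch
print(glcm_energy_correlation(patch))
```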


Subjects
Image Processing, Computer-Assisted; Liver Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/radiotherapy
18.
Sensors (Basel) ; 23(6)2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36991777

ABSTRACT

At present, convolutional neural networks (CNNs) have been widely applied to the task of skin disease image segmentation owing to their powerful information discrimination abilities and have achieved good results. However, it is difficult for CNNs to capture long-range contextual connections when extracting deep semantic features of lesion images, and the resulting semantic gap leads to the problem of segmentation blur in skin lesion image segmentation. To solve these problems, we designed a hybrid encoder network based on Transformer and fully connected neural network (MLP) architectures, which we call HMT-Net. In the HMT-Net network, we use the attention mechanism of the CTrans module to learn the global relevance of the feature map, improving the network's ability to understand the overall foreground information of the lesion. On the other hand, we use the TokMLP module to effectively enhance the network's ability to learn the boundary features of lesion images. In the TokMLP module, the tokenized MLP axial displacement operation strengthens the connections between pixels to facilitate the extraction of local feature information by our network. To verify the superiority of our network in segmentation tasks, we conducted extensive experiments with the proposed HMT-Net and several recently proposed Transformer and MLP networks on three public datasets (ISIC2018, ISBI2017, and ISBI2016) and obtained the following results. Our method achieves 82.39%, 75.53%, and 83.98% on the Dice index and 89.35%, 84.93%, and 91.33% on the IOU. Compared with the latest skin disease segmentation network, FAC-Net, our method improves the Dice index by 1.99%, 1.68%, and 1.6%, respectively. In addition, the IOU indicators have increased by 0.45%, 2.36%, and 1.13%, respectively. The experimental results show that our designed HMT-Net achieves state-of-the-art performance superior to that of other segmentation methods.


Subjects
Electric Power Supplies; Skin Diseases; Humans; Learning; Neural Networks, Computer; Records; Skin Diseases/diagnostic imaging; Image Processing, Computer-Assisted
19.
Sensors (Basel) ; 23(5)2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36904754

ABSTRACT

Medical images are an important basis for diagnosing diseases, and among them CT images are an important tool for diagnosing lung lesions. However, manual segmentation of infected areas in CT images is time-consuming and laborious. With their excellent feature extraction capabilities, deep learning-based methods have been widely used for automatic lesion segmentation of COVID-19 CT images. However, the segmentation accuracy of these methods is still limited. To effectively quantify the severity of lung infections, we propose a Sobel operator combined with multi-attention networks for COVID-19 lesion segmentation (SMA-Net). In our SMA-Net method, an edge feature fusion module uses the Sobel operator to add edge detail information to the input image. To guide the network to focus on key regions, SMA-Net introduces a self-attentive channel attention mechanism and a spatial linear attention mechanism. In addition, the Tversky loss function is adopted in the segmentation network to handle small lesions. Comparative experiments on COVID-19 public datasets show that the average Dice similarity coefficient (DSC) and intersection over union (IOU) of the proposed SMA-Net model are 86.1% and 77.8%, respectively, which are better than those of most existing segmentation networks.
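A minimal sketch of the Tversky loss named above is given below, assuming a PyTorch binary-segmentation setup; the alpha/beta weights (penalizing false negatives more, which favors small lesions) are a common choice rather than values taken from the paper.

```python
# Hedged sketch of the Tversky loss. The alpha/beta weights are a common
# choice for small-lesion segmentation, not values from the paper.
import torch

def tversky_loss(probs: torch.Tensor, target: torch.Tensor,
                 alpha: float = 0.3, beta: float = 0.7, eps: float = 1e-7) -> torch.Tensor:
    """probs, target: tensors in [0, 1] with matching shape (binary segmentation)."""
    probs, target = probs.reshape(-1), target.reshape(-1)
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky
```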


Subjects
COVID-19; Labor, Obstetric; Pregnancy; Female; Humans; Image Processing, Computer-Assisted
20.
Sensors (Basel) ; 23(4)2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850436

ABSTRACT

Breast cancer is the most prevalent cancer in the world and the fifth-leading cause of cancer-related death. Treatment is effective in the early stages; thus, screening considerable portions of the population is crucial. When the screening procedure uncovers a suspect lesion, a biopsy is performed to assess its potential for malignancy. This procedure is usually performed using real-time ultrasound (US) imaging. This work proposes a visualization system for US breast biopsy. It consists of an application running on AR glasses that interacts with a computer application. The AR glasses track the position of QR codes mounted on a US probe and a biopsy needle. US images are shown in the user's field of view with enhanced lesion visualization and needle trajectory. To validate the system, the latency of US image transmission was evaluated. A usability assessment compared the proposed prototype with a traditional approach across different users. It showed that needle alignment was more precise, with 92.67 ± 2.32° in our prototype versus 89.99 ± 37.49° in the traditional system. Users also reached the lesion more accurately. Overall, the proposed solution presents promising results, and the use of AR glasses as a tracking and visualization device exhibited good performance.


Subjects
Augmented Reality; Female; Humans; User-Computer Interface; Ultrasonography, Mammary; Ultrasonography; Biopsy