Results 1 - 20 of 2,344
1.
Brief Bioinform ; 25(5)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39179248

ABSTRACT

Advancements in imaging technologies have revolutionized our ability to deeply profile pathological tissue architectures, generating large volumes of imaging data with unparalleled spatial resolution. This type of data collection, namely spatial proteomics, offers invaluable insights into various human diseases. Simultaneously, computational algorithms have evolved to manage the increasing dimensionality inherent in spatial proteomics data. Numerous imaging-based computational frameworks, such as computational pathology, have been proposed for research and clinical applications. However, the development of these fields demands diverse domain expertise, creating barriers to their integration and further application. This review seeks to bridge this divide by presenting a comprehensive guideline. We consolidate prevailing computational methods and outline a roadmap from image processing to data-driven, statistics-informed biomarker discovery. Additionally, we explore future perspectives as the field moves toward interfacing with other quantitative domains, holding significant promise for precision care in immuno-oncology.


Subject(s)
Computational Biology; Proteomics; Humans; Proteomics/methods; Computational Biology/methods; Biomarkers, Tumor/metabolism; Neoplasms/metabolism; Neoplasms/immunology; Algorithms; Biomarkers; Image Processing, Computer-Assisted/methods
2.
Neuroimage ; 292: 120608, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38626817

ABSTRACT

Morphological analysis and volume measurement of the hippocampus are crucial to the study of many brain diseases, so an accurate hippocampal segmentation method is beneficial for the development of clinical research in brain diseases. U-Net and its variants have become prevalent in hippocampus segmentation of Magnetic Resonance Imaging (MRI) due to their effectiveness, and Transformer-based architectures have also received some attention. However, some existing methods focus too much on the shape and volume of the hippocampus rather than its spatial information, extract features independently of one another, and ignore the correlation between local and global features. In addition, many methods cannot be effectively applied to practical medical image segmentation due to their large parameter counts and high computational complexity. To this end, we combined the advantages of CNNs and Vision Transformers (ViTs) and propose a simple and lightweight model, Light3DHS, for segmentation of the 3D hippocampus. To obtain richer local contextual features, the encoder first applies a multi-scale convolutional attention module (MCA) to learn the spatial information of the hippocampus. Considering the importance of local features and global semantics for 3D segmentation, we use a lightweight ViT to learn scale-invariant high-level features and further fuse local-to-global representations. To evaluate the effectiveness of the encoder's feature representation, we designed three decoders of different complexity to generate segmentation maps. Experiments on three common hippocampal datasets demonstrate that the network achieves more accurate hippocampus segmentation with fewer parameters, and Light3DHS performs better than other state-of-the-art algorithms.
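The abstract does not give the internal design of the MCA module, so the following is only a minimal sketch of what a multi-scale convolutional attention block for 3D features could look like in PyTorch; the kernel sizes, depthwise branches, and class name are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiScaleConvAttention3D(nn.Module):
    """Hypothetical MCA block: parallel depthwise 3D convolutions at several
    kernel sizes gather multi-scale context, which is fused into a per-voxel
    attention map that reweights the input features."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=k, padding=k // 2, groups=channels)
            for k in (3, 5, 7)
        ])
        self.fuse = nn.Conv3d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        context = torch.cat([branch(x) for branch in self.branches], dim=1)
        attention = torch.sigmoid(self.fuse(context))  # values in (0, 1)
        return x * attention                           # reweighted features

x = torch.randn(1, 16, 32, 32, 32)                     # (batch, channels, D, H, W)
print(MultiScaleConvAttention3D(16)(x).shape)          # torch.Size([1, 16, 32, 32, 32])
```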


Subject(s)
Hippocampus; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Hippocampus/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Deep Learning; Algorithms
3.
Hum Brain Mapp ; 45(13): e70014, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39230009

ABSTRACT

Pelizaeus-Merzbacher disease (PMD) is a rare childhood hypomyelinating leukodystrophy. Quantification of the pronounced myelin deficit and delineation of subtle myelination processes are of high clinical interest. Quantitative magnetic resonance imaging (qMRI) techniques can provide in vivo insights into myelination status, its spatial distribution, and its dynamics during brain maturation. They may serve as potential biomarkers to assess the efficacy of myelin-modulating therapies. However, registration techniques for image quantification and statistical comparison of affected pediatric brains, especially those of low or deviant image tissue contrast, with healthy controls are not yet established. This study aimed, first, to develop and compare postprocessing pipelines for atlas-based quantification of qMRI data in pediatric patients with PMD and evaluate their registration accuracy, and second, to apply an optimized pipeline to investigate spatial myelin deficiency using myelin water imaging (MWI) data from patients with PMD and healthy controls. This retrospective single-center study included five patients with PMD (mean age, 6 years ± 3.8) who underwent conventional brain MRI and diffusion tensor imaging (DTI), with MWI data available for a subset of patients. Three methods of registering PMD images to a pediatric template were investigated, based on (a) T1-weighted (T1w) images, (b) fractional anisotropy (FA) maps, and (c) a combination of T1w, T2-weighted, and FA images in a multimodal approach. Registration accuracy was determined by visual inspection and calculated using the structural similarity index method (SSIM). SSIM values for the registration approaches were compared using a t test. Myelin water fraction (MWF) was quantified from MWI data as an assessment of relative myelination. Mean MWF was obtained from two patients with PMD (mean age, 3.1 years ± 0.3) within four major white matter (WM) pathways of a pediatric atlas and compared to seven healthy controls (mean age, 3 years ± 0.2) using a Mann-Whitney U test. Our results show that visual registration accuracy and computed SSIM were highest for FA-based registration, followed by multimodal and T1w-based registration (SSIM_FA = 0.67 ± 0.04 vs. SSIM_multimodal = 0.60 ± 0.03 vs. SSIM_T1 = 0.40 ± 0.14). Mean MWF of patients with PMD within the WM pathways was significantly lower than in healthy controls (MWF_PMD = 0.0267 ± 0.021 vs. MWF_controls = 0.1299 ± 0.039). Specifically, MWF was measurable in brain structures known to be myelinated at birth (brainstem) or postnatally (projection fibers) but was scarcely detectable in other brain regions (commissural and association fibers). Taken together, our results indicate that registration accuracy was highest with an FA-based registration pipeline, providing an alternative to conventional T1w-based registration approaches for hypomyelinating leukodystrophies that lack normative intrinsic tissue contrasts. The applied atlas-based analysis of MWF data revealed that spatial myelin deficiency in patients with PMD was most pronounced in commissural and association pathways and less pronounced in brainstem and projection pathways.
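As a concrete illustration of the accuracy metric used above, SSIM between a registered volume and the template can be computed with scikit-image; the array shapes and values here are placeholders, not study data.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholder volumes standing in for a registered patient image and the template.
registered = np.random.rand(96, 96, 64).astype(np.float32)
template = np.random.rand(96, 96, 64).astype(np.float32)

# SSIM over the whole 3D volume; data_range must cover the intensity span.
ssim = structural_similarity(
    registered, template,
    data_range=float(template.max() - template.min()),
)
print(f"registration SSIM: {ssim:.3f}")  # higher means better structural agreement
```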


Subject(s)
Atlases as Topic; Diffusion Tensor Imaging; Myelin Sheath; Pelizaeus-Merzbacher Disease; Humans; Pelizaeus-Merzbacher Disease/diagnostic imaging; Pelizaeus-Merzbacher Disease/pathology; Male; Child; Female; Child, Preschool; Myelin Sheath/pathology; Diffusion Tensor Imaging/methods; Retrospective Studies; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Brain/diagnostic imaging; Brain/pathology; White Matter/diagnostic imaging; White Matter/pathology
4.
Neuropathol Appl Neurobiol ; 50(3): e12981, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38738494

ABSTRACT

The convergence of digital pathology and artificial intelligence could assist histopathology image analysis by providing tools for rapid, automated morphological analysis. This systematic review explores the use of artificial intelligence for histopathological image analysis of digitised central nervous system (CNS) tumour slides. Comprehensive searches were conducted across EMBASE, Medline and the Cochrane Library up to June 2023 using relevant keywords. Sixty-eight suitable studies were identified and qualitatively analysed. The risk of bias was evaluated using the Prediction model Risk of Bias Assessment Tool (PROBAST) criteria. All the studies were retrospective and preclinical. Gliomas were the most frequently analysed tumour type. The majority of studies used convolutional neural networks or support vector machines, and the most common goal of the model was for tumour classification and/or grading from haematoxylin and eosin-stained slides. The majority of studies were conducted when legacy World Health Organisation (WHO) classifications were in place, which at the time relied predominantly on histological (morphological) features but have since been superseded by molecular advances. Overall, there was a high risk of bias in all studies analysed. Persistent issues included inadequate transparency in reporting the number of patients and/or images within the model development and testing cohorts, absence of external validation, and insufficient recognition of batch effects in multi-institutional datasets. Based on these findings, we outline practical recommendations for future work including a framework for clinical implementation, in particular, better informing the artificial intelligence community of the needs of the neuropathologist.


Subject(s)
Artificial Intelligence; Central Nervous System Neoplasms; Humans; Central Nervous System Neoplasms/pathology; Image Processing, Computer-Assisted/methods
5.
Article in English | MEDLINE | ID: mdl-39060373

ABSTRACT

PURPOSE: Generating a polar map (PM) from [68Ga]Ga-DOTA-FAPI-04 PET images is challenging and inaccurate with existing automatic methods, which rely on myocardial anatomical integrity in the PET images. This study aims to enhance the accuracy of PMs generated from [68Ga]Ga-DOTA-FAPI-04 PET images and to explore the potential value of PMs in detecting reactive fibrosis after myocardial infarction and assessing its relationship with cardiac function. METHODS: We propose a deep-learning-based method that fuses multi-modality images to compensate for the cardiac structural information lost in [68Ga]Ga-DOTA-FAPI-04 PET images and accurately generates PMs. We collected 133 pairs of [68Ga]Ga-DOTA-FAPI-04 PET/MR images from 87 ST-segment elevation myocardial infarction patients for training and evaluation. Twenty-six patients were selected for longitudinal analysis to further examine the clinical value of PM-related imaging parameters. RESULTS: Quantitative comparison demonstrated that our method was comparable to the manual method and surpassed the commercially available software PMOD in the accuracy of PMs generated from [68Ga]Ga-DOTA-FAPI-04 PET images. Clinical analysis confirmed the effectiveness of the [68Ga]Ga-DOTA-FAPI-04 PET PM in detecting reactive myocardial fibrosis. Significant correlations were found between the difference between baseline PM FAPI% and PM LGE% and the change in cardiac function parameters (all p < 0.001), including LVESV% (r = 0.697), LVEDV% (r = 0.621), and LVEF% (r = -0.607). CONCLUSION: The [68Ga]Ga-DOTA-FAPI-04 PET PMs generated by our method are comparable to manually generated ones and sufficient for clinical use. They have potential value in detecting reactive fibrosis after myocardial infarction and were associated with cardiac function, suggesting the possibility of enhancing clinical diagnostic practice. TRIAL REGISTRATION: ClinicalTrials.gov (NCT04723953). Registered 26 January 2021.
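The reported associations are plain Pearson correlations; a minimal sketch of that analysis step with SciPy follows, using made-up per-patient values rather than the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up per-patient values: difference between baseline PM FAPI% and PM LGE%,
# and the longitudinal change in LVEF%.
delta_fapi_minus_lge = np.array([12.1, 8.4, 15.0, 5.2, 20.3, 9.8])
delta_lvef = np.array([-6.0, -3.1, -8.5, -1.2, -11.0, -4.4])

r, p = pearsonr(delta_fapi_minus_lge, delta_lvef)
print(f"r = {r:.3f}, p = {p:.4f}")  # the study reports r = -0.607 for LVEF%
```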

6.
J Magn Reson Imaging ; 59(4): 1438-1453, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37382232

ABSTRACT

BACKGROUND: Spine MR image segmentation is an important foundation for computer-aided diagnostic (CAD) algorithms for spine disorders. Convolutional neural networks segment effectively but require high computational costs. PURPOSE: To design a lightweight model based on a dynamic level-set loss function with high segmentation performance. STUDY TYPE: Retrospective. POPULATION: Four hundred forty-eight subjects (3163 images) from two separate datasets. Dataset-1: 276 subjects/994 images (53.26% female, mean age 49.02 ± 14.09), all screened for disc degeneration; 188 had disc degeneration, 67 had herniated discs. Dataset-2: a public dataset with 172 subjects/2169 images; 142 patients with vertebral degeneration, 163 patients with disc degeneration. FIELD STRENGTH/SEQUENCE: T2-weighted turbo spin echo sequences at 3T. ASSESSMENT: Dynamic Level-set Net (DLS-Net) was compared with four mainstream models (including U-net++) and four lightweight models; manual labels (vertebrae, discs, spinal fluid) made by five radiologists were used as the segmentation reference standard. Five-fold cross-validation was used for all experiments. Based on the segmentation, a CAD algorithm for the lumbar disc was designed to assess DLS-Net's practicality, with text annotations (normal, bulging, or herniated) from medical history data used as the evaluation standard. STATISTICAL TESTS: All segmentation models were evaluated with DSC, accuracy, precision, and AUC. The pixel counts of segmented results were compared with manual labels using paired t-tests, with P < 0.05 indicating significance. The CAD algorithm was evaluated by the accuracy of lumbar disc diagnosis. RESULTS: With only 1.48% of U-net++'s parameters, DLS-Net achieved similar accuracy in both datasets (Dataset-1: DSC 0.88 vs. 0.89, AUC 0.94 vs. 0.94; Dataset-2: DSC 0.86 vs. 0.86, AUC 0.93 vs. 0.93). The segmentation results of DLS-Net showed no significant differences from manual labels in pixel counts for discs (Dataset-1: 1603.30 vs. 1588.77, P = 0.22; Dataset-2: 863.61 vs. 886.4, P = 0.14) or vertebrae (Dataset-1: 3984.28 vs. 3961.94, P = 0.38; Dataset-2: 4806.91 vs. 4732.85, P = 0.21). Based on DLS-Net's segmentation results, the CAD algorithm achieved higher accuracy than with non-cropped MR images (87.47% vs. 61.82%). DATA CONCLUSION: The proposed DLS-Net has fewer parameters but achieves accuracy similar to U-net++ and helps the CAD algorithm achieve higher accuracy, facilitating wider application. EVIDENCE LEVEL: 2. TECHNICAL EFFICACY: Stage 1.
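The abstract does not define its dynamic level-set loss, so the sketch below shows only a generic active-contour-style loss of the kind such work typically builds on (region terms plus a contour-length term); the function name, weights, and exact form are assumptions.

```python
import torch

def active_contour_loss(pred, target, length_weight=1.0):
    """Generic level-set/active-contour style loss (an assumption, not the
    paper's exact formulation). pred is a soft mask in [0, 1], target a binary
    mask; both are (B, 1, H, W)."""
    # region terms: penalize background inside the mask and foreground outside
    region_in = (pred * (target - 1.0) ** 2).mean()
    region_out = ((1.0 - pred) * target ** 2).mean()
    # length term: total variation of the soft mask approximates contour length
    dh = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean()
    dw = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean()
    return region_in + region_out + length_weight * (dh + dw)

pred = torch.sigmoid(torch.randn(2, 1, 64, 64))        # stand-in network output
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(active_contour_loss(pred, target))
```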


Subject(s)
Image Processing, Computer-Assisted; Intervertebral Disc Degeneration; Humans; Female; Adult; Middle Aged; Male; Image Processing, Computer-Assisted/methods; Retrospective Studies; Intervertebral Disc Degeneration/diagnostic imaging; Neural Networks, Computer; Spine/diagnostic imaging
7.
Eur Radiol ; 34(10): 6940-6952, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38536464

ABSTRACT

BACKGROUND: Accurate mortality risk quantification is crucial for the management of hepatocellular carcinoma (HCC); however, most scoring systems are subjective. PURPOSE: To develop and independently validate a machine learning mortality risk quantification method for HCC patients using standard-of-care clinical data and liver radiomics on baseline magnetic resonance imaging (MRI). METHODS: This retrospective study included all patients with multiphasic contrast-enhanced MRI at the time of diagnosis treated at our institution. Patients were censored at their last date of follow-up, end of observation, or liver transplantation date. The data were randomly sampled into independent cohorts, with 85% for development and 15% for independent validation. An automated liver segmentation framework was adopted for radiomic feature extraction. A random survival forest combined clinical and radiomic variables to predict overall survival (OS), and performance was evaluated using Harrell's C-index. RESULTS: A total of 555 treatment-naïve HCC patients (mean age, 63.8 years ± 8.9 [standard deviation]; 118 females) with MRI at the time of diagnosis were included, of whom 287 (51.7%) died after a median time of 14.40 (interquartile range, 22.23) months; median follow-up was 32.47 (interquartile range, 61.5) months. The developed risk prediction framework required 1.11 min on average and yielded C-indices of 0.8503 and 0.8234 in the development and independent validation cohorts, respectively, outperforming conventional clinical staging systems. Predicted risk scores were significantly associated with OS (p < .00001 in both cohorts). CONCLUSIONS: Machine learning reliably, rapidly, and reproducibly predicts mortality risk in patients with hepatocellular carcinoma from data routinely acquired in clinical practice. CLINICAL RELEVANCE STATEMENT: Precision mortality risk prediction using routinely available standard-of-care clinical data and automated MRI radiomic features could enable personalized follow-up strategies, guide management decisions, and improve clinical workflow efficiency in tumor boards. KEY POINTS: • Machine learning enables hepatocellular carcinoma mortality risk prediction using standard-of-care clinical data and automated radiomic features from multiphasic contrast-enhanced MRI. • Automated mortality risk prediction achieved state-of-the-art performance for mortality risk quantification and outperformed conventional clinical staging systems. • Patients were stratified into low, intermediate, and high-risk groups with significantly different survival times, generalizable to an independent evaluation cohort.
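A random survival forest evaluated with Harrell's C-index, as described here, can be sketched with scikit-survival; the synthetic features, event times, and split below are placeholders for the study's clinical and radiomic variables.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                # stand-in clinical + radiomic features
time = rng.exponential(scale=30.0, size=200)  # months until death or censoring
event = rng.random(200) < 0.5                 # True where death was observed
y = Surv.from_arrays(event=event, time=time)

# 85%/15% split mirroring the paper's development/validation design.
rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X[:170], y[:170])
risk = rsf.predict(X[170:])                   # higher score = higher mortality risk
c_index = concordance_index_censored(event[170:], time[170:], risk)[0]
print(f"Harrell's C-index: {c_index:.3f}")
```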


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Machine Learning; Magnetic Resonance Imaging; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/mortality; Female; Male; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/mortality; Middle Aged; Retrospective Studies; Prognosis; Magnetic Resonance Imaging/methods; Contrast Media; Aged; Risk Assessment/methods
8.
Methods ; 218: 149-157, 2023 10.
Article in English | MEDLINE | ID: mdl-37572767

ABSTRACT

Deep convolutional neural networks (DCNNs) have shown remarkable performance in medical image segmentation tasks. However, medical images frequently exhibit distribution discrepancies due to variations in scanner vendors, operators, and image quality, which pose significant challenges to the robustness of trained models when applied to unseen clinical data. To address this issue, domain generalization methods have been developed to enhance the generalization ability of DCNNs. Feature space-based data augmentation methods have been proven effective in improving domain generalization, but they often rely on prior knowledge or assumptions, which can limit the diversity of source domain data. In this study, we propose a novel random feature augmentation (RFA) method to diversify source domain data at the feature level without prior knowledge. Specifically, our RFA method perturbs domain-specific information while preserving domain-invariant information, thereby adequately diversifying the source domain data. Furthermore, we propose a dual-branches invariant synergistic learning strategy to capture domain-invariant information from the augmented features of RFA, enabling DCNNs to learn a more generalized representation. We evaluate our proposed method on two challenging medical image segmentation tasks, optic cup/disc segmentation on fundus images and prostate segmentation on MRI images. Extensive experimental results demonstrate the superior performance of our method over state-of-the-art domain generalization methods.
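The abstract does not spell out how RFA perturbs features; a common way to realize "perturb domain-specific information while preserving domain-invariant information" is to randomize channel-wise feature statistics (style) while keeping the normalized content, as sketched below. The perturbation scale and form are assumptions, not the paper's method.

```python
import torch

def random_feature_augmentation(feat, scale=0.3, eps=1e-6):
    """One common realization of style-level augmentation: channel statistics
    (treated as domain-specific) are randomly perturbed while the normalized
    content (treated as domain-invariant) is preserved. feat: (B, C, H, W)."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps
    content = (feat - mu) / sigma                      # domain-invariant part
    new_mu = mu * (1.0 + scale * torch.randn_like(mu))
    new_sigma = sigma * (1.0 + scale * torch.randn_like(sigma))
    return content * new_sigma + new_mu                # re-styled features

feat = torch.randn(4, 64, 32, 32)
print(random_feature_augmentation(feat).shape)         # torch.Size([4, 64, 32, 32])
```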


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Male; Humans
9.
J Biomed Inform ; 150: 104583, 2024 02.
Article in English | MEDLINE | ID: mdl-38191010

ABSTRACT

OBJECTIVE: The primary objective of our study is to address the challenge of confidentially sharing medical images across different centers. This is often a critical necessity in both clinical and research environments, yet restrictions typically exist due to privacy concerns. Our aim is to design a privacy-preserving data-sharing mechanism that allows medical images to be stored as encoded and obfuscated representations in the public domain without revealing any useful or recoverable content from the images. In tandem, we aim to provide authorized users with compact private keys that could be used to reconstruct the corresponding images. METHOD: Our approach involves utilizing a neural auto-encoder. The convolutional filter outputs are passed through sparsifying transformations to produce multiple compact codes. Each code is responsible for reconstructing different attributes of the image. The key privacy-preserving element in this process is obfuscation through the use of specific pseudo-random noise. When applied to the codes, it becomes computationally infeasible for an attacker to guess the correct representation for all the codes, thereby preserving the privacy of the images. RESULTS: The proposed framework was implemented and evaluated using chest X-ray images for different medical image analysis tasks, including classification, segmentation, and texture analysis. Additionally, we thoroughly assessed the robustness of our method against various attacks using both supervised and unsupervised algorithms. CONCLUSION: This study provides a novel, optimized, and privacy-assured data-sharing mechanism for medical images, enabling multi-party sharing in a secure manner. While we have demonstrated its effectiveness with chest X-ray images, the mechanism can be utilized in other medical image modalities as well.
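To make the key-based obfuscation idea concrete, here is a deliberately simplified toy: pseudo-random noise derived from a private key is added to the compact codes before public storage, and only a key holder can regenerate and subtract it. The real system's sparsifying transforms and noise construction are more involved; everything below is illustrative.

```python
import numpy as np

def obfuscate(codes, key):
    """Add pseudo-random noise derived from a private key before public storage."""
    noise = np.random.default_rng(key).standard_normal(codes.shape)
    return codes + noise

def recover(public_codes, key):
    """Only a holder of the key can regenerate and subtract the noise."""
    noise = np.random.default_rng(key).standard_normal(public_codes.shape)
    return public_codes - noise

codes = np.random.rand(4, 256)          # compact codes from the auto-encoder
public = obfuscate(codes, key=12345)    # stored in the public domain
assert np.allclose(recover(public, key=12345), codes)
```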


Subject(s)
Algorithms; Privacy; Information Dissemination
10.
Biomed Eng Online ; 23(1): 14, 2024 Feb 03.
Article in English | MEDLINE | ID: mdl-38310297

ABSTRACT

PURPOSE: Convolution operator-based neural networks have shown great success in medical image segmentation over the past decade. The U-shaped network with a codec (encoder-decoder) structure is one of the most widely used models. Transformer, a technology originating in natural language processing, can capture long-distance dependencies and has been applied in Vision Transformer to achieve state-of-the-art performance on image classification tasks. Recently, researchers have extended the transformer to medical image segmentation tasks with promising results. METHODS: This review comprises publications selected through a Web of Science search. We focused on papers published since 2018 that applied the transformer architecture to medical image segmentation. We conducted a systematic analysis of these studies and summarized the results. RESULTS: To clarify the respective benefits of convolutional neural networks and transformers, the construction of the codec and transformer modules is first explained. Second, transformer-based medical image segmentation models are summarized. The commonly used assessment metrics for medical image segmentation tasks are then listed. Finally, a large number of medical segmentation datasets are described. CONCLUSION: Although pure transformer models without any convolution operator exist, the limited sample sizes typical of medical image segmentation still restrict transformer performance, even if pretrained models can partly relieve this. More often than not, researchers still design models that combine transformer and convolution operators.
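For readers unfamiliar with the transformer module that the review contrasts with convolutional codecs, a minimal encoder block over flattened feature-map tokens looks like the following PyTorch sketch; the dimensions and pre-norm layout are illustrative conventions, not taken from any reviewed paper.

```python
import torch
import torch.nn as nn

class TransformerEncoderBlock(nn.Module):
    """Minimal pre-norm transformer encoder block over flattened feature tokens:
    self-attention captures the long-distance dependencies that plain
    convolutions in a codec miss."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tokens):                     # tokens: (B, N, dim)
        h = self.norm1(tokens)
        tokens = tokens + self.attn(h, h, h, need_weights=False)[0]
        return tokens + self.mlp(self.norm2(tokens))

tokens = torch.randn(2, 196, 256)                  # e.g. a 14x14 feature map as tokens
print(TransformerEncoderBlock()(tokens).shape)     # torch.Size([2, 196, 256])
```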


Subject(s)
Natural Language Processing; Neural Networks, Computer; Technology; Image Processing, Computer-Assisted
11.
Biomed Eng Online ; 23(1): 39, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38566181

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) is one of the most common birth defects in the world and a leading cause of infant mortality, necessitating early diagnosis for timely intervention. Prenatal screening using ultrasound is the primary method for CHD detection. However, its effectiveness is heavily reliant on the expertise of physicians, leading to subjective interpretations and potential underdiagnosis. A method for automatic analysis of fetal cardiac ultrasound images is therefore highly desirable to support objective and effective CHD diagnosis. METHOD: In this study, we propose a deep learning-based framework for identifying and segmenting the three vessels (the pulmonary artery (PA), aorta (Ao), and superior vena cava (SVC)) in the ultrasound three-vessel view (3VV) of the fetal heart. In the first stage of the framework, the object detection model Yolov5 identifies the three vessels and localizes the region of interest (ROI) within the original full-sized ultrasound images. In the second stage, a modified Deeplabv3 equipped with our novel AMFF (Attentional Multi-scale Feature Fusion) module segments the three vessels within the cropped ROI images. RESULTS: We evaluated our method on a dataset of 511 fetal heart 3VV images. Compared to existing models, our framework exhibits superior performance in segmenting all three vessels, achieving Dice coefficients of 85.55%, 89.12%, and 77.54% for the PA, Ao, and SVC, respectively. CONCLUSIONS: Our experimental results show that the proposed framework can automatically and accurately detect and segment the three vessels in fetal heart 3VV images. This method has the potential to assist sonographers in enhancing the precision of vessel assessment during fetal heart examinations.
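The detect-then-segment flow can be sketched as below; the off-the-shelf YOLOv5 hub weights and the stock DeepLabv3 stand in for the paper's fine-tuned detector and AMFF-modified segmenter, so every model choice here is a placeholder, not the authors' trained pipeline.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Stage 1: YOLOv5 detector (COCO weights as placeholders; the paper trains on 3VV images).
detector = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
# Stage 2: DeepLabv3 segmenter with 4 classes (background, PA, Ao, SVC);
# the paper's AMFF module is not public, so a stock model stands in.
segmenter = deeplabv3_resnet50(num_classes=4).eval()

def detect_then_segment(image):
    """image: HxWx3 uint8 numpy array of the full ultrasound frame."""
    det = detector(image)                               # detections in xyxy format
    x1, y1, x2, y2 = det.xyxy[0][0, :4].int().tolist()  # assumes >= 1 detection
    roi = image[y1:y2, x1:x2]                           # crop the three-vessel ROI
    t = torch.from_numpy(roi).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        return segmenter(t)['out'].argmax(1)            # per-pixel vessel labels
```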


Subject(s)
Deep Learning; Pregnancy; Female; Humans; Vena Cava, Superior; Ultrasonography; Ultrasonography, Prenatal/methods; Fetal Heart/diagnostic imaging; Image Processing, Computer-Assisted/methods
12.
BMC Med Imaging ; 24(1): 38, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38331800

ABSTRACT

Deep learning has recently achieved major advances in the segmentation of medical images. In this regard, U-Net is the predominant deep neural network, and its architecture is the most prevalent in the medical imaging community. Experiments on difficult datasets led us to conclude that the traditional U-Net framework, despite its overall excellence in segmenting multimodal medical images, is deficient in certain respects. We therefore propose several modifications to the existing state-of-the-art U-Net model, applying a multi-dimensional U-convolutional neural network to achieve accurate segmentation of multimodal biomedical images and to improve precision and comprehensiveness in identifying and analyzing structures across diverse imaging modalities. The resulting framework, the Multi-Dimensional U-Convolutional Neural Network (MDU-CNN), is a potential successor to U-Net. On a large set of multimodal medical images, we compared MDU-CNN to the classical U-Net: changes are small on clean images, while a large improvement is obtained on difficult images. We tested our model on five distinct datasets, each presenting unique challenges, and found that it improved performance by 1.32%, 5.19%, 4.50%, 10.23%, and 0.87%, respectively.


Subject(s)
Neural Networks, Computer; Societies, Medical; Humans; Image Processing, Computer-Assisted
13.
BMC Med Imaging ; 24(1): 47, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38373915

ABSTRACT

BACKGROUND: Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) plays an important role in the diagnosis and treatment of breast cancer. However, obtaining all eight temporal images of DCE-MRI requires a long scanning time, which causes patient discomfort. To reduce this time, we propose the multi-temporal feature fusing neural network with co-attention (MTFN), which generates the eighth temporal image of DCE-MRI without the corresponding scan. METHODS: In MTFN, the co-attention module fully fuses the features of the first and third temporal images to obtain hybrid features. Co-attention explores long-range dependencies rather than only relationships between pixels, so the hybrid features are more helpful for generating the eighth temporal image. RESULTS: We conducted experiments on a private breast DCE-MRI dataset from hospitals and on the multimodal Brain Tumor Segmentation Challenge 2018 dataset (BraTS2018). Compared with existing methods, our method improves on prior results and generates more realistic images. We also used the synthetic images to classify breast cancer molecular subtypes: accuracy was 89.53% on the original eighth temporal images and 92.46% on the generated images, an improvement of about 3%, verifying the practicability of the synthetic images. CONCLUSIONS: Subjective evaluation and objective image quality metrics show the effectiveness of our method, which captures comprehensive and useful information. The improvement in classification accuracy demonstrates that the images generated by our method are practical.
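Co-attention between two temporal feature maps can be approximated with bidirectional cross-attention; the sketch below uses PyTorch's multi-head attention over flattened feature tokens, and the dimensions, class name, and merge layer are assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Hypothetical co-attention fusion: tokens from the first temporal image
    attend to the third and vice versa, capturing long-range dependencies
    rather than pixel-to-pixel relations; both directions are merged into the
    hybrid features used to synthesize the eighth temporal image."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.t1_queries_t3 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t3_queries_t1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, f1, f3):                     # (B, N, dim) token features
        a13 = self.t1_queries_t3(f1, f3, f3, need_weights=False)[0]
        a31 = self.t3_queries_t1(f3, f1, f1, need_weights=False)[0]
        return self.merge(torch.cat([a13, a31], dim=-1))

f1, f3 = torch.randn(1, 256, 128), torch.randn(1, 256, 128)
print(CoAttentionFusion()(f1, f3).shape)           # torch.Size([1, 256, 128])
```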


Subject(s)
Algorithms; Breast Neoplasms; Humans; Female; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Breast/pathology; Breast Neoplasms/pathology; Image Processing, Computer-Assisted
14.
BMC Med Imaging ; 24(1): 24, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38267874

ABSTRACT

With the rapid development of medical imaging and computer technology, machine learning-based computer-aided diagnosis has become an important part of modern medical diagnosis. The application of medical image security technology has also made clear that its development is constrained by inherent limitations of advanced image processing techniques. This paper introduces the background of colorectal cancer diagnosis and monitoring, reviews artificial intelligence and machine learning methods for this task, and concludes with a discussion of advanced computational intelligence systems for secure medical imaging. In the experimental part, a staging preparation analysis was conducted: the staging preparation measure of group Y was higher than that of group X, and the difference was statistically significant. Pathological staging comparison showed an overall multimodal medical image fusion accuracy of 69.5%. Finally, the diagnostic rate, the number of effectively treated patients, and patient satisfaction were analyzed; the average diagnostic rate of the new method was 8.75% higher than that of the traditional diagnostic method. As computer science and technology develop, the range of applications continues to expand, and computer-aided diagnosis combining computing and medical images has become a research hotspot.


Subject(s)
Artificial Intelligence; Colorectal Neoplasms; Humans; Machine Learning; Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted; Colorectal Neoplasms/diagnostic imaging
15.
BMC Med Imaging ; 24(1): 271, 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39385108

ABSTRACT

BACKGROUND: The cost of labeling training datasets for deep learning is especially high in medical applications compared to other fields. Furthermore, because images vary across computed tomography (CT) devices, a deep learning-based segmentation model trained on one device often does not work on images from a different device. METHODS: In this study, we propose an efficient learning strategy for deep learning models in medical image segmentation. We aim to overcome the difficulties of CT image segmentation by training a VNet segmentation model via transfer learning from a small number of manually labeled images, called SEED images, enabling rapid labeling of organs in CT images. We established a process for generating SEED images and conducting transfer learning. We evaluate the performance of several segmentation models: vanilla UNet, UNETR, Swin-UNETR, and VNet. Furthermore, assuming a scenario in which a model is repeatedly trained with CT images collected from multiple devices, where catastrophic forgetting often occurs, we examine whether the performance of our model degrades. RESULTS: We show that transfer learning can train a model that segments muscles well with a small number of images. In addition, VNet showed better performance than existing semi-automated segmentation tools and other deep learning networks on muscle and liver segmentation tasks. VNet was also the most robust model against catastrophic forgetting. CONCLUSION: In the 2D CT image segmentation task, we confirmed that the CNN-based network performs better than existing semi-automatic segmentation tools and recent transformer-based networks.
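A minimal sketch of the transfer-learning step using MONAI's VNet follows; the checkpoint path, learning rate, patch size, and loss choice are assumptions, since the paper only states that a pretrained model is fine-tuned on a few manually labeled images from the new device.

```python
import torch
from monai.losses import DiceLoss
from monai.networks.nets import VNet

# Model pretrained on SEED images (the checkpoint path is hypothetical).
model = VNet(spatial_dims=3, in_channels=1, out_channels=2)
model.load_state_dict(torch.load("vnet_seed_pretrained.pt"))

# Fine-tune on a handful of labeled volumes from the new CT device;
# a small learning rate helps retain the pretrained knowledge.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

volume = torch.randn(1, 1, 64, 96, 96)             # stand-in CT patch
label = torch.randint(0, 2, (1, 1, 64, 96, 96)).float()
optimizer.zero_grad()
loss = loss_fn(model(volume), label)
loss.backward()
optimizer.step()
```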


Subject(s)
Deep Learning; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Muscle, Skeletal/diagnostic imaging; Image Processing, Computer-Assisted/methods
16.
BMC Med Imaging ; 24(1): 107, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734629

ABSTRACT

This study addresses the critical challenge of detecting brain tumors using MRI images, a pivotal task in medical diagnostics that demands high accuracy and interpretability. While deep learning has shown remarkable success in medical image analysis, there remains a substantial need for models that are not only accurate but also interpretable to healthcare professionals. The existing methodologies, predominantly deep learning-based, often act as black boxes, providing little insight into their decision-making process. This research introduces an integrated approach using ResNet50, a deep learning model, combined with Gradient-weighted Class Activation Mapping (Grad-CAM) to offer a transparent and explainable framework for brain tumor detection. We employed a dataset of MRI images, enhanced through data augmentation, to train and validate our model. The results demonstrate a significant improvement in model performance, with a testing accuracy of 98.52% and precision-recall metrics exceeding 98%, showcasing the model's effectiveness in distinguishing tumor presence. The application of Grad-CAM provides insightful visual explanations, illustrating the model's focus areas in making predictions. This fusion of high accuracy and explainability holds profound implications for medical diagnostics, offering a pathway towards more reliable and interpretable brain tumor detection tools.
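Grad-CAM on a ResNet50, as used here, can be reproduced with two hooks: pooled gradients of the target-class logit weight the last convolutional feature maps. The sketch below uses random weights and a random input as stand-ins for the trained model and MRI slice.

```python
import torch
from torchvision.models import resnet50

model = resnet50(num_classes=2).eval()    # tumor vs. no tumor; untrained stand-in
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)           # stand-in MRI slice
score = model(x)[0, 1]                    # logit of the "tumor" class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
cam = torch.relu((weights * feats["a"]).sum(dim=1))  # coarse heatmap, shape (1, 7, 7)
cam = cam / (cam.max() + 1e-8)            # normalize before overlaying on the image
```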


Subject(s)
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods
17.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
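The federated aggregation step such work relies on is typically FedAvg-style weight averaging; a minimal server-side sketch follows, taking client state dicts and local dataset sizes as its only inputs. The weighting scheme and round structure are assumptions, as the abstract does not detail them.

```python
import copy
import torch

def fedavg(client_states, client_sizes):
    """Server-side FedAvg: average client weights, weighted by local dataset
    size, so raw MRI data never leaves each site. Integer buffers (e.g.
    BatchNorm counters) are averaged as floats for simplicity."""
    total = float(sum(client_sizes))
    averaged = copy.deepcopy(client_states[0])
    for key in averaged:
        averaged[key] = sum(
            state[key].float() * (size / total)
            for state, size in zip(client_states, client_sizes))
    return averaged

# Usage sketch: global_model.load_state_dict(
#     fedavg([m.state_dict() for m in client_models], [420, 380, 510]))
```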


Subject(s)
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/classification; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Machine Learning; Image Interpretation, Computer-Assisted/methods
18.
BMC Ophthalmol ; 24(1): 98, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438876

ABSTRACT

Image segmentation is a fundamental task in deep learning that analyzes image content to support downstream applications. However, for supervised segmentation methods, collecting pixel-level labels is very time-consuming and labour-intensive. In medical image processing for optic disc and cup segmentation, we consider two challenging problems that remain unsolved. One is how to design an efficient network that captures the global field of the medical image and runs fast in real applications. The other is how to train a deep segmentation network with little training data, given medical privacy concerns. In this paper, to address these issues, we first design a novel attention-aware segmentation model equipped with a multi-scale attention module in a pyramid-structured encoder-decoder network, which can efficiently learn the global semantics and long-range dependencies of the input images. Furthermore, we inject the prior knowledge that the optic cup lies inside the optic disc through a novel loss function. We then propose a self-supervised contrastive learning method for optic disc and cup segmentation. The unsupervised feature representation is learned by matching an encoded query to a dictionary of encoded keys using a contrastive technique. Fine-tuning the pretrained model with the proposed loss function achieves good performance on the task. Extensive systematic evaluations on challenging public optic disc and cup benchmarks, including the DRISHTI-GS and REFUGE datasets, demonstrate the superiority of the proposed method, which achieves new state-of-the-art performance with F1 scores approaching 0.9801 and 0.9087, respectively, and Dice coefficients of 0.9657 (disc) and 0.8976 (cup). The code will be made publicly available.
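Matching an encoded query against a dictionary of encoded keys is the InfoNCE objective popularized by MoCo; a sketch follows, where the temperature and tensor shapes are conventional defaults rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(query, positive_key, key_dictionary, temperature=0.07):
    """MoCo-style contrastive objective: each encoded query must match its
    positive key against a dictionary (queue) of negative keys."""
    q = F.normalize(query, dim=1)                # (B, D)
    k = F.normalize(positive_key, dim=1)         # (B, D)
    queue = F.normalize(key_dictionary, dim=1)   # (K, D)
    l_pos = (q * k).sum(dim=1, keepdim=True)     # similarity to the positive
    l_neg = q @ queue.t()                        # similarities to negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128), torch.randn(4096, 128))
```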


Subject(s)
Optic Disk; Humans; Optic Disk/diagnostic imaging; Awareness; Benchmarking; Image Processing, Computer-Assisted; Attention
19.
Skin Res Technol ; 30(9): e70050, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39246259

ABSTRACT

BACKGROUND: AI-based medical image analysis shows potential for research on premature aging and skin. The purpose of this study was to use artificial intelligence medical image analysis to explore the mechanism by which Zuogui pills (ZGP) enhance ovarian function and repair skin elasticity in rats with premature aging. MATERIALS AND METHODS: A premature aging rat model was established. Zuogui pills were then administered to the prematurely aging animals, and images were acquired with an optical microscope. The image data were then analyzed by artificial intelligence medical image analysis to evaluate indicators of ovarian function. RESULTS: Optical microscope image analysis showed that Zuogui pills played an active role in repairing ovarian tissue structure and increasing the number of follicles, and also significantly increased blood progesterone levels. CONCLUSION: Most of the ZGP-induced outcomes were significantly dose-dependent.


Subject(s)
Aging, Premature; Artificial Intelligence; Drugs, Chinese Herbal; Animals; Female; Rats; Drugs, Chinese Herbal/pharmacology; Drugs, Chinese Herbal/administration & dosage; Mice; Ovary/drug effects; Ovary/diagnostic imaging; Rats, Sprague-Dawley; Skin Aging/drug effects; Disease Models, Animal; Skin/drug effects; Skin/diagnostic imaging; Elasticity/drug effects; Progesterone/blood; Progesterone/pharmacology; Image Processing, Computer-Assisted/methods
20.
Article in English | MEDLINE | ID: mdl-38555550

ABSTRACT

Self-monitoring is essential for effectively regulating learning, but difficult in visual diagnostic tasks such as radiograph interpretation. Eye-tracking technology can visualize viewing behavior in gaze displays, thereby providing information about visual search and decision-making. We hypothesized that individually adaptive gaze-display feedback improves posttest performance and self-monitoring of medical students who learn to detect nodules in radiographs. We investigated the effects of: (1) Search displays, showing which part of the image was searched by the participant; and (2) Decision displays, showing which parts of the image received prolonged attention in 78 medical students. After a pretest and instruction, participants practiced identifying nodules in 16 cases under search-display, decision-display, or no feedback conditions (n = 26 per condition). A 10-case posttest, without feedback, was administered to assess learning outcomes. After each case, participants provided self-monitoring and confidence judgments. Afterward, participants reported on self-efficacy, perceived competence, feedback use, and perceived usefulness of the feedback. Bayesian analyses showed no benefits of gaze displays for post-test performance, monitoring accuracy (absolute difference between participants' estimated and their actual test performance), completeness of viewing behavior, self-efficacy, and perceived competence. Participants receiving search-displays reported greater feedback utilization than participants receiving decision-displays, and also found the feedback more useful when the gaze data displayed was precise and accurate. As the completeness of search was not related to posttest performance, search displays might not have been sufficiently informative to improve self-monitoring. Information from decision displays was rarely used to inform self-monitoring. Further research should address if and when gaze displays can support learning.
