Results 1 - 20 of 44
1.
Stud Health Technol Inform ; 316: 1145-1150, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176583

ABSTRACT

Advances in general-purpose computing have enabled the generation of high-quality synthetic medical images that the human eye cannot distinguish from real ones. To analyse the efficacy of these generated medical images, this study proposed a modified VGG16-based algorithm to recognise AI-generated medical images. Initially, 10,000 synthetic medical skin lesion images were generated using a Generative Adversarial Network (GAN), providing a set of images for comparison against real images. Then, an enhanced VGG16-based algorithm was developed to classify real versus AI-generated images. Following hyperparameter tuning and training, the optimal approach classified the images with 99.82% accuracy. Multiple other metrics were used to evaluate the efficacy of the proposed network. The complete dataset used in this study is available online to the research community for future research.


Subject(s)
Deep Learning; Humans; Algorithms; Skin Diseases/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Skin Neoplasms/diagnostic imaging
2.
SLAS Technol ; : 100178, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39159747

ABSTRACT

PCOS is thought to be associated with metabolic, endocrine, and reproductive system disorders. By collecting relevant literature and conducting meta-analyses, we integrated data from multiple studies to enhance the reliability of the analysis results. Studies with medical image data were selected to ensure accuracy and credibility. A statistical framework was employed to examine biodiversity indicators associated with the gut microbiota. The findings provide robust support for the notion that PCOS is intricately linked to notable alterations within the gut microbial community. The statistical approach and systematic synthesis of research findings in this meta-analysis contribute to a more comprehensive understanding of the substantial impact of PCOS on the gut microbiota landscape. PCOS patients showed significant changes in the relative abundance of certain bacteria in their gut microbiota. This imbalance destabilizes the intestinal microecological environment, which in turn affects overall health.
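As an illustration of the biodiversity indicators mentioned above, a widely used gut-microbiota diversity measure is the Shannon index; the sketch below is a generic illustration, not code or data from the study:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# A perfectly even community of four taxa has maximal diversity, H' = ln(4).
even = shannon_index([25, 25, 25, 25])

# A dysbiotic community dominated by one taxon has lower diversity.
skewed = shannon_index([85, 5, 5, 5])
```

Lower values of the index for patient samples relative to controls are the kind of shift such meta-analyses aggregate.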

3.
Sci Rep ; 14(1): 19261, 2024 08 20.
Article in English | MEDLINE | ID: mdl-39164350

ABSTRACT

Medical image fusion (MIF) techniques are proficient in combining medical images of distinct morphologies to obtain a reliable medical analysis. A single-modality image cannot offer adequate data for an accurate analysis. Therefore, a novel multimodal MIF-based artificial intelligence (AI) method is presented. MIF approaches fuse multimodal medical images for exact and reliable medical recognition; multimodal MIF improves diagnostic accuracy and clinical decision-making by combining complementary data from different imaging modalities. This article presents a new multimodal medical image fusion model utilizing a modified DWT with an Arithmetic Optimization Algorithm (MMIF-MDWTAOA). The MMIF-MDWTAOA approach aims to generate a fused image carrying the significant details and features of each modality, yielding a detailed depiction for precise interpretation by medical experts. Bilateral filtering (BF) is first employed for noise elimination. Next, the image decomposition process uses a modified discrete wavelet transform (MDWT), in which the approximation coefficients of modality_1 and the detail coefficients of modality_2 can be fused interchangeably. Furthermore, a fusion rule is derived for combining the multimodality data, and the AOA model is enforced to ensure the optimal selection of the fusion rule parameters. A sequence of simulations was conducted to validate the enhanced output of the MMIF-MDWTAOA technique, which showed the highest entropy values of 7.568 and 7.741 bits/pixel over other approaches.
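The interchangeable-coefficient fusion rule described above can be sketched with a one-level 1-D Haar transform, used here as a minimal stand-in for the paper's modified DWT (the 1-D setting and function names are illustrative assumptions, not the authors' implementation):

```python
def haar_decompose(signal):
    # One-level Haar DWT: pairwise averages (approximation) and differences (detail).
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    # Inverse one-level Haar DWT.
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def fuse(signal_1, signal_2):
    # Fusion rule sketch: take approximation coefficients from modality 1
    # and detail coefficients from modality 2, then invert the transform.
    a1, _ = haar_decompose(signal_1)
    _, d2 = haar_decompose(signal_2)
    return haar_reconstruct(a1, d2)
```

The fused output inherits the coarse structure of modality 1 and the fine variations of modality 2, which is the intuition behind swapping coefficient sets between modalities.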


Subject(s)
Algorithms; Multimodal Imaging; Wavelet Analysis; Humans; Multimodal Imaging/methods; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Artificial Intelligence; Tomography, X-Ray Computed/methods
4.
Front Bioeng Biotechnol ; 12: 1392807, 2024.
Article in English | MEDLINE | ID: mdl-39104626

ABSTRACT

Radiologists encounter significant challenges when segmenting and characterizing brain tumors in patients, as this information assists in treatment planning. The utilization of artificial intelligence (AI), especially deep learning (DL), has emerged as a useful tool in healthcare, aiding radiologists in their diagnostic processes. This empowers radiologists to better understand tumor biology and provide personalized care to patients with brain tumors. The segmentation of brain tumors using multi-modal magnetic resonance imaging (MRI) has received considerable attention. In this survey, we first discuss the available MRI modalities and their properties. Subsequently, we discuss the most recent DL-based models for brain tumor segmentation using multi-modal MRI. We divide this section into three parts based on architecture: models built on a convolutional neural network (CNN) backbone, vision-transformer-based models, and hybrid models that combine convolutional and transformer components. In addition, an in-depth statistical analysis is performed of recent publications, frequently used datasets, and evaluation metrics for segmentation tasks. Finally, open research challenges are identified and promising future directions are suggested for brain tumor segmentation, to improve diagnostic accuracy and treatment outcomes for patients with brain tumors. This aligns with public health goals of using health technologies for better healthcare delivery and population health management.

5.
IBRO Neurosci Rep ; 16: 57-66, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39007088

ABSTRACT

Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study we developed a neuroimaging synthesis technique to augment data for training fully-convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ada to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to real training data (n = 2751) in fourteen rounds of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test sets (n = 588). U-nets were trained with and without geometric augmentation (translation, zoom and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, total training time, and time per iteration to evaluate computational costs associated with training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation and test set performances of 0.01 with, and 0.04 without geometric augmentation). Based on the modest performance gains for automatic glioma segmentation we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
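The Dice coefficient used above to evaluate segmentation performance has a simple closed form; a minimal sketch over flat binary masks (illustrative, not the study's code):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) over binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    # By convention, two empty masks count as a perfect match.
    return 2 * intersection / denom if denom else 1.0
```

A gain of +0.04, as reported for the synthetic augmentation, corresponds to a modest increase in this overlap score.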

6.
Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066145

ABSTRACT

Pancreatic cancer is a highly lethal disease with a poor prognosis. Its early diagnosis and accurate treatment mainly rely on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, medical image analysis of pancreatic cancer is facing challenges due to ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel's workload, improving clinical decision-making, and reducing patient costs. This study focuses on AI applications such as segmentation, classification, object detection, and prognosis prediction across five types of medical imaging: CT, MRI, EUS, PET, and pathological images, as well as integrating these imaging modalities to boost diagnostic accuracy and treatment efficiency. In addition, this study discusses current hot topics and future directions aimed at overcoming the challenges in AI-enabled automated pancreatic cancer diagnosis algorithms.


Subject(s)
Algorithms; Artificial Intelligence; Pancreatic Neoplasms; Humans; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Neoplasms/diagnosis; Pancreatic Neoplasms/pathology; Pancreas/diagnostic imaging; Pancreas/pathology; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods
7.
Sci Rep ; 14(1): 17320, 2024 07 27.
Article in English | MEDLINE | ID: mdl-39068181

ABSTRACT

The paper addresses the authenticity and copyright of medical images in telemedicine applications, with a specific emphasis on watermarking methods. Some systems concentrate only on identifying tampering in medical images, while others can also restore the tampered regions upon detection. Although several authentication techniques in medical imaging have achieved their goals, previous research underscores a notable deficiency: the resilience of these schemes against unintentional attacks has not been sufficiently examined. This indicates the need for further work on improving the robustness of medical image authentication techniques against unintentional attacks. This research proposes a Reversible-Zero Watermarking approach to address these problems, merging the advantages of both reversible and zero watermarking. The system comprises two parts. The first is a zero-watermarking technique that uses VGG19-based feature extraction and watermark information to establish an ownership share. The second embeds this ownership share into the image reversibly using a combination of a discrete wavelet transform, an integer wavelet transform, and difference expansion. The findings confirm that the suggested watermarking approach for medical images demonstrates substantial enhancements over current methodologies: NC values are typically around 0.9 under different attacks, whereas BER values are close to 0. The scheme is imperceptible, distinguishable, and robust. Additionally, it provides a persistent verification feature that functions independently of disputes or third-party storage, making it well suited to medical image watermarking.
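The difference-expansion step of the reversible stage can be sketched with the classic pixel-pair formulation; overflow/underflow handling and the wavelet stages are omitted, so this is an illustrative sketch rather than the authors' implementation:

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair via difference expansion (overflow checks omitted)."""
    l = (x + y) // 2          # integer average of the pair
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carries the bit in its LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pixel pair exactly."""
    h2 = x2 - y2
    bit = h2 % 2              # the hidden bit is the LSB of the expanded difference
    h = h2 // 2               # floor division matches the embedding formulas
    l = (x2 + y2) // 2        # the integer average is preserved by the embedding
    return (l + (h + 1) // 2, l - h // 2), bit
```

The round trip is exact, which is what makes the stage reversible: the watermarked image can be restored bit-for-bit after extraction.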


Subject(s)
Computer Security; Humans; Diagnostic Imaging/methods; Algorithms; Telemedicine; Image Processing, Computer-Assisted/methods; Wavelet Analysis
8.
Front Comput Neurosci ; 18: 1418546, 2024.
Article in English | MEDLINE | ID: mdl-38933391

ABSTRACT

Background: The necessity of prompt and accurate brain tumor diagnosis is unquestionable for optimizing treatment strategies and patient prognoses. Traditional reliance on Magnetic Resonance Imaging (MRI) analysis, contingent upon expert interpretation, grapples with challenges such as time-intensive processes and susceptibility to human error. Objective: This research presents a novel Convolutional Neural Network (CNN) architecture designed to enhance the accuracy and efficiency of brain tumor detection in MRI scans. Methods: The dataset used in the study comprises 7,023 brain MRI images from figshare, SARTAJ, and Br35H, categorized into glioma, meningioma, no tumor, and pituitary classes, with a CNN-based multi-task classification model employed for tumor detection, classification, and location identification. Our methodology focused on multi-task classification using a single CNN model for various brain MRI classification tasks, including tumor detection, classification based on grade and type, and tumor location identification. Results: The proposed CNN model incorporates advanced feature extraction capabilities and deep learning optimization techniques, culminating in a groundbreaking paradigm shift in automated brain MRI analysis. With an exceptional tumor classification accuracy of 99%, our method surpasses current methodologies, demonstrating the remarkable potential of deep learning in medical applications. Conclusion: This study represents a significant advancement in the early detection and treatment planning of brain tumors, offering a more efficient and accurate alternative to traditional MRI analysis methods.

9.
Front Artif Intell ; 7: 1375474, 2024.
Article in English | MEDLINE | ID: mdl-38881952

ABSTRACT

Background: The most common assisted reproductive technology is in-vitro fertilization (IVF). During IVF, embryologists commonly perform a morphological assessment to evaluate embryo quality and choose the best embryo for transfer to the uterus. However, embryo selection through morphological assessment is subjective, so different embryologists reach different conclusions. Furthermore, humans can consider only a limited number of visual parameters, contributing to a poor IVF success rate. Artificial intelligence (AI) for embryo selection is objective and can include many parameters, leading to better IVF outcomes. Objectives: This study sought to use AI to (1) predict pregnancy results based on embryo images, (2) assess using more than one image of the embryo in the prediction of pregnancy, based on the current process in IVF labs, and (3) compare the results of AI-based methods and expert embryologists in predicting pregnancy. Methods: A dataset of 252 time-lapse videos of embryos from IVF procedures performed between 2017 and 2020 was collected. Frames at 19 ± 1, 43 ± 1, and 67 ± 1 h post-insemination were extracted. Well-known CNN architectures with transfer learning were applied to these images, and the results were compared both with an algorithm that uses only the final embryo image and with five experienced embryologists. Results: To predict the pregnancy outcome, we applied five well-known CNN architectures (AlexNet, ResNet18, ResNet34, Inception V3, and DenseNet121). DeepEmbryo, using three images, predicts pregnancy better than the algorithm that uses only one final image, and also better than all of the embryologists. The different architectures can predict pregnancy chances with up to 75.0% accuracy using transfer learning. Conclusion: We have developed DeepEmbryo, an AI-based tool that uses three static images to predict pregnancy. Additionally, DeepEmbryo uses images that can be obtained in the current IVF process in almost all IVF labs. AI-based tools have great potential for predicting pregnancy and can be used as a proper tool in the future.
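Combining per-frame predictions from the three time-points might look like the following sketch; mean pooling of the frame probabilities and the 0.5 threshold are illustrative assumptions, not necessarily DeepEmbryo's actual aggregation rule:

```python
def combine_frame_predictions(frame_probs, threshold=0.5):
    """Average per-frame pregnancy probabilities (e.g. from the three
    time-points) and threshold the mean to get a binary prediction.
    Mean pooling is an assumption made for illustration."""
    mean_p = sum(frame_probs) / len(frame_probs)
    return mean_p, mean_p >= threshold
```

Using three frames lets evidence from different developmental stages vote, which is one plausible reason a multi-frame model can beat a single-final-image baseline.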

10.
Comput Methods Programs Biomed ; 253: 108238, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38823117

ABSTRACT

BACKGROUND AND OBJECTIVE: Evaluating the interpretability of deep learning models is crucial for building trust and gaining insight into their decision-making processes. In this work, we employ class-activation-map-based attribution methods in a setting where only High-Resolution Class Activation Mapping (HiResCAM) is known to produce faithful explanations. The objective is to evaluate the quality of the attribution maps using quantitative metrics and to investigate whether faithfulness aligns with the metric results. METHODS: We fine-tune pre-trained deep learning architectures over four medical image datasets in order to calculate attribution maps. The maps are evaluated on a threefold metrics basis utilizing well-established evaluation scores. RESULTS: Our experimental findings suggest that the Area Over Perturbation Curve (AOPC) and Max-Sensitivity scores favor the HiResCAM maps. On the other hand, the Heatmap Assisted Accuracy Score (HAAS) does not inform our comparison, as it evaluates almost all maps as inaccurate. To this end, we further compare our calculated values against values obtained from a diverse group of models trained on non-medical benchmark datasets, to eventually achieve more representative results. CONCLUSION: This study develops a series of experiments to discuss the connection between faithfulness and quantitative metrics over medical attribution maps. HiResCAM preserves the gradient effect at the pixel level, ultimately producing high-resolution, informative and resilient mappings. In turn, this is reflected in the results of the AOPC and Max-Sensitivity metrics, which successfully identify the faithful algorithm. As for HAAS, our experiments indicate that it is sensitive to complex medical patterns, commonly characterized by strong color dependency and multiple attention areas.
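The AOPC metric mentioned above can be illustrated on a toy model: features are removed in decreasing order of attribution and the average drop in model score is recorded. The linear scorer and the attribution values below are illustrative assumptions, not the paper's setup:

```python
def aopc(score, x, attribution, steps):
    """Area Over the Perturbation Curve: mean drop in score after
    successively zeroing the most highly attributed features."""
    order = sorted(range(len(x)), key=lambda i: attribution[i], reverse=True)
    base = score(x)
    perturbed = list(x)
    drops = []
    for i in order[:steps]:
        perturbed[i] = 0.0                 # cumulative perturbation
        drops.append(base - score(perturbed))
    return sum(drops) / steps

# Toy "model": score is a weighted sum, so the faithful attribution is w_i * x_i.
w = [3.0, 1.0, 2.0]
score = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x = [1.0, 1.0, 1.0]
faithful = [3.0, 1.0, 2.0]
uninformative = [1.0, 3.0, 2.0]  # ranks the least important feature first
```

A faithful attribution removes the truly influential features first, so its AOPC is higher; this is the sense in which AOPC can separate faithful from unfaithful maps.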


Subject(s)
Deep Learning; Humans; Algorithms; Diagnostic Imaging; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer
11.
Neural Netw ; 178: 106460, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38906052

ABSTRACT

Recently, multi-resolution pyramid-based techniques have emerged as the prevailing research approach for image super-resolution. However, these methods typically rely on a single mode of information transmission between levels. In our approach, a wavelet pyramid recursive neural network (WPRNN) based on a wavelet energy entropy (WEE) constraint is proposed. This network transmits the previous level's wavelet coefficients and additional shallow coefficient features to capture local details. Besides, the parameters of the low- and high-frequency wavelet coefficients are shared both within each pyramid level and across pyramid levels. A multi-resolution wavelet pyramid fusion (WPF) module is devised to facilitate information transfer across network pyramid levels. Additionally, a wavelet energy entropy loss is proposed to constrain the reconstruction of wavelet coefficients from the perspective of signal energy distribution. Finally, an extensive series of experiments on publicly available datasets shows that our method achieves competitive reconstruction performance with minimal parameters, demonstrating its practical utility.
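Wavelet energy entropy, the quantity behind the loss above, can be sketched as the Shannon entropy of the energy distribution across wavelet subbands (an illustrative sketch of the general definition, not the paper's implementation):

```python
import math

def wavelet_energy_entropy(subbands):
    """WEE: Shannon entropy (bits) of the energy distribution across
    wavelet subbands, each given as a flat list of coefficients."""
    energies = [sum(c * c for c in band) for band in subbands]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Energy spread evenly across subbands maximizes the entropy, while energy concentrated in one subband drives it to zero; a loss built on this quantity can therefore steer how the network distributes reconstructed signal energy.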


Subject(s)
Entropy; Neural Networks, Computer; Wavelet Analysis; Humans; Algorithms; Image Processing, Computer-Assisted/methods
12.
Quant Imaging Med Surg ; 14(5): 3501-3518, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38720828

ABSTRACT

Background: In the field of medical imaging, the rapid rise of convolutional neural networks (CNNs) has presented significant opportunities for conserving healthcare resources. However, with the widespread application of CNNs, several challenges have emerged, such as enormous data annotation costs, difficulties in ensuring user privacy and security, weak model interpretability, and the consumption of substantial computational resources. The fundamental challenge lies in optimizing and seamlessly integrating CNN technology to enhance the precision and efficiency of medical diagnosis. Methods: This study sought to provide a comprehensive bibliometric overview of current research on the application of CNNs in medical imaging. Initially, bibliometric methods were used to calculate frequency statistics and perform cluster and co-citation analyses of countries, institutions, authors, keywords, and references. Subsequently, the latent Dirichlet allocation (LDA) method was employed for topic modeling of the literature. Next, an in-depth analysis of the topics was conducted, and the topics in the medical field, technical aspects, and trends in topic evolution were summarized. Finally, by integrating the bibliometrics and LDA results, the developmental trajectory, milestones, and future directions of this field were outlined. Results: A dataset containing 6,310 articles in this field published from January 2013 to December 2023 was compiled. The United States led in citation count with a total of 55,538, while China led in publication volume with 2,385 articles. Harvard University emerged as the most influential institution, boasting an average of 69.92 citations per article. Within the realm of CNNs, residual neural network (ResNet) and U-Net stood out, receiving 1,602 and 1,419 citations, respectively, which highlights the significant attention these models have received.
The impact of coronavirus disease 2019 (COVID-19) was unmistakable, as reflected by the publication of 597 articles, making it a focal point of research. Additionally, among various disease topics, with 290 articles, brain-related research was the most prevalent. Computed tomography (CT) imaging dominated the research landscape, representing 73% of the 30 different topics. Conclusions: Over the past 11 years, CNN-related research in medical imaging has grown exponentially. The findings of the present study provide insights into the field's status and research hotspots. In addition, this article meticulously chronicled the development of CNNs and highlighted key milestones, starting with LeNet in 1989, followed by a challenging 20-year exploration period, and culminating in the breakthrough moment with AlexNet in 2012. Finally, this article explored recent advancements in CNN technology, including semi-supervised learning, efficient learning, trustworthy artificial intelligence (AI), and federated learning methods, and also addressed challenges related to data annotation costs, diagnostic efficiency, model performance, and data privacy.

13.
Med Phys ; 51(8): 5550-5562, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38753547

ABSTRACT

BACKGROUND: Liver fibrosis poses a significant public health challenge given its elevated incidence and associated mortality rates. Diffusion-weighted imaging (DWI) serves as a non-invasive diagnostic tool that supports the identification of liver fibrosis. Deep learning, as a computer-aided diagnostic technology, can assist in recognizing the stage of liver fibrosis by extracting abstract features from DWI images. However, gathering samples is often challenging, posing a common dilemma in previous research. Moreover, previous studies frequently overlooked the cross-comparison information and latent connections among different DWI parameters. Thus, it is challenging to identify effective DWI parameters and mine potential features from multiple categories in a dataset with limited samples. PURPOSE: A self-defined multi-view contrastive learning network is developed to automatically classify multi-parameter DWI images and explore synergies between different DWI parameters. METHODS: A Dense-fusion Attention Contrastive Learning Network (DACLN) is designed and used to recognize DWI images. Concretely, a multi-view contrastive learning framework is constructed to train on and extract features from raw multi-parameter DWI. Besides, a Dense-fusion module is designed to integrate features and output predicted labels. RESULTS: We evaluated the performance of the proposed model on a set of real clinical data and analyzed its interpretability via Grad-CAM and annotation analysis, achieving average scores of 0.8825, 0.8702, 0.8933, 0.8727, and 0.8779 for accuracy, precision, recall, specificity and F1 score. Of note, the experimental results revealed that IVIM-f, CTRW-ß, and MONO-ADC exhibited significant recognition ability and complementarity.
CONCLUSION: Our method achieves competitive accuracy in liver fibrosis diagnosis using the limited multi-parameter DWI dataset and finds three types of DWI parameters with high sensitivity for diagnosing liver fibrosis, which suggests potential directions for future research.
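The five scores reported above all derive from a binary confusion matrix; a minimal sketch of their computation (illustrative, not the study's evaluation code):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, specificity and F1 from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1
```

Reporting all five together, as the study does, guards against a classifier that looks strong on accuracy alone while missing one class.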


Subject(s)
Diffusion Magnetic Resonance Imaging; Liver Cirrhosis; Liver Cirrhosis/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Deep Learning; Automation; Diagnosis, Computer-Assisted/methods; Machine Learning; Neural Networks, Computer
14.
Cancers (Basel) ; 16(7)2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38611040

ABSTRACT

Breast cancer has one of the highest mortality rates among cancers. If the type of breast tumor can be correctly diagnosed at an early stage, the survival rate of patients improves greatly. Considering actual clinical needs, a classification model for breast pathology images must be able to classify correctly even when facing image data with different characteristics. Existing convolutional neural network (CNN)-based models for classifying breast tumor pathology images lack the generalization capability to maintain high accuracy when confronted with pathology images of varied characteristics. Consequently, this study introduces a new classification model, STMLAN (Single-Task Meta Learning with Auxiliary Network), which integrates meta learning and an auxiliary network. Single-task meta learning endows the model with generalization ability, and the auxiliary network enhances the feature characteristics of breast pathology images. The experimental results demonstrate that the proposed STMLAN model improves accuracy by at least 1.85% in challenging multi-classification tasks compared to existing methods. Furthermore, the Silhouette Score of the features learned by the model increased by 31.85%, reflecting that the model learns more discriminative features and that its overall generalization ability is improved.
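The Silhouette Score cited above measures how well learned features separate into clusters; a minimal per-sample sketch for 1-D points (illustrative, not the study's code):

```python
def silhouette_scores(points, labels):
    """Per-sample silhouette s = (b - a) / max(a, b) for 1-D points:
    a = mean distance to own cluster, b = mean distance to the nearest
    other cluster. Scores near 1 mean tight, well-separated clusters."""
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        same = [abs(p - q) for j, (q, l) in enumerate(zip(points, labels))
                if l == lab and j != i]
        a = sum(same) / len(same)
        b = min(
            sum(abs(p - q) for q, l in zip(points, labels) if l == ol)
            / labels.count(ol)
            for ol in set(labels) - {lab}
        )
        scores.append((b - a) / max(a, b))
    return scores
```

A higher mean silhouette over the learned features, as reported for STMLAN, indicates that samples of the same class sit closer together than to other classes.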

15.
Comput Med Imaging Graph ; 115: 102374, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38565036

ABSTRACT

Medical images play a vital role in medical analysis by providing crucial information about patients' pathological conditions. However, the quality of these images can be compromised by many factors, such as the limited resolution of the instruments, artifacts caused by movement, and the complexity of the scanned areas. As a result, low-resolution (LR) images cannot provide sufficient information for diagnosis. To address this issue, researchers have applied image super-resolution (SR) techniques to restore high-resolution (HR) images from their LR counterparts. However, these techniques are designed for generic images and thus face many challenges unique to medical images. An obvious one is the diversity of the scanned objects; for example, organs, tissues, and vessels typically appear in different sizes and shapes, and are thus hard to restore with standard convolutional neural networks (CNNs). In this paper, we develop a dynamic-local learning framework to capture the details of these diverse areas, consisting of deformable convolutions with adjustable kernel shapes. Moreover, the global information between tissues and organs is vital for medical diagnosis. To preserve global information, we propose pixel-pixel and patch-patch global learning using a non-local mechanism and a vision transformer (ViT), respectively. The result is a novel CNN-ViT neural network with local-to-global feature learning for medical image SR, referred to as LGSR, which can accurately restore both local details and global information. We evaluate our method on six public datasets and one large-scale private dataset, covering five different types of medical images (ultrasound, OCT, endoscope, CT, and MRI). Experiments show that the proposed method achieves better PSNR/SSIM and visual performance than the state of the art, with competitive computational costs measured in network parameters, runtime, and FLOPs. Moreover, an experiment on OCT image segmentation as a downstream task demonstrates a significant positive effect of LGSR.
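Of the reported metrics, PSNR has a simple closed form; a minimal sketch over flat image arrays (illustrative, not the paper's evaluation code):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

Because PSNR depends only on mean squared error, it rewards pixel fidelity; SSIM is typically reported alongside it, as here, to capture perceived structural quality.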


Subject(s)
Deep Learning; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Diagnostic Imaging/methods
16.
Med Image Anal ; 95: 103166, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38613918

ABSTRACT

Several factors are associated with the success of deep learning. One of the most important is the availability of large-scale datasets with clean annotations. However, obtaining accurately labeled datasets in the medical imaging domain is challenging: the reliability and consistency of medical labeling are recurring issues, and low-quality annotations with label noise are common. Because noisy labels reduce the generalization performance of deep neural networks, learning with noisy labels is becoming an essential task in medical image analysis. Literature on this topic has expanded in both volume and scope, yet no recent surveys have collected and organized this knowledge, impeding the ability of researchers and practitioners to utilize it. In this work, we present an up-to-date survey of label-noise learning for the medical imaging domain. We review the extensive literature, illustrate typical methods, and present unified taxonomies based on methodological differences. Subsequently, we compare the methodologies and discuss their respective advantages and disadvantages. Finally, we discuss new research directions based on the characteristics of medical images. Our survey aims to provide researchers and practitioners with a solid understanding of existing medical label-noise learning, including the main algorithms developed over the past few years, which could help them investigate new methods to combat the negative effects of label noise.
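One typical family of label-noise methods covered by such surveys is sample selection via the small-loss criterion; the sketch below is a generic illustration of the idea, not a specific method from the survey:

```python
def small_loss_selection(losses, clean_fraction=0.7):
    """Small-loss criterion: keep the indices of the samples the network
    fits most easily (smallest loss), which early in training are more
    likely to carry clean labels. clean_fraction is an assumed estimate
    of the clean-label rate."""
    k = max(1, int(round(len(losses) * clean_fraction)))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(ranked[:k])
```

Samples with large loss are treated as likely mislabeled and excluded (or down-weighted) when updating the network, limiting the memorization of noisy labels.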


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Diagnostic Imaging; Reproducibility of Results
17.
Nihon Hoshasen Gijutsu Gakkai Zasshi ; 80(6): 649-657, 2024 Jun 20.
Article in Japanese | MEDLINE | ID: mdl-38631870

ABSTRACT

In this study, we investigated the effects of various disinfectants used to prevent infectious diseases on medical images and medical equipment. First, we investigated the effect of residual disinfectant on medical images from CT, mammography (MMG), and general radiography systems. Acrylic discs with various disinfectants applied were imaged using each device, and the images were evaluated visually and in terms of changes in image signal values. We also conducted a questionnaire survey of each manufacturer regarding cleaning methods for medical devices. With CT and MMG, residual disinfectant could be visually confirmed on the image. Although this could not be confirmed with the general radiography system, statistical analysis revealed a significant difference in its image signal values. This is thought to be largely due to the nonlinearity of general radiography equipment in the short-exposure range. In addition, the responses to the manufacturer questionnaire clarified detailed cleaning methods that are not covered in medical device instruction manuals.


Subject(s)
Disinfectants , Disinfectants/pharmacology , Surveys and Questionnaires , Infection Control/methods , Tomography, X-Ray Computed/instrumentation , Mammography/instrumentation , Diagnostic Imaging/instrumentation , Equipment and Supplies
18.
Heliyon ; 10(6): e27398, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38496891

ABSTRACT

Background: Convolutional neural networks (CNNs) play a pivotal role in aiding clinicians with diagnosis and treatment decisions. With the rapid evolution of imaging technology, three-dimensional (3D) CNNs have become a powerful framework for delineating organs and anomalies in medical images, and their prominence in medical image segmentation and classification continues to grow. We therefore present a comprehensive review of 3D CNN algorithms for segmenting anomalies and organs in medical images. Methods: This study systematically reviews recent 3D CNN methodologies. Abstracts and titles were rigorously screened for relevance, and research papers from academic repositories were selected, analyzed, and appraised against specific criteria. Details of anomaly and organ segmentation, including network architectures and achieved accuracies, were extracted. Results: This paper analyzes the prevailing trends in 3D CNN segmentation, with in-depth discussion of key insights, constraints, observations, and avenues for future exploration. The analysis indicates that encoder-decoder networks dominate segmentation tasks, as the encoder-decoder framework offers a coherent methodology for segmenting medical images. Conclusion: The findings of this study can be applied in clinical diagnosis and therapeutic interventions. Despite inherent limitations, CNN algorithms achieve commendable accuracy, solidifying their potential in medical image segmentation and classification.
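As a framework-agnostic sketch of the encoder-decoder idea that dominates the reviewed segmentation work, the snippet below only tracks how a 3D volume's spatial shape is halved at each (hypothetical) encoder stage and restored by symmetric decoder upsampling; the stage count and input size are invented, and real networks additionally learn feature channels and skip connections:

```python
# Illustrative shape bookkeeping for a symmetric 3D encoder-decoder:
# each encoder stage downsamples every spatial dimension by 2 (as pooling
# or strided convolution would), and each decoder stage upsamples by 2.

def encoder_decoder_shapes(shape, n_stages):
    """Return the sequence of spatial shapes along the encoder-decoder path."""
    path = [tuple(shape)]
    s = list(shape)
    for _ in range(n_stages):            # encoder: downsample by 2
        s = [d // 2 for d in s]
        path.append(tuple(s))
    for _ in range(n_stages):            # decoder: upsample by 2
        s = [d * 2 for d in s]
        path.append(tuple(s))
    return path

path = encoder_decoder_shapes((64, 64, 64), 3)
# input (64, 64, 64) -> bottleneck (8, 8, 8) -> output (64, 64, 64)
```

The symmetry is what makes the framework convenient for segmentation: the output mask has the same spatial shape as the input volume.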

19.
Micron ; 180: 103615, 2024 05.
Article in English | MEDLINE | ID: mdl-38471391

ABSTRACT

Medical imaging plays a critical role in diagnosing and treating various medical conditions. However, interpreting medical images can be challenging even for expert clinicians, as the images are often degraded by noise and artifacts that hinder the accurate identification and analysis of disease, with consequences as severe as patient misdiagnosis or mortality. Various types of noise, including Gaussian, Rician, and salt-and-pepper noise, can corrupt the area of interest, limiting the precision and accuracy of algorithms. Denoising algorithms have shown potential for improving the quality of medical images by removing noise and other artifacts that obscure essential information, and deep learning has emerged as a powerful tool for this task, with promising results on MRI, CT, PET, and other medical images. This review paper provides a comprehensive overview of state-of-the-art deep learning algorithms used for denoising medical images. A total of 120 relevant papers were reviewed and, after screening with specific inclusion and exclusion criteria, 104 papers were selected for analysis. This study aims to give researchers in the field of intelligent denoising a thorough understanding of current techniques through an extensive survey, and to highlight significant challenges that remain to be addressed; its findings are expected to contribute to the development of intelligent models that enable timely and accurate diagnosis of medical disorders. It was found that 40% of the researchers used models based on deep convolutional neural networks to denoise the images, followed by encoder-decoder architectures (18%), other artificial-intelligence-based techniques such as the deep image prior (15%), transformer-based approaches (13%), generative adversarial networks (12%), and multilayer perceptrons (2%). Regarding noise, Gaussian noise was present in 35% of the images, followed by speckle noise (16%), Poisson noise (14%), artifacts (10%), Rician noise (7%), salt-and-pepper noise (6%), impulse noise (3%), and other types of noise (9%). While progress in developing novel models for denoising medical images is evident, significant work remains to create standardized denoising models that perform well across a wide spectrum of medical images. Overall, this review highlights the importance of denoising medical images and provides a comprehensive understanding of the current state-of-the-art deep learning algorithms in this field.
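As a toy illustration of the noise model and denoising objective discussed above (not any of the surveyed deep models), the sketch below corrupts a synthetic 1-D signal with Gaussian noise and applies a simple moving-average filter; the surveyed methods essentially replace that fixed filter with a learned network. All values are invented:

```python
# Toy denoising demo: additive Gaussian noise on a smooth signal, a
# 3-point moving-average filter, and mean-squared error (MSE) against the
# clean signal as the quality metric.
import math
import random

def add_gaussian_noise(signal, sigma, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [x + rng.gauss(0.0, sigma) for x in signal]

def moving_average(signal, k=3):
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

clean = [math.sin(i / 20) for i in range(200)]
noisy = add_gaussian_noise(clean, sigma=1.0)
denoised = moving_average(noisy)
# averaging independent noise lowers its variance, so the denoised signal
# has a smaller MSE against the clean one than the noisy signal does
```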


Subject(s)
Deep Learning , Humans , Artificial Intelligence , Signal-To-Noise Ratio , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Algorithms
20.
Sci Rep ; 14(1): 6086, 2024 03 13.
Article in English | MEDLINE | ID: mdl-38480847

ABSTRACT

Research on machine learning (ML) methods has become incredibly popular during the past few decades. However, for researchers not familiar with statistics, it can be difficult to understand how to evaluate the performance of ML models and compare them with each other. Here, we introduce the most common evaluation metrics used for typical supervised ML tasks, including binary, multi-class, and multi-label classification, regression, image segmentation, object detection, and information retrieval. We explain how to choose a suitable statistical test for comparing models, how to obtain enough values of the metric for testing, and how to perform the test and interpret its results. We also present practical examples of comparing convolutional neural networks used to classify X-rays with different lung infections and to detect cancer tumors in positron emission tomography images.
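As a minimal illustration of the standard binary-classification metrics such a tutorial covers, the sketch below computes accuracy, precision, recall, and F1 directly from confusion-matrix counts; the counts themselves are invented:

```python
# Standard binary-classification metrics from confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives.

def binary_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)               # of predicted positives, correct
    recall = tp / (tp + fn)                  # of actual positives, found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = binary_metrics(tp=40, fp=10, fn=5, tn=45)
# accuracy 0.85, precision 0.80, recall 8/9, f1 16/19
```

Note that accuracy alone can mislead on imbalanced data, which is why precision, recall, and F1 are usually reported alongside it.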


Subject(s)
Image Processing, Computer-Assisted , Machine Learning , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Supervised Machine Learning , Positron-Emission Tomography