Results 1 - 11 of 11
1.
BMC Med Imaging ; 24(1): 118, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773391

ABSTRACT

Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. Precision, recall, and F1-scores ranging from 97% to 98%, with ROC-AUC values of 99% to 100% for each tumor category, further substantiate the model's effectiveness. Additionally, the use of Grad-CAM visualizations provides insight into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.
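The per-class precision, recall, and F1-scores reported above can be derived from a confusion matrix; a minimal sketch in Python, where the 3x3 matrix below is illustrative and not the paper's actual results:

```python
def per_class_metrics(confusion):
    """Compute (precision, recall, F1) per class from a square
    confusion matrix (rows = true class, cols = predicted class)."""
    n = len(confusion)
    metrics = {}
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp
        fn = sum(confusion[c]) - tp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = (precision, recall, f1)
    return metrics

# Illustrative 3-class confusion matrix (not the paper's data).
cm = [[98, 1, 1],
      [2, 97, 1],
      [1, 1, 98]]
print(per_class_metrics(cm))
```

With per-class scores in hand, macro-averaging them gives the kind of overall figures quoted in the abstract.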


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Algorithms , Image Interpretation, Computer-Assisted/methods , Male , Female
2.
BMC Med Imaging ; 24(1): 105, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38730390

ABSTRACT

Categorizing Artificial Intelligence of Medical Things (AIoMT) devices within the realm of standard Internet of Things (IoT) and Internet of Medical Things (IoMT) devices, particularly at the server and computational layers, poses a formidable challenge. In this paper, we present a novel methodology for categorizing AIoMT devices through the application of decentralized processing, referred to as "Federated Learning" (FL). Our approach involves deploying a system on standard IoT devices and labeled IoMT devices for training purposes and attribute extraction. Through this process, we extract and map the interconnected attributes from a global federated aggregation server. The aim of this technique is to extract interdependent devices via federated learning, ensuring data privacy and adherence to operational policies. Consequently, a global training dataset repository is coordinated to establish a centralized indexing and synchronization knowledge repository. The categorization process employs generic labels for devices transmitting medical data through regular communication channels. We evaluate our proposed methodology across a variety of IoT, IoMT, and AIoMT devices, demonstrating effective classification and labeling. Our technique yields a reliable categorization index for facilitating efficient access and optimization of medical devices within global servers.
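The aggregation step at the heart of federated learning can be sketched as a FedAvg-style weighted average of client parameters; the client weights and dataset sizes below are illustrative, since the abstract does not specify the aggregation policy:

```python
def federated_average(client_updates, client_sizes):
    """FedAvg-style aggregation: average each client's parameter
    vector, weighted by that client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_updates[0])
    global_weights = [0.0] * dim
    for params, size in zip(client_updates, client_sizes):
        for i, p in enumerate(params):
            global_weights[i] += p * (size / total)
    return global_weights

# Three hypothetical clients, each holding a 2-parameter model.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(federated_average(clients, sizes))  # pulled toward the larger client
```

Only these parameter vectors leave the clients; the raw device data stays local, which is how the privacy property described above is obtained.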


Subjects
Artificial Intelligence , Blockchain , Internet of Things , Humans
3.
BMC Med Imaging ; 24(1): 176, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030496

ABSTRACT

Medical imaging stands as a critical component in diagnosing various diseases, where traditional methods often rely on manual interpretation and conventional machine learning techniques. These approaches, while effective, come with inherent limitations such as subjectivity in interpretation and constraints in handling complex image features. This research paper proposes an integrated deep learning approach utilizing pre-trained models (VGG16, ResNet50, and InceptionV3) combined within a unified framework to improve diagnostic accuracy in medical imaging. The method focuses on lung cancer detection using images resized and converted to a uniform format to optimize performance and ensure consistency across datasets. Our proposed model leverages the strengths of each pre-trained network, achieving a high degree of feature extraction and robustness by freezing the early convolutional layers and fine-tuning the deeper layers. Additionally, techniques like SMOTE and Gaussian Blur are applied to address class imbalance, enhancing model training on underrepresented classes. The model's performance was validated on the IQ-OTH/NCCD lung cancer dataset, which was collected from the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases over a period of three months in fall 2019. The proposed model achieved an accuracy of 98.18%, with precision and recall rates notably high across all classes. This improvement highlights the potential of integrated deep learning systems in medical diagnostics, providing a more accurate, reliable, and efficient means of disease detection.
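The class-imbalance handling mentioned above can be illustrated with a SMOTE-style interpolation step. This is a generic sketch of the idea (synthesizing minority samples between random pairs of real ones), not the authors' exact pipeline, which also applies Gaussian blur:

```python
import random

def smote_like_oversample(minority, n_synthetic, seed=0):
    """Generate synthetic minority-class samples by linear
    interpolation between random pairs of existing samples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        a, b = rng.sample(minority, 2)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Hypothetical 2-D feature vectors for an underrepresented class.
minority = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
new_samples = smote_like_oversample(minority, 5)
print(len(new_samples))  # 5
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its original feature region.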


Subjects
Deep Learning , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Neural Networks, Computer
4.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Machine Learning , Image Interpretation, Computer-Assisted/methods
7.
Front Med (Lausanne) ; 11: 1373244, 2024.
Article in English | MEDLINE | ID: mdl-38515985

ABSTRACT

Breast cancer, a prevalent cancer among women worldwide, necessitates precise and prompt detection for successful treatment. While conventional histopathological examination is the benchmark, it is a lengthy process and prone to variations among different observers. Employing machine learning to automate the diagnosis of breast cancer presents a viable option, striving to improve both precision and speed. Previous studies have primarily focused on applying various machine learning and deep learning models for the classification of breast cancer images. These methodologies leverage convolutional neural networks (CNNs) and other advanced algorithms to differentiate between benign and malignant tumors from histopathological images. Current models, despite their potential, encounter obstacles related to generalizability, computational performance, and managing datasets with imbalances. Additionally, a significant number of these models do not possess the requisite transparency and interpretability, which are vital for medical diagnostic purposes. To address these limitations, our study introduces an advanced machine learning model based on EfficientNetV2. This model incorporates state-of-the-art techniques in image processing and neural network architecture, aiming to improve accuracy, efficiency, and robustness in classification. We employed the EfficientNetV2 model, fine-tuned for the specific task of breast cancer image classification. Our model underwent rigorous training and validation using the BreakHis dataset, which includes diverse histopathological images. Advanced data preprocessing, augmentation techniques, and a cyclical learning rate strategy were implemented to enhance model performance. The introduced model exhibited remarkable efficacy, attaining an accuracy rate of 99.68%, balanced precision and recall as indicated by a significant F1 score, and a considerable Cohen's Kappa value. 
These indicators highlight the model's proficiency in correctly categorizing histopathological images, surpassing current techniques in reliability and effectiveness. The research emphasizes improved accessibility, catering to individuals with disabilities and the elderly. By enhancing visual representation and interpretability, the proposed approach aims to make strides in inclusive medical image interpretation, ensuring equitable access to diagnostic information.
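The cyclical learning rate strategy mentioned above can be sketched as a triangular schedule that sweeps the learning rate between two bounds each cycle; the bounds and step size below are illustrative placeholders, not the authors' tuned values:

```python
def triangular_lr(iteration, base_lr=1e-4, max_lr=1e-3, step_size=100):
    """Triangular cyclical learning rate: rises linearly from base_lr
    to max_lr over step_size iterations, then falls back, repeating."""
    cycle = iteration // (2 * step_size)
    x = abs(iteration / step_size - 2 * cycle - 1)  # position in cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * (1 - x)

print(triangular_lr(0))    # base_lr at the start of a cycle
print(triangular_lr(100))  # max_lr at the peak
```

Sweeping the rate this way periodically injects larger steps, which can help training escape poor local regions without a manual restart schedule.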

8.
Heliyon ; 10(9): e29802, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38707335

ABSTRACT

There is an increasing demand for efficient and precise plant disease detection methods that can quickly identify disease outbreaks. For this, researchers have developed various machine learning and image processing techniques. However, real-field images present challenges due to complex backgrounds, similarities between different disease symptoms, and the need to detect multiple diseases simultaneously. These obstacles hinder the development of a reliable classification model. The attention mechanisms emerge as a critical factor in enhancing the robustness of classification models by selectively focusing on relevant regions or features within infected regions in an image. This paper provides details about various types of attention mechanisms and explores the utilization of these techniques for the machine learning solutions created by researchers for image segmentation, feature extraction, object detection, and classification for efficient plant disease identification. Experiments are conducted on three models: MobileNetV2, EfficientNetV2, and ShuffleNetV2, to assess the effectiveness of attention modules. For this, Squeeze and Excitation layers, the Convolutional Block Attention Module, and transformer modules have been integrated into these models, and their performance has been evaluated using different metrics. The outcomes show that adding attention modules enhances the original models' functionality.
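A Squeeze-and-Excitation layer, one of the attention modules integrated above, can be sketched as: global-average-pool each channel (squeeze), pass the pooled vector through a tiny two-layer gate, then rescale each channel by its gate output. The weights below are illustrative placeholders, not trained values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(feature_maps, w1, w2):
    """SE attention over a list of 2-D channel maps:
    squeeze -> two-layer gate (ReLU, then sigmoid) -> channel rescale."""
    # Squeeze: global average pooling per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excite: reduce to a hidden vector, then expand to per-channel gates.
    hidden = [max(0.0, sum(wi * zi for wi, zi in zip(wrow, z))) for wrow in w1]
    scale = [sigmoid(sum(wi * hi for wi, hi in zip(wrow, hidden))) for wrow in w2]
    # Rescale each channel by its attention weight.
    return [[[v * s for v in row] for row in ch]
            for ch, s in zip(feature_maps, scale)]

# Two 2x2 channels; reduction to a single hidden unit (illustrative weights).
maps = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
w1 = [[0.5, 0.5]]    # squeeze dim 2 -> hidden dim 1
w2 = [[1.0], [1.0]]  # hidden dim 1 -> per-channel gates
out = squeeze_excite(maps, w1, w2)
```

After training, channels carrying disease-relevant texture receive gates near 1 while background channels are suppressed, which is the selective-focus behavior the survey describes.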

9.
Diagnostics (Basel) ; 14(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38928629

ABSTRACT

Deep learning has attained state-of-the-art results in general image segmentation problems; however, it requires a substantial number of annotated images to achieve the desired outcomes. In the medical field, the availability of annotated images is often limited. To address this challenge, few-shot learning techniques have been successfully adapted to rapidly generalize to new tasks with only a few samples, leveraging prior knowledge. In this paper, we employ a gradient-based method known as Model-Agnostic Meta-Learning (MAML) for medical image segmentation. MAML is a meta-learning algorithm that quickly adapts to new tasks by updating a model's parameters based on a limited set of training samples. Additionally, we use an enhanced 3D U-Net as the foundational network for our models. The enhanced 3D U-Net is a convolutional neural network specifically designed for medical image segmentation. We evaluate our approach on the TotalSegmentator dataset, considering a few annotated images for four tasks: liver, spleen, right kidney, and left kidney. The results demonstrate that our approach facilitates rapid adaptation to new tasks using only a few annotated images. In 10-shot settings, our approach achieved mean Dice coefficients of 93.70%, 85.98%, 81.20%, and 89.58% for liver, spleen, right kidney, and left kidney segmentation, respectively. In five-shot settings, the approach attained mean Dice coefficients of 90.27%, 83.89%, 77.53%, and 87.01% for liver, spleen, right kidney, and left kidney segmentation, respectively. Finally, we assess the effectiveness of our proposed approach on a dataset collected from a local hospital. Employing five-shot settings, we achieve mean Dice coefficients of 90.62%, 79.86%, 79.87%, and 78.21% for liver, spleen, right kidney, and left kidney segmentation, respectively.
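The Dice coefficients reported above measure overlap between a predicted mask and the ground-truth mask; a minimal sketch for flattened binary masks (the masks below are illustrative):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|P ∩ T| / (|P| + |T|) for flat binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Illustrative flattened binary segmentation masks.
pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 1, 1, 0, 0, 0]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) = 0.666...
```

For 3-D organ segmentation the same formula applies to the flattened voxel masks, and the paper's per-organ figures are means of this score over test volumes.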

10.
PLoS One ; 19(3): e0298731, 2024.
Article in English | MEDLINE | ID: mdl-38527047

ABSTRACT

A shell and tube heat exchanger (STHE) for heat recovery applications was studied to uncover the intricacies of its optimization. To optimize performance, a hybrid optimization methodology was developed by combining the Neural Fitting Tool (NFTool), Particle Swarm Optimization (PSO), and Grey Relational Analysis (GRA). The STHE was analyzed systematically using the Taguchi method to identify the critical factors related to each response. To clarify the complex relationship between heat exchanger efficiency and operational parameters, grey relational grades (GRGs) were first computed. The grey relational coefficients were then forecast using NFTool to provide more insight into the complex dynamics. Applying PSO analysis then yielded an optimized parameter set with a higher grey coefficient and improved heat exchanger performance. A major and far-reaching application of this study is heat recovery. The values estimated by the hybrid optimization algorithm were compared in detail against the experimental results. The results demonstrate that the proposed counter-flow shell and tube strategy is effective for optimizing performance.
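The PSO stage of the hybrid methodology can be sketched as a standard particle swarm minimizing a cost function; the objective (a sphere function) and the hyperparameters below are illustrative stand-ins for the heat-exchanger response model:

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100, seed=0):
    """Basic particle swarm optimization over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    gbest = min(pbest, key=cost)[:]           # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Illustrative objective: sphere function with its minimum at the origin.
best = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

In the hybrid scheme described above, the cost function would instead be the NFTool-predicted grey relational grade (negated, since PSO here minimizes), so the swarm searches the operating-parameter space for the highest-grade configuration.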


Subjects
Algorithms , Hot Temperature
11.
Heliyon ; 10(7): e28195, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38571667

ABSTRACT

People who work in dangerous environments include farmers, sailors, travelers, and mining workers. Because they must evaluate the changes taking place in their immediate surroundings, they need to gather information and data from the real world. It becomes crucial to regularly monitor meteorological parameters such as air quality, rainfall, water level, pH value, wind direction and speed, temperature, atmospheric pressure, humidity, soil moisture, light intensity, and turbidity in order to avoid risks or calamities. IoT plays a major role in enhancing environmental standards, and it greatly advances sustainable living with its innovative and cutting-edge techniques for monitoring air quality and treating water. With the aid of various sensors, a microcontroller (Arduino Uno), GSM, Wi-Fi, and HTTP protocols, the proposed system is a real-time smart monitoring system based on the Internet of Things. The system also provides an HTTP-based webpage, enabled by Wi-Fi, to transfer the data to remote locations. This technology makes it feasible to track changes in the weather from any location at any distance. The proposed system is a sophisticated, efficient, accurate, cost-effective, and dependable weather station that will be valuable to anyone who wants to monitor environmental changes regularly.
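The HTTP transfer of sensor readings described above can be sketched as building a GET request with the readings encoded as query parameters, as Wi-Fi microcontrollers commonly do. The endpoint URL and field names below are hypothetical, and the readings are stubbed since the abstract names no specific API:

```python
from urllib.parse import urlencode

def build_upload_url(base_url, readings):
    """Encode a dict of sensor readings as an HTTP GET query string,
    the way a Wi-Fi node typically reports values to a server."""
    return base_url + "?" + urlencode(readings)

# Stubbed readings; on the real device these would come from the sensors.
readings = {"temperature": 27.5, "humidity": 61, "pressure": 1012}
url = build_upload_url("http://example.com/update", readings)
print(url)
```

Issuing this request on a schedule (and serving the same values on the system's webpage) gives remote users the any-location, any-distance monitoring the abstract describes.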
