Results 1 - 7 of 7
1.
Data Brief ; 56: 110821, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39252785

ABSTRACT

Fruits are the mature ovaries of flowering plants and are integral to human diets, providing essential nutrients such as vitamins, minerals, fiber, and antioxidants that support health and disease prevention. Accurate classification and segmentation of fruits are crucial in the agricultural sector for enhancing the efficiency of sorting and quality-control processes, which benefit automated systems by reducing labor costs and improving product consistency. This paper introduces the "FruitSeg30_Segmentation Dataset & Mask Annotations", a novel dataset designed to advance the capability of deep learning models in fruit segmentation and classification. Comprising 1969 high-quality images across 30 distinct fruit classes, the dataset provides the visual diversity essential for training robust models. Using a U-Net architecture, the model trained on this dataset achieved a training accuracy of 94.72%, validation accuracy of 92.57%, precision of 94%, recall of 91%, F1-score of 92.5%, IoU score of 86%, and a maximum Dice score of 0.9472, demonstrating strong segmentation performance. The FruitSeg30 dataset fills a critical gap and sets new standards in dataset quality and diversity, supporting agricultural technology and food industry applications.
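
The Dice and IoU figures reported above are standard overlap metrics for segmentation masks. The following is a minimal NumPy sketch (not code from the paper) showing how both can be computed for a pair of binary masks; the toy arrays are illustrative only.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice score between two binary masks (arrays of 0/1)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection-over-Union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy example: predicted mask vs. ground-truth mask.
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
print(f"Dice: {dice_coefficient(pred, truth):.3f}, IoU: {iou_score(pred, truth):.3f}")
```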

2.
Heliyon ; 10(19): e38596, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39430511

ABSTRACT

Pollen grains play a critical role in environmental, agricultural, and allergy research despite their tiny dimensions. Their accurate classification remains a significant challenge, mainly owing to their intricate structures and the extensive diversity of species. Traditional methods often lack accuracy and effectiveness, prompting the need for advanced solutions. This study introduces PollenNet, a novel deep learning framework designed to tackle pollen grain image classification. PollenNet is evaluated through stratified 5-fold cross-validation and compared with state-of-the-art methods. A comprehensive data preparation phase is conducted, including removing duplicate and low-quality images, applying Non-local Means Denoising for noise reduction, and gamma correction to adjust image brightness. Furthermore, Explainable AI (XAI) is used to enhance the interpretability of the model, while Receiver Operating Characteristic (ROC) curve analysis provides a quantitative evaluation of its capabilities. PollenNet outperforms existing models, with an accuracy of 98.45%, precision of 98.20%, specificity of 98.40%, recall of 98.30%, and F1-score of 98.25%. The model also maintains a low Mean Squared Error (0.03) and Mean Absolute Error (0.02). The ROC curve analysis, together with the low False Positive Rate (0.016) and False Negative Rate (0.017), highlights the model's reliability. This study significantly improves the efficacy of pollen grain classification, marking an important advance in the application of deep learning to ecological research.
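
The preprocessing steps named above (Non-local Means Denoising and gamma correction) are both available in OpenCV. A minimal sketch, assuming standard 8-bit colour images; the file name and parameter values (h=10, gamma=1.2) are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def preprocess_pollen_image(path, gamma=1.2):
    """Denoise with Non-local Means, then apply gamma correction."""
    img = cv2.imread(path)  # BGR, uint8
    if img is None:
        raise FileNotFoundError(path)
    # Non-local Means Denoising for colour images; h controls filter strength.
    denoised = cv2.fastNlMeansDenoisingColored(img, None, h=10, hColor=10,
                                               templateWindowSize=7, searchWindowSize=21)
    # Gamma correction via a lookup table: out = (in / 255) ** (1 / gamma) * 255.
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(denoised, table)

# corrected = preprocess_pollen_image("pollen_grain.jpg")  # hypothetical file name
```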

3.
PeerJ Comput Sci ; 10: e1950, 2024.
Article in English | MEDLINE | ID: mdl-38660192

ABSTRACT

Gastrointestinal (GI) diseases are prevalent medical conditions that require accurate and timely diagnosis for effective treatment. To address this, we developed the Multi-Fusion Convolutional Neural Network (MF-CNN), a deep learning framework that strategically integrates and adapts elements from six deep learning models, enhancing feature extraction and classification of GI diseases from endoscopic images. The MF-CNN architecture leverages truncated and partially frozen layers from existing models, augmented with novel components such as Auxiliary Fusing Layers (AuxFL), Fusion Residual Block (FuRB), and Alpha Dropouts (αDO) to improve precision and robustness. This design facilitates the precise identification of conditions such as ulcerative colitis, polyps, esophagitis, and healthy colons. Our methodology involved preprocessing endoscopic images sourced from open databases, including KVASIR and ETIS-Larib Polyp DB, using adaptive histogram equalization (AHE) to enhance their quality. The MF-CNN framework supports detailed feature mapping for improved interpretability of the model's internal workings. An ablation study was conducted to validate the contribution of each component, demonstrating that the integration of AuxFL, αDO, and FuRB was crucial in reducing overfitting, avoiding efficiency saturation, and enhancing overall model performance. The MF-CNN achieved an accuracy of 99.25%. It also excelled in other key performance metrics, with a precision of 99.27%, a recall of 99.25%, and an F1-score of 99.25%. These metrics confirmed the model's proficiency in accurate classification and its capability to minimize false positives and negatives across all tested GI disease categories. Furthermore, the AUC values were exceptional, averaging 1.00 for both test and validation sets, indicating perfect discriminative ability. The findings of the P-R curve analysis and confusion matrix further confirmed the robust classification performance of the MF-CNN. This research introduces a technique for medical imaging that can potentially transform diagnostics in gastrointestinal healthcare facilities worldwide.
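
The core idea of reusing partially frozen pretrained backbones and fusing their features can be sketched generically in Keras. The sketch below uses ResNet50 and DenseNet121 as stand-in backbones and a plain concatenation for fusion; the paper's specific AuxFL and FuRB components are not reproduced, and the class count and layer sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(num_classes=4, input_shape=(224, 224, 3)):
    """Fuse features from two partially frozen ImageNet backbones."""
    inputs = layers.Input(shape=input_shape)
    backbone_a = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                                input_shape=input_shape)
    backbone_b = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                                   input_shape=input_shape)
    backbone_a.trainable = False  # freeze pretrained layers
    backbone_b.trainable = False
    # Note: each backbone normally expects its own preprocess_input; omitted for brevity.
    feat_a = layers.GlobalAveragePooling2D()(backbone_a(inputs))
    feat_b = layers.GlobalAveragePooling2D()(backbone_b(inputs))
    fused = layers.Concatenate()([feat_a, feat_b])  # simple feature-level fusion
    fused = layers.Dense(256, activation="selu", kernel_initializer="lecun_normal")(fused)
    fused = layers.AlphaDropout(0.1)(fused)         # mirrors the abstract's αDO component
    outputs = layers.Dense(num_classes, activation="softmax")(fused)
    return Model(inputs, outputs)

model = build_fusion_model()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```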

4.
Sci Rep ; 13(1): 22874, 2023 12 18.
Article in English | MEDLINE | ID: mdl-38129433

ABSTRACT

Heart failure (HF) is a leading cause of mortality worldwide. Machine learning (ML) approaches have shown potential as early detection tools for improving patient outcomes. Enhancing the effectiveness and clinical applicability of an ML model requires training an efficient classifier on diverse, high-quality data. Hence, we proposed two novel hybrid ML methods to serve as an efficient early warning system for HF mortality: (a) BOO-ST, which combines Boosting with SMOTE and Tomek links, and (b) CBCEC, which combines the best-performing conventional classifier with ensemble classifiers. BOO-ST was introduced to tackle the challenge of class imbalance, while CBCEC was trained on features processed and selected using the Feature Importance (FI) and Information Gain (IG) feature selection techniques. We also conducted an explicit and intuitive analysis of the characteristics most strongly associated with fatal HF cases. The experimental results demonstrated that the proposed CBCEC classifier achieves an accuracy of 93.67% in the early forecasting of HF mortality. We therefore conclude that the proposed methods (BOO-ST and CBCEC) can play a crucial role in reducing HF mortality and easing the burden on the healthcare sector.


Subject(s)
Heart Failure, Machine Learning, Humans, Forecasting, Heart Failure/diagnosis
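
The class-rebalancing idea behind BOO-ST (SMOTE oversampling combined with Tomek-link cleaning) and information-gain-style feature selection are both available off the shelf. A minimal sketch using scikit-learn and imbalanced-learn on synthetic data; the feature count, classifier choice, and split are illustrative, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from imblearn.combine import SMOTETomek

# Synthetic imbalanced data standing in for a heart-failure dataset.
X, y = make_classification(n_samples=1000, n_features=12, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Information-gain-style feature selection via mutual information.
selector = SelectKBest(mutual_info_classif, k=8).fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

# Rebalance the minority class: SMOTE oversampling plus Tomek-link cleaning.
X_res, y_res = SMOTETomek(random_state=42).fit_resample(X_train_sel, y_train)

# A boosting classifier trained on the resampled, selected features.
clf = GradientBoostingClassifier(random_state=42).fit(X_res, y_res)
print(f"Test accuracy: {clf.score(X_test_sel, y_test):.3f}")
```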
5.
J Pers Med ; 12(5)2022 Apr 24.
Article in English | MEDLINE | ID: mdl-35629103

ABSTRACT

In recent years, lung disease has increased manyfold, causing millions of casualties annually. To combat the crisis, an efficient, reliable, and affordable lung disease diagnosis technique has become indispensable. In this study, a multiclass classification of lung disease from frontal chest X-ray images using a fine-tuned CNN model is proposed. The classification covers ten classes: nine lung conditions (COVID-19, Effusion, Tuberculosis, Pneumonia, Lung Opacity, Mass, Nodule, Pneumothorax, and Pulmonary Fibrosis) plus a Normal class. The dataset is a collective dataset gathered from multiple sources. After pre-processing and balancing the dataset with eight augmentation techniques, a total of 80,000 X-ray images were fed to the model for classification. Initially, eight pre-trained CNN models (AlexNet, GoogLeNet, InceptionV3, MobileNetV2, VGG16, ResNet50, DenseNet121, and EfficientNetB7) were evaluated on the dataset. Among these, VGG16 achieved the highest accuracy, at 92.95%. To further improve classification accuracy, LungNet22 was constructed upon the primary structure of the VGG16 model. An ablation study was used to determine the hyper-parameters. Using the Adam optimizer, the proposed model achieved a commendable accuracy of 98.89%. To verify the performance of the model, several performance metrics, including the ROC curve and AUC values, were computed as well.
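
Building a classifier on top of a pretrained VGG16, the starting point described for LungNet22, follows a standard transfer-learning pattern in Keras. A minimal sketch, assuming 224x224 RGB inputs and a ten-class softmax head; the head layers, learning rate, and dataset objects are assumptions, not the published LungNet22 architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 10  # nine lung conditions plus Normal

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # start with a frozen backbone; unfreeze later for fine-tuning

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(512, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets are assumed
```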

6.
Front Oncol ; 12: 931141, 2022.
Article in English | MEDLINE | ID: mdl-36003775

ABSTRACT

Skin cancer has become one of the most ubiquitous types of cancer, particularly in certain geographic areas such as Oceania. Early detection with high accuracy is of utmost importance, and studies have shown that deep learning-based intelligent approaches to this problem have been fruitful. In this research, we present a novel deep learning-based classifier to identify benign and malignant skin lesions from a preprocessed dataset with important features identified through an effective feature extraction method. The initial dataset was obtained from Kaggle, after which several preprocessing steps (hair and background removal, image enhancement, selection of the region of interest (ROI), region-based segmentation, morphological gradient, and feature extraction) were performed, resulting in histopathological image data with 20 input features based on geometrical and textural characteristics. A principal component analysis (PCA)-based feature extraction technique was applied to reduce the dimensionality to 10 input features. Subsequently, we applied our deep learning classifier, SkinNet-16, to detect cancerous lesions accurately at a very early stage. The highest accuracy was obtained with the Adamax optimizer at a learning rate of 0.006; the model delivered an impressive accuracy of approximately 99.19%.
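
The PCA step described above (reducing 20 geometrical and textural features to 10) can be expressed as a short scikit-learn pipeline. This sketch uses synthetic data and a generic MLP in place of SkinNet-16; the feature values, labels, and classifier are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))    # stand-in for 20 geometrical/textural features
y = rng.integers(0, 2, size=500)  # 0 = benign, 1 = malignant (synthetic labels)

# Scale, project the 20 features onto 10 principal components, then classify.
pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
pipeline.fit(X, y)
print(f"Training accuracy on synthetic data: {pipeline.score(X, y):.3f}")
```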

7.
Biology (Basel) ; 10(11)2021 Nov 13.
Article in English | MEDLINE | ID: mdl-34827167

ABSTRACT

COVID-19, regarded as the deadliest virus of the 21st century, has claimed the lives of millions of people around the globe in less than two years. Since the virus initially affects the lungs of patients, chest X-ray imaging is helpful for effective diagnosis. Any method for automatic, reliable, and accurate screening of COVID-19 infection would be beneficial for rapid detection and for reducing the exposure of medical and healthcare professionals to the virus. In the past, Convolutional Neural Networks (CNNs) proved quite successful in the classification of medical images. In this study, an automatic deep learning classification method for detecting COVID-19 from chest X-ray images is proposed using a CNN. A dataset consisting of 3616 COVID-19 chest X-ray images and 10,192 healthy chest X-ray images was used. The original data were then augmented to increase the sample to 26,000 COVID-19 and 26,000 healthy X-ray images. The dataset was enhanced using histogram equalization and spectrum, grays, and cyan transformations, and normalized with NCLAHE before being applied to the CNN models. Initially, the symptoms of COVID-19 were detected by employing eleven existing CNN models: VGG16, VGG19, MobileNetV2, InceptionV3, NFNet, ResNet50, ResNet101, DenseNet, EfficientNetB7, AlexNet, and GoogLeNet. From these, MobileNetV2 was selected for further modification to obtain a higher COVID-19 detection accuracy. Performance of the models was evaluated using a confusion matrix. The modified MobileNetV2 model proposed in the study gave the highest accuracy, 98%, in classifying COVID-19 and healthy chest X-rays among all the implemented CNN models. The second-best performance was achieved by the pre-trained MobileNetV2, with an accuracy of 97%, followed by VGG19 and ResNet101 with 95% accuracy each. The study also compares the compilation time of the models; the proposed model required the least, at 2 h, 50 min, and 21 s. Finally, the Wilcoxon signed-rank test was performed to test statistical significance. The results suggest that the proposed method can identify the symptoms of infection from chest X-ray images more efficiently than existing methods.
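
Adapting a pretrained MobileNetV2 for binary COVID-19 vs. healthy classification, and checking significance with a Wilcoxon signed-rank test, can be sketched as follows. The head layers, dropout rate, and the paired per-fold accuracies passed to the test are assumptions for illustration, not the modified architecture or results from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from scipy.stats import wilcoxon

# Frozen MobileNetV2 backbone with a small binary head (COVID-19 vs. healthy).
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3))
base.trainable = False
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.2)(x)
output = layers.Dense(1, activation="sigmoid")(x)
model = Model(base.input, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Wilcoxon signed-rank test on paired per-fold accuracies (illustrative numbers only).
proposed = [0.98, 0.97, 0.98, 0.99, 0.98]
baseline = [0.95, 0.94, 0.96, 0.95, 0.95]
stat, p_value = wilcoxon(proposed, baseline)
print(f"Wilcoxon statistic={stat:.2f}, p-value={p_value:.4f}")
```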
