1.
Sci Rep ; 14(1): 10812, 2024 05 11.
Article En | MEDLINE | ID: mdl-38734714

Cervical cancer, the second most prevalent cancer affecting women, arises from abnormal cell growth in the cervix, the lower part of the uterus. The significance of early detection cannot be overstated, prompting the use of various screening methods such as Pap smears, colposcopy, and Human Papillomavirus (HPV) testing to identify potential risks and initiate timely intervention. These screening procedures encompass visual inspections, Pap smears, colposcopies, biopsies, and HPV-DNA testing, each demanding the specialized knowledge and skills of experienced physicians and pathologists due to the inherently subjective nature of cancer diagnosis. In response to the need for efficient and intelligent screening, this article introduces a methodology that leverages pre-trained deep neural network models, including AlexNet, ResNet-101, ResNet-152, and InceptionV3, for feature extraction. The fine-tuning of these models is combined with the integration of diverse machine learning algorithms, with ResNet-152 showing exceptional performance and achieving an accuracy of 98.08%. The SIPaKMeD dataset, publicly accessible and utilized in this study, contributes to the transparency and reproducibility of the findings. The proposed hybrid methodology combines deep learning (DL) and machine learning (ML) for cervical cancer classification: the most intricate and complex image features are extracted through DL, and various ML algorithms are then applied to the extracted features. This approach not only holds promise for significantly improving cervical cancer detection but also underscores the transformative potential of intelligent automation in medical diagnostics, paving the way for more accurate and timely interventions.
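
The hybrid pipeline described above, deep features from a pre-trained network fed into a classical ML classifier, can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the "sipakmed" folder layout, the RBF-kernel SVM, and the train/test split are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained ResNet-152 with the classification head removed -> 2048-d features.
backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: sipakmed/<class_name>/*.jpg
dataset = datasets.ImageFolder("sipakmed", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for x, y in loader:
        features.append(backbone(x.to(device)).cpu().numpy())
        labels.append(y.numpy())
X, y = np.concatenate(features), np.concatenate(labels)

# Classical ML classifier on the deep features (an RBF-kernel SVM is one option).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```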


Deep Learning , Early Detection of Cancer , Uterine Cervical Neoplasms , Humans , Uterine Cervical Neoplasms/diagnosis , Uterine Cervical Neoplasms/pathology , Female , Early Detection of Cancer/methods , Neural Networks, Computer , Algorithms , Papanicolaou Test/methods , Colposcopy/methods
2.
BMC Med Imaging ; 24(1): 38, 2024 Feb 08.
Article En | MEDLINE | ID: mdl-38331800

Deep learning has recently advanced the segmentation of medical images. In this regard, U-Net is the predominant deep neural network, and its architecture is the most prevalent in the medical imaging community. Experiments conducted on difficult datasets led us to conclude that the traditional U-Net framework is deficient in certain respects, despite its overall excellence in segmenting multimodal medical images. Therefore, we propose several modifications to the existing state-of-the-art U-Net model. The technical approach applies a Multi-Dimensional U-Convolutional Neural Network to achieve accurate segmentation of multimodal biomedical images, improving the precision and comprehensiveness with which structures are identified and analyzed across diverse imaging modalities. As a result of these enhancements, we propose a novel framework called the Multi-Dimensional U-Convolutional Neural Network (MDU-CNN) as a potential successor to U-Net. On a large set of multimodal medical images, we compared the proposed MDU-CNN to the classical U-Net: the differences were small on clean images, while a substantial improvement was obtained on difficult ones. We tested the model on five distinct datasets, each presenting unique challenges, and found that it outperformed U-Net by 1.32%, 5.19%, 4.50%, 10.23%, and 0.87%, respectively.
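
For context, the sketch below shows a minimal U-Net-style encoder-decoder with skip connections in PyTorch. It illustrates the classical baseline the abstract compares against, not the proposed MDU-CNN, whose exact modifications are not detailed here; the channel widths, depth, and input size are arbitrary.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 conv + BN + ReLU blocks, the basic U-Net building unit.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Segmentation-mask logits for a batch of 256x256 single-channel scans.
print(MiniUNet()(torch.randn(2, 1, 256, 256)).shape)  # torch.Size([2, 1, 256, 256])
```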


Neural Networks, Computer , Societies, Medical , Humans , Image Processing, Computer-Assisted
3.
BMC Med Imaging ; 24(1): 21, 2024 Jan 19.
Article En | MEDLINE | ID: mdl-38243215

The current approach to diagnosing and classifying brain tumors relies on the histological evaluation of biopsy samples, which is invasive, time-consuming, and susceptible to manual error. These limitations underscore the pressing need for a fully automated, deep-learning-based multi-classification system for brain malignancies. This article aims to leverage deep convolutional neural networks (CNNs) to enhance early detection and presents three distinct CNN models designed for different classification tasks. The first CNN model achieves a detection accuracy of 99.53% for brain tumors. The second CNN model, with an accuracy of 93.81%, categorizes brain tumors into five distinct types: normal, glioma, meningioma, pituitary, and metastatic. The third CNN model demonstrates an accuracy of 98.56% in classifying brain tumors into their different grades. To ensure optimal performance, a grid search optimization approach is employed to automatically fine-tune all relevant hyperparameters of the CNN models. The use of large, publicly accessible clinical datasets results in robust and reliable classification outcomes. The article conducts a comprehensive comparison of the proposed models against classical models such as AlexNet, DenseNet121, ResNet-101, VGG-19, and GoogLeNet, reaffirming the superiority of the deep CNN-based approach to brain tumor classification and early detection.
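
As an illustration of the grid-search tuning step mentioned above, the sketch below loops over a small hyperparameter grid for a toy CNN. The grid values, the 5-class setup, and the random stand-in data are assumptions; the paper's actual models and search space are not reproduced.

```python
import itertools
import torch
import torch.nn as nn

def make_cnn(dropout, n_classes=5):
    # Tiny CNN for 64x64 single-channel inputs: two conv blocks, then a linear head.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Dropout(dropout), nn.Linear(32 * 16 * 16, n_classes),
    )

grid = {"lr": [1e-3, 1e-4], "batch_size": [16, 32], "dropout": [0.3, 0.5]}
best = None
for lr, bs, do in itertools.product(*grid.values()):
    model = make_cnn(do)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    # A single step on random tensors stands in for full training and
    # validation on the real MRI dataset.
    x, y = torch.randn(bs, 1, 64, 64), torch.randint(0, 5, (bs,))
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    val_acc = (model(x).argmax(dim=1) == y).float().mean().item()  # placeholder metric
    if best is None or val_acc > best[0]:
        best = (val_acc, {"lr": lr, "batch_size": bs, "dropout": do})
print("best configuration:", best[1])
```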


Brain Neoplasms , Glioma , Meningeal Neoplasms , Humans , Brain , Brain Neoplasms/diagnostic imaging , Neural Networks, Computer
4.
Sci Rep ; 13(1): 23029, 2023 12 27.
Article En | MEDLINE | ID: mdl-38155247

Accurately classifying brain tumor types is critical for timely diagnosis and potentially saving lives. Magnetic Resonance Imaging (MRI) is a widely used non-invasive method for obtaining high-contrast grayscale brain images, primarily for tumor diagnosis. The application of Convolutional Neural Networks (CNNs) in deep learning has revolutionized diagnostic systems, leading to significant advancements in medical imaging interpretation. In this study, we employ a transfer learning-based fine-tuning approach using EfficientNets to classify brain tumors into three categories: glioma, meningioma, and pituitary tumors. We use the publicly accessible CE-MRI Figshare dataset to fine-tune five pre-trained models from the EfficientNet family, ranging from EfficientNetB0 to EfficientNetB4. Our approach refines the pre-trained EfficientNet model in two steps: first, the model is initialized with weights from the ImageNet dataset; then, additional layers, including top layers and a fully connected layer, are added to enable tumor classification. We conduct various tests to assess the robustness of our fine-tuned EfficientNets in comparison to other pre-trained models, and we analyze the impact of data augmentation on the model's test accuracy. To gain insight into the model's decision-making, we employ Grad-CAM visualization to examine the attention maps generated by the best-performing model, effectively highlighting tumor locations within brain images. Our results reveal that using EfficientNetB2 as the underlying framework yields significant performance improvements: the overall test accuracy, precision, recall, and F1-score were 99.06%, 98.73%, 99.13%, and 98.79%, respectively.
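
The two-step fine-tuning described above can be sketched roughly as follows with torchvision's EfficientNet-B2 (the paper's model names follow the Keras convention, but the idea is the same): load ImageNet weights, optionally freeze the backbone, and attach a new head for the three tumor classes. The dropout rate, learning rate, input size, and random batch are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b2(weights=models.EfficientNet_B2_Weights.IMAGENET1K_V1)

# Optionally freeze the convolutional backbone for the first training stage.
for p in model.features.parameters():
    p.requires_grad = False

# New classification head: glioma / meningioma / pituitary.
in_features = model.classifier[1].in_features  # 1408 for EfficientNet-B2
model.classifier = nn.Sequential(
    nn.Dropout(p=0.3),
    nn.Linear(in_features, 3),
)

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for CE-MRI data.
x, y = torch.randn(4, 3, 260, 260), torch.randint(0, 3, (4,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```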


Brain Neoplasms , Deep Learning , Glioma , Meningeal Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Glioma/diagnostic imaging
5.
Sci Rep ; 13(1): 17574, 2023 10 16.
Article En | MEDLINE | ID: mdl-37845403

The electroencephalogram (EEG) has emerged over the past few decades as one of the key tools used by clinicians to detect seizures and other neurological abnormalities of the human brain. The proper diagnosis of epilepsy is crucial due to its distinctive nature and the negative effects of epileptic seizures on patients. The classification of minimally pre-processed, raw multichannel EEG recordings is the foundation of this article's method for identifying seizures in pediatric patients. The method makes use of the automatic feature learning capabilities of a three-dimensional deep convolution auto-encoder (3D-DCAE) coupled with a neural network-based classifier to build an integrated framework that is trained in a supervised manner to attain the highest possible classification accuracy between ictal and interictal brain-state signals. A pair of models was created and evaluated using three distinct EEG segment lengths and a tenfold cross-validation procedure. Based on five evaluation criteria, the labelled hybrid convolutional auto-encoder (LHCAE) model, which uses a classifier based on bidirectional long short-term memory (Bi-LSTM) and an EEG segment length of 4 s, performed best. On the publicly available Children's Hospital Boston (CHB) dataset, this model achieved 99.08 ± 0.54% accuracy, 99.21 ± 0.50% sensitivity, 99.11 ± 0.57% specificity, 99.09 ± 0.55% precision, and an F1-score of 99.16 ± 0.58%. Based on these outcomes, the proposed seizure classification model outperforms other state-of-the-art methods on the same dataset.
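
A greatly simplified sketch of the general idea, a convolutional encoder learning features directly from raw multichannel EEG and feeding a Bi-LSTM classifier, is given below. It uses 1-D convolutions rather than the paper's 3D-DCAE, and the channel count, segment length, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConvEncoderBiLSTM(nn.Module):
    """Conv encoder over raw multichannel EEG feeding a Bi-LSTM classifier."""
    def __init__(self, n_channels=23, hidden=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(   # learned features, no hand-crafted ones
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # ictal vs. interictal

    def forward(self, x):               # x: (batch, channels, samples)
        z = self.encoder(x)             # (batch, 64, samples / 4)
        z = z.permute(0, 2, 1)          # LSTM expects (batch, time, features)
        out, _ = self.bilstm(z)
        return self.head(out[:, -1])    # classify from the last time step

# 4-second segments at 256 Hz from 23 EEG channels (a CHB-MIT-style layout).
model = ConvEncoderBiLSTM()
logits = model(torch.randn(8, 23, 4 * 256))
print(logits.shape)  # torch.Size([8, 2])
```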


Deep Learning , Epilepsy , Child , Humans , Epilepsy/diagnosis , Seizures/diagnosis , Neural Networks, Computer , Brain/diagnostic imaging , Electroencephalography/methods , Signal Processing, Computer-Assisted , Algorithms
6.
Sci Rep ; 13(1): 13588, 2023 08 21.
Article En | MEDLINE | ID: mdl-37604952

Heart disease is a significant global cause of mortality, and predicting it through clinical data analysis poses challenges. Machine learning (ML) has emerged as a valuable tool for diagnosing and predicting heart disease by analyzing healthcare data, and previous studies have extensively employed ML techniques in medical research for heart disease prediction. In this study, eight ML classifiers were utilized to identify crucial features that enhance the accuracy of heart disease prediction. Various combinations of features and well-known classification algorithms were employed to develop the prediction model. Classifiers such as Naïve Bayes and Radial Basis Function networks were implemented, achieving accuracies of 94.78% and 90.78%, respectively, in heart disease prediction. Among the state-of-the-art methods for cardiovascular problem prediction, Learning Vector Quantization exhibited the highest accuracy rate of 98.7%. The motivation behind predicting cardiovascular heart disease (CHD) lies in its potential to save lives, improve health outcomes, and allocate healthcare resources efficiently. The key contributions encompass early intervention, personalized medicine, technological advancements, the impact on public health, and ongoing research, all of which collectively work toward reducing the burden of CHD on both individual patients and society as a whole.
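
The kind of classifier comparison described above can be sketched with scikit-learn and cross-validation, as below. Synthetic tabular data stands in for the clinical records, and the model list only approximates the classifiers named in the abstract (an RBF-kernel SVM and an MLP stand in for the RBF network, and no LVQ implementation is included).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in: 13 features, roughly like the UCI heart-disease data.
X, y = make_classification(n_samples=1000, n_features=13, n_informative=8,
                           random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "RBF-kernel SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}

for name, clf in models.items():
    # Standardize features, then score each classifier with 5-fold CV.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```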


Cardiovascular Diseases , Cardiovascular System , Heart Diseases , Humans , Bayes Theorem , Heart , Heart Diseases/diagnosis , Cardiovascular Diseases/diagnosis
7.
Diagnostics (Basel) ; 13(8)2023 Apr 10.
Article En | MEDLINE | ID: mdl-37189485

We developed a framework to detect and grade knee RA from digital X-radiation images and used it to demonstrate the ability of deep learning approaches to detect knee RA with a consensus-based decision (CBD) grading system. The study aimed to evaluate how efficiently a deep learning approach based on artificial intelligence (AI) can locate and determine the severity of knee RA in digital X-radiation images. The study comprised people over 50 years of age with RA symptoms, such as knee joint pain, stiffness, crepitus, and functional impairment. The digitized X-radiation images were obtained from the BioGPS database repository; we used 3172 digital X-radiation images of the knee joint acquired from the anterior-posterior perspective. A trained Faster R-CNN architecture was used to identify the knee joint space narrowing (JSN) area in the digital X-radiation images and to extract features using ResNet-101 with domain adaptation. In addition, another well-trained model (VGG16 with domain adaptation) was employed for knee RA severity classification. Medical experts graded the knee joint X-radiation images using a consensus-based decision score, and the enhanced region proposal network (ERPN) was trained with the manually extracted knee regions serving as the test dataset. An X-radiation image was fed into the final model, and a consensus decision was used to grade the outcome. The presented model correctly identified the marginal knee JSN region with 98.97% accuracy and achieved a total knee RA intensity classification accuracy of 99.10%, with a sensitivity of 97.3%, a specificity of 98.2%, a precision of 98.1%, and a dice score of 90.1%, outperforming other conventional models.
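
The detector fine-tuning step resembles the standard torchvision recipe sketched below: load a pre-trained Faster R-CNN and replace its box predictor with one sized for the joint-space class. The ResNet-50 FPN variant, the image size, and the example box are assumptions; the paper's ResNet-101 backbone with domain adaptation and its ERPN modifications are not reproduced.

```python
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)

# Replace the box head: background + one "joint-space" class.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9)

# One illustrative training step; a real loader would yield annotated radiographs.
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[120., 200., 400., 320.]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)          # dict of RPN and ROI-head losses
total = sum(losses.values())
optimizer.zero_grad()
total.backward()
optimizer.step()
print({k: round(v.item(), 3) for k, v in losses.items()})
```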

8.
Diagnostics (Basel) ; 13(6)2023 Mar 17.
Article En | MEDLINE | ID: mdl-36980463

To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. To categorize brain tumors precisely, researchers have developed a variety of segmentation algorithms, and segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method is proposed. The approach consists of several phases: pre-processing the MRI images, segmenting the images, extracting features, and classifying the images. During pre-processing of the MRI scan, an adaptive filter was utilized to eliminate background noise. The local-binary grey level co-occurrence matrix (LBGLCM) was used for feature extraction, and enhanced fuzzy c-means clustering (EFCMC) was used for image segmentation. After extracting the scan features, a deep learning model was used to classify the MRI images into two groups, glioma and normal, with the classification performed by a convolutional recurrent neural network (CRNN). The proposed technique improved brain image classification on the defined input dataset. MRI scans from the REMBRANDT dataset, consisting of 620 testing and 2480 training images, were used for the research. The results demonstrate that the newly proposed method outperformed its predecessors. The proposed CRNN strategy was compared against BP, U-Net, and ResNet, three of the most prevalent classification approaches currently in use. For brain tumor classification, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity.
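
The texture-feature step can be illustrated with scikit-image in the spirit of the LBGLCM described above: a local binary pattern map is computed first, and grey-level co-occurrence statistics are then read from it. The synthetic array stands in for an MRI slice, and the chosen distances, angles, and properties are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

rng = np.random.default_rng(0)
slice_img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)  # placeholder MRI slice

# Local binary pattern map (8 neighbours, radius 1), rescaled back to uint8 levels.
lbp = local_binary_pattern(slice_img, P=8, R=1, method="uniform")
lbp = np.uint8(255 * lbp / lbp.max())

# Co-occurrence matrix over the LBP map and a few standard Haralick-style properties.
glcm = graycomatrix(lbp, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # feature vector that would feed a downstream classifier
```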

9.
Sensors (Basel) ; 23(3)2023 Jan 19.
Article En | MEDLINE | ID: mdl-36772207

Rapid improvements in ultrasound imaging technology have made it much more useful for screening and diagnosing breast problems. Local speckle noise in ultrasound breast images may impair image quality and hamper observation and diagnosis, so it is crucial to remove this localized noise from the images. In this article, we use a hybrid deep learning technique to remove local speckle noise from breast ultrasound images. The contrast of the ultrasound breast images was first improved using logarithmic and exponential transforms, and guided filter algorithms were then used to enhance the details of the glandular ultrasound breast images. To complete the pre-processing of the ultrasound breast images and enhance image clarity, spatial high-pass filtering algorithms were used to suppress excessive sharpening. Finally, to remove local speckle noise without sacrificing image edges, edge-sensitive terms were added to the Logical-Pool Recurrent Neural Network (LPRNN). The mean square error and false recognition rate both fell below 1.1% by the hundredth training iteration, showing that the LPRNN had been properly trained. Despeckled ultrasound images had signal-to-noise ratios (SNRs) greater than 65 dB, peak SNR values greater than 70 dB, edge preservation index values above the experimental threshold of 0.48, and short processing times. The time required to remove local speckle noise is low, edge information is preserved, and image features are brought into sharp focus.
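
A rough sketch of the pre-processing chain (logarithmic contrast stretching, guided filtering, and a spatial high-pass detail step) is shown below using OpenCV. The guided filter requires the opencv-contrib-python package, the file name is a placeholder, and the filter parameters are assumptions; the LPRNN denoising stage itself is not reproduced.

```python
import cv2
import numpy as np

img = cv2.imread("breast_ultrasound.png", cv2.IMREAD_GRAYSCALE)
if img is None:  # fall back to a synthetic speckled image so the sketch runs standalone
    img = np.random.default_rng(0).rayleigh(80, (256, 256)).clip(0, 255).astype(np.uint8)

# 1) Logarithmic transform to lift low-intensity glandular detail.
log_img = np.uint8(255 * np.log1p(img.astype(np.float32)) / np.log1p(255))

# 2) Edge-preserving smoothing with a guided filter (the image guides itself).
#    Arguments: guide, src, radius, eps (requires opencv-contrib).
guided = cv2.ximgproc.guidedFilter(log_img, log_img, 8, 0.02 * 255 ** 2)

# 3) Spatial high-pass detail layer, added back with a modest weight.
blur = cv2.GaussianBlur(guided, (0, 0), sigmaX=3)
highpass = cv2.subtract(guided, blur)
enhanced = cv2.addWeighted(guided, 1.0, highpass, 0.5, 0)

cv2.imwrite("preprocessed.png", enhanced)
```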


Deep Learning , Humans , Female , Ultrasonography, Mammary , Ultrasonography/methods , Algorithms , Neural Networks, Computer , Signal-To-Noise Ratio
10.
Diagnostics (Basel) ; 13(3)2023 Feb 02.
Article En | MEDLINE | ID: mdl-36766652

Cervical cancer is a leading cause of mortality in women all over the world every year. This cancer can be cured if it is detected early and patients are treated promptly. This study proposes a new strategy for the detection of cervical cancer using cervigram images. The adaptive histogram equalization (AHE) technique is used to improve the edges of the cervical image, and the finite ridgelet transform is then used to generate a multi-resolution image. From this transformed multi-resolution cervical image, features such as ridgelets, gray-level run-length matrices, moment invariants, and enhanced local ternary patterns are extracted. A feed-forward backward-propagation neural network is trained and tested on these extracted features to classify the cervical images as normal or abnormal. To detect and segment cancer regions, morphological procedures are applied to the abnormal cervical images. The cervical cancer detection system's performance metrics are 98.11% sensitivity, 98.97% specificity, 99.19% accuracy, a PPV of 98.88%, an NPV of 91.91%, an LPR of 141.02%, an LNR of 0.0836, 98.13% precision, 97.15% FPs, and 90.89% FNs. The simulation outcomes show that the proposed method detects and segments cervical cancer better than the traditional methods.
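
Two steps of the pipeline above, contrast enhancement of a cervigram and a feed-forward neural network trained on extracted feature vectors, can be sketched as follows. CLAHE is used as a stand-in for the AHE step, the file name is a placeholder, and the random feature matrix is a placeholder for the ridgelet, run-length, moment-invariant, and local-ternary-pattern features.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

img = cv2.imread("cervigram.png", cv2.IMREAD_GRAYSCALE)
if img is None:  # synthetic stand-in so the sketch runs without the dataset
    img = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.uint8)

# Contrast-limited adaptive histogram equalization to sharpen lesion edges.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Placeholder feature matrix: in the real pipeline each row would hold the
# ridgelet, run-length, moment-invariant, and local-ternary-pattern features.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 40))
y = rng.integers(0, 2, 400)            # 0 = normal, 1 = abnormal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
nn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
nn.fit(X_tr, y_tr)
print(classification_report(y_te, nn.predict(X_te)))
```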

11.
J Pharm Bioallied Sci ; 5(Suppl 2): S139-41, 2013 Jul.
Article En | MEDLINE | ID: mdl-23956592

Hyperplasias of the mandible are usually seen in relation to the condyle or affecting one half of the mandible, such cases being described as hemimandibular hyperplasia or elongation. This article presents a rare case of hyperplasia of the right body of the mandible. The case is unique in that, although present from childhood, it did not cause any functional disturbance or the occlusal disharmony characteristically seen in such developmental anomalies. Here, we describe the clinical, radiographic, and histopathologic findings that led to the diagnosis of hyperplasia of the mandibular body, and the treatment rendered to provide esthetic correction.

...