Results 1 - 20 of 1,695
1.
J Crit Care ; 85: 154923, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357434

ABSTRACT

BACKGROUND: Upper gastrointestinal bleeding (UGIB) is a significant cause of morbidity and mortality worldwide. This study investigates the use of residual variables and machine learning (ML) models for predicting major bleeding in patients with severe UGIB after their first intensive care unit (ICU) admission. METHODS: The Medical Information Mart for Intensive Care IV and eICU databases were used. Conventional ML and long short-term memory models were constructed using pre-ICU and ICU admission day data to predict the recurrence of major gastrointestinal bleeding. In the models, residual data were utilized by subtracting the normal range from the test result. The models included eight algorithms. Shapley additive explanations and saliency maps were used for feature interpretability. RESULTS: Twenty-five ML models were developed using data from 2604 patients. The light gradient-boosting machine algorithm model using pre-ICU admission residual data outperformed other models that used test results directly, with an AUC of 0.96. The key factors included aspartate aminotransferase, blood urea nitrogen, albumin, length of ICU admission, and respiratory rate. CONCLUSIONS: ML models using residuals improved the accuracy and interpretability in predicting major bleeding during ICU admission in patients with UGIB. These interpretable features may facilitate the early identification and management of high-risk patients, thereby improving hemodynamic stability and outcomes.
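The abstract describes building "residual" features by subtracting the normal range from each test result. A minimal sketch of one plausible encoding (not the authors' code): values inside the range map to 0, values outside map to the signed distance from the nearest bound. The blood urea nitrogen range below is an assumption for illustration.

```python
def residual(value, low, high):
    """Residual of a lab value relative to its normal range [low, high]."""
    if value < low:
        return value - low    # negative: below the normal range
    if value > high:
        return value - high   # positive: above the normal range
    return 0.0                # inside the normal range

# Example: blood urea nitrogen with an assumed normal range of 7-20 mg/dL.
bun_values = [5.0, 12.0, 35.0]
residuals = [residual(v, 7.0, 20.0) for v in bun_values]  # [-2.0, 0.0, 15.0]
```

Such residual features would then be fed to the gradient-boosting or LSTM models in place of the raw test results.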

2.
Histopathology ; 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39360579

ABSTRACT

AIMS: To create and validate a weakly supervised artificial intelligence (AI) model for detection of abnormal colorectal histology, including dysplasia and cancer, and prioritise biopsies according to clinical significance (severity of diagnosis). MATERIALS AND METHODS: Triagnexia Colorectal, a weakly supervised deep learning model, was developed for the classification of colorectal samples from haematoxylin and eosin (H&E)-stained whole slide images. The model was trained on 24 983 digitised images and assessed by multiple pathologists in a simulated digital pathology environment. The AI application was implemented as part of a point-and-click graphical user interface to streamline decision-making. Pathologists assessed the accuracy of the AI tool, its value, ease of use and integration into the digital pathology workflow. RESULTS: Validation of the model was conducted on two cohorts: the first, on 100 single-slide cases, achieved micro-average model specificity of 0.984, micro-average model sensitivity of 0.949 and micro-average model F1 score of 0.949 across all classes. A secondary multi-institutional validation cohort, of 101 single-slide cases, achieved micro-average model specificity of 0.978, micro-average model sensitivity of 0.931 and micro-average model F1 score of 0.931 across all classes. Pathologists reported positive impressions of the AI's overall accuracy in detecting colorectal pathology abnormalities. CONCLUSIONS: We have developed a high-performing colorectal biopsy AI triage model that can be integrated into a routine digital pathology workflow to assist pathologists in prioritising cases and identifying cases with dysplasia/cancer versus non-neoplastic biopsies.

3.
Article in English | MEDLINE | ID: mdl-39353461

ABSTRACT

BACKGROUND: The risk of biochemical recurrence (BCR) after radiotherapy for localized prostate cancer (PCa) varies widely within standard risk groups. There is a need for low-cost tools to more robustly predict recurrence and personalize therapy. Radiomic features from pretreatment MRI show potential as noninvasive biomarkers for BCR prediction. However, previous research has not fully combined radiomics with clinical and pathological data to predict BCR in PCa patients following radiotherapy. PURPOSE: This study aims to predict 5-year BCR using radiomics from pretreatment T2W MRI and clinical-pathological data in PCa patients treated with radiation therapy, and to develop a unified model compatible with both 1.5T and 3T MRI scanners. METHODS: A total of 150 T2W scans and clinical parameters were preprocessed. Of these, 120 cases were used for training and validation, and 30 for testing. Four distinct machine learning models were developed: Model 1 used radiomics, Model 2 used clinical and pathological data, and Model 3 combined these using late fusion. Model 4 integrated radiomic and clinical-pathological data using early fusion. RESULTS: Model 1 achieved an AUC of 0.73, while Model 2 had an AUC of 0.64 for predicting outcomes in 30 new test cases. Model 3, using late fusion, had an AUC of 0.69. Early fusion models showed strong potential, with Model 4 reaching an AUC of 0.84, highlighting the effectiveness of the early fusion model. CONCLUSIONS: This study is the first to use a fusion technique for predicting BCR in PCa patients following radiotherapy, utilizing pre-treatment T2W MRI images and clinical-pathological data. The methodology improves predictive accuracy by fusing radiomics with clinical-pathological information, even with a relatively small dataset, and introduces the first unified model for both 1.5T and 3T MRI images.
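The early/late fusion distinction above can be sketched as follows. This is a hedged illustration on synthetic data, not the authors' pipeline; the feature counts and the logistic-regression learner are arbitrary stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(120, 5))   # stand-in radiomic features
X_cli = rng.normal(size=(120, 3))   # stand-in clinical-pathological features
y = (X_rad[:, 0] + X_cli[:, 0] > 0).astype(int)  # synthetic labels

# Early fusion: concatenate the feature sets, then train a single model.
early = LogisticRegression().fit(np.hstack([X_rad, X_cli]), y)
p_early = early.predict_proba(np.hstack([X_rad, X_cli]))[:, 1]

# Late fusion: train one model per modality, then average the probabilities.
m_rad = LogisticRegression().fit(X_rad, y)
m_cli = LogisticRegression().fit(X_cli, y)
p_late = (m_rad.predict_proba(X_rad)[:, 1] + m_cli.predict_proba(X_cli)[:, 1]) / 2
```

Early fusion lets the model learn interactions between radiomic and clinical features, which is consistent with Model 4's higher AUC in this study.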

4.
Heliyon ; 10(19): e37745, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39386823

ABSTRACT

Acute myeloid leukemia (AML) is a highly aggressive cancer form that affects myeloid cells, leading to the excessive growth of immature white blood cells (WBCs) in both bone marrow and peripheral blood. Timely AML detection is crucial for effective treatment and patient well-being. Currently, AML diagnosis relies on the manual recognition of immature WBCs through peripheral blood smear analysis, which is time-consuming, prone to errors, and subject to inter-observers' variation. This study aimed to develop a computer-aided diagnostic framework for AML, called "CAE-ResVGG FusionNet", that precisely identifies and classifies immature WBCs into their respective subtypes. The proposed framework leverages an integrated approach, by combining a convolutional autoencoder (CAE) with finely tuned adaptations of the VGG19 and ResNet50 architectures to extract features from CAE-derived embeddings. The process begins with a binary classification model distinguishing between mature and immature WBCs followed by a multiclassifier further classifying immature cells into four subtypes: myeloblasts, monoblasts, erythroblasts, and promyelocytes. The CAE-ResVGG FusionNet workflow comprises four primary stages, including data preprocessing, feature extraction, classification, and validation. The preprocessing phase involves applying data augmentation methods using geometric transformations and synthetic image generation using the CAE to address imbalance in the WBC distribution. Feature extraction involves image embedding and transfer learning, where CAE-derived image representations are used by a custom integrated model of VGG19 and ResNet50 pretrained models. The classification phase employs a weighted ensemble approach that leverages VGG19 and ResNet50, where the optimal weighting parameters are selected using a grid search. 
The model performance was assessed during the validation phase using the overall accuracy, precision, and sensitivity, while the area under the receiver operating characteristic curve (AUC) was used to evaluate the model's discriminatory capability. The proposed framework exhibited notable results, achieving an average accuracy of 99.9%, sensitivity of 91.7%, and precision of 98.8%. The model demonstrated exceptional discriminatory ability, as evidenced by an AUC of 99.6%. Significantly, the proposed system outperformed previous methods, indicating its superior diagnostic ability.
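The weighted-ensemble step described above (combining VGG19 and ResNet50 outputs with a weight found by grid search) can be sketched on toy numbers. This is an illustrative sketch only; the probabilities and labels below are made up, not the study's data.

```python
import numpy as np

p_vgg = np.array([0.9, 0.2, 0.7, 0.4])   # hypothetical VGG19 probabilities
p_res = np.array([0.6, 0.1, 0.8, 0.7])   # hypothetical ResNet50 probabilities
y = np.array([1, 0, 1, 0])               # validation labels

best_w, best_acc = 0.0, -1.0
for w in np.linspace(0.0, 1.0, 11):       # grid search over the mixing weight
    pred = (w * p_vgg + (1 - w) * p_res) >= 0.5
    acc = float((pred == y).mean())
    if acc > best_acc:                    # keep the best-performing weight
        best_w, best_acc = float(w), acc
```

On these toy values the search settles on w = 0.7, i.e. weighting the first model more heavily; in practice the weight would be chosen on a held-out validation set.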

5.
Indian J Otolaryngol Head Neck Surg ; 76(5): 4036-4042, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39376269

ABSTRACT

Background: Laryngeal cancer accounts for a third of all head and neck malignancies, necessitating timely detection for effective treatment and enhanced patient outcomes. Machine learning shows promise in medical diagnostics, but the impact of model complexity on diagnostic efficacy in laryngeal cancer detection remains unclear. Methods: In this study, we examine the relationship between model sophistication and diagnostic efficacy by evaluating three approaches, logistic regression, a small neural network with 4 layers of neurons, and a more complex convolutional neural network with 50 layers, and assess their efficacy in laryngeal cancer detection on computed tomography images. Results: Logistic regression achieved 82.5% accuracy. The 4-layer NN reached 87.2% accuracy, while ResNet-50, a deep learning architecture, achieved the highest accuracy at 92.6%. Its deep learning capabilities excelled in discerning fine-grained CT image features. Conclusion: Our study highlights the trade-offs involved in selecting a laryngeal cancer detection model. Logistic regression is interpretable but may struggle with complex patterns. The 4-layer NN balances complexity and accuracy. ResNet-50 excels in image classification but demands resources. This research advances understanding of the effect machine learning model complexity can have on learning laryngeal tumor features in contrast CT images for disease prediction.

6.
Acad Radiol ; 2024 Oct 14.
Article in English | MEDLINE | ID: mdl-39406577

ABSTRACT

RATIONALE AND OBJECTIVES: This study aimed to develop a deep learning (DL)-based model for detecting and diagnosing cerebral aneurysms in clinical settings, with and without human assistance. MATERIALS AND METHODS: The DL model was trained using data from 3829 patients across 11 clinical centers and tested on 484 patients from three institutions. Image interpretations were conducted by 10 radiologists (four junior, six senior), the DL model alone, and a combination of radiologists with the DL model. Time spent on post-processing and reading was recorded. The analysis of the area under the curve (AUC), sensitivity, and specificity for the above-mentioned three reading modes was performed at both the lesion and patient levels. RESULTS: Combining the DL model with radiologists reduced image interpretation time by 37.2% and post-processing time by 90.8%. With DL model assistance, the AUC increased from 0.842 to 0.881 (P = 0.008) for junior radiologists (JRs) and from 0.853 to 0.895 (P < 0.001) for senior radiologists (SRs). With DL model assistance, sensitivity significantly improved at both lesion (JR: 68.9% to 81.6%, P = 0.011; SR: 72.4% to 83.5%, P < 0.001) and patient levels (JR: 76.2% to 86.9%, P = 0.011; SR: 80.1% to 88.2%, P < 0.001). Specificity at the patient level showed improvement (JR: 82.6% to 82.7%, P = 0.005; SR: 82.6% to 86.1%, P = 0.021). CONCLUSIONS: The DL model enhanced radiologists' diagnostic performance in detecting cerebral aneurysms, especially for JRs, and expedited the workflow.

7.
Front Oncol ; 14: 1417862, 2024.
Article in English | MEDLINE | ID: mdl-39381041

ABSTRACT

Introduction: Colorectal cancer (CRC) is one of the main causes of death worldwide. Early detection and diagnosis of its precursor lesion, the polyp, is key to reducing its mortality and improving procedure efficiency. During the last two decades, several computational methods have been proposed to assist clinicians in detection, segmentation and classification tasks, but the lack of a common public validation framework makes it difficult to determine which of them is ready to be deployed in the exploration room. Methods: This study presents a complete validation framework, and we compare several methodologies for each of the polyp characterization tasks. Results: Results show that the majority of the approaches are able to provide good performance for the detection and segmentation tasks, but that there is room for improvement regarding polyp classification. Discussion: While studies show promising results in assisting polyp detection and segmentation tasks, further research should be done on the classification task to obtain reliable results that assist clinicians during the procedure. The presented framework provides a standardized method for evaluating and comparing different approaches, which could facilitate the identification of clinically ready assistive methods.

8.
Digit Health ; 10: 20552076241284349, 2024.
Article in English | MEDLINE | ID: mdl-39381826

ABSTRACT

Objective: The proportion of older people will soon approach a quarter of the world population. This leads to an increased prevalence of non-communicable diseases such as Alzheimer's disease (AD), a progressive neurodegenerative disorder and the most common dementia. Mild cognitive impairment (MCI) can be considered its prodromal stage. The early diagnosis of AD is a major challenge, which we address by solving two classification tasks: MCI-AD and cognitively normal (CN)-MCI-AD. Methods: An intelligent computing system has been developed and implemented to face both challenges. A non-neural preprocessing module was followed by a processing one based on a hybrid and ontogenetic neural architecture, the modular hybrid growing neural gas (MyGNG). The MyGNG is hierarchically organized, with a growing neural gas (GNG) for clustering followed by a perceptron for labeling. For each task, 495 and 819 patients from the Alzheimer's disease neuroimaging initiative (ADNI) database were used, respectively, each with 211 characteristics. Results: Encouraging results were obtained in the MCI-AD classification task, reaching an area under the curve (AUC) of 0.96 and sensitivity of 0.91, versus 0.86 and 0.9 in CN-MCI-AD. Furthermore, a comparative study with popular machine learning (ML) models was also performed for each of these tasks. Conclusions: The MyGNG proved to be a better computational solution than the other ML methods analyzed. It also performed similarly to other deep learning schemes with neuroimaging. Our findings suggest that our proposal may be an interesting computing solution for the early diagnosis of AD.

9.
Cancers (Basel) ; 16(19)2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39409906

ABSTRACT

Esophageal cancer has a dismal prognosis and necessitates a multimodal and multidisciplinary approach from diagnosis to treatment. High-definition white-light endoscopy and histopathological confirmation remain the gold standard for the definitive diagnosis of premalignant and malignant lesions. Artificial intelligence using deep learning (DL) methods for image analysis constitutes a promising adjunct for the clinical endoscopist that could effectively decrease Barrett's esophagus (BE) overdiagnosis and unnecessary surveillance, while also assisting in the timely detection of dysplastic BE and esophageal cancer. A plethora of studies published during the last five years have consistently reported highly accurate DL algorithms with performance comparable or superior to that of endoscopists. Recent efforts aim to expand DL utilization into further aspects of esophageal neoplasia management, including histologic diagnosis, segmentation of gross tumor volume, pretreatment prediction and post-treatment evaluation of patient response to systemic therapy, and operative guidance during minimally invasive esophagectomy. Our manuscript serves as an introduction to the growing literature on DL applications for image analysis in the management of esophageal neoplasia, concisely presenting all currently published studies. We also aim to guide the clinician through basic functional principles, evaluation metrics and limitations of DL for image recognition to facilitate the comprehension and critical evaluation of the presented studies.

10.
J Clin Med ; 13(19)2024 Oct 05.
Article in English | MEDLINE | ID: mdl-39407999

ABSTRACT

Artificial intelligence (AI) has a wide and increasing range of applications across various sectors. In medicine, AI has already made an impact in numerous fields, rapidly transforming healthcare delivery through its growing applications in diagnosis, treatment and overall patient care. Equally, AI is swiftly and essentially transforming the landscape of kidney transplantation (KT), offering innovative solutions for longstanding problems that have eluded resolution through traditional approaches outside its spectrum. The purpose of this review is to explore the present and future applications of artificial intelligence in KT, with a focus on pre-transplant evaluation, surgical assistance, outcomes and post-transplant care. We discuss its great potential and the inevitable limitations that accompany these technologies. We conclude that by fostering collaboration between AI technologies and medical practitioners, we can pave the way for a future where advanced, personalised care becomes the standard in KT and beyond.

11.
Front Oncol ; 14: 1437185, 2024.
Article in English | MEDLINE | ID: mdl-39372865

ABSTRACT

Introduction: Brain tumors are characterized by abnormal cell growth within or around the brain, posing severe health risks often associated with high mortality rates. Various imaging techniques, including magnetic resonance imaging (MRI), are commonly employed to visualize the brain and identify malignant growths. Computer-aided diagnosis tools (CAD) utilizing Convolutional Neural Networks (CNNs) have proven effective in feature extraction and predictive analysis across diverse medical imaging modalities. Methods: This study explores a CNN trained and evaluated with nine activation functions, encompassing eight established ones from the literature and a modified version of the soft sign activation function. Results: The latter demonstrates notable efficacy in discriminating between four types of brain tumors in MR images, achieving an accuracy of 97.6%. The sensitivity for glioma is 93.7%; for meningioma, it is 97.4%; for cases with no tumor, it is 98.8%; and for pituitary tumors, it reaches 100%. Discussion: In this manuscript, we propose an advanced CNN architecture that integrates a newly developed activation function. Our extensive experimentation and analysis showcase the model's remarkable ability to precisely distinguish between different types of brain tumors within a substantial and diverse dataset. The findings from our study suggest that this model could serve as an invaluable supplementary tool for healthcare practitioners, including specialized medical professionals and resident physicians, in the accurate diagnosis of brain tumors.

12.
Endocrine ; 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39375254

ABSTRACT

PURPOSE: Thyroid nodules are highly prevalent in the general population, posing a clinical challenge in accurately distinguishing between benign and malignant cases. This study aimed to investigate the diagnostic performance of different strategies, utilizing a combination of a computer-aided diagnosis system (AmCAD) and shear wave elastography (SWE) imaging, to effectively differentiate benign and malignant thyroid nodules in ultrasonography. METHODS: A total of 126 thyroid nodules with pathological confirmation were prospectively included in this study. The AmCAD was utilized to analyze the ultrasound imaging characteristics of the nodules, while the SWE was employed to measure their stiffness in both transverse and longitudinal thyroid scans. Twelve diagnostic patterns were formed by combining AmCAD diagnosis and SWE values, including isolation, series, parallel, and integration. The diagnostic performance was assessed using the receiver operating characteristic curve and area under the curve (AUC). Sensitivity, specificity, accuracy, missed malignancy rate, and unnecessary biopsy rate were also determined. RESULTS: Various diagnostic schemes have shown specific advantages in terms of diagnostic performance. Overall, integrating AmCAD with SWE imaging in the transverse scan yielded the most favorable diagnostic performance, achieving an AUC of 72.2% (95% confidence interval (CI): 63.0-81.5%), outperforming other diagnostic schemes. Furthermore, in the subgroup analysis of nodules measuring <2 cm or 2-4 cm, the integrated scheme consistently exhibited promising diagnostic performance, with AUCs of 74.2% (95% CI: 61.9-86.4%) and 77.4% (95% CI: 59.4-95.3%) respectively, surpassing other diagnostic schemes. The integrated scheme also effectively addressed thyroid nodule management by reducing the missed malignancy rate to 9.5% and unnecessary biopsy rate to 22.2%. 
CONCLUSION: The integration of AmCAD and SWE imaging in the transverse thyroid scan significantly enhances the diagnostic performance for distinguishing benign and malignant thyroid nodules. This strategy offers clinicians the advantage of obtaining more accurate clinical diagnoses and making well-informed decisions regarding patient management.
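The isolation/series/parallel combination strategies mentioned above can be made concrete. A hedged sketch with toy calls (not the study's data): a series rule calls a nodule positive only if both AmCAD and SWE are positive, while a parallel rule calls it positive if either is.

```python
import numpy as np

cad = np.array([1, 1, 0, 1, 0, 0])     # hypothetical AmCAD calls (1 = malignant)
swe = np.array([1, 0, 0, 1, 1, 0])     # hypothetical SWE calls (1 = stiff)
truth = np.array([1, 1, 0, 1, 0, 0])   # pathology ground truth

series = cad & swe      # positive only if BOTH tests are positive
parallel = cad | swe    # positive if EITHER test is positive

def sensitivity(pred, y):
    return float((pred[y == 1] == 1).mean())

def specificity(pred, y):
    return float((pred[y == 0] == 0).mean())
```

On these toy calls the series rule maximizes specificity at the cost of sensitivity, and the parallel rule does the reverse, which is why the study's integrated scheme, rather than either simple rule, gave the best overall trade-off.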

13.
BMC Med Res Methodol ; 24(1): 217, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333923

ABSTRACT

BACKGROUND: In computer-aided diagnosis (CAD) studies utilizing multireader multicase (MRMC) designs, missing data might occur when there are instances of misinterpretation or oversight by the reader or problems with measurement techniques. Improper handling of these missing data can lead to bias. However, little research has been conducted on addressing the missing data issue within the MRMC framework. METHODS: We introduced a novel approach that integrates multiple imputation with MRMC analysis (MI-MRMC). An elaborate simulation study was conducted to compare the efficacy of our proposed approach with that of the traditional complete case analysis strategy within the MRMC design. Furthermore, we applied these approaches to a real MRMC design CAD study on aneurysm detection via head and neck CT angiograms to further validate their practicality. RESULTS: Compared with traditional complete case analysis, the simulation study demonstrated the MI-MRMC approach provides an almost unbiased estimate of diagnostic capability, alongside satisfactory performance in terms of statistical power and the type I error rate within the MRMC framework, even in small sample scenarios. In the real CAD study, the proposed MI-MRMC method further demonstrated strong performance in terms of both point estimates and confidence intervals compared with traditional complete case analysis. CONCLUSION: Within MRMC design settings, the adoption of an MI-MRMC approach in the face of missing data can facilitate the attainment of unbiased and robust estimates of diagnostic capability.
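The pooling step of multiple imputation can be sketched with Rubin's rules, which an MI-MRMC analysis would apply to the per-imputation estimates of a figure of merit such as AUC. The numbers below are made up for illustration; this is not the authors' implementation.

```python
import numpy as np

# Estimates of AUC and their variances from m = 3 imputed datasets (toy values).
aucs = np.array([0.84, 0.86, 0.85])
variances = np.array([0.0010, 0.0012, 0.0011])
m = len(aucs)

pooled = aucs.mean()                      # pooled point estimate
within = variances.mean()                 # within-imputation variance
between = aucs.var(ddof=1)                # between-imputation variance
total = within + (1 + 1 / m) * between    # Rubin's total variance
```

The total variance inflates the within-imputation variance by the between-imputation spread, which is what keeps the confidence intervals honest when data are missing.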


Subject(s)
Computer Simulation , Humans , Research Design , Algorithms , Data Interpretation, Statistical
14.
Diagnostics (Basel) ; 14(18)2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39335688

ABSTRACT

Objectives: Optical coherence tomography (OCT) has recently been used in gynecology to detect cervical lesions in vivo and has proven more effective than colposcopy in clinical trials. However, most gynecologists are unfamiliar with this new imaging technique, requiring intelligent computer-aided diagnosis approaches to help them interpret cervical OCT images efficiently. This study aims to (1) develop a clinically usable deep learning (DL)-based classification model of 3D OCT volumes from cervical tissue and (2) validate the DL model's effectiveness in detecting high-risk cervical lesions, including high-grade squamous intraepithelial lesions and cervical cancer. Methods: The proposed DL model, designed based on the convolutional neural network architecture, combines a feature pyramid network (FPN) with texture encoding and deep supervision. We extracted, represented, and fused four-scale texture features to improve classification performance on high-risk local lesions. We also designed an auxiliary classification mechanism based on deep supervision to adjust the weight of each scale in the FPN adaptively, enabling low-cost training of the whole model. Results: In the binary classification task detecting positive subjects with high-risk cervical lesions, our DL model achieved an 81.55% (95% CI, 72.70-88.51%) F1-score with 82.35% (95% CI, 69.13-91.60%) sensitivity and 81.48% (95% CI, 68.57-90.75%) specificity on the Renmin dataset, outperforming five experienced medical experts. It also achieved an 84.34% (95% CI, 74.71-91.39%) F1-score with 87.50% (95% CI, 73.20-95.81%) sensitivity and 90.59% (95% CI, 82.29-95.85%) specificity on the Huaxi dataset, comparable to the overall level of the best investigator. Moreover, our DL model provides visual diagnostic evidence of histomorphological and texture features learned in OCT images to assist gynecologists in making clinical decisions quickly.
Conclusions: Our DL model holds great promise to be used in cervical lesion screening with OCT efficiently and effectively.

15.
Med Biol Eng Comput ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39343842

ABSTRACT

Recent advancements in deep learning have significantly improved the intelligent classification of gastrointestinal (GI) diseases, particularly in aiding clinical diagnosis. This paper reviews computer-aided diagnosis (CAD) systems for GI diseases, aligning with the actual clinical diagnostic process. It offers a comprehensive survey of deep learning (DL) techniques tailored for classifying GI diseases, addressing challenges inherent in complex scenes, clinical constraints, and technical obstacles encountered in GI imaging. First, the esophagus, stomach, small intestine, and large intestine are located to determine the organ containing the lesion. Second, location detection and classification of a single disease are performed on the premise that the organ corresponding to the image is known. Finally, comprehensive classification for multiple diseases is carried out. The results of single and multi-classification are compared to achieve more accurate classification outcomes and to construct a more effective computer-aided diagnosis system for gastrointestinal diseases.
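The staged flow the review describes (locate the organ first, then classify disease within that organ) can be sketched as a simple dispatch. The models below are hypothetical stubs keyed on toy features, purely to show the control flow, not any published classifier.

```python
def organ_model(features):
    """Stand-in organ localizer: picks an organ from a toy feature."""
    return "stomach" if features["acidity"] > 0.5 else "esophagus"

# One stand-in disease classifier per organ.
disease_models = {
    "stomach": lambda f: "ulcer" if f["lesion_score"] > 0.7 else "normal",
    "esophagus": lambda f: "esophagitis" if f["lesion_score"] > 0.7 else "normal",
}

def classify(features):
    """Stage 1: locate the organ; stage 2: run that organ's classifier."""
    organ = organ_model(features)
    return organ, disease_models[organ](features)

result = classify({"acidity": 0.9, "lesion_score": 0.8})  # ("stomach", "ulcer")
```

Conditioning the disease classifier on the organ narrows the label space at each stage, which is the premise behind comparing single- and multi-classification results.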

16.
J Dent ; 150: 105373, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39332519

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) could serve as an automatic diagnosis method for dental disease due to its accuracy and efficiency. This research proposed a novel convolutional neural network (CNN)-based deep learning (DL) ensemble model for tooth position detection, tooth outline segmentation, tooth tissue segmentation, periodontal bone loss and periodontitis stage prediction using dental panoramic radiographs. METHODS: The dental panoramic radiographs of 320 patients during the period January 2020 to December 2023 were collected in our dataset. All images were de-identified without private information. In total, 8462 teeth were included. The algorithms that the DL ensemble model adopted include YOLOv8, Mask R-CNN, and TransUNet. The prediction results of the DL method were compared with the diagnoses of periodontists. RESULTS: The periodontal bone loss degree deviation between the DL method and the ground truth drawn by the three periodontists was 5.28%. The overall PCC value of the DL method and the periodontists' diagnoses was 0.832 (P < 0.001). The ICC value was 0.806 (P < 0.001). The total diagnostic accuracy of the DL method was 89.45%. CONCLUSIONS: The proposed DL ensemble model shows high accuracy and efficiency in radiographic detection and is a valuable adjunct to periodontal diagnosis. The method has strong potential to enhance clinical professional performance and build more efficient dental health services. CLINICAL SIGNIFICANCE: The DL method could not only help dentists with rapid and accurate auxiliary diagnosis and prevent medical negligence, but also serve as a useful learning resource for inexperienced dentists and dental students.

17.
Med Image Anal ; 99: 103320, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39244796

ABSTRACT

The potential and promise of deep learning systems to provide an independent assessment and relieve radiologists' burden in screening mammography have been recognized in several studies. However, the low cancer prevalence, the need to process high-resolution images, and the need to combine information from multiple views and scales still pose technical challenges. Multi-view architectures that combine information from the four mammographic views to produce an exam-level classification score are a promising approach to the automated processing of screening mammography. However, training such architectures from exam-level labels, without relying on pixel-level supervision, requires very large datasets and may result in suboptimal accuracy. Emerging architectures such as Visual Transformers (ViT) and graph-based architectures can potentially integrate ipsilateral and contralateral breast views better than traditional convolutional neural networks, thanks to their stronger ability to model long-range dependencies. In this paper, we extensively evaluate novel transformer-based and graph-based architectures against state-of-the-art multi-view convolutional neural networks, trained in a weakly-supervised setting on a middle-scale dataset, both in terms of performance and interpretability. Extensive experiments on the CSAW dataset suggest that, while transformer-based architectures outperform other architectures, different inductive biases lead to complementary strengths and weaknesses, as each architecture is sensitive to different signs and mammographic features. Hence, an ensemble of different architectures should be preferred over a winner-takes-all approach to achieve more accurate and robust results.
Overall, the findings highlight the potential of a wide range of multi-view architectures for breast cancer classification, even in datasets of relatively modest size, although the detection of small lesions remains challenging without pixel-wise supervision or ad-hoc networks.

18.
Med Biol Eng Comput ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292382

ABSTRACT

Atherosclerosis causes heart disease by forming plaques in arterial walls. IVUS imaging provides a high-resolution cross-sectional view of coronary arteries and plaque morphology. Healthcare professionals diagnose and quantify atherosclerosis manually or using VH-IVUS software. Since manual or VH-IVUS software-based diagnosis is time-consuming, automated plaque characterization tools are essential for accurate atherosclerosis detection and classification. Recently, deep learning (DL) and computer vision (CV) approaches have emerged as promising tools for automatically classifying plaques on IVUS images. With this motivation, this manuscript proposes an automated atherosclerotic plaque classification method using a hybrid Ant Lion Optimizer with Deep Learning (AAPC-HALODL) technique on IVUS images. The AAPC-HALODL technique uses the faster regional convolutional neural network (Faster RCNN)-based segmentation approach to identify diseased regions in the IVUS images. Next, the ShuffleNet-v2 model generates a useful set of feature vectors from the segmented IVUS images, and its hyperparameters can be optimally selected by using the HALO technique. Finally, an average ensemble classification process comprising a stacked autoencoder (SAE) and deep extreme learning machine (DELM) model can be utilized. The MICCAI Challenge 2011 dataset was used for AAPC-HALODL simulation analysis. A detailed comparative study showed that the AAPC-HALODL approach outperformed other DL models with a maximum accuracy of 98.33%, precision of 97.87%, sensitivity of 98.33%, and F score of 98.10%.

19.
Sci Rep ; 14(1): 20647, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232180

ABSTRACT

Lung cancer (LC) is a life-threatening disease all over the world, but earlier diagnosis and treatment can save lives. Early diagnosis of malignant cells in the lungs, the organs responsible for oxygenating the human body and expelling carbon dioxide, is therefore critical. Even though a computed tomography (CT) scan is the best imaging approach in the healthcare sector, it is challenging for physicians to identify and interpret the tumour from CT scans. LC diagnosis in CT scans using artificial intelligence (AI) can help radiologists make earlier diagnoses, enhance performance, and decrease false negatives. Deep learning (DL) for detecting lymph node involvement on histopathological slides has become popular due to its great significance in patient diagnosis and treatment. This study introduces a computer-aided diagnosis for LC utilizing the Waterwheel Plant Algorithm with DL (CADLC-WWPADL) approach. The primary aim of the CADLC-WWPADL approach is to classify and identify the existence of LC on CT scans. The CADLC-WWPADL method uses a lightweight MobileNet model for feature extraction. Besides, the CADLC-WWPADL method employs WWPA for the hyperparameter tuning process. Furthermore, the symmetrical autoencoder (SAE) model is utilized for classification. An experimental evaluation is performed to demonstrate the significant detection outputs of the CADLC-WWPADL technique. An extensive comparative study reported that the CADLC-WWPADL technique performs effectively compared with other models, with a maximum accuracy of 99.05% on the benchmark CT image dataset.


Subject(s)
Algorithms , Deep Learning , Diagnosis, Computer-Assisted , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Diagnosis, Computer-Assisted/methods
20.
Comput Methods Programs Biomed ; 256: 108379, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39217667

ABSTRACT

BACKGROUND AND OBJECTIVE: The incidence of facial fractures is rising globally, yet few studies address the diverse forms of facial fracture present in 3D images. In particular, because the direction of a facial fracture varies and the fracture has no clear outline, it is difficult to determine its exact location in 2D images. Thus, 3D image analysis is required to find the exact fracture area, but it incurs heavy computational complexity and requires expensive pixel-wise labeling for supervised learning. In this study, we tackle the problem of reducing the computational burden and increasing the accuracy of fracture localization by using weakly supervised object localization, without pixel-wise labeling, in 3D image space. METHODS: We propose a Very Fast, High-Resolution Aggregation 3D Detection CAM (VFHA-CAM) model that can detect various facial fractures. To better detect tiny fractures, our model uses high-resolution feature maps and employs Ablation-CAM to find the exact fracture location without pixel-wise labeling, starting from a rough fracture region detected with 3D box-wise labeling. To this end, we extract important features and use only the essential ones to reduce the computational complexity in 3D image space. RESULTS: Experimental findings demonstrate that VFHA-CAM surpasses state-of-the-art 2D detection methods by up to 20% in per-person sensitivity and specificity, achieving scores of 87% and 85%, respectively. In addition, VFHA-CAM reduces localization analysis time to 76 s without performance degradation, compared with more than 20 min for a simple Ablation-CAM method. CONCLUSION: This study introduces a novel weakly supervised object localization approach for bone fracture detection in 3D facial images. The proposed method employs a 3D detection model, which helps detect various forms of facial bone fracture accurately.
The CAM algorithm, adopted for fracture-area segmentation within a 3D fracture detection box, is key to quickly informing medical staff of the exact location of a facial bone fracture under weakly supervised object localization. In addition, we provide 3D visualization so that even non-experts unfamiliar with 3D CT images can identify the fracture status and location.
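The Ablation-CAM idea underlying this localization step, weighting each feature channel by how much the class score drops when that channel is zeroed out, can be sketched as follows. This is a 2D toy example with a hypothetical score function, not the 3D VFHA-CAM implementation.

```python
import numpy as np

def ablation_cam(feature_maps, score_fn):
    """Weight each activation channel by the relative drop in class
    score when that channel is ablated (the core of Ablation-CAM)."""
    base = score_fn(feature_maps)
    weights = []
    for k in range(feature_maps.shape[0]):
        ablated = feature_maps.copy()
        ablated[k] = 0.0  # zero out channel k
        weights.append((base - score_fn(ablated)) / (base + 1e-8))
    cam = np.tensordot(weights, feature_maps, axes=1)
    return np.maximum(cam, 0)  # ReLU: keep positively contributing regions

# Toy 3-channel 4x4 feature map; the score depends only on channel 0,
# so the CAM should reproduce channel 0's activations
fmaps = np.random.default_rng(1).random((3, 4, 4))
score = lambda f: f[0].mean()
cam = ablation_cam(fmaps, score)
print(cam.shape)  # → (4, 4)
```

Because it needs only forward passes (one per ablated channel), Ablation-CAM requires no gradients, which is what makes the per-channel cost, and hence the paper's feature-reduction step, matter in 3D.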


Subject(s)
Algorithms , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Skull Fractures/diagnostic imaging , Facial Bones/diagnostic imaging , Facial Bones/injuries , Tomography, X-Ray Computed/methods