Results 1 - 20 of 9,953
1.
Medicine (Baltimore) ; 103(28): e38938, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38996141

ABSTRACT

The ENDOANGEL (EN) computer-assisted detection technique has emerged as a promising tool for enhancing the detection rate of colorectal adenomas during colonoscopies. However, its efficacy in identifying adenomas missed by a previous examination remains unclear. We therefore aimed to compare the adenoma miss rate (AMR) between EN-assisted and standard colonoscopies. Data from patients who underwent a second colonoscopy (EN-assisted or standard) within 6 months between September 2022 and May 2023 were analyzed. The EN-assisted group exhibited a significantly higher AMR than the standard group (24.3% vs 11.9%, P = .005), i.e., the EN-assisted second examination uncovered more adenomas missed at the first. After adjusting for potential confounders, multivariable analysis showed that EN assistance detected missed adenomas better than standard colonoscopy (odds ratio = 2.89; 95% confidence interval = 1.14-7.80, P = .029). These findings suggest that EN-assisted colonoscopy is a valuable advance for detecting adenomas missed at standard colonoscopy. Integrating EN-assisted colonoscopy into routine clinical practice may particularly benefit patients requiring hospital resection of lesions detected at their first colonoscopy.
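As an illustrative aside (not the study's code), the adjusted odds ratio reported above is the kind of quantity obtained by exponentiating a multivariable logistic-regression coefficient. The sketch below does this on synthetic data, with age as an assumed stand-in confounder.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
en_assisted = rng.integers(0, 2, n)             # 1 = ENDOANGEL-assisted second exam
age = rng.normal(60, 10, n)                     # example confounder (assumed)
p = 1 / (1 + np.exp(-(-2.2 + 1.05 * en_assisted + 0.02 * (age - 60))))
missed_found = (rng.random(n) < p).astype(int)  # 1 = missed adenoma detected

X = sm.add_constant(np.column_stack([en_assisted, age]))
fit = sm.Logit(missed_found, X).fit(disp=0)
odds_ratio = np.exp(fit.params[1])              # adjusted OR for EN assistance
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```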


Subject(s)
Adenoma , Colonoscopy , Colorectal Neoplasms , Humans , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Male , Female , Retrospective Studies , Adenoma/diagnosis , Adenoma/diagnostic imaging , Middle Aged , Aged , Missed Diagnosis/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Adult
2.
Sci Rep ; 14(1): 17118, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054346

ABSTRACT

In recent years, artificial intelligence has made remarkable strides, improving various aspects of our daily lives. One notable application is intelligent chatbots built on deep learning models. These systems have shown tremendous promise in the medical sector, enhancing healthcare quality, treatment efficiency, and cost-effectiveness. However, their role in aiding disease diagnosis, particularly of chronic conditions, remains underexplored. Addressing this issue, this study employs large language models from the GPT series, in conjunction with deep learning techniques, to design and develop a diagnostic system targeted at chronic diseases. Specifically, we performed transfer learning and fine-tuning on the GPT-2 model, enabling it to assist in accurately diagnosing 24 common chronic diseases. To provide a user-friendly interface and a seamless interactive experience, we further developed a dialog-based interface, named Chat Ella. The system makes precise predictions for chronic diseases based on the symptoms described by users. Experimental results indicate that our model achieved an accuracy of 97.50% on the validation set and an area under the curve (AUC) of 99.91%. Moreover, we conducted user satisfaction tests, which revealed that 68.7% of participants approved of Chat Ella, and 45.3% found that the system made daily medical consultations more convenient. The system can rapidly and accurately assess a patient's condition from the symptoms described and provide timely feedback, making it of significant value in the design of medical auxiliary products for household use.
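For readers curious about the mechanics, a minimal sketch of fine-tuning GPT-2 with a 24-way classification head via Hugging Face transformers follows. The study's corpus is not public, so the two symptom strings, label indices, and hyperparameters below are placeholder assumptions.

```python
from datasets import Dataset
from transformers import (GPT2ForSequenceClassification, GPT2TokenizerFast,
                          Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token       # GPT-2 defines no pad token

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=24)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy stand-in for the (non-public) symptom-description corpus
train_ds = Dataset.from_dict({
    "text": ["frequent thirst, blurred vision and fatigue",
             "joint stiffness that is worse in the morning"],
    "label": [3, 11],                           # indices into the 24 diseases
}).map(lambda b: tokenizer(b["text"], truncation=True, padding="max_length",
                           max_length=64), batched=True)

args = TrainingArguments(output_dir="chat-ella-gpt2", num_train_epochs=1,
                         per_device_train_batch_size=2, report_to=[])
Trainer(model=model, args=args, train_dataset=train_ds).train()
```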


Subject(s)
Deep Learning , Humans , Chronic Disease , Artificial Intelligence , Diagnosis, Computer-Assisted/methods
3.
Front Endocrinol (Lausanne) ; 15: 1372397, 2024.
Article in English | MEDLINE | ID: mdl-39015174

ABSTRACT

Background: Data-driven digital learning could improve the diagnostic performance of novice students for thyroid nodules. Objective: To evaluate the efficacy of digital self-learning and artificial intelligence-based computer-assisted diagnosis (AI-CAD) for inexperienced readers diagnosing thyroid nodules. Methods: Between February and August 2023, a total of 26 readers from 6 hospitals (each with less than 1 year of experience in thyroid ultrasound, drawn from various departments) participated in this study. Readers independently completed an online learning session comprising 3,000 thyroid nodules annotated as benign or malignant. They assessed a test set of 120 thyroid nodules with known surgical pathology before and after the learning session, then referred to AI-CAD and made their final decisions on the nodules. Diagnostic performance before self-training, after self-training, and with AI-CAD assistance was evaluated and compared between radiology residents and readers from other specialties. Results: The area under the receiver operating characteristic curve (AUC) improved after the self-learning session and improved further after readers referred to AI-CAD (0.679 vs 0.713 vs 0.758, p<0.05). Although the 18 radiology residents showed improved AUC (0.700 to 0.743, p=0.016) and accuracy (69.9% to 74.2%, p=0.013) after self-learning, the readers from other departments did not. With AI-CAD assistance, sensitivity (radiology 70.3% to 74.9%, others 67.9% to 82.3%, all p<0.05) and accuracy (radiology 74.2% to 77.1%, others 64.4% to 72.8%, all p<0.05) improved in all readers. Conclusion: While AI-CAD assistance improved the diagnostic performance of all inexperienced readers for thyroid nodules, self-learning was effective only for radiology residents, who had more background knowledge of ultrasonography. Clinical Impact: Online self-learning, along with AI-CAD assistance, can effectively enhance the diagnostic performance of radiology residents for thyroid cancer.
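A quick sketch of the core evaluation, computing a reader's AUC for the three reading conditions with scikit-learn. The malignancy scores here are random placeholders (so the printed AUCs will hover near 0.5), not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=120)      # 120 test nodules, 1 = malignant

scores_before = rng.random(120)            # reads before self-learning
scores_after = rng.random(120)             # reads after self-learning
scores_with_cad = rng.random(120)          # final call after referring to AI-CAD

for name, s in [("before", scores_before), ("after", scores_after),
                ("with AI-CAD", scores_with_cad)]:
    print(f"AUC {name}: {roc_auc_score(y_true, s):.3f}")
```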


Subject(s)
Artificial Intelligence , Diagnosis, Computer-Assisted , Thyroid Nodule , Humans , Thyroid Nodule/diagnosis , Thyroid Nodule/diagnostic imaging , Female , Male , Diagnosis, Computer-Assisted/methods , Clinical Competence , Adult , Ultrasonography/methods , Radiology/education , ROC Curve , Internship and Residency/methods , Middle Aged
4.
PLoS One ; 19(7): e0304757, 2024.
Article in English | MEDLINE | ID: mdl-38990817

ABSTRACT

Recent advancements in AI, driven by big data technologies, have reshaped various industries, with a strong focus on data-driven approaches. This has resulted in remarkable progress in fields like computer vision, e-commerce, cybersecurity, and healthcare, primarily fueled by the integration of machine learning and deep learning models. Notably, the intersection of oncology and computer science has given rise to Computer-Aided Diagnosis (CAD) systems, offering vital tools to aid medical professionals in tumor detection, classification, recurrence tracking, and prognosis prediction. Breast cancer, a significant global health concern, is particularly prevalent in Asia due to diverse factors like lifestyle, genetics, environmental exposures, and healthcare accessibility. Early detection through mammography screening is critical, but the accuracy of mammograms can vary with breast composition and tumor characteristics, leading to potential misdiagnoses. To address this, an innovative CAD system leveraging deep learning and computer vision techniques was introduced. This system enhances breast cancer diagnosis by independently identifying and categorizing breast lesions, segmenting mass lesions, and classifying them based on pathology. Thorough validation on the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) demonstrated exceptional performance, with an approximately 99% success rate in detecting and classifying breast masses: detection accuracy was 98.5%, segmentation of mass lesions into separate groups for examination reached approximately 95.39%, and the final classification phase yielded an overall accuracy of 99.16%. The integrated framework is proposed as potentially outperforming current deep learning techniques, despite challenges related to its large number of trainable parameters. Ultimately, the framework offers valuable support to researchers and physicians in breast cancer diagnosis by harnessing state-of-the-art AI and image processing, extending recent advances in deep learning to the medical domain.


Subject(s)
Breast Neoplasms , Deep Learning , Diagnosis, Computer-Assisted , Mammography , Humans , Breast Neoplasms/diagnosis , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/classification , Female , Mammography/methods , Diagnosis, Computer-Assisted/methods , Early Detection of Cancer/methods
5.
Article in Chinese | MEDLINE | ID: mdl-38973043

ABSTRACT

Objective: To build a VGG-based computer-aided diagnostic model for chronic sinusitis and evaluate its efficacy. Methods: ①A total of 5,000 frames of diagnosed sinus CT images were collected. The normal group comprised 1,000 frames (250 frames each of the maxillary, frontal, ethmoid, and sphenoid sinuses), and the abnormal group comprised 4,000 frames (1,000 frames each of maxillary, frontal, ethmoid, and sphenoid sinusitis). ②The models were trained to obtain five classification models: normal, sphenoid sinusitis, frontal sinusitis, ethmoid sinusitis, and maxillary sinusitis. Classification efficacy was evaluated objectively in six dimensions: accuracy, precision, sensitivity, specificity, interpretation time, and area under the ROC curve (AUC). ③Two hundred randomly selected images were read by the model and by three groups of physicians (low, middle, and high seniority) in a comparative experiment, and efficacy was objectively evaluated using the same indexes in conjunction with clinical analysis. Results: ①Simulation experiment: the overall recognition accuracy of the model was 83.94%, with a precision of 89.52%, sensitivity of 83.94%, specificity of 95.99%, and an average interpretation time of 0.2 s per frame. The AUC was 0.865 (95% CI 0.849-0.881) for sphenoid sinusitis, 0.924 (0.991-0.936) for frontal sinusitis, 0.895 (0.880-0.909) for ethmoid sinusitis, and 0.974 (0.967-0.982) for maxillary sinusitis. ②Comparison experiment: recognition accuracy was 84.52% for the model versus 78.50%, 80.50%, and 83.50% for the low-, middle-, and high-seniority physician groups; precision was 85.67% versus 79.72%, 82.67%, and 83.66%; sensitivity was 84.52% versus 78.50%, 80.50%, and 83.50%; specificity was 96.58% versus 94.63%, 95.13%, and 95.88%. The average interpretation time per frame was 0.20 s for the model, compared with 2.35 s, 1.98 s, and 2.19 s for the low-, middle-, and high-seniority groups, respectively. Conclusion: The deep learning-based artificial intelligence diagnostic model for chronic sinusitis shows good classification performance and high diagnostic efficacy.
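The abstract specifies only that the model is "VGG-based". A plausible sketch using torchvision's VGG-16 (weights argument per torchvision ≥ 0.13), with the final layer swapped for the five classes above, might look like this:

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG-16 with a new 5-way head
# (normal + four sinusitis sites)
model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 5)

ct_batch = torch.randn(2, 3, 224, 224)   # CT slices replicated to 3 channels
logits = model(ct_batch)
print(logits.shape)                      # torch.Size([2, 5])
```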


Subject(s)
Sinusitis , Tomography, X-Ray Computed , Humans , Chronic Disease , Tomography, X-Ray Computed/methods , Sinusitis/classification , Sinusitis/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Sensitivity and Specificity , Maxillary Sinusitis/diagnostic imaging , Maxillary Sinusitis/classification , Maxillary Sinus/diagnostic imaging , ROC Curve
6.
J Gastric Cancer ; 24(3): 327-340, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38960891

ABSTRACT

PURPOSE: Results of initial endoscopic biopsy of gastric lesions often differ from those of the final pathological diagnosis. We evaluated whether an artificial intelligence-based gastric lesion detection and diagnostic system, ENdoscopy as AI-powered Device Computer Aided Diagnosis for Gastroscopy (ENAD CAD-G), could reduce this discrepancy. MATERIALS AND METHODS: We retrospectively collected 24,948 endoscopic images of early gastric cancers (EGCs), dysplasia, and benign lesions from 9,892 patients who underwent esophagogastroduodenoscopy between 2011 and 2021. The diagnostic performance of ENAD CAD-G was evaluated using the following real-world datasets: patients referred from community clinics with initial biopsy results of atypia (n=154), participants who underwent endoscopic resection for neoplasms (Internal video set, n=140), and participants who underwent endoscopy for screening or suspicion of gastric neoplasm referred from community clinics (External video set, n=296). RESULTS: ENAD CAD-G classified the referred gastric lesions of atypia into EGC (accuracy, 82.47%; 95% confidence interval [CI], 76.46%-88.47%), dysplasia (88.31%; 83.24%-93.39%), and benign lesions (83.12%; 77.20%-89.03%). In the Internal video set, ENAD CAD-G identified dysplasia and EGC with diagnostic accuracies of 88.57% (95% CI, 83.30%-93.84%) and 91.43% (86.79%-96.07%), respectively, compared with an accuracy of 60.71% (52.62%-68.80%) for the initial biopsy results (P<0.001). In the External video set, ENAD CAD-G classified EGC, dysplasia, and benign lesions with diagnostic accuracies of 87.50% (83.73%-91.27%), 90.54% (87.21%-93.87%), and 88.85% (85.27%-92.44%), respectively. CONCLUSIONS: ENAD CAD-G is superior to initial biopsy for the detection and diagnosis of gastric lesions that require endoscopic resection. ENAD CAD-G can assist community endoscopists in identifying gastric lesions that require endoscopic resection.
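The accuracy/confidence-interval pairs above behave like normal-approximation binomial intervals. A minimal sketch with statsmodels, using hypothetical counts (127 of 154 correct) that happen to reproduce the first reported pair:

```python
from statsmodels.stats.proportion import proportion_confint

correct, n = 127, 154    # hypothetical: 127 of the 154 atypia lesions correct
accuracy = correct / n
low, high = proportion_confint(correct, n, alpha=0.05, method="normal")
print(f"accuracy {accuracy:.2%} (95% CI {low:.2%}-{high:.2%})")
# -> accuracy 82.47% (95% CI 76.46%-88.47%), matching the first pair above
```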


Subject(s)
Artificial Intelligence , Stomach Neoplasms , Humans , Stomach Neoplasms/pathology , Stomach Neoplasms/diagnosis , Stomach Neoplasms/surgery , Retrospective Studies , Female , Male , Gastroscopy/methods , Middle Aged , Aged , Diagnosis, Computer-Assisted/methods , Biopsy/methods , Precancerous Conditions/pathology , Precancerous Conditions/diagnosis , Precancerous Conditions/surgery , Endoscopy, Digestive System/methods , Early Detection of Cancer/methods
7.
BMC Med Imaging ; 24(1): 165, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956579

ABSTRACT

BACKGROUND: Pneumoconiosis has a significant impact on patients' quality of survival owing to its difficult staging diagnosis and poor prognosis. This study aimed to develop a computer-aided diagnostic system for the screening and staging of pneumoconiosis based on a multi-stage joint deep learning approach using X-ray chest radiographs of pneumoconiosis patients. METHODS: A total of 498 medical chest radiographs were obtained from the Department of Radiology of West China Fourth Hospital and randomly divided into a training set and a test set at a ratio of 4:1. Following histogram equalization for image enhancement, the images were segmented using a U-Net model, and staging was predicted using a convolutional neural network classifier. We first used EfficientNet for multi-class staging diagnosis, but stage I/II pneumoconiosis proved difficult to diagnose. Therefore, guided by clinical practice, we improved the model using a ResNet-34 multi-stage joint method. RESULTS: Of the 498 cases collected, the EfficientNet classifier achieved an accuracy of 83% with a Quadratic Weighted Kappa (QWK) of 0.889. The ResNet-34 multi-stage joint approach achieved an accuracy of 89%, with an area under the curve (AUC) of 0.98 and a high QWK of 0.94. CONCLUSIONS: The diagnostic accuracy of pneumoconiosis staging was significantly improved by this innovative combined multi-stage approach, providing a reference for clinical application and pneumoconiosis screening.
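The Quadratic Weighted Kappa used to score staging agreement can be computed directly with scikit-learn; the stage labels below are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 2, 1, 0, 3, 2, 2]   # expert pneumoconiosis stages
y_pred = [0, 1, 2, 2, 2, 1, 0, 3, 1, 2]   # model-predicted stages
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```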


Subject(s)
Deep Learning , Pneumoconiosis , Humans , Pneumoconiosis/diagnostic imaging , Pneumoconiosis/pathology , Male , Middle Aged , Female , Radiography, Thoracic/methods , Aged , Adult , Neural Networks, Computer , China , Diagnosis, Computer-Assisted/methods , Radiographic Image Interpretation, Computer-Assisted/methods
8.
BMC Med Imaging ; 24(1): 177, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030508

ABSTRACT

BACKGROUND: Cancer pathology reflects disease development and associated molecular features, providing extensive phenotypic information that is cancer-predictive and has potential implications for treatment planning. Building on the exceptional performance of computational approaches in digital pathology, the rich phenotypic information in digital pathology images has enabled us to distinguish low-grade gliomas (LGG) from high-grade gliomas (HGG). Because the differences between the textures are so slight, using just one feature or a small number of features produces poor categorization results. METHODS: In this work, multiple feature extraction methods that can extract distinct features from the texture of histopathology image data are used to compare classification outcomes. The established feature extraction algorithms GLCM, LBP, multi-LBGLCM, GLRLM, color moment features, and RSHD were chosen for this paper. The LBP and GLCM algorithms are combined to create LBGLCM, and the LBGLCM feature extraction approach is extended to multiple scales using an image pyramid, defined by sampling the image in both space and scale. A preprocessing stage first enhances image contrast and removes noise and illumination effects. The feature extraction stage then extracts several important texture and color features from the histopathology images. Third, a feature fusion and reduction step decreases the number of features processed, reducing the computation time of the proposed system. Finally, a classification stage categorizes the brain cancer grades. We performed our analysis on 821 whole-slide pathology images from glioma patients in The Cancer Genome Atlas (TCGA) dataset. The dataset includes two types of brain cancer, GBM and LGG (grades II and III), with 506 GBM images and 315 LGG images, guaranteeing representation of various tumor grades and histopathological features. RESULTS: The fusion of textural and color characteristics was validated on the glioma patients using 10-fold cross-validation, with an accuracy of 95.8%, sensitivity of 96.4%, DSC of 96.7%, and specificity of 97.1%. Combining color and texture characteristics produced significantly better accuracy, supporting their synergistic significance in the predictive model. The results indicate that textural characteristics, paired with conventional imagery, can provide an objective, accurate, and comprehensive basis for glioma prediction. CONCLUSION: The results outperform current approaches for distinguishing LGG from HGG and provide competitive performance in classifying four categories of glioma in the literature. The proposed model can help stratify patients in clinical studies, select patients for targeted therapy, and customize treatment schedules.
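A hedged sketch of two of the handcrafted descriptors named above (GLCM statistics and an LBP histogram) with scikit-image; the random patch stands in for a histopathology tile, and the concatenated vector mimics the feature-fusion step.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)

# GLCM: co-occurrence at distance 1, four directions, then texture statistics
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)
glcm_feats = [graycoprops(glcm, p).mean()
              for p in ("contrast", "homogeneity", "energy", "correlation")]

# LBP: uniform local binary patterns summarized as a normalized histogram
lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

features = np.concatenate([glcm_feats, lbp_hist])   # fused feature vector
print(features.shape)                               # (14,)
```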


Subject(s)
Algorithms , Brain Neoplasms , Color , Glioma , Neoplasm Grading , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Brain Neoplasms/classification , Glioma/diagnostic imaging , Glioma/pathology , Glioma/classification , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods
9.
Sensors (Basel) ; 24(14), 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066141

ABSTRACT

This research proposes an innovative, intelligent hand-assisted diagnostic system aiming to achieve a comprehensive assessment of hand function through information fusion technology. Based on the single-camera vision algorithm we designed, the system perceives and analyzes the morphology and motion posture of the patient's hands in real time. This visual perception provides an objective data foundation and captures the continuous changes in hand movement, supplying more detailed information for the assessment and a scientific basis for subsequent treatment plans. By introducing medical knowledge graph technology, the system integrates and analyzes medical knowledge and combines it with a voice question-answering system, allowing patients to communicate and obtain information effectively even with limited hand function. Voice question-answering, as a subjective and convenient interaction method, greatly improves the interactivity and communication efficiency between patients and the system. In conclusion, this system holds immense potential as a highly efficient and accurate hand-assisted assessment tool, delivering enhanced diagnostic services and rehabilitation support for patients.


Subject(s)
Algorithms , Hand , Humans , Hand/physiology , Diagnosis, Computer-Assisted/methods
10.
Comput Methods Programs Biomed ; 254: 108309, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39002431

ABSTRACT

BACKGROUND AND OBJECTIVE: This paper proposes a fully automated, unsupervised stochastic segmentation approach using a two-level joint Markov-Gibbs Random Field (MGRF) to detect the vascular system in retinal Optical Coherence Tomography Angiography (OCTA) images, a critical step in developing Computer-Aided Diagnosis (CAD) systems for retinal diseases. METHODS: Using a new probabilistic model based on a Linear Combination of Discrete Gaussians (LCDG), the first level models the appearance of OCTA images and their spatially smoothed counterparts; the parameters of the LCDG model are estimated using a modified Expectation-Maximization (EM) algorithm. The second level models the maps of OCTA images, including the vascular system and other retinal tissues, using an MGRF with parameters estimated analytically from the input images. The proposed approach employs modified self-organizing maps as a MAP-based optimizer maximizing the joint likelihood and handles the joint MGRF model in a new, unsupervised way, deviating from traditional stochastic optimization and leveraging non-linear optimization to achieve more accurate segmentation. RESULTS: Evaluated quantitatively on a dataset of 204 subjects, the framework achieved a Dice similarity coefficient of 0.92 ± 0.03, a 95th-percentile bidirectional Hausdorff distance of 0.69 ± 0.25, and an accuracy of 0.93 ± 0.03, confirming its superior performance. CONCLUSIONS: The proposed unsupervised, fully automated approach detects the vascular system from OCTA images more accurately than traditional methods, demonstrating its potential to aid the development of CAD systems for detecting retinal diseases.
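As a simplified, hypothetical analogue of the first (appearance) level only: fit a two-component Gaussian mixture to OCTA pixel intensities with EM and label the brighter component as vessel. The paper's actual LCDG model, modified EM, and second-level MGRF spatial prior are beyond this sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

octa = np.random.rand(256, 256)                    # stand-in for an OCTA slice
gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(octa.reshape(-1, 1)).reshape(octa.shape)
vessel_label = int(np.argmax(gmm.means_.ravel()))  # vessels appear brighter
vessel_mask = labels == vessel_label
print("vessel fraction:", vessel_mask.mean())
```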


Subject(s)
Algorithms , Retinal Vessels , Tomography, Optical Coherence , Humans , Retinal Vessels/diagnostic imaging , Tomography, Optical Coherence/methods , Image Processing, Computer-Assisted/methods , Markov Chains , Retinal Diseases/diagnostic imaging , Models, Statistical , Diagnosis, Computer-Assisted/methods , Angiography/methods
12.
Sci Rep ; 14(1): 17447, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075091

ABSTRACT

The bone marrow overproduces immature cells in the malignancy known as Acute Lymphoblastic Leukemia (ALL). In the United States, about 6,500 cases of ALL are diagnosed each year in children and adults, comprising nearly 25% of pediatric cancer cases. Recently, many computer-assisted diagnosis (CAD) systems have been proposed to aid hematologists in reducing workload, providing correct results, and managing enormous volumes of data. Traditional CAD systems rely on hematologists' expertise, specialized features, and subject knowledge, and early detection of ALL can aid radiologists and doctors in making medical decisions. In this study, a Deep Dilated Residual Convolutional Neural Network (DDRNet) is presented for the classification of blood cell images, focusing on eosinophils, lymphocytes, monocytes, and neutrophils. To tackle challenges such as vanishing gradients and to enhance feature extraction, the model incorporates Deep Residual Dilated Blocks (DRDB) for faster convergence. Conventional residual blocks are strategically placed between layers to preserve original information and extract general feature maps. Global and Local Feature Enhancement Blocks (GLFEB) balance weak contributions from shallow layers for improved feature normalization, and the global feature from the initial convolution layer, combined with GLFEB-processed features, reinforces the classification representation. The Tanh function introduces non-linearity. A Channel and Spatial Attention Block (CSAB) is integrated into the network to emphasize or suppress specific feature channels, while fully connected layers transform the data and a sigmoid activation function concentrates on the features relevant to multiclass lymphoblastic leukemia classification. The model was evaluated on a Kaggle dataset (16,249 images) categorized into four classes, with a training-testing ratio of 80:20. Experimental results showed that the feature discrimination ability of the DRDB, GLFEB, and CSAB blocks boosted the DDRNet F1 score to 0.96 with minimal computational complexity and classification accuracies of 99.86% and 91.98% on the training and testing data, respectively. The strategic combination of these three blocks addresses specific challenges in the classification process, improving the discrimination of features crucial for accurate multi-class blood cell identification, and their effective integration underlies the superior performance of DDRNet over existing methods.
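A minimal sketch of the kind of dilated residual block the DRDB description suggests, in PyTorch; channel count, dilation, and layer layout are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Two dilated 3x3 convs with an identity shortcut (sizes assumed)."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation,
                               dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=dilation,
                               dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)       # shortcut preserves original information

x = torch.randn(1, 64, 32, 32)
print(DilatedResidualBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```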


Subject(s)
Deep Learning , Precursor Cell Lymphoblastic Leukemia-Lymphoma , Precursor Cell Lymphoblastic Leukemia-Lymphoma/pathology , Precursor Cell Lymphoblastic Leukemia-Lymphoma/classification , Humans , Neural Networks, Computer , Diagnosis, Computer-Assisted/methods , Child
13.
Biomed Eng Online ; 23(1): 55, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886737

ABSTRACT

BACKGROUND: Schizophrenia (SZ), a psychiatric disorder for which there is no precise diagnosis, has had a serious impact on quality of life and social activities for many years; an advanced approach to accurate diagnosis is therefore required. NEW METHOD: In this study, we provide a classification approach for SZ patients based on a spatial-temporal residual graph convolutional neural network (STRGCN). The model primarily collects spatial-frequency and temporal-frequency features via spatial graph convolution and single-channel temporal convolution, respectively, and blends them for classification learning, in contrast to traditional approaches that evaluate only temporal-frequency information in the EEG and disregard spatial-frequency relationships across brain regions. RESULTS: We conducted extensive experiments on the publicly available Zenodo dataset and on our own collected dataset, reaching classification accuracies of 96.32% and 85.44%, respectively. Among the EEG sub-bands, the delta band gave the best classification performance. COMPARISON WITH EXISTING METHODS: Other methods rely mainly on deep learning models dominated by convolutional neural networks and long short-term memory networks, lacking exploration of the functional connections between channels. In contrast, the present method treats the EEG signal as a graph and jointly analyzes its temporal-frequency and spatial-frequency features. CONCLUSION: Our approach not only performs better than classic machine learning and deep learning algorithms on the datasets used for diagnosing schizophrenia, but also helps in understanding the effects of schizophrenia on brain network features.
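A toy sketch of the two feature paths STRGCN blends, a spatial graph convolution over EEG channels plus a per-channel temporal convolution; the adjacency matrix, channel count, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialTemporalBlock(nn.Module):
    """Toy analogue of the two STRGCN feature paths (sizes are assumptions)."""
    def __init__(self, n_ch, k=5):
        super().__init__()
        self.mix = nn.Conv1d(n_ch, n_ch, kernel_size=1)        # after graph mixing
        self.tconv = nn.Conv1d(n_ch, n_ch, kernel_size=k,
                               padding=k // 2, groups=n_ch)    # per-channel temporal

    def forward(self, x, adj_norm):
        # x: (batch, channels, time); adj_norm: row-normalized adjacency (ch, ch)
        spatial = self.mix(torch.einsum("ij,bjt->bit", adj_norm, x))
        temporal = self.tconv(x)
        return torch.relu(spatial + temporal)                  # blend both views

eeg = torch.randn(4, 19, 512)                    # 4 trials, 19 EEG channels
adj = torch.eye(19)                              # placeholder functional graph
print(SpatialTemporalBlock(19)(eeg, adj).shape)  # torch.Size([4, 19, 512])
```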


Subject(s)
Electroencephalography , Neural Networks, Computer , Schizophrenia , Schizophrenia/diagnosis , Schizophrenia/physiopathology , Humans , Electroencephalography/methods , Signal Processing, Computer-Assisted , Automation , Diagnosis, Computer-Assisted/methods , Spatio-Temporal Analysis
14.
Biomed Eng Online ; 23(1): 50, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824547

ABSTRACT

BACKGROUND: Over 60% of epilepsy patients globally are children, whose early diagnosis and treatment are critical for their development and can substantially reduce the disease's burden on both families and society. Numerous algorithms for automated epilepsy detection from EEGs have been proposed. Yet, the occurrence of epileptic seizures during an EEG exam cannot always be guaranteed in clinical practice, and models that exclusively use seizure EEGs for detection risk artificially inflated performance metrics. There is therefore a pressing need for a universally applicable model that can perform automatic epilepsy detection in a variety of complex real-world scenarios. METHOD: To address this problem, we have devised a novel technique employing a temporal convolutional neural network with self-attention (TCN-SA). Our model comprises two primary components: a TCN for extracting time-variant features from EEG signals, followed by a self-attention (SA) layer that assigns importance to these features. By focusing on key features, our model achieves heightened classification accuracy for epilepsy detection. RESULTS: The efficacy of our model was validated on a pediatric epilepsy dataset we collected and on the Bonn dataset, attaining an accuracy of 95.50% on our dataset and accuracies of 97.37% (A vs. E) and 93.50% (B vs. E) on the Bonn dataset. When compared with other deep learning architectures (a temporal convolutional neural network, a self-attention network, and a standard convolutional neural network) on the same datasets, our TCN-SA model demonstrated superior performance in the automated detection of epilepsy. CONCLUSION: The proven effectiveness of the TCN-SA approach substantiates its potential as a valuable tool for the automated detection of epilepsy, offering significant benefits in diverse and complex real-world clinical settings.
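A sketch of the TCN-SA idea, one causal dilated temporal convolution followed by a self-attention layer that re-weights time steps; kernel size, dilation, and head count are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNSABlock(nn.Module):
    """One causal dilated temporal conv followed by self-attention (a sketch)."""
    def __init__(self, channels, kernel=3, dilation=2, heads=4):
        super().__init__()
        self.pad = (kernel - 1) * dilation       # left-pad only => causal conv
        self.conv = nn.Conv1d(channels, channels, kernel, dilation=dilation)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                        # x: (batch, channels, time)
        h = torch.relu(self.conv(F.pad(x, (self.pad, 0))))
        h = h.transpose(1, 2)                    # (batch, time, channels)
        a, _ = self.attn(h, h, h)                # re-weight informative steps
        return self.norm(h + a).transpose(1, 2)

eeg = torch.randn(8, 32, 256)                    # batch, channels, samples
print(TCNSABlock(32)(eeg).shape)                 # torch.Size([8, 32, 256])
```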


Subject(s)
Electroencephalography , Epilepsy , Neural Networks, Computer , Epilepsy/diagnosis , Humans , Signal Processing, Computer-Assisted , Automation , Child , Deep Learning , Diagnosis, Computer-Assisted/methods , Time Factors
15.
Medicine (Baltimore) ; 103(25): e38478, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38905434

ABSTRACT

The diagnosis of pneumoconiosis is complex and subjective, leading to inevitable inter-reader variability, especially among inexperienced doctors. To improve accuracy, a computer-assisted diagnosis system can make pneumoconiosis diagnosis more effective. Three models (ResNet50, ResNet101, and DenseNet) were used for pneumoconiosis classification based on 1,250 chest X-ray images. Three experienced and highly qualified physicians read the collected digital radiographs and classified them from category 0 to category III in a double-blinded manner; readings on which all three physicians agreed were taken as the relative gold standard. The three models were then trained and tested on these images, and their performance was evaluated using multi-class classification metrics. Kappa values and accuracy were used to evaluate the consistency and reliability of the optimal model against the clinical typing. ResNet101 was the optimal model among the three convolutional neural networks, with AUCs of 1.0, 0.9, 0.89, and 0.94 for detecting pneumoconiosis categories 0, I, II, and III, respectively; the micro-average and macro-average mean AUCs were 0.93 and 0.94. Compared with the relative gold-standard classification, the accuracy and kappa of ResNet101 were 0.72 and 0.7111 for four-way classification and 0.98 and 0.955 for dichotomous classification. This study developed a deep learning-based model for the screening and staging of pneumoconiosis using chest radiographs. ResNet101 classified pneumoconiosis somewhat better than the radiologists, and the dichotomous classification displayed outstanding performance, indicating the feasibility of deep learning techniques in pneumoconiosis screening.


Subject(s)
Deep Learning , Pneumoconiosis , Radiography, Thoracic , Humans , Pneumoconiosis/diagnostic imaging , Pneumoconiosis/diagnosis , Radiography, Thoracic/methods , Male , Middle Aged , Reproducibility of Results , Female , Diagnosis, Computer-Assisted/methods , Aged , Neural Networks, Computer
16.
Biomed Phys Eng Express ; 10(4)2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38848695

ABSTRACT

Recent advancements in computational intelligence, deep learning, and computer-aided detection have had a significant impact on medical imaging. The task of image segmentation, which involves accurately interpreting and identifying the content of an image, has garnered much attention; its main objective is to separate objects from the background, simplifying the image and sharpening its significance. However, existing segmentation methods have limitations when applied to certain types of images. This survey highlights the importance of image segmentation techniques through a thorough examination of their advantages and disadvantages. Accurate detection of cancer regions in medical images is crucial for effective treatment, so we also provide an extensive analysis of Computer-Aided Diagnosis (CAD) systems for cancer identification, with a focus on recent research advancements. The paper critically assesses various cancer detection techniques and compares their effectiveness. Convolutional neural networks (CNNs) have attracted particular interest due to their ability to segment and classify medical images in large datasets, thanks to their capacity for self-learning and decision-making.


Subject(s)
Algorithms , Artificial Intelligence , Diagnostic Imaging , Image Processing, Computer-Assisted , Neoplasms , Neural Networks, Computer , Humans , Neoplasms/diagnostic imaging , Neoplasms/diagnosis , Image Processing, Computer-Assisted/methods , Diagnostic Imaging/methods , Diagnosis, Computer-Assisted/methods , Deep Learning
17.
Sci Rep ; 14(1): 13442, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862529

ABSTRACT

With the advancement of internet communication and telemedicine, people are increasingly turning to the web for healthcare activities, and with an ever-increasing number of diseases and symptoms, diagnosing patients becomes challenging. In this work, we build a diagnosis assistant to support doctors, identifying diseases from patient-doctor interaction. During diagnosis, doctors utilize both symptomatology knowledge and diagnostic experience to identify diseases accurately and efficiently. Inspired by this, we investigate the role of medical knowledge in disease diagnosis through doctor-patient interaction. We propose a two-channel, knowledge-infused, discourse-aware disease diagnosis model (KI-DDI), in which the first channel encodes the patient-doctor communication with a transformer-based encoder, while the second creates a symptom-disease embedding using a graph attention network (GAT). In the next stage, the conversation and knowledge-graph embeddings are fused and fed to a deep neural network for disease identification. We additionally develop an empathetic conversational medical corpus of patient-doctor conversations, annotated with intent and symptom information. The proposed model demonstrates a significant improvement over existing state-of-the-art models, establishing the crucial roles of (a) the doctor's effort in eliciting additional symptoms (beyond patient self-report) and (b) infusing medical knowledge into disease identification. Patients often also show visible signs of their medical conditions, which act as crucial evidence in diagnosis; integrating visual sensory information would therefore be an effective avenue for enhancing diagnostic assistants.
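A minimal single-head graph attention layer in plain PyTorch, of the kind a GAT stacks to embed the symptom-disease graph; the node features and adjacency below are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniGATLayer(nn.Module):
    """Single-head graph attention layer in the style of Velickovic et al."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
        z = self.W(h)
        N = z.size(0)
        zi = z.unsqueeze(1).expand(N, N, -1)              # repeat as source
        zj = z.unsqueeze(0).expand(N, N, -1)              # repeat as neighbor
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))        # only real edges attend
        alpha = torch.softmax(e, dim=-1)                  # per-node attention
        return alpha @ z                                  # aggregate neighbors

nodes = torch.randn(5, 16)     # e.g. symptom/disease nodes with 16-d features
adj = torch.eye(5)             # self-loops only, purely for illustration
print(MiniGATLayer(16, 8)(nodes, adj).shape)   # torch.Size([5, 8])
```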


Subject(s)
Physician-Patient Relations , Humans , Telemedicine , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Communication
18.
BMC Med Imaging ; 24(1): 141, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862884

ABSTRACT

OBJECTIVE: To evaluate the consistency between doctors and artificial intelligence (AI) software in analysing and diagnosing pulmonary nodules, and to assess whether the nodule characteristics derived by the two methods are consistent for interpreting carcinomatous nodules. MATERIALS AND METHODS: This retrospective study analysed participants aged 40-74 from the local area between 2011 and 2013. Pulmonary nodules were examined radiologically using low-dose chest CT and evaluated by an expert panel of doctors from the radiology, oncology, and thoracic departments, as well as by a computer-aided diagnosis (CAD) system based on a three-dimensional (3D) convolutional neural network with DenseNet architecture (InferRead CT Lung, IRCL). Consistency tests assessed the uniformity of the radiological characteristics of the nodules, the receiver operating characteristic (ROC) curve was used to evaluate diagnostic accuracy, and logistic regression analysis determined whether the two methods yield the same predictive factors for cancerous nodules. RESULTS: A total of 570 subjects were included. The AI software demonstrated high consistency with the panel's evaluation of nodule position and diameter (kappa = 0.883, concordance correlation coefficient (CCC) = 0.809, p < 0.001), and comparison of solid-nodule attenuation characteristics showed acceptable consistency (kappa = 0.503). In patients diagnosed with lung cancer, the areas under the curve (AUC) for the panel and the AI were 0.873 (95% CI: 0.829-0.909) and 0.921 (95% CI: 0.884-0.949), respectively, with no significant difference (p = 0.0950). Maximum diameter, solid nodules, and subsolid nodules were the crucial factors for interpreting carcinomatous nodules in both the expert panel's and IRCL's analyses of nodule characteristics. CONCLUSION: AI software can assist doctors in diagnosing pulmonary nodules, and its evaluations and diagnoses are consistent with those of doctors.
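Both agreement statistics reported above are straightforward to reproduce: Cohen's kappa via scikit-learn and Lin's concordance correlation coefficient directly from its definition. The paired readings below are invented for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

panel_cat = [1, 0, 2, 1, 1, 0, 2, 2]      # categorical reads (e.g. attenuation)
ai_cat    = [1, 0, 2, 1, 0, 0, 2, 2]
print("kappa:", cohen_kappa_score(panel_cat, ai_cat))

def ccc(x, y):
    """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

panel_diam = [4.2, 6.1, 8.0, 5.5, 12.3]   # nodule diameters, mm
ai_diam    = [4.0, 6.4, 7.6, 5.9, 11.8]
print("CCC:", ccc(panel_diam, ai_diam))
```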


Subject(s)
Artificial Intelligence , Diagnosis, Computer-Assisted , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Retrospective Studies , Middle Aged , Male , Aged , Female , Adult , Diagnosis, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Early Detection of Cancer/methods , ROC Curve , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Software
19.
Comput Biol Med ; 178: 108740, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38901184

ABSTRACT

Alzheimer's disease (AD), one of the most common dementias, accounts for about 4.6 million new cases yearly worldwide. Given the large number of suspected AD patients, early screening for the disease has become particularly important. AD diagnosis draws on diversified data types, such as cognitive tests, images, and risk factors, yet many prior investigations have concentrated on integrating only high-dimensional features with simple concatenation-based fusion, yielding less-than-optimal outcomes for AD diagnosis. We therefore propose an enhanced multimodal AD diagnostic framework comprising a feature-aware module and an automatic model fusion strategy (AMFS). To preserve correlated and significant features within a low-dimensional space, the feature-aware module first applies low-dimensional SHapley Additive exPlanations (SHAP) boosting feature selection, after which diverse tiers of low-dimensional features are extracted from patients' biological data. In the high-dimensional stage, the feature-aware module integrates cross-modal attention mechanisms to capture subtle relationships among cognitive domains, neuroimaging modalities, and risk factors. We then integrate this feature-aware module with graph convolutional networks (GCN) to handle the heterogeneous multimodal AD data while also perceiving relationships between modalities. Lastly, the proposed AMFS autonomously learns optimal parameters for aligning the two sub-models. Validation on two ADNI datasets shows high accuracies of 95.9% and 91.9%, respectively, in AD diagnosis. The method efficiently selects features from multimodal AD data and optimizes model fusion, offering potential clinical assistance in diagnostics.
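A hedged sketch of SHAP-based feature selection as a first, low-dimensional step: rank features by mean absolute SHAP value from a boosted tree model and keep the top k. The data, model choice, and cutoff are assumptions; the abstract does not specify them.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)       # (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)
top_k = np.argsort(importance)[::-1][:8]     # keep 8 most influential features
X_selected = X[:, top_k]
print("selected feature indices:", sorted(top_k.tolist()))
```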


Subject(s)
Alzheimer Disease , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/diagnosis , Humans , Aged , Male , Female , Neuroimaging/methods , Diagnosis, Computer-Assisted/methods , Algorithms
20.
Comput Biol Med ; 178: 108742, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38875908

ABSTRACT

In recent years, there has been a significant improvement in the accuracy of the classification of pigmented skin lesions using artificial intelligence algorithms; intelligent analysis and classification systems significantly outperform the visual diagnostic methods used by dermatologists and oncologists. However, the application of such systems in clinical practice is severely limited by a lack of generalizability and the risk of potential misclassification. Successful implementation of artificial intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as of promising areas for further research. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. A total of 10,589 scientific research and review articles were retrieved from electronic scientific publishers, of which 171 were included in the review. The selected articles are organized by algorithm class, from machine learning to multimodal intelligent architectures, and are described in the corresponding sections of the manuscript. The review explores automated skin cancer recognition systems ranging from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, vision transformers (ViT), and generative and spiking neural networks. Finally, the analysis discusses future research directions, prospects, and the potential for further development of automated neural network systems for classifying pigmented skin lesions.


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/classification , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Diagnosis, Computer-Assisted/methods , Algorithms , Machine Learning