Results 1 - 20 of 24,686
1.
Acute Med; 23(2): 66-74, 2024.
Article in English | MEDLINE | ID: mdl-39132729

ABSTRACT

BACKGROUND: Chatbots hold great potential to serve as support tools in the diagnostic and clinical decision-making process. In this study, we aimed to evaluate the accuracy of chatbots in diagnosing pulmonary embolism (PE) and assessed their performance in determining PE severity. METHOD: 65 case reports meeting our inclusion criteria were selected for this study. Two emergency medicine (EM) physicians crafted clinical vignettes and presented them to Bard, Bing, and ChatGPT-3.5, asking for the top 10 diagnoses. After all differential diagnosis lists had been obtained, the vignettes, enriched with supplemental data, were resubmitted to the chatbots, which were asked to grade PE severity. RESULTS: ChatGPT-3.5, Bing, and Bard listed PE within the top 10 diagnoses with accuracy rates of 92.3%, 92.3%, and 87.6%, respectively. For the top 3 diagnoses, Bard achieved 75.4% accuracy, while ChatGPT and Bing both reached 67.7%. As the top diagnosis, Bard, ChatGPT-3.5, and Bing were accurate in 56.9%, 47.7%, and 30.8% of cases, respectively; significant differences between Bard and both Bing (p<0.001) and ChatGPT (p=0.007) were noted in this group. Massive PEs were correctly identified with over an 85% success rate. Overclassification rates for Bard, ChatGPT-3.5, and Bing were 38.5%, 23.3%, and 20%, respectively. Misclassification rates were highest in the submassive group. CONCLUSION: Although chatbots are not intended for diagnosis, their high diagnostic accuracy and success rate in identifying massive PE underscore their promising potential as clinical decision support tools. However, further research with larger patient datasets is required to validate and refine their performance in real-world clinical settings.
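As a concrete illustration of the top-k accuracy metric used in this study, the sketch below computes top-1/top-3/top-10 hit rates over ranked differential-diagnosis lists. The vignette data are hypothetical, not taken from the paper:

```python
def top_k_accuracy(ranked_differentials, target, k):
    """Fraction of cases in which `target` appears among the first k
    entries of a model's ranked differential-diagnosis list."""
    hits = sum(
        1 for dx_list in ranked_differentials
        if target in [dx.lower() for dx in dx_list[:k]]
    )
    return hits / len(ranked_differentials)

# Hypothetical example: four vignettes, each with a ranked differential list.
cases = [
    ["pulmonary embolism", "pneumonia", "acute coronary syndrome"],
    ["pneumothorax", "pulmonary embolism", "asthma"],
    ["acute coronary syndrome", "aortic dissection", "pericarditis"],
    ["pulmonary embolism", "pleuritis", "costochondritis"],
]
print(top_k_accuracy(cases, "pulmonary embolism", 1))  # → 0.5
print(top_k_accuracy(cases, "pulmonary embolism", 3))  # → 0.75
```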


Subject(s)
Pulmonary Embolism , Humans , Pulmonary Embolism/diagnosis , Male , Female , Diagnosis, Differential , Artificial Intelligence , Middle Aged , Acute Disease , Diagnosis, Computer-Assisted/methods , Severity of Illness Index
2.
PLoS Comput Biol; 20(8): e1012327, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39102445

ABSTRACT

Plasmodium parasites cause malaria, which remains a significant threat to global health, affecting 200 million people and causing 400,000 deaths yearly. Plasmodium falciparum and Plasmodium vivax remain the two main malaria species affecting humans. Identifying malaria in blood smears requires years of expertise, even for highly trained specialists. Published studies have addressed the automatic identification and classification of malaria; however, several points must be addressed and investigated before these automatic methods can be used clinically in a Computer-aided Diagnosis (CAD) scenario. In this work, we assess the transfer learning approach using well-known pre-trained deep learning architectures. We considered a database with 6222 Regions of Interest (ROIs), of which 6002 are from the Broad Bioimage Benchmark Collection (BBBC) and 220 were acquired locally by us at Fundação Oswaldo Cruz (FIOCRUZ) in Porto Velho, Rondônia, Brazil, which is part of the legal Amazon. We exhaustively cross-validated the dataset using 100 distinct partitions, each with 80% for training and 20% for testing, considering circular ROIs (rough segmentation). Our experimental results show that DenseNet201 has the potential to identify Plasmodium parasites in ROIs (infected or uninfected) of microscopic images, achieving 99.41% AUC with a fast processing time. We further validated our results, showing that DenseNet201 was significantly better (99% confidence interval) than the other networks considered in the experiment. Our results support the claim that transfer learning with texture features can differentiate subjects with malaria, spotting those with Plasmodium even in images containing leukocytes, which is a challenge. In future work, we intend to scale our approach by adding more data and developing a user-friendly interface for CAD use. We aim to aid the worldwide population as well as the local native communities living near the rivers of the legal Amazon.
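The 100-partition 80/20 cross-validation protocol described above can be sketched as follows. The code only generates the index partitions; the classifier training itself is a hypothetical stand-in omitted here:

```python
import random

def repeated_holdout(n_items, n_splits=100, train_frac=0.8, seed=0):
    """Yield `n_splits` independent random partitions of range(n_items),
    each split into 80% training and 20% testing indices."""
    rng = random.Random(seed)
    idx = list(range(n_items))
    for _ in range(n_splits):
        rng.shuffle(idx)
        cut = int(train_frac * n_items)
        yield idx[:cut], idx[cut:]

# 6222 ROIs, as in the dataset described above
splits = list(repeated_holdout(6222))
train, test = splits[0]
print(len(splits), len(train), len(test))  # → 100 4977 1245
```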


Subject(s)
Microscopy , Humans , Microscopy/methods , Plasmodium falciparum/pathogenicity , Plasmodium vivax , Computational Biology/methods , Malaria/parasitology , Plasmodium , Deep Learning , Databases, Factual , Image Processing, Computer-Assisted/methods , Malaria, Falciparum/parasitology , Diagnosis, Computer-Assisted/methods
3.
Codas; 36(5): e20230241, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-39109754

ABSTRACT

PURPOSE: To propose normative values for the Horus® computerized posturography platform in children aged 4 to 6 years without auditory and/or vestibular complaints. METHODS: Cross-sectional study in which 216 children aged 4 to 6 years participated. All children underwent visual screening, audiological evaluation, and computerized posturography, which comprises assessment of the limits of stability and seven sensory conditions. The results were statistically analyzed using the non-parametric Kruskal-Wallis test, the Dunn-Bonferroni post hoc test for pairwise age comparisons, and the Mann-Whitney U test for the analysis by sex. Categorical data were presented as relative frequencies and quantitative data as means and standard deviations. RESULTS: Normative values were described for the limit of stability and for the seven sensory conditions. There was a difference in the limit of stability between sexes at 4 years of age (p<0.007) and in the comparisons between ages 4 and 5 years (p=0.005) and 4 and 6 years (p<0.001). For residual functional balance, comparisons between ages showed differences between 4 and 5, 4 and 6, and 5 and 6 years, albeit for different measures. Statistical differences for different assessment measures also appeared in the analysis by sex. Among the sensory systems, the findings between ages showed differences for the vestibular system, right and left optokinetic visual dependence, tunnel visual dependence, and the composite balance index. CONCLUSION: It was possible to establish normative values for the Horus® posturography in healthy children aged 4 to 6 years.


OBJECTIVE: To propose normative values for the Horus® computerized posturography platform in children aged 4 to 6 years without auditory and/or vestibular complaints. METHODS: Cross-sectional study with 216 children aged 4 to 6 years. All underwent visual screening, hearing evaluation, and computerized posturography comprising assessment of the limit of stability and seven sensory conditions. Results were analyzed statistically using the non-parametric Kruskal-Wallis test, the Dunn-Bonferroni post hoc test for pairwise age comparisons, and the Mann-Whitney U test for comparisons between sexes. Categorical data were presented as relative frequencies and quantitative data as means and standard deviations. RESULTS: Normative values were described for the limit of stability and for the seven sensory conditions. There was a difference in the limit of stability between sexes at 4 years of age (p<0.007) and in the comparisons between ages 4 and 5 years (p=0.005) and 4 and 6 years (p<0.001). For residual functional balance, comparisons between ages showed differences between 4 and 5, 4 and 6, and 5 and 6 years, albeit for different measures. Statistical differences for different assessment measures also appeared in the analysis by sex. Among the sensory systems, the findings between ages showed differences for the vestibular system, right and left optokinetic visual dependence, tunnel visual dependence, and the composite balance index. It is suggested that, for this population, posturography responses be analyzed by age group and sex. CONCLUSION: It was possible to establish normative values for the Horus® posturography in healthy children aged 4 to 6 years.


Subject(s)
Postural Balance , Humans , Cross-Sectional Studies , Child, Preschool , Male , Female , Reference Values , Child , Postural Balance/physiology , Vestibular Function Tests/methods , Vestibular Function Tests/instrumentation , Vestibular Function Tests/standards , Diagnosis, Computer-Assisted/standards , Diagnosis, Computer-Assisted/methods
4.
BMC Med Imaging; 24(1): 177, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030508

ABSTRACT

BACKGROUND: Cancer pathology reflects disease development and associated molecular features. It provides extensive phenotypic information that is cancer-predictive and has potential implications for treatment planning. Building on the exceptional performance of computational approaches in digital pathology, the rich phenotypic information in digital pathology images has enabled us to distinguish low-grade gliomas (LGG) from high-grade gliomas (HGG). Because the differences between the textures are so slight, using just one feature or a small number of features produces poor categorization results. METHODS: In this work, multiple feature extraction methods that can extract distinct features from the texture of histopathology image data are used to compare classification outcomes. The well-established feature extraction algorithms GLCM, LBP, multi-LBGLCM, GLRLM, color moment features, and RSHD were chosen for this paper. The LBP and GLCM algorithms are combined to create LBGLCM. The LBGLCM feature extraction approach is extended in this study to multiple scales using an image pyramid, defined by sampling the image in both space and scale. The preprocessing stage is first used to enhance the contrast of the images and remove noise and illumination effects. The feature extraction stage is then carried out to extract several important features (texture and color) from the histopathology images. Third, a feature fusion and reduction step is applied to decrease the number of features processed, reducing the computation time of the proposed system. Finally, the classification stage categorizes the various brain cancer grades. We performed our analysis on 821 whole-slide pathology images from glioma patients in The Cancer Genome Atlas (TCGA) dataset. Two types of brain cancer are included in the dataset: GBM and LGG (grades II and III). Our analysis includes 506 GBM images and 315 LGG images, guaranteeing representation of various tumor grades and histopathological features. RESULTS: The fusion of textural and color characteristics was validated in the glioma patients using 10-fold cross-validation, with an accuracy of 95.8%, sensitivity of 96.4%, DSC of 96.7%, and specificity of 97.1%. The combination of color and texture characteristics produced significantly better accuracy, supporting their synergistic significance in the predictive model. The results indicate that textural characteristics can support objective, accurate, and comprehensive glioma prediction when paired with conventional imaging. CONCLUSION: The results outperform current approaches for distinguishing LGG from HGG and provide competitive performance in classifying four categories of glioma in the literature. The proposed model can help stratify patients in clinical studies, select patients for targeted therapy, and customize specific treatment schedules.
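As a minimal illustration of the GLCM texture feature named above, here is a single-offset co-occurrence matrix computed on a tiny hypothetical 4-level image; the paper's pipeline would add LBP, multi-scale pyramids, and feature fusion on top of this:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-Level Co-occurrence Matrix for one (dx, dy) offset:
    counts[i][j] = number of pixel pairs in which a pixel of gray level i
    has a neighbour of gray level j at the given offset."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
    return counts

# Tiny hypothetical 4-level image
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
g = glcm(img)
print(g[0][0], g[1][1], g[2][2])  # → 2 2 3
```

Texture descriptors such as contrast or homogeneity are then derived from the normalized matrix.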


Subject(s)
Algorithms , Brain Neoplasms , Color , Glioma , Neoplasm Grading , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Brain Neoplasms/classification , Glioma/diagnostic imaging , Glioma/pathology , Glioma/classification , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods
5.
BMC Med Imaging; 24(1): 165, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956579

ABSTRACT

BACKGROUND: Pneumoconiosis has a significant impact on patients' quality of life owing to its difficult staging diagnosis and poor prognosis. This study aimed to develop a computer-aided diagnostic system for the screening and staging of pneumoconiosis based on a multi-stage joint deep learning approach using X-ray chest radiographs of pneumoconiosis patients. METHODS: In this study, a total of 498 medical chest radiographs were obtained from the Department of Radiology of West China Fourth Hospital. The dataset was randomly divided into a training set and a test set at a ratio of 4:1. Following histogram equalization for image enhancement, the images were segmented using the U-Net model, and staging was predicted using a convolutional neural network classification model. We first used EfficientNet for multi-class staging diagnosis, but the results showed that stage I/II pneumoconiosis was difficult to diagnose. Therefore, based on clinical practice, we further improved the model using a ResNet-34 multi-stage joint method. RESULTS: Of the 498 cases collected, the EfficientNet classification model achieved an accuracy of 83% with a Quadratic Weighted Kappa (QWK) score of 0.889. The classification model using the multi-stage joint ResNet-34 approach achieved an accuracy of 89% with an area under the curve (AUC) of 0.98 and a high QWK score of 0.94. CONCLUSIONS: In this study, the diagnostic accuracy of pneumoconiosis staging was significantly improved by an innovative combined multi-stage approach, providing a reference for clinical application and pneumoconiosis screening.
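The Quadratic Weighted Kappa (QWK) reported above can be computed from two ordinal label sequences as sketched below; the staging labels are hypothetical:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic Weighted Kappa: chance-corrected agreement between two
    ordinal ratings, penalising disagreement by squared class distance."""
    n = len(y_true)
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    hist_t = [y_true.count(c) for c in range(n_classes)]
    hist_p = [y_pred.count(c) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]
            den += w * hist_t[i] * hist_p[j] / n
    return 1.0 - num / den

# Hypothetical staging labels (0 = normal, 1 = stage I/II, 2 = stage III)
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 0, 1, 2, 2, 2, 1, 0]
print(round(quadratic_weighted_kappa(y_true, y_pred, 3), 3))  # → 0.833
```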


Subject(s)
Deep Learning , Pneumoconiosis , Humans , Pneumoconiosis/diagnostic imaging , Pneumoconiosis/pathology , Male , Middle Aged , Female , Radiography, Thoracic/methods , Aged , Adult , Neural Networks, Computer , China , Diagnosis, Computer-Assisted/methods , Radiographic Image Interpretation, Computer-Assisted/methods
7.
Article in Chinese | MEDLINE | ID: mdl-38973043

ABSTRACT

Objective: To build a VGG-based computer-aided diagnostic model for chronic sinusitis and evaluate its efficacy. Methods: ① A total of 5,000 frames of diagnosed sinus CT images were collected. The normal group consisted of 1,000 frames (250 frames each of the maxillary, frontal, ethmoid, and sphenoid sinuses), while the abnormal group consisted of 4,000 frames (1,000 frames each of maxillary, frontal, ethmoid, and sphenoid sinusitis). ② The models were trained and evaluated in simulation to obtain five classification models for the normal, sphenoid sinusitis, frontal sinusitis, ethmoid sinusitis, and maxillary sinusitis groups, respectively. The classification efficacy of the models was evaluated objectively across six dimensions: accuracy, precision, sensitivity, specificity, interpretation time, and area under the ROC curve (AUC). ③ Two hundred randomly selected images were read by the model and by three groups of physicians (of low, middle, and high seniority) to constitute a comparative experiment. The efficacy of the model was objectively evaluated using the aforementioned evaluation indexes in conjunction with clinical analysis. Results: ① Simulation experiment: the overall recognition accuracy of the model was 83.94%, with a precision of 89.52%, sensitivity of 83.94%, specificity of 95.99%, and an average interpretation time of 0.2 s per frame. The AUC was 0.865 (95% CI 0.849-0.881) for sphenoid sinusitis, 0.924 (0.991-0.936) for frontal sinusitis, 0.895 (0.880-0.909) for ethmoid sinusitis, and 0.974 (0.967-0.982) for maxillary sinusitis.
② Comparison experiment: in terms of recognition accuracy, the model achieved 84.52%, versus 78.50% for the low-seniority group, 80.50% for the middle-seniority group, and 83.50% for the high-seniority group. In terms of recognition precision, the model achieved 85.67%, versus 79.72%, 82.67%, and 83.66% for the low-, middle-, and high-seniority groups, respectively. In terms of recognition sensitivity, the model achieved 84.52%, versus 78.50%, 80.50%, and 83.50%, respectively. In terms of recognition specificity, the model achieved 96.58%, versus 94.63%, 95.13%, and 95.88%, respectively. In terms of time consumption, the model averaged 0.20 s per frame, versus 2.35 s, 1.98 s, and 2.19 s per frame for the low-, middle-, and high-seniority groups, respectively. Conclusion: This study demonstrates the potential of a deep learning-based artificial intelligence diagnostic model to classify and diagnose chronic sinusitis: the model shows good classification performance and high diagnostic efficacy.
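Four of the six evaluation dimensions above reduce to confusion-matrix arithmetic; here is a sketch with purely hypothetical counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity (recall) and specificity
    from raw confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for one sinusitis class over 200 test images
m = binary_metrics(tp=42, fp=6, tn=144, fn=8)
print({k: round(v, 3) for k, v in m.items()})
# → {'accuracy': 0.93, 'precision': 0.875, 'sensitivity': 0.84, 'specificity': 0.96}
```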


Subject(s)
Sinusitis , Tomography, X-Ray Computed , Humans , Chronic Disease , Tomography, X-Ray Computed/methods , Sinusitis/classification , Sinusitis/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Sensitivity and Specificity , Maxillary Sinusitis/diagnostic imaging , Maxillary Sinusitis/classification , Maxillary Sinus/diagnostic imaging , ROC Curve
8.
Cell Biochem Funct; 42(5): e4088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38973163

ABSTRACT

The field of image processing is experiencing significant advancements that support professionals in analyzing histological images obtained from biopsies, with the primary objective of enhancing diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by postprocessing approaches that can identify distinct neoplastic areas. Using computational approaches enables a more objective and efficient analysis by experts. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of current advances in segmentation and classification approaches for images of follicular lymphoma. The review analyzes the primary image processing techniques utilized in the various stages of preprocessing, segmentation of the region of interest, classification, and postprocessing as described in the existing literature, and examines the strengths and weaknesses associated with these approaches. Additionally, it covers validation procedures and explores prospective directions for future research in the segmentation of neoplasias.


Subject(s)
Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Lymphoma, Follicular , Lymphoma, Follicular/diagnosis , Lymphoma, Follicular/pathology , Humans
9.
J Gastric Cancer; 24(3): 327-340, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38960891

ABSTRACT

PURPOSE: Results of initial endoscopic biopsy of gastric lesions often differ from those of the final pathological diagnosis. We evaluated whether an artificial intelligence-based gastric lesion detection and diagnostic system, ENdoscopy as AI-powered Device Computer Aided Diagnosis for Gastroscopy (ENAD CAD-G), could reduce this discrepancy. MATERIALS AND METHODS: We retrospectively collected 24,948 endoscopic images of early gastric cancers (EGCs), dysplasia, and benign lesions from 9,892 patients who underwent esophagogastroduodenoscopy between 2011 and 2021. The diagnostic performance of ENAD CAD-G was evaluated using the following real-world datasets: patients referred from community clinics with initial biopsy results of atypia (n=154), participants who underwent endoscopic resection for neoplasms (Internal video set, n=140), and participants who underwent endoscopy for screening or suspicion of gastric neoplasm referred from community clinics (External video set, n=296). RESULTS: ENAD CAD-G classified the referred gastric lesions of atypia into EGC (accuracy, 82.47%; 95% confidence interval [CI], 76.46%-88.47%), dysplasia (88.31%; 83.24%-93.39%), and benign lesions (83.12%; 77.20%-89.03%). In the Internal video set, ENAD CAD-G identified dysplasia and EGC with diagnostic accuracies of 88.57% (95% CI, 83.30%-93.84%) and 91.43% (86.79%-96.07%), respectively, compared with an accuracy of 60.71% (52.62%-68.80%) for the initial biopsy results (P<0.001). In the External video set, ENAD CAD-G classified EGC, dysplasia, and benign lesions with diagnostic accuracies of 87.50% (83.73%-91.27%), 90.54% (87.21%-93.87%), and 88.85% (85.27%-92.44%), respectively. CONCLUSIONS: ENAD CAD-G is superior to initial biopsy for the detection and diagnosis of gastric lesions that require endoscopic resection. ENAD CAD-G can assist community endoscopists in identifying gastric lesions that require endoscopic resection.
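The 95% confidence intervals quoted above are consistent with a normal-approximation (Wald) interval for a proportion. In the sketch below, the 127/154 count is inferred from the 82.47% accuracy figure and is illustrative only:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# 127/154 ≈ 82.47%, matching the EGC accuracy quoted above
lo, hi = wald_ci(127, 154)
print(f"{lo:.4f}-{hi:.4f}")  # → 0.7646-0.8847
```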


Subject(s)
Artificial Intelligence , Stomach Neoplasms , Humans , Stomach Neoplasms/pathology , Stomach Neoplasms/diagnosis , Stomach Neoplasms/surgery , Retrospective Studies , Female , Male , Gastroscopy/methods , Middle Aged , Aged , Diagnosis, Computer-Assisted/methods , Biopsy/methods , Precancerous Conditions/pathology , Precancerous Conditions/diagnosis , Precancerous Conditions/surgery , Endoscopy, Digestive System/methods , Early Detection of Cancer/methods
10.
Medicine (Baltimore); 103(28): e38938, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38996141

ABSTRACT

The ENDOANGEL (EN) computer-assisted detection technique has emerged as a promising tool for enhancing the detection rate of colorectal adenomas during colonoscopy. However, its efficacy in identifying missed adenomas during subsequent colonoscopies remains unclear. We therefore aimed to compare the adenoma miss rate (AMR) between EN-assisted and standard colonoscopy. Data from patients who underwent a second colonoscopy (EN-assisted or standard) within 6 months between September 2022 and May 2023 were analyzed. The EN-assisted group exhibited a significantly higher AMR (24.3% vs 11.9%, P = .005) than the standard group, i.e., the EN-assisted second colonoscopy uncovered more adenomas missed at the first examination. After adjusting for potential confounders, multivariable analysis revealed that the EN-assisted group had a better ability to detect missed adenomas than the standard group (odds ratio = 2.89; 95% confidence interval = 1.14-7.80, P = .029). These findings suggest that EN-assisted colonoscopy represents a valuable advancement over standard colonoscopy for detecting missed adenomas. Its integration into routine clinical practice may offer significant benefits to patients requiring hospital resection of lesions following adenoma detection at their first colonoscopy.
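An unadjusted odds ratio with its log-normal 95% CI can be computed from a 2x2 table as below. Note that the study's OR of 2.89 came from a multivariable model, so the counts here are purely hypothetical:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI for a 2x2 table
    [[a, b], [c, d]] = [[exposed events, exposed non-events],
                        [unexposed events, unexposed non-events]],
    using the normal approximation on the log odds ratio."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical 2x2 counts (missed adenomas found vs not found)
or_, lo, hi = odds_ratio_ci(18, 56, 9, 67)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```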


Subject(s)
Adenoma , Colonoscopy , Colorectal Neoplasms , Humans , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Male , Female , Retrospective Studies , Adenoma/diagnosis , Adenoma/diagnostic imaging , Middle Aged , Aged , Diagnostic Errors/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Adult
11.
Sensors (Basel); 24(14), 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066141

ABSTRACT

This research proposes an innovative, intelligent hand-assisted diagnostic system aiming to achieve a comprehensive assessment of hand function through information fusion technology. Based on the single-vision algorithm we designed, the system can perceive and analyze the morphology and motion posture of the patient's hands in real time. This visual perception can provide an objective data foundation and capture the continuous changes in the patient's hand movement, thereby providing more detailed information for the assessment and providing a scientific basis for subsequent treatment plans. By introducing medical knowledge graph technology, the system integrates and analyzes medical knowledge information and combines it with a voice question-answering system, allowing patients to communicate and obtain information effectively even with limited hand function. Voice question-answering, as a subjective and convenient interaction method, greatly improves the interactivity and communication efficiency between patients and the system. In conclusion, this system holds immense potential as a highly efficient and accurate hand-assisted assessment tool, delivering enhanced diagnostic services and rehabilitation support for patients.


Subject(s)
Algorithms , Hand , Humans , Hand/physiology , Diagnosis, Computer-Assisted/methods
12.
Comput Methods Programs Biomed; 254: 108309, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39002431

ABSTRACT

BACKGROUND AND OBJECTIVE: This paper proposes a fully automated, unsupervised stochastic segmentation approach using a two-level joint Markov-Gibbs Random Field (MGRF) to detect the vascular system in retinal Optical Coherence Tomography Angiography (OCTA) images, a critical step in developing Computer-Aided Diagnosis (CAD) systems for detecting retinal diseases. METHODS: Using a new probabilistic model based on a Linear Combination of Discrete Gaussians (LCDG), the first level models the appearance of OCTA images and their spatially smoothed versions. The parameters of the LCDG model are estimated using a modified Expectation Maximization (EM) algorithm. The second level models the maps of OCTA images, including the vascular system and other retinal tissues, using an MGRF with parameters estimated analytically from the input images. The proposed segmentation approach employs modified self-organizing maps as a MAP-based optimizer maximizing the joint likelihood and handles the joint MGRF model in a new, unsupervised way. This approach deviates from traditional stochastic optimization approaches and leverages non-linear optimization to achieve more accurate segmentation results. RESULTS: The proposed segmentation framework was evaluated quantitatively on a dataset of 204 subjects. It achieved a Dice similarity coefficient of 0.92 ± 0.03, a 95th-percentile bidirectional Hausdorff distance of 0.69 ± 0.25, and an accuracy of 0.93 ± 0.03, confirming the superior performance of the proposed approach. CONCLUSIONS: The study highlights the superior performance of the proposed unsupervised, fully automated segmentation approach in detecting the vascular system in OCTA images. The approach not only deviates from traditional methods but also achieves more accurate segmentation results, demonstrating its potential to aid the development of CAD systems for detecting retinal diseases.
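The Dice similarity coefficient reported above can be sketched for two binary masks as follows (toy masks, not OCTA data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flattened binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0

# Toy masks standing in for predicted and ground-truth vessel pixels
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # → 0.75
```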


Subject(s)
Algorithms , Retinal Vessels , Tomography, Optical Coherence , Humans , Retinal Vessels/diagnostic imaging , Tomography, Optical Coherence/methods , Image Processing, Computer-Assisted/methods , Markov Chains , Retinal Diseases/diagnostic imaging , Models, Statistical , Diagnosis, Computer-Assisted/methods , Angiography/methods
13.
Scand J Gastroenterol; 59(8): 925-932, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38950889

ABSTRACT

OBJECTIVES: Recently, artificial intelligence (AI) has been applied to clinical diagnosis. Although AI has already been developed for gastrointestinal (GI) tract endoscopy, few studies have applied AI to endoscopic ultrasound (EUS) images. In this study, we used a computer-assisted diagnosis (CAD) system with deep learning analysis of EUS images (EUS-CAD) and assessed its ability to differentiate GI stromal tumors (GISTs) from other mesenchymal tumors, as well as its risk classification performance. MATERIALS AND METHODS: A total of 101 pathologically confirmed cases of subepithelial lesions (SELs) arising from the muscularis propria layer, including 69 GISTs, 17 leiomyomas, and 15 schwannomas, were examined. A total of 3,283 EUS images were used for training and five-fold cross-validation, and 827 images were independently tested for diagnosing GISTs. For the risk classification of the 69 GISTs, comprising very-low-, low-, intermediate-, and high-risk GISTs, 2,784 EUS images were used for training and three-fold cross-validation. RESULTS: For the differential diagnosis of GISTs among all SELs, the accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve were 80.4%, 82.9%, 75.3%, and 0.865, respectively, whereas those for intermediate- and high-risk GISTs were 71.8%, 70.2%, 72.0%, and 0.771, respectively. CONCLUSIONS: The EUS-CAD system showed a good diagnostic yield in differentiating GISTs from other mesenchymal tumors and successfully demonstrated the feasibility of GIST risk classification. This system can help determine whether treatment is necessary based on EUS imaging alone, without the need for additional invasive examinations.
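The area under the ROC curve reported above is equivalent to the Mann-Whitney U statistic, which the following sketch computes directly; the confidence scores are hypothetical:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive scores higher than a random
    negative (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical CAD confidence scores for GIST vs non-GIST images
gist = [0.9, 0.8, 0.7, 0.6]
non_gist = [0.65, 0.5, 0.4]
print(round(auc(gist, non_gist), 3))  # → 0.917 (11/12)
```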


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted , Endosonography , Gastrointestinal Neoplasms , Gastrointestinal Stromal Tumors , ROC Curve , Humans , Diagnosis, Differential , Gastrointestinal Stromal Tumors/diagnostic imaging , Gastrointestinal Stromal Tumors/pathology , Gastrointestinal Stromal Tumors/diagnosis , Gastrointestinal Neoplasms/diagnostic imaging , Gastrointestinal Neoplasms/diagnosis , Female , Middle Aged , Male , Aged , Adult , Risk Assessment , Sensitivity and Specificity , Aged, 80 and over
14.
Nutr Metab Cardiovasc Dis; 34(9): 2034-2045, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39004592

ABSTRACT

AIM: Machine learning may be a promising tool for obesity prediction. This study aims to review the literature on the performance of machine learning models in predicting obesity and to quantify the pooled results through a meta-analysis. DATA SYNTHESIS: A systematic review and meta-analysis were conducted, including studies that used machine learning to predict obesity. Searches were conducted in October 2023 across the LILACS, Web of Science, Scopus, Embase, and CINAHL databases. We included studies that utilized classification models and reported results as the Area Under the ROC Curve (AUC) (PROSPERO registration: CRD42022306940), without restrictions on year of publication. The risk of bias was assessed using an adapted version of the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist. The meta-analysis was conducted using MedCalc software. A total of 14 studies were included, the majority demonstrating satisfactory performance for obesity prediction, with AUCs exceeding 0.70. The random forest algorithm emerged as the top performer, achieving an AUC of 0.86 (95% CI: 0.76-0.96; I²: 99.8%), closely followed by logistic regression with an AUC of 0.85 (95% CI: 0.75-0.95; I²: 99.6%). The least effective model was gradient boosting, with an AUC of 0.77 (95% CI: 0.71-0.82; I²: 98.1%). CONCLUSION: Machine learning models demonstrated satisfactory predictive performance for obesity. However, future research should utilize more comparable data, larger databases, and a broader range of machine learning models.
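Inverse-variance pooling with Cochran's Q and the I² heterogeneity statistic (the quantity reported above) can be sketched as follows. The per-study AUCs and variances are hypothetical, and this fixed-effect sketch is a simplification: with I² values near 99%, a random-effects model would be the appropriate choice in practice:

```python
def pool_fixed(effects, variances):
    """Inverse-variance fixed-effect pooled estimate plus Cochran's Q
    and the I^2 heterogeneity statistic (as a percentage)."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Hypothetical per-study AUCs with their variances
pooled, q, i2 = pool_fixed([0.86, 0.82, 0.90], [0.0004, 0.0009, 0.0025])
print(round(pooled, 3), i2)  # pooled → 0.853
```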


Subject(s)
Machine Learning , Obesity , Predictive Value of Tests , Humans , Obesity/diagnosis , Obesity/epidemiology , Male , Female , Aged , Risk Factors , Risk Assessment , Middle Aged , Adult , Age Factors , Reproducibility of Results , Decision Support Techniques , Young Adult , Aged, 80 and over , Diagnosis, Computer-Assisted , Prognosis
15.
J Electrocardiol; 85: 96-108, 2024.
Article in English | MEDLINE | ID: mdl-38971625

ABSTRACT

BACKGROUND: Electrocardiograms (ECGs) are vital for diagnosing cardiac conditions, but obtaining clean signals in Left Ventricular Assist Device (LVAD) patients is hindered by electromagnetic interference (EMI). Traditional filters have limited efficacy, and there is a current need for an easy and effective method. METHODS: Raw ECG data were obtained from 5 patients with LVADs. LVAD types included the HeartMate II and III at multiple impeller speeds, as well as one case with a HeartMate III and a ProtekDuo. ECG spectral profiles were examined to ensure the presence of diverse types of EMI in the study. The ECGs were then processed with four denoising techniques: Moving Average Filter, Finite Impulse Response Filter, Fast Fourier Transform, and Discrete Wavelet Transform. RESULTS: The Discrete Wavelet Transform proved the most promising method. It offered a one-size-fits-all solution, enabling automatic processing with minimal user input while preserving crucial high-frequency components and reducing LVAD EMI artifacts. CONCLUSION: Our study demonstrates the practicality and efficiency of the Discrete Wavelet Transform in obtaining high-fidelity ECGs in LVAD patients. This method could enhance clinical diagnosis and monitoring.
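A minimal sketch in the spirit of the Discrete Wavelet Transform approach described above: a one-level Haar transform with soft thresholding of the detail coefficients. The samples are toy values, not real ECG data, and a practical implementation would use a multi-level transform (e.g., via PyWavelets):

```python
def haar_dwt(signal):
    """One-level Haar wavelet transform of an even-length signal:
    returns (approximation, detail) coefficient lists."""
    s2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of the one-level Haar transform."""
    s2 = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s2, (a - d) / s2]
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero so small, noise-like details vanish."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

# Toy samples with small high-frequency wiggles, standing in for EMI
x = [0.0, 0.1, 1.0, 0.9, 0.2, 0.1, -0.1, 0.0]
a, d = haar_dwt(x)
denoised = haar_idwt(a, soft_threshold(d, 0.1))
print([round(v, 3) for v in denoised])
# → [0.05, 0.05, 0.95, 0.95, 0.15, 0.15, -0.05, -0.05]
```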


Subject(s)
Algorithms , Electrocardiography , Heart-Assist Devices , Wavelet Analysis , Humans , Electrocardiography/methods , Artifacts , Reproducibility of Results , Sensitivity and Specificity , Male , Diagnosis, Computer-Assisted/methods , Female , Middle Aged , Signal-To-Noise Ratio
16.
Lancet Gastroenterol Hepatol ; 9(9): 802-810, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39033774

ABSTRACT

BACKGROUND: Computer-aided detection (CADe) systems for colonoscopy have been shown to increase small polyp detection during colonoscopy in the general population. People with Lynch syndrome represent an ideal target population for CADe-assisted colonoscopy because adenomas, the primary cancer precursor lesions, are characterised by their small size and higher likelihood of showing advanced histology. We aimed to evaluate the performance of CADe-assisted colonoscopy in detecting adenomas in individuals with Lynch syndrome. METHODS: TIMELY was an international, multicentre, parallel, randomised controlled trial done in 11 academic centres and six community centres in Belgium, Germany, Italy, and Spain. We enrolled individuals aged 18 years or older with pathogenic or likely pathogenic MLH1, MSH2, MSH6, or EPCAM variants. Participants were consecutively randomly assigned (1:1) to either CADe (GI Genius) assisted white light endoscopy (WLE) or WLE alone. A centre-stratified randomisation sequence was generated through a computer-generated system with a separate randomisation list for each centre according to block-permuted randomisation (block size 26 patients per centre). Allocation was automatically provided by the online AEG-REDCap database. Participants were masked to the random assignment but endoscopists were not. The primary outcome was the mean number of adenomas per colonoscopy, calculated by dividing the total number of adenomas detected by the total number of colonoscopies and assessed in the intention-to-treat population. This trial is registered with ClinicalTrials.gov, NCT04909671. FINDINGS: Between Sept 13, 2021, and April 6, 2023, 456 participants were screened for eligibility, 430 of whom were randomly assigned to receive CADe-assisted colonoscopy (n=214) or WLE (n=216). 256 (60%) participants were female and 174 (40%) were male. 
In the intention-to-treat analysis, the mean number of adenomas per colonoscopy was 0·64 (SD 1·57) in the CADe group and 0·64 (1·17) in the WLE group (adjusted rate ratio 1·03 [95% CI 0·72-1·47]; p=0·87). No adverse events were reported during the trial. INTERPRETATION: In this multicentre international trial, CADe did not improve the detection of adenomas in individuals with Lynch syndrome. High-quality procedures and thorough inspection and exposure of the colonic mucosa remain the cornerstone in surveillance of Lynch syndrome. FUNDING: Spanish Gastroenterology Association, Spanish Society of Digestive Endoscopy, European Society of Gastrointestinal Endoscopy, Societat Catalana de Digestologia, Instituto Carlos III, Beca de la Marato de TV3 2020. Co-funded by the European Union.
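The primary outcome definition (total adenomas divided by total colonoscopies) can be computed directly. The trial's published rate ratio was adjusted and model-based; the unadjusted Poisson-approximation interval and the per-patient counts below are illustrative assumptions only:

```python
import math

def adenomas_per_colonoscopy(adenoma_counts):
    """Primary outcome: total adenomas detected / total colonoscopies."""
    return sum(adenoma_counts) / len(adenoma_counts)

def rate_ratio(counts_a, counts_b):
    """Unadjusted rate ratio with a normal-approximation 95% CI on the log scale."""
    rr = adenomas_per_colonoscopy(counts_a) / adenomas_per_colonoscopy(counts_b)
    # crude Poisson approximation: var(log RR) ~ 1/total_a + 1/total_b
    se = math.sqrt(1 / sum(counts_a) + 1 / sum(counts_b))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical per-patient adenoma counts in two equal-sized arms
cade = [0, 1, 0, 2, 1, 0, 0, 1]
wle = [1, 0, 0, 1, 2, 0, 1, 0]
rr, ci = rate_ratio(cade, wle)
```

A rate ratio whose confidence interval spans 1, as in the trial (and in this toy example), is consistent with no detectable benefit of CADe.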


Subject(s)
Adenoma , Artificial Intelligence , Colonoscopy , Colorectal Neoplasms, Hereditary Nonpolyposis , Humans , Colorectal Neoplasms, Hereditary Nonpolyposis/diagnosis , Male , Female , Colonoscopy/methods , Middle Aged , Adenoma/diagnosis , Adenoma/pathology , Adult , Early Detection of Cancer/methods , Aged , Diagnosis, Computer-Assisted/methods
17.
Comput Biol Med ; 179: 108874, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39013343

ABSTRACT

Smart healthcare has advanced the medical industry through the integration of data-driven approaches. Artificial intelligence and machine learning have driven remarkable progress, but such applications often lack transparency and interpretability. Explainable AI (EXAI) offers a promising way to overcome these limitations. This paper applies EXAI to disease diagnosis in smart healthcare, combining transfer learning, a vision transformer, and explainable AI in an ensemble approach for predicting a disease and its severity. The approach is evaluated on an Alzheimer's disease dataset (ADNI). The InceptionV3, VGG19, ResNet50, and DenseNet121 transfer learning models were selected for ensembling with the vision transformer. The analysis compares two models on the ADNI dataset: a transfer learning (TL) model and an ensemble transfer learning (Ensemble TL) model combined with the vision transformer (ViT). The TL model achieves 58% accuracy, 52% precision, 42% recall, and a 44% F1-score, whereas the Ensemble TL model with ViT performs significantly better: 96% accuracy, 94% precision, 90% recall, and a 92% F1-score. This demonstrates the efficacy of the ensemble model over individual transfer learning models.
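The abstract does not spell out the fusion rule, so the sketch below assumes a common one: averaging each backbone's softmax probabilities and taking the argmax. The logit values and the three-class setup are hypothetical, purely for illustration:

```python
import math

def softmax(logits):
    """Convert a logit vector to a probability distribution."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(per_model_logits):
    """Average the softmax probabilities of several backbones, then argmax.

    per_model_logits: one logit vector per model (e.g. the four CNN
    backbones plus the vision transformer) for a single input image.
    """
    probs = [softmax(l) for l in per_model_logits]
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical logits from five models over 3 classes
logits = [
    [2.0, 0.5, 0.1],   # e.g. InceptionV3
    [1.8, 0.9, 0.2],   # VGG19
    [2.2, 0.4, 0.3],   # ResNet50
    [1.5, 1.1, 0.0],   # DenseNet121
    [2.5, 0.2, 0.1],   # vision transformer
]
pred, avg_probs = ensemble_predict(logits)
```

Averaging probabilities rather than hard votes lets confident backbones outweigh uncertain ones, one plausible reason ensembles of diverse architectures outperform any single model.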


Subject(s)
Alzheimer Disease , Humans , Alzheimer Disease/diagnosis , Machine Learning , Artificial Intelligence , Diagnosis, Computer-Assisted/methods , Algorithms
18.
J Speech Lang Hear Res ; 67(8): 2729-2742, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39052433

ABSTRACT

PURPOSE: This article describes DigiSpan, a new computer-controlled auditory test of forward and reverse digit span, designed to be administered by clinicians, and presents normative and test-retest reliability data for adults. METHOD: DigiSpan mimics conventional live-voice tests in that it commences with trials that ascend in length until a stopping criterion is met, giving rise to a conventional scaled score. It then administers five additional adaptive trials, the length of each depending on the correctness of the response to the previous trial. Each of these two segments of the measurement gives rise to a scaled score. The ascending and adaptive scores are averaged to give an overall score and subtracted to produce an internal measure of consistency, and hence reliability. Young adults with a mean age of 25 years (N = 163) were tested, of whom 65 were retested on a separate day. RESULTS: The scaled scores from the conventional ascending trials were highly consistent with existing normative data based on live-voice tests. Combining the conventional scaled score with a scaled score based on the adaptive trials led to a 44% reduction in error variance for forward memory span and a 20% reduction for reverse memory span. The average of these (32%) is similar to, but (insignificantly) less than, the 42% reduction in error variance predicted from adding the five adaptive trials. CONCLUSIONS: Replacing live-voice production of digits by a clinician with recorded, computer-controlled production has not affected the difficulty of the test. Adding five trials around the sequence length that a test participant can just remember has decreased measurement error. In addition, the availability of separate scaled scores for the ascending and adaptive phases enables the reliability of the combined score to be checked, for both forward and reverse measurements. The combination of standardized delivery, increased accuracy, internal reliability check, and fast automated scoring makes the test highly suitable for clinical use.
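The adaptive phase can be pictured as a simple staircase that hovers around the participant's span. The one-up/one-down rule, the floor of 2 digits, and the stub `respond` function below are assumptions for illustration, not the published DigiSpan procedure:

```python
def next_trial_length(current_length, last_correct):
    """One-up/one-down rule: lengthen after a correct response,
    shorten after an error (with a floor of 2 digits)."""
    return current_length + 1 if last_correct else max(2, current_length - 1)

def run_adaptive_phase(start_length, respond, n_trials=5):
    """Administer n_trials around the participant's span.

    respond(length) -> True if the participant repeats a sequence of
    that length correctly (here a stand-in for the real test interface).
    """
    length, lengths = start_length, []
    for _ in range(n_trials):
        lengths.append(length)
        length = next_trial_length(length, respond(length))
    return lengths

# Stub participant who can reliably recall up to 6 digits
lengths = run_adaptive_phase(6, lambda n: n <= 6)
```

Because every adaptive trial is presented near the just-rememberable length, each one is maximally informative, which is how the extra trials buy the reduction in error variance reported above.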


Subject(s)
Memory, Short-Term , Humans , Memory, Short-Term/physiology , Adult , Reproducibility of Results , Young Adult , Female , Male , Diagnosis, Computer-Assisted/methods , Adolescent
19.
Sci Rep ; 14(1): 17118, 2024 07 25.
Article in English | MEDLINE | ID: mdl-39054346

ABSTRACT

In recent years, artificial intelligence has made remarkable strides, improving various aspects of our daily lives. One notable application is intelligent chatbots that use deep learning models. These systems have shown tremendous promise in the medical sector, enhancing healthcare quality, treatment efficiency, and cost-effectiveness. However, their role in aiding disease diagnosis, particularly of chronic conditions, remains underexplored. To address this gap, this study employs large language models from the GPT series, in conjunction with deep learning techniques, to design and develop a diagnostic system targeted at chronic diseases. Specifically, we performed transfer learning and fine-tuning on the GPT-2 model, enabling it to assist in accurately diagnosing 24 common chronic diseases. To provide a user-friendly interface and a seamless interactive experience, we further developed a dialog-based interface, named Chat Ella. The system makes precise predictions for chronic diseases based on the symptoms described by users. Experimental results indicate that our model achieved an accuracy of 97.50% on the validation set, with an area under the curve (AUC) of 99.91%. Moreover, we conducted user satisfaction tests, which revealed that 68.7% of participants approved of Chat Ella, and 45.3% found that the system made daily medical consultations more convenient. It can rapidly and accurately assess a patient's condition from the symptoms described and provide timely feedback, making it of significant value in the design of medical auxiliary products for household use.
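The reported accuracy and AUC are standard metrics that need no ML library: AUC is the Mann-Whitney probability that a random positive case outranks a random negative one. The labels and scores in this sketch are hypothetical, not drawn from the Chat Ella validation set:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, preds):
    """Fraction of predictions that match the true labels."""
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

# Hypothetical model scores for "has condition X" on a small validation set
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.2, 0.1, 0.6, 0.3]
auc = auc_score(labels, scores)
acc = accuracy(labels, [int(s >= 0.5) for s in scores])
```

In a 24-disease setting these metrics would typically be computed one-vs-rest per disease and then averaged.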


Subject(s)
Deep Learning , Humans , Chronic Disease , Artificial Intelligence , Diagnosis, Computer-Assisted/methods
20.
Nanoscale ; 16(30): 14213-14246, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39021117

ABSTRACT

Cancer is a major health concern due to its high incidence and mortality rates. Advances in cancer research, particularly in artificial intelligence (AI) and deep learning, have shown significant progress. The swift evolution of AI in healthcare, especially in tools like computer-aided diagnosis, has the potential to revolutionize early cancer detection. This technology offers improved speed, accuracy, and sensitivity, bringing a transformative impact on cancer diagnosis, treatment, and management. This paper provides a concise overview of the application of artificial intelligence in the realms of medicine and nanomedicine, with a specific emphasis on the significance and challenges associated with cancer diagnosis. It explores the pivotal role of AI in cancer diagnosis, leveraging structured, unstructured, and multimodal fusion data. Additionally, the article delves into the applications of AI in nanomedicine sensors and nano-oncology drugs. The fundamentals of deep learning and convolutional neural networks are clarified, underscoring their relevance to AI-driven cancer diagnosis. A comparative analysis is presented, highlighting the accuracy and efficiency of traditional methods juxtaposed with AI-based approaches. The discussion not only assesses the current state of AI in cancer diagnosis but also delves into the challenges faced by AI in this context. Furthermore, the article envisions the future development direction and potential application of artificial intelligence in cancer diagnosis, offering a hopeful prospect for enhanced cancer detection and improved patient prognosis.


Subject(s)
Artificial Intelligence , Deep Learning , Nanomedicine , Neoplasms , Humans , Neoplasms/diagnosis , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer