Results 1 - 20 of 20
1.
BMC Med Imaging ; 24(1): 122, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38789963

ABSTRACT

To address the low real-time performance and accuracy of traditional sports injury monitoring, this article investigates a real-time injury monitoring system built around an SVM model. Video detection captures human movements, followed by human joint detection. Polynomial fitting analysis extracts joint motion patterns, and the mean of the training data is calculated as a reference point. The raw data are then normalized to adjust position and direction, and dimensionality reduction is performed through singular value decomposition to improve processing efficiency and model training speed. A support vector machine classifier classifies and identifies the processed data. The experiments monitor sports injuries and evaluate the system's monitoring accuracy. Compared with mainstream models such as Random Forest and Naive Bayes, the SVM performs well in accuracy, sensitivity, and specificity, reaching 94.2%, 92.5%, and 96.0%, respectively.
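
The processing chain described in this abstract (reference-point normalization, SVD-based dimensionality reduction, SVM classification) can be sketched roughly as below. The data shapes and feature layout are assumptions for illustration, not the article's implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic joint-trajectory features: one row per movement sample,
# columns are flattened joint coordinates over time (shapes are assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 60))          # 400 movement samples, 60 raw features
y = rng.integers(0, 2, size=400)        # 0 = normal movement, 1 = injury-risk pattern

# Normalize each sample against the mean of the training data (reference point).
reference = X.mean(axis=0)
X_centered = X - reference

# Dimensionality reduction via singular value decomposition:
# keep the top-k right singular vectors and project the data onto them.
k = 10
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_reduced = X_centered @ Vt[:k].T       # (400, 10)

# Train and evaluate an SVM classifier on the reduced features.
X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```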


Subject(s)
Athletic Injuries , Deep Learning , Support Vector Machine , Humans , Athletic Injuries/diagnostic imaging , Video Recording , Sensitivity and Specificity , Algorithms
2.
Sensors (Basel) ; 24(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38894319

ABSTRACT

Region proposal-based detectors, such as Region-Convolutional Neural Networks (R-CNNs), Fast R-CNNs, Faster R-CNNs, and Region-Based Fully Convolutional Networks (R-FCNs), employ a two-stage process involving region proposal generation followed by classification. This approach is effective but computationally intensive and typically slower than proposal-free methods. Therefore, region proposal-free detectors are becoming popular as a way to balance accuracy and speed. This paper proposes a proposal-free, fully convolutional network (PF-FCN) that outperforms other state-of-the-art proposal-free methods. Unlike traditional region proposal-free methods, PF-FCN generates a "box map" based on regression training techniques. This box map comprises a set of vectors, each designed to produce bounding boxes corresponding to the positions of objects in the input image. Channel- and spatially contextualized sub-networks are further designed to learn this box map. In comparison to renowned proposal-free detectors such as CornerNet, CenterNet, and You Only Look Once (YOLO), PF-FCN uses a fully convolutional, single-pass method. By reducing the need for fully connected layers and filtering center points, the method considerably reduces the number of trained parameters and improves scalability across varying input sizes. Evaluations on benchmark datasets demonstrate the effectiveness of PF-FCN: the proposed model achieved an mAP of 89.6% on PASCAL VOC 2012 and 71.7% on MS COCO, higher than the baseline Fully Convolutional One-Stage Detector (FCOS) and other classical proposal-free detectors. The results underscore the value of proposal-free detectors for both practical applications and future research.
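
A minimal sketch of the "box map" idea: a fully convolutional head that regresses a bounding-box vector at every spatial location, with no proposals or fully connected layers. The layer sizes and the 4-value box encoding are assumptions; PF-FCN's actual sub-networks are more elaborate.

```python
import torch
import torch.nn as nn

class BoxMapHead(nn.Module):
    """Fully convolutional head that outputs a 'box map': at every spatial
    position it predicts a 4-vector (e.g. distances to box edges) plus an
    objectness score, so no region proposals or fully connected layers are needed."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.box_reg = nn.Conv2d(256, 4, kernel_size=1)   # box vector per location
        self.objectness = nn.Conv2d(256, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        x = self.trunk(feats)
        return self.box_reg(x), self.objectness(x).sigmoid()

# Single pass over a dummy backbone feature map (batch=1, 256 channels, 64x64 grid).
head = BoxMapHead(256)
boxes, scores = head(torch.randn(1, 256, 64, 64))
print(boxes.shape, scores.shape)  # torch.Size([1, 4, 64, 64]) torch.Size([1, 1, 64, 64])
```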

3.
J Environ Manage ; 350: 119613, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38007931

ABSTRACT

Accurate forecasting of water quality variables in river systems is crucial for administrators to identify potential water quality degradation issues and take countermeasures promptly. However, purely data-driven forecasting models are often insufficient to deal with the highly varying periodicity of water quality in today's more complex environment. This study presents a new holistic framework for time-series forecasting of water quality parameters by combining advanced deep learning algorithms (i.e., Long Short-Term Memory (LSTM) and Informer) with causal inference, time-frequency analysis, and uncertainty quantification. The framework was demonstrated for total nitrogen (TN) forecasting in one of the largest artificial lakes in Asia (the Danjiangkou Reservoir, China), using six years of monitoring data from January 2017 to June 2022. The results showed that pre-processing techniques based on causal inference and wavelet decomposition can significantly improve the performance of deep learning algorithms. Compared with the individual LSTM and Informer models, the wavelet-coupled approaches substantially reduced the forecasting errors of TN concentrations, with reductions of up to 24.39%, 32.68%, and 41.26% in the average, standard deviation, and maximum error values, respectively. In addition, a post-processing algorithm based on the Copula function and Bayesian theory was designed to quantify the uncertainty of predictions. With this algorithm, each deterministic prediction of the model corresponds to a range of possible outputs. The 95% forecast confidence interval covered almost all the observations, demonstrating the reliability and robustness of the predictions. This study provides rich scientific references for applying advanced data-driven methods in time-series forecasting tasks and a practical methodological framework for water resources management and similar projects.
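
A minimal sketch of the wavelet-coupled pre-processing idea: decompose the TN series into sub-series with a discrete wavelet transform, forecast each sub-series from its own lagged values, and sum the component forecasts. The wavelet family, decomposition level, and the simple linear forecaster below are illustrative assumptions; the study itself uses LSTM/Informer models.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
t = np.arange(512)
tn = 2 + 0.5 * np.sin(2 * np.pi * t / 52) + 0.1 * rng.normal(size=t.size)  # synthetic TN series

# Multi-level discrete wavelet decomposition, then reconstruct one sub-series
# per coefficient band so each can be forecast separately.
wavelet, level = "db4", 3
coeffs = pywt.wavedec(tn, wavelet, level=level)
sub_series = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    sub_series.append(pywt.waverec(keep, wavelet)[: tn.size])

def lagged(x, n_lags=8):
    """Build a lagged design matrix: predict x[t] from x[t-n_lags:t]."""
    X = np.stack([x[i : i + n_lags] for i in range(x.size - n_lags)])
    return X, x[n_lags:]

# Forecast each sub-series with a simple regressor and sum the components.
forecast = np.zeros(tn.size - 8)
for s in sub_series:
    X, y = lagged(s)
    forecast += LinearRegression().fit(X[:-50], y[:-50]).predict(X)

rmse = np.sqrt(np.mean((forecast[-50:] - tn[8:][-50:]) ** 2))
print("hold-out RMSE:", round(rmse, 4))
```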


Subject(s)
Algorithms , Water Quality , Uncertainty , Bayes Theorem , Reproducibility of Results , Forecasting
4.
PeerJ Comput Sci ; 10: e1880, 2024.
Article in English | MEDLINE | ID: mdl-38435594

ABSTRACT

This article presents a hybrid recommender framework for smart medical systems by introducing two methods to improve service level evaluations and doctor recommendations for patients. The first method uses big data techniques and deep learning algorithms to develop a registration review system in medical institutions. This system outperforms conventional evaluation methods, achieving higher accuracy. The second method implements the term frequency and inverse document frequency (TF-IDF) algorithm to construct a model based on the patient's symptom vector space, incorporating score weighting, modified cosine similarity, and K-means clustering. Then, alternating least squares (ALS) matrix decomposition and a user collaborative filtering algorithm are applied to calculate patients' predicted scores for doctors and recommend top-performing doctors. Experimental results show significant improvements in precision and recall compared to conventional methods, making the proposed approach a practical solution for department triage and doctor recommendation on medical appointment platforms.
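
A compact sketch of the symptom-vector step: TF-IDF vectors over symptom text, K-means clusters of historical patients, and cosine similarity between a new patient and the cluster centroids. The symptom strings and cluster count are made-up placeholders; the ALS scoring stage is not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Toy symptom descriptions (placeholders, not real clinical data).
corpus = [
    "chest pain shortness of breath palpitations",
    "persistent cough fever chest tightness",
    "knee pain swelling after running",
    "joint stiffness knee instability",
    "headache blurred vision dizziness",
    "migraine nausea light sensitivity",
]

# TF-IDF symptom vector space.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)

# Group historical patients into symptom clusters (a proxy for departments).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# For a new patient, find the closest cluster by cosine similarity to its centroid.
query = vectorizer.transform(["sharp chest pain when breathing"])
sims = cosine_similarity(query, kmeans.cluster_centers_)
print("cluster similarities:", sims.round(3), "-> assigned cluster", sims.argmax())
```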

5.
Adv Sci (Weinh) ; 11(24): e2309781, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38610112

ABSTRACT

Remote sensing technology, which conventionally employs spectrometers to capture hyperspectral images and allows classification and unmixing based on the reflectance spectrum, has been extensively applied in diverse fields, including environmental monitoring, land resource management, and agriculture. However, miniaturization of remote sensing systems remains a challenge due to the complicated and dispersive optical components of spectrometers. Here, m-phase GaTe0.5Se0.5, which exhibits wide-spectral photoresponses (250-1064 nm), is stacked with WSe2 to construct a two-dimensional van der Waals heterojunction (2D-vdWH), enabling the design of a gate-tunable wide-spectral photodetector. By exploiting the multiple photoresponses under varying gate voltages, high-accuracy recognition can be achieved with the aid of deep learning algorithms, without the original hyperspectral reflectance data. The proof-of-concept device, featuring dozens of tunable gate voltages, achieves an average classification accuracy of 87.00% on 6 prevalent hyperspectral datasets, which is competitive with the accuracy obtained from 250-1000 nm hyperspectral data (88.72%) and far superior to the accuracy of a non-tunable photoresponse (71.17%). The artificially designed gate-tunable wide-spectral GaTe0.5Se0.5/WSe2 2D-vdWH photodetector presents a promising pathway for the development of miniaturized and cost-effective remote sensing classification technology.

6.
Radiother Oncol ; 197: 110344, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38806113

ABSTRACT

BACKGROUND: Accurate segmentation of lung tumors on chest computed tomography (CT) scans is crucial for effective diagnosis and treatment planning. Deep Learning (DL) has emerged as a promising tool in medical imaging, particularly for lung cancer segmentation. However, its efficacy across different clinical settings and tumor stages remains variable. METHODS: We conducted a comprehensive search of PubMed, Embase, and Web of Science until November 7, 2023. We assessed the quality of the included studies using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tools. The analysis included data from various clinical settings and stages of lung cancer. Key performance metrics, such as the Dice similarity coefficient, were pooled, and factors affecting algorithm performance, such as clinical setting, algorithm type, and image processing techniques, were examined. RESULTS: Our analysis of 37 studies revealed a pooled Dice score of 79 % (95 % CI: 76 %-83 %), indicating moderate accuracy. Radiotherapy studies had a slightly lower score of 78 % (95 % CI: 74 %-82 %). A temporal increase was noted, with recent studies (post-2022) showing improvement from 75 % (95 % CI: 70 %-81 %) to 82 % (95 % CI: 81 %-84 %). Key factors affecting performance included algorithm type, resolution adjustment, and image cropping. QUADAS-2 assessments identified ambiguous risks in 78 % of studies due to data interval omissions, and concerns about generalizability in 8 % due to nodule size exclusions. CLAIM criteria highlighted areas for improvement, with an average score of 27.24 out of 42. CONCLUSION: This meta-analysis demonstrates the promising but varied efficacy of DL algorithms in lung cancer segmentation, with notably higher efficacy in early stages. The results highlight the critical need for continued development of tailored DL models to improve segmentation accuracy across diverse clinical settings, especially in advanced cancer stages with greater challenges. As recent studies demonstrate, ongoing advancements in algorithmic approaches are crucial for future applications.
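
The Dice similarity coefficient pooled above is twice the overlap divided by the total size of the two masks; a small sketch for binary segmentation masks (toy data, not from the meta-analysis):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: a predicted tumour mask shifted by one voxel relative to ground truth.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
pred = np.zeros_like(truth)
pred[21:41, 20:40] = True
print(f"Dice = {dice_score(pred, truth):.3f}")  # ≈ 0.95 for this overlap
```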


Subject(s)
Deep Learning , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Algorithms
7.
Radiol Artif Intell ; 6(3): e230375, 2024 May.
Article in English | MEDLINE | ID: mdl-38597784

ABSTRACT

Purpose To explore the stand-alone breast cancer detection performance, at different risk score thresholds, of a commercially available artificial intelligence (AI) system. Materials and Methods This retrospective study included information from 661 695 digital mammographic examinations performed among 242 629 female individuals screened as a part of BreastScreen Norway, 2004-2018. The study sample included 3807 screen-detected cancers and 1110 interval breast cancers. A continuous examination-level risk score by the AI system was used to measure performance as the area under the receiver operating characteristic curve (AUC) with 95% CIs and cancer detection at different AI risk score thresholds. Results The AUC of the AI system was 0.93 (95% CI: 0.92, 0.93) for screen-detected cancers and interval breast cancers combined and 0.97 (95% CI: 0.97, 0.97) for screen-detected cancers. In a setting where 10% of the examinations with the highest AI risk scores were defined as positive and 90% with the lowest scores as negative, 92.0% (3502 of 3807) of the screen-detected cancers and 44.6% (495 of 1110) of the interval breast cancers were identified with AI. In this scenario, 68.5% (10 987 of 16 040) of false-positive screening results (negative recall assessment) were considered negative by AI. When 50% was used as the cutoff, 99.3% (3781 of 3807) of the screen-detected cancers and 85.2% (946 of 1110) of the interval breast cancers were identified as positive by AI, whereas 17.0% (2725 of 16 040) of the false-positive results were considered negative. Conclusion The AI system showed high performance in detecting breast cancers within 2 years of screening mammography and a potential for use to triage low-risk mammograms to reduce radiologist workload. Keywords: Mammography, Breast, Screening, Convolutional Neural Network (CNN), Deep Learning Algorithms Supplemental material is available for this article. © RSNA, 2024 See also commentary by Bahl and Do in this issue.
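
A minimal sketch of the threshold analysis described above: rank examinations by a continuous AI risk score, flag the top fraction as positive, and count which cancers fall above the cutoff. Scores and labels below are simulated placeholders; the commercial AI system's scores are not reproduced.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 100_000
cancer = rng.random(n) < 0.007                      # illustrative cancer prevalence
# Simulated continuous risk score: cancers tend to score higher.
score = rng.normal(loc=np.where(cancer, 2.0, 0.0), scale=1.0)

print("AUC:", round(roc_auc_score(cancer, score), 3))

for top_fraction in (0.10, 0.50):
    cutoff = np.quantile(score, 1 - top_fraction)    # e.g. top 10% flagged as positive
    flagged = score >= cutoff
    detected = (flagged & cancer).sum() / cancer.sum()
    workload_negative = (~flagged).mean()
    print(f"top {top_fraction:.0%}: {detected:.1%} of cancers flagged, "
          f"{workload_negative:.0%} of exams classed negative by AI")
```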


Subject(s)
Artificial Intelligence , Breast Neoplasms , Early Detection of Cancer , Mammography , Humans , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/epidemiology , Breast Neoplasms/diagnosis , Female , Mammography/methods , Norway/epidemiology , Retrospective Studies , Middle Aged , Early Detection of Cancer/methods , Aged , Adult , Mass Screening/methods , Radiographic Image Interpretation, Computer-Assisted/methods
8.
Asian Pac J Cancer Prev ; 25(3): 1077-1085, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38546090

ABSTRACT

BACKGROUND & OBJECTIVE: Carcinoma of the breast is one of the major causes of death in women, especially in developing countries. Timely prediction, detection, diagnosis, and efficient therapies have become critical to reducing death rates. Increased use of artificial intelligence, machine learning, and deep learning techniques creates more accurate and trustworthy models for predicting and detecting breast cancer. This study examines the effectiveness of several machine learning and modern deep learning models for the prediction and diagnosis of breast cancer. METHODS: This research compares traditional machine learning classification methods with techniques that use deep learning models. Established classification models such as k-Nearest Neighbors (kNN), Gradient Boosting, Support Vector Machine (SVM), Neural Network, CN2 rule inducer, Naive Bayes, Stochastic Gradient Descent (SGD), and Tree, as well as deep learning models such as Neural Decision Forest and Multilayer Perceptron (MLP), were used. The investigation, carried out using the Orange and Python tools, evaluates their diagnostic effectiveness in breast cancer detection. The evaluation uses UCI's publicly accessible Wisconsin Diagnostic Data Set, enabling transparency and accessibility in the study approach. RESULTS: The mean radius ranges from 6.981 to 28.110, while the mean texture runs from 9.71 to 39.28 across malignant and benign cases. The Gradient Boosting and CN2 rule inducer classifiers outperform SVM in accuracy and sensitivity, whereas SVM has the lowest accuracy and sensitivity at 88%. The CN2 rule inducer classifier achieves the greatest ROC curve score for the benign and malignant breast cancer datasets, with an AUC of 0.98. MLP clearly distinguishes the positive and negative classes, with a higher AUC-ROC of 0.9959, an accuracy of 96.49%, a precision of 96.57%, a recall of 96.49%, and an F1-score of 96.50%. CONCLUSION: Among the most commonly used classifier models, the CN2 rule inducer and Gradient Boosting performed better than the other models. However, MLP from deep learning produced the best overall performance.
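
A brief sketch of this kind of comparison, using the same publicly available Wisconsin Diagnostic dataset via scikit-learn. The models and hyperparameters below are defaults chosen for illustration (CN2 and Neural Decision Forest, which are Orange-specific, are omitted), so the exact numbers will differ from the study's.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)  # Wisconsin Diagnostic Breast Cancer data

models = {
    "kNN": KNeighborsClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}

# 5-fold cross-validated accuracy for each classifier, features standardized first.
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name:18s} accuracy = {scores.mean():.3f} ± {scores.std():.3f}")
```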


Subject(s)
Breast Neoplasms , Deep Learning , Humans , Female , Artificial Intelligence , Breast Neoplasms/diagnosis , Bayes Theorem , Machine Learning , Support Vector Machine , Algorithms
9.
Radiol Artif Intell ; 6(1): e220231, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38197800

ABSTRACT

Purpose To present results from a literature survey on practices in deep learning segmentation algorithm evaluation and perform a study on expert quality perception of brain tumor segmentation. Materials and Methods A total of 180 articles reporting on brain tumor segmentation algorithms were surveyed for the reported quality evaluation. Additionally, ratings of segmentation quality on a four-point scale were collected from medical professionals for 60 brain tumor segmentation cases. Results Of the surveyed articles, Dice score, sensitivity, and Hausdorff distance were the most popular metrics to report segmentation performance. Notably, only 2.8% of the articles included clinical experts' evaluation of segmentation quality. The experimental results revealed a low interrater agreement (Krippendorff α, 0.34) in experts' segmentation quality perception. Furthermore, the correlations between the ratings and commonly used quantitative quality metrics were low (Kendall tau between Dice score and mean rating, 0.23; Kendall tau between Hausdorff distance and mean rating, 0.51), with large variability among the experts. Conclusion The results demonstrate that quality ratings are prone to variability due to the ambiguity of tumor boundaries and individual perceptual differences, and existing metrics do not capture the clinical perception of segmentation quality. Keywords: Brain Tumor Segmentation, Deep Learning Algorithms, Glioblastoma, Cancer, Machine Learning Clinical trial registration nos. NCT00756106 and NCT00662506 Supplemental material is available for this article. © RSNA, 2023.
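
A small sketch of the rating-versus-metric correlation analysis: compute Kendall's tau between per-case quantitative scores (e.g. Dice) and mean expert ratings. The arrays below are fabricated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import kendalltau

# Placeholder values for 10 segmentation cases (not the study's data).
dice_scores = np.array([0.91, 0.85, 0.78, 0.95, 0.66, 0.88, 0.72, 0.93, 0.81, 0.59])
# Mean expert rating per case on a 1-4 quality scale.
mean_ratings = np.array([3.5, 3.0, 2.5, 3.8, 2.0, 3.2, 2.8, 3.6, 2.6, 1.8])

tau, p_value = kendalltau(dice_scores, mean_ratings)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")
```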


Subject(s)
Brain Neoplasms , Deep Learning , Glioblastoma , Humans , Algorithms , Benchmarking , Brain Neoplasms/diagnostic imaging , Glioblastoma/diagnostic imaging
10.
Environ Sci Pollut Res Int ; 31(36): 49116-49140, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39046638

ABSTRACT

Hydrological simulation in karstic areas is a hard task due to the intrinsic intricacy of these environments and the common lack of data related to their geometry. The hydrological dynamics of karstic sites in Mediterranean semiarid regions are difficult to model mathematically owing to the existence of short wet episodes and long dry periods. In this paper, the suitability of the open-source SWAT method was checked to estimate the behaviour of a karstic catchment in a Mediterranean semiarid domain (southeast Spain), whose wet and dry periods were evaluated using box-whisker plots and a self-developed wavelet test. A novel expression of the Nash-Sutcliffe index for arid areas (ANSE) was used for the calibration and validation of SWAT. Both steps were completed with 20- and 10-year stream discharge records (1996-2015 to calibrate the model, as this period has minimal gaps, and 1985-1995 to validate it). Further, SWAT assessments were made with records of groundwater discharge and by relating SWAT outputs to the SIMPA method, Spain's national hydrological tool. These methods, along with recurrent neural network algorithms, were used to examine current and predicted water resources available to supply urban demands, also considering groundwater abstractions from aquifers and the related exploitation index. According to the results, SWAT achieved a "very good" statistical performance (with ANSE of 0.96 and 0.78 in calibration and validation, respectively). Spatial distributions of the main hydrological processes, such as surface runoff, evapotranspiration, and aquifer recharge, were studied with SWAT and SIMPA, obtaining similar results over the period with records (1980-2016). During this period, the decreasing trend of rainfall, characterised by short wet periods and long dry periods, has generated a progressive reduction in groundwater recharge. According to the algorithms' predictions (until 2050), this declining trend will continue, reducing the groundwater available to meet urban demands and increasing the exploitation index of the aquifers. These results offer valuable information to authorities for assessing water availability and providing for water demands in karstic areas.
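
The Nash-Sutcliffe efficiency referenced above compares model errors with the variance of the observations; a minimal sketch is shown below. Only the standard NSE is shown; the paper's ANSE modification for arid regimes is not reproduced here.

```python
import numpy as np

def nse(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Standard Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1 is a perfect fit; 0 means the model is no better than the observed mean."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Toy monthly discharge series (m3/s), illustrative only.
obs = np.array([0.2, 0.1, 0.0, 0.0, 1.8, 3.5, 0.4, 0.1, 0.0, 0.0, 0.9, 2.1])
sim = np.array([0.3, 0.1, 0.0, 0.1, 1.5, 3.1, 0.6, 0.1, 0.0, 0.0, 1.1, 1.8])
print(f"NSE = {nse(obs, sim):.2f}")
```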


Subject(s)
Neural Networks, Computer , Hydrology , Water Supply , Spain , Models, Theoretical , Groundwater , Environmental Monitoring/methods
11.
Front Bioeng Biotechnol ; 12: 1392269, 2024.
Article in English | MEDLINE | ID: mdl-39100623

ABSTRACT

Improvements in digital microscopy are critical for the development of a malaria diagnosis method that is accurate at the cellular level and exhibits satisfactory clinical performance. Digital microscopy can be enhanced by improving deep learning algorithms and achieving consistent staining results. In this study, a novel miLab™ device incorporating the solid hydrogel staining method was proposed for consistent blood film preparation, eliminating the use of complex equipment and liquid reagent maintenance. The miLab™ ensures consistent, high-quality, and reproducible blood films across various hematocrits by leveraging deformable staining patches. Embedded-deep-learning-enabled miLab™ was utilized to detect and classify malarial parasites from autofocused images of stained blood cells using an internal optical system. The results of this method were consistent with manual microscopy images. This method not only minimizes human error but also facilitates remote assistance and review by experts through digital image transmission. This method can set a new paradigm for on-site malaria diagnosis. The miLab™ algorithm for malaria detection achieved a total accuracy of 98.86% for infected red blood cell (RBC) classification. Clinical validation performed in Malawi demonstrated an overall percent agreement of 92.21%. Based on these results, miLab™ can become a reliable and efficient tool for decentralized malaria diagnosis.

12.
Radiol Artif Intell ; 6(4): e230383, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38717291

ABSTRACT

Purpose To investigate the issues of generalizability and replication of deep learning models by assessing performance of a screening mammography deep learning system developed at New York University (NYU) on a local Australian dataset. Materials and Methods In this retrospective study, all individuals with biopsy or surgical pathology-proven lesions and age-matched controls were identified from a South Australian public mammography screening program (January 2010 to December 2016). The primary outcome was deep learning system performance-measured with area under the receiver operating characteristic curve (AUC)-in classifying invasive breast cancer or ductal carcinoma in situ (n = 425) versus no malignancy (n = 490) or benign lesions (n = 44). The NYU system, including models without (NYU1) and with (NYU2) heatmaps, was tested in its original form, after training from scratch (without transfer learning), and after retraining with transfer learning. Results The local test set comprised 959 individuals (mean age, 62.5 years ± 8.5 [SD]; all female). The original AUCs for the NYU1 and NYU2 models were 0.83 (95% CI: 0.82, 0.84) and 0.89 (95% CI: 0.88, 0.89), respectively. When NYU1 and NYU2 were applied in their original form to the local test set, the AUCs were 0.76 (95% CI: 0.73, 0.79) and 0.84 (95% CI: 0.82, 0.87), respectively. After local training without transfer learning, the AUCs were 0.66 (95% CI: 0.62, 0.69) and 0.86 (95% CI: 0.84, 0.88). After retraining with transfer learning, the AUCs were 0.82 (95% CI: 0.80, 0.85) and 0.86 (95% CI: 0.84, 0.88). Conclusion A deep learning system developed using a U.S. dataset showed reduced performance when applied "out of the box" to an Australian dataset. Local retraining with transfer learning using available model weights improved model performance. Keywords: Screening Mammography, Convolutional Neural Network (CNN), Deep Learning Algorithms, Breast Cancer Supplemental material is available for this article. © RSNA, 2024 See also commentary by Cadrin-Chênevert in this issue.
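
A generic sketch of the "retraining with transfer learning" step: start from existing model weights, replace the classification head, and fine-tune on the local dataset. A torchvision ResNet-18 stands in here for the NYU models, whose actual architectures and weights are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from pretrained weights (stand-in for the original screening model's weights).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Optionally freeze early layers so only higher-level features adapt to the local data.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# Replace the classification head for the local task (malignant vs. no malignancy/benign).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch of image crops.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("fine-tuning loss:", float(loss))
```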


Subject(s)
Breast Neoplasms , Deep Learning , Mammography , Humans , Mammography/methods , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Middle Aged , Retrospective Studies , Early Detection of Cancer/methods , Aged , Radiographic Image Interpretation, Computer-Assisted/methods
13.
Water Res ; 258: 121758, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38761592

ABSTRACT

Fast quantification is the primary challenge in monitoring microplastic fiber (MPF) pollution in water. The process of quantifying the number of MPFs in water typically involves filtration, imaging on a filter membrane, and manual counting. However, this routine workflow has limitations in terms of speed and accuracy. Here, we present an alternative analysis strategy based on our high-resolution lensless shadow microscope (LSM) for rapid imaging of MPFs on a chip and modified deep learning algorithms for automatic counting. Our LSM system was equipped with wide field-of-view submicron-pixel imaging sensors (>1 cm2; ∼500 nm/pixel) and could simultaneously capture the projection image of >3-µm microplastic spheres within 90 s. The algorithms enabled accurate classification and detection of the number and length of >10-µm linear and branched MPFs derived from melamine cleaning sponges in each image (∼0.4 gigapixels) within 60 s. Importantly, neither MPF morphology (dispersed or aggregated) nor environmental matrix had a notable impact on the automatic recognition of the MPFs by the algorithms. This new strategy had a detection limit of 10 particles/mL and significantly reduced the time of MPF imaging and counting from several hours with membrane-based methods to just a few minutes per sample. The strategy could be employed to monitor water pollution caused by microplastics if an efficient sample separation and a comprehensive sample image database were available.


Subject(s)
Environmental Monitoring , Microplastics , Microscopy , Water Pollutants, Chemical , Environmental Monitoring/methods , Water Pollutants, Chemical/analysis , Microscopy/methods , Algorithms , Water/chemistry
14.
Talanta ; 276: 126217, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38759361

ABSTRACT

In this manuscript, a 3D-printed analytical device has been successfully developed to classify illicit drugs using smartphone-based colorimetry. Representative compounds of different families, including cocaine, 3,4-methylenedioxy-methamphetamine (MDMA), amphetamine and cathinone derivatives, pyrrolidine cathinones, and 3,4-methylenedioxy cathinones, have been analyzed and classified after appropriate reaction with Marquis, gallic acid, sulfuric acid, Simon and Scott reagents. A picture of the colored products was acquired using a smartphone, and the corrected RGB values were used as input data in the chemometric treatment. An artificial neural network (ANN) with two active layers of nodes (6 nodes in layer 1 and 2 nodes in layer 2), a sigmoidal transfer function, and a minimum strict threshold of 0.50 identified illicit drug samples with a sensitivity higher than 83.4 % and a specificity of 100 %, with limits of detection in the microgram range. The 3D-printed device can operate connected to a portable rechargeable lithium-ion battery, is inexpensive, and requires minimal training. The analytical device was able to discriminate the analyzed psychoactive substances from cutting and mixing agents, making it a useful screening tool for law enforcement agents.
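
A rough sketch of the colorimetric classification step: corrected RGB values from the reagent products feed a neural network with two hidden layers of 6 and 2 nodes and a sigmoidal activation, mirroring the architecture described above. The RGB values and drug classes below are invented placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Invented corrected-RGB readings for two reagent colour outcomes (one row per sample).
cocaine_like = rng.normal([60, 130, 200], 10, size=(40, 3))   # bluish product (placeholder)
mdma_like = rng.normal([40, 40, 45], 10, size=(40, 3))        # dark product (placeholder)
X = np.vstack([cocaine_like, mdma_like])
y = np.array(["cocaine"] * 40 + ["MDMA"] * 40)

# Two hidden layers (6 and 2 nodes) with a sigmoidal (logistic) activation,
# as in the ANN architecture described in the abstract.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(6, 2), activation="logistic",
                  max_iter=5000, random_state=0),
)
clf.fit(X, y)

# Predicted class probabilities for a new RGB reading; a minimum threshold of
# 0.50 on the winning class could be applied before reporting an identification.
proba = clf.predict_proba([[58, 125, 195]])
print(dict(zip(clf.classes_, proba.round(3)[0])))
```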


Subject(s)
Illicit Drugs , Neural Networks, Computer , Printing, Three-Dimensional , Smartphone , Illicit Drugs/analysis , Colorimetry/instrumentation , Colorimetry/methods , Substance Abuse Detection/methods , Substance Abuse Detection/instrumentation , Humans
15.
ESC Heart Fail ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700133

ABSTRACT

AIMS: Electronic health records (EHR) linked to Digital Imaging and Communications in Medicine (DICOM) files, biological specimens, and deep learning (DL) algorithms could potentially improve patient care through automated case detection and surveillance. We hypothesized that by applying keyword searches to routinely stored EHR, in conjunction with AI-powered automated reading of DICOM echocardiography images and analysis of biomarkers from routinely stored plasma samples, we could identify heart failure (HF) patients. METHODS AND RESULTS: We used EHR data between 1993 and 2021 from Tayside and Fife (~20% of the Scottish population). We implemented a keyword search strategy, complemented by filtering based on International Classification of Diseases (ICD) codes and prescription data, on the EHR data set. We then applied DL for the automated interpretation of echocardiographic DICOM images. These methods were integrated with the analysis of routinely stored plasma samples to identify and categorize patients into HF with reduced ejection fraction (HFrEF), HF with preserved ejection fraction (HFpEF), and controls without HF. The final diagnosis was verified through a manual review of medical records, measured natriuretic peptides in stored blood samples, and comparison of clinical outcomes among groups. We selected the patient cohort through an algorithmic workflow. This process started with 60 850 EHR records and resulted in a final cohort of 578 patients, divided into 186 controls, 236 with HFpEF, and 156 with HFrEF, after excluding individuals with mismatched data or significant valvular heart disease. The analysis of baseline characteristics revealed that, compared with controls, patients with HFrEF and HFpEF were generally older, had higher BMI, and showed a greater prevalence of co-morbidities such as diabetes, COPD, and CKD. Echocardiographic analysis, enhanced by DL, provided high coverage and detailed insights into cardiac function, showing significant differences in parameters such as left ventricular diameter, ejection fraction, and myocardial strain among the groups. Clinical outcomes highlighted a higher risk of hospitalization and mortality for HF patients compared with controls, with particularly elevated risk ratios for both the HFrEF and HFpEF groups. The concordance between the algorithmic selection of patients and manual validation demonstrated high accuracy, supporting the effectiveness of our approach in identifying and classifying HF subtypes, which could significantly impact future HF diagnosis and management strategies. CONCLUSIONS: Our study highlights the feasibility of combining keyword searches in EHR, DL automated echocardiographic interpretation, and biobank resources to identify HF subtypes.
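
A simplified sketch of the case-finding step: a keyword search over free-text EHR entries combined with ICD-code and prescription filters. Column names, keywords, and codes are illustrative assumptions, not the study's actual search strategy.

```python
import pandas as pd

# Toy EHR extract (column names and codes are assumptions for illustration).
ehr = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "clinical_note": [
        "Admitted with decompensated heart failure, reduced ejection fraction.",
        "Routine review, no cardiac symptoms reported.",
        "Breathless on exertion, echo shows preserved ejection fraction HF.",
        "Knee injury follow-up.",
    ],
    "icd10_codes": [["I50.1"], ["Z00.0"], ["I50.3"], ["S83.2"]],
    "prescriptions": [["furosemide", "ramipril"], [], ["spironolactone"], ["ibuprofen"]],
})

# Keyword search on free text, complemented by ICD-10 I50.* codes and HF-related drugs.
keywords = r"heart failure|ejection fraction|cardiomyopathy"
keyword_hit = ehr["clinical_note"].str.contains(keywords, case=False, regex=True)
icd_hit = ehr["icd10_codes"].apply(lambda codes: any(c.startswith("I50") for c in codes))
drug_hit = ehr["prescriptions"].apply(lambda rx: bool({"furosemide", "spironolactone"} & set(rx)))

candidates = ehr[keyword_hit & (icd_hit | drug_hit)]
print(candidates[["patient_id"]])  # patients flagged for echo + biomarker review
```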

16.
Brief Funct Genomics ; 23(4): 452-463, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-38267081

ABSTRACT

Protein methylation is a form of post-translational modification of proteins that is crucial for various cellular processes, including transcription activity and DNA repair. Correctly predicting protein methylation sites is fundamental for research and drug discovery. Some experimental techniques, such as methyl-specific antibodies, chromatin immunoprecipitation, and mass spectrometry, exist for identifying protein methylation sites, but these techniques are time-consuming and costly. The ability to predict methylation sites using in silico techniques may help researchers identify potential candidate sites for future examination and make it easier to carry out site-specific investigations and downstream characterizations. In this research, we proposed a novel deep learning-based predictor, named DeepPRMS, to identify protein methylation sites in primary sequences. DeepPRMS utilizes gated recurrent unit (GRU) and convolutional neural network (CNN) algorithms to extract sequential and spatial information from the primary sequences: the GRU extracts sequential information, while the CNN extracts spatial information. We combined the latent representations of the GRU and CNN models to allow better interaction between them. On the independent test data set, DeepPRMS obtained an accuracy of 85.32%, a specificity of 84.94%, a Matthews correlation coefficient of 0.71, and a sensitivity of 85.80%. The results indicate that DeepPRMS can predict protein methylation sites with high accuracy and outperforms state-of-the-art models. DeepPRMS is expected to effectively guide future research experiments for identifying potential methylated protein sites. The web server is available at http://deepprms.nitsri.ac.in/.
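
A skeletal sketch of the CNN-plus-GRU idea: a convolutional branch for spatial patterns and a GRU branch for sequential patterns over an encoded peptide window, with their latent representations concatenated before classification. Window size, embedding size, and layer widths are assumptions, not the published DeepPRMS architecture.

```python
import torch
import torch.nn as nn

class CnnGruPredictor(nn.Module):
    """Combine a Conv1d branch (spatial features) with a GRU branch (sequential
    features) over an encoded peptide window centred on a candidate residue."""
    def __init__(self, vocab_size: int = 21, embed_dim: int = 32, window: int = 33):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.cnn = nn.Sequential(
            nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                  # -> (batch, 64, 1)
        )
        self.gru = nn.GRU(embed_dim, 64, batch_first=True)
        self.classifier = nn.Linear(64 + 64, 1)       # concatenated CNN + GRU latents

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)                               # (batch, window, embed_dim)
        cnn_out = self.cnn(x.transpose(1, 2)).squeeze(-1)    # (batch, 64)
        _, h_n = self.gru(x)                                 # h_n: (1, batch, 64)
        gru_out = h_n.squeeze(0)                             # (batch, 64)
        logits = self.classifier(torch.cat([cnn_out, gru_out], dim=1))
        return torch.sigmoid(logits)                         # probability of a methylation site

# Dummy batch: 8 peptide windows of 33 residues encoded as integer tokens.
tokens = torch.randint(0, 21, (8, 33))
model = CnnGruPredictor()
print(model(tokens).shape)  # torch.Size([8, 1])
```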


Subject(s)
Arginine , Deep Learning , Methylation , Arginine/metabolism , Protein Processing, Post-Translational , Proteins/metabolism , Proteins/chemistry , Neural Networks, Computer , Algorithms , Computational Biology/methods , Humans
17.
Sci Rep ; 14(1): 16879, 2024 07 23.
Article in English | MEDLINE | ID: mdl-39043755

ABSTRACT

This research aims to predict gender from skull computed tomography (CT) images, given the central role of gender identification in forensic identification. The study includes cranial CT images from 218 male and 203 female subjects, a total cohort of 421 individuals aged 25 to 65 years. Using deep learning, a prominent subset of machine learning algorithms, the study applies convolutional neural network (CNN) models to extract deep features from the skull CT images. Applying deep learning algorithms exclusively to the image datasets yields an accuracy of 96.4%. Gender estimation achieves a precision of 96.1% for male individuals and 96.8% for female individuals. Precision varies with the number of selected features: for 100, 300, and 500 selected features, and for all 1000 features without feature selection, the precision rates are 95.0%, 95.5%, 96.2%, and 96.4%, respectively. Notably, gender estimation from radiographic images reduces the discrepancy in measurements between experts while also yielding faster estimates. Based on the empirical findings of this investigation, the efficacy of the CNN model, the configuration of the classifier, and the judicious selection of features are pivotal determinants of the performance of the proposed methodology.
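
A generic sketch of the feature-number experiment: extract a fixed-length feature vector per image (random placeholders here stand in for CNN features), keep the top-k features by a univariate score, and compare classifier accuracy for k = 100, 300, 500 and for all 1000 features. This illustrates the procedure only, not the study's pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_subjects, n_features = 421, 1000
X = rng.normal(size=(n_subjects, n_features))   # placeholder CNN features per skull CT
y = rng.integers(0, 2, size=n_subjects)         # 0 = female, 1 = male (simulated labels)
X[y == 1, :50] += 0.8                           # make the first 50 features informative

for k in (100, 300, 500, "all"):
    pipe = make_pipeline(SelectKBest(f_classif, k=k), SVC())
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"k = {k}: cross-validated accuracy = {acc:.3f}")
```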


Subject(s)
Forensic Anthropology , Sex Characteristics , Skull , Tomography, X-Ray Computed , Skull/diagnostic imaging , Tomography, X-Ray Computed/standards , Forensic Anthropology/methods , Deep Learning , Humans , Male , Female , Reproducibility of Results , Adult , Middle Aged , Aged , Neural Networks, Computer
18.
Radiol Artif Intell ; 6(1): e230095, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38166331

ABSTRACT

Purpose To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data. Materials and Methods Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63 327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on ResNet-18 architecture to differentiate nine MRI sequence types, including T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high b value), and gradient-recalled echo T2*-weighted and dynamic susceptibility contrast-related images. The two-dimensional-midsection images from each sequence were allocated to training or validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. The prediction accuracy was quantified for each sequence type, and subgroup comparison of model performance was performed using χ2 tests. Results On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model among all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% for susceptibility-weighted images (95% CI: 81.8, 86.6) to 99.8% for T2-weighted images (95% CI: 99.7, 99.9). The ResNet-18 model achieved significantly better accuracy compared with ResNet-50 despite its simpler architecture (97.9% vs 97.1%; P ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional-midsection images for any sequence type (P > .05). Conclusion The developed CNN (www.github.com/neuroAI-HD/HD-SEQ-ID) reliably differentiates nine types of MRI sequences within multicenter and large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows. Keywords: MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms Supplemental material is available for this article. © RSNA, 2023.


Subject(s)
Deep Learning , Humans , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neuroimaging , Retrospective Studies , Multicenter Studies as Topic
19.
Prz Gastroenterol ; 18(4): 353-367, 2023.
Article in English | MEDLINE | ID: mdl-38572457

ABSTRACT

Colorectal cancer is one of the most prevalent types of cancer, with histopathologic examination of biopsied tissue samples remaining the gold standard for diagnosis. In recent years, artificial intelligence (AI) has steadily found its way into the fields of medicine and pathology, especially with the introduction of whole slide imaging (WSI). The main outcomes of interest were the composite balanced accuracy (ACC) and the F1 score. The average reported ACC from the collected studies was 95.8 ±3.8%. Reported F1 scores reached as high as 0.975, with an average of 89.7 ±9.8%, indicating that existing deep learning algorithms can achieve in silico distinction between malignant and benign tissue. Overall, the available state-of-the-art algorithms are non-inferior to pathologists for image analysis and classification tasks. However, due to the uniqueness of their training and the lack of widely accepted external validation datasets, their generalization potential is still limited.
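
The two outcome metrics referenced above can be computed directly from predicted and true labels; a short sketch with placeholder labels (not data from the review):

```python
from sklearn.metrics import balanced_accuracy_score, f1_score

# Placeholder slide-level labels: 1 = malignant, 0 = benign.
y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 0, 1]

# Balanced accuracy averages sensitivity and specificity, so it is robust to class imbalance;
# the F1 score is the harmonic mean of precision and recall for the malignant class.
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("F1 score:", f1_score(y_true, y_pred))
```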
