Results 1 - 20 of 65
1.
Int J Cardiol ; 412: 132339, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38968972

ABSTRACT

BACKGROUND: The study aimed to determine the parameters most strongly associated with CVD and to employ a novel data ensemble refinement procedure to uncover the optimal pattern of these parameters for high prediction accuracy. METHODS AND RESULTS: Data were collected from 369 patients in total: 281 patients with CVD or at risk of developing it, and 88 otherwise healthy individuals. Within the group of 281 CVD or at-risk patients, 53 were diagnosed with coronary artery disease (CAD), 16 with end-stage renal disease, 47 newly diagnosed with type 2 diabetes mellitus, and 92 with chronic inflammatory disorders (21 rheumatoid arthritis, 41 psoriasis, 30 angiitis). The data were analyzed using an artificial intelligence-based algorithm with the primary objective of identifying the optimal pattern of parameters that define CVD. The study highlights the effectiveness of a six-parameter combination in discerning the likelihood of cardiovascular disease using the DERGA and Extra Trees algorithms. These parameters, ranked in order of importance, are platelet-derived microvesicles (PMVs), hypertension, age, smoking, dyslipidemia, and body mass index (BMI). Endothelial and erythrocyte MVs, along with diabetes, were the least important predictors. The highest prediction accuracy achieved is 98.64%. Notably, using PMVs alone yields 91.32% accuracy, while the model employing all ten parameters yields 97.83%. CONCLUSIONS: Our research showcases the efficacy of DERGA, an innovative data ensemble refinement greedy algorithm. DERGA accelerates the assessment of an individual's risk of developing CVD, allowing for early diagnosis, significantly reduces the number of required lab tests, and optimizes resource utilization. It also assists in identifying the parameters most critical for assessing CVD susceptibility, thereby enhancing our understanding of the underlying mechanisms.
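The abstract does not spell out DERGA's internals, but the core idea of greedily refining a parameter subset can be sketched in plain Python. Everything below (the `evaluate` callable and the toy weights) is a hypothetical stand-in, not the authors' implementation:

```python
def greedy_feature_search(features, evaluate):
    """Grow a feature subset greedily, keeping an addition only while it
    improves the score (illustrative sketch of ensemble refinement)."""
    selected, best_score = [], float("-inf")
    remaining = list(features)
    while remaining:
        # Score every candidate extension and keep the single best one.
        score, feat = max((evaluate(selected + [f]), f) for f in remaining)
        if score <= best_score:
            break  # no candidate improves the current subset
        best_score = score
        selected.append(feat)
        remaining.remove(feat)
    return selected, best_score

# Toy scorer: pretends PMV carries most of the signal, BMI only noise.
WEIGHTS = {"PMV": 0.6, "hypertension": 0.2, "age": 0.1, "BMI": -0.02}
def toy_accuracy(subset):
    return sum(WEIGHTS[f] for f in subset)

subset, score = greedy_feature_search(list(WEIGHTS), toy_accuracy)
# subset -> ["PMV", "hypertension", "age"]; BMI is rejected
```

With a real evaluator (e.g., cross-validated classifier accuracy), the same loop would rank parameters by how much each addition improves the score.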

2.
Sci Rep ; 14(1): 16438, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013941

ABSTRACT

In arid regions such as Oman, enhancing the quality of water discharged from reservoirs poses considerable challenges. This predicament is notably pronounced at Wadi Dayqah Dam (WDD), where meeting the demand for ample, high-quality water downstream is a formidable task. Accurate estimation and mapping of water quality indicators (WQIs) is therefore paramount for sustainable planning of inland waters in the study area. Since traditional procedures for collecting water quality data are time-consuming, labor-intensive, and costly, water resources management has shifted from gathering field measurements to utilizing remote sensing (RS) data. WDD has been threatened by various driving forces in recent years, such as contamination from different sources, sedimentation, nutrient runoff, salinity intrusion, temperature fluctuations, and microbial contamination. This study therefore aimed to retrieve and map WQIs, namely dissolved oxygen (DO) and chlorophyll-a (Chl-a), of the WDD reservoir from Sentinel-2 (S2) satellite data using a new weighted-averaging procedure, namely Bayesian Maximum Entropy-based Fusion (BMEF). To do so, the outputs of four machine learning (ML) algorithms, namely multiple linear regression (MLR), random forest regression (RFR), support vector regression (SVR), and XGBoost, were combined with this approach while accounting for uncertainty. Water samples from 254 systematic plots were obtained for temperature (T), electrical conductivity (EC), chlorophyll-a (Chl-a), pH, oxidation-reduction potential (ORP), and dissolved oxygen (DO) in WDD. The findings indicated that, throughout both the training and testing phases, the BMEF model outperformed the individual machine learning models. With Chl-a as the WQI and R-squared as the evaluation index, BMEF outperformed MLR, SVR, RFR, and XGBoost by 6%, 9%, 2%, and 7%, respectively.
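The abstract gives only the name of the BMEF fusion step, so as a rough intuition (not the paper's formulation), an uncertainty-aware ensemble can be sketched as an inverse-variance weighted average of the four regressors' outputs; the numbers below are invented:

```python
def fuse_predictions(preds, variances):
    """Inverse-variance weighted average: models reporting lower
    uncertainty (variance) contribute more to the fused estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

# Hypothetical Chl-a estimates from MLR, RFR, SVR, and XGBoost,
# each paired with an (invented) predictive variance.
fused = fuse_predictions([2.0, 2.4, 2.2, 2.6], [0.4, 0.1, 0.2, 0.8])
```

The fused estimate is pulled toward the RFR output here because its variance is smallest, which is the qualitative behaviour an uncertainty-weighted fusion is meant to produce.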
Furthermore, the results were significantly enhanced when the best combination of various spectral bands was considered to estimate specific WQIs instead of using all S2 bands as input variables of the ML algorithms.

3.
BMC Med Inform Decis Mak ; 24(1): 195, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014417

ABSTRACT

BACKGROUND: Despite the significance and prevalence of acute respiratory distress syndrome (ARDS), its detection remains highly variable and inconsistent. In this work, we aim to develop an algorithm (ARDSFlag) to automate the diagnosis of ARDS based on the Berlin definition. We also aim to develop a visualization tool that helps clinicians efficiently assess ARDS criteria. METHODS: ARDSFlag applies machine learning (ML) and natural language processing (NLP) techniques to evaluate the Berlin criteria by incorporating structured and unstructured data from an electronic health record (EHR) system. The study cohort includes 19,534 ICU admissions in the Medical Information Mart for Intensive Care III (MIMIC-III) database. The output is the ARDS diagnosis, onset time, and severity. RESULTS: ARDSFlag includes separate text classifiers, trained using large training sets, to find evidence of bilateral infiltrates in radiology reports (accuracy 91.9%±0.5%) and heart failure/fluid overload in radiology reports (accuracy 86.1%±0.5%) and echocardiogram notes (accuracy 98.4%±0.3%). On a test set of 300 cases, blindly and independently labeled for ARDS by two groups of clinicians, ARDSFlag achieves an overall accuracy of 89.0% (specificity = 91.7%, recall = 80.3%, precision = 75.0%) in detecting ARDS cases. CONCLUSION: To the best of our knowledge, this is the first study to focus on developing a method to automate the detection of ARDS; prior studies have applied related methods to different research questions. As expected, ARDSFlag achieves significantly higher performance on all accuracy measures than those methods.


Subject(s)
Algorithms, Electronic Health Records, Machine Learning, Natural Language Processing, Respiratory Distress Syndrome, Humans, Respiratory Distress Syndrome/diagnosis, Intensive Care Units, Middle Aged, Male, Female
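As a toy illustration of the unstructured-data side of such a pipeline (the paper trains statistical text classifiers; the patterns below are hypothetical keyword rules, shown only to make the task concrete):

```python
import re

# Hypothetical phrasings a radiologist might use; a trained classifier
# would learn these cues rather than have them hand-coded.
BILATERAL_PATTERNS = [
    r"bilateral\s+(?:\w+\s+)?(?:infiltrates|opacities)",
    r"diffuse\s+airspace\s+disease",
]

def flags_bilateral_infiltrates(report: str) -> bool:
    """Return True if a radiology report mentions bilateral infiltrates
    (rule-based stand-in for the trained text classifier)."""
    text = report.lower()
    return any(re.search(p, text) for p in BILATERAL_PATTERNS)

hit = flags_bilateral_infiltrates("Diffuse bilateral patchy opacities at both bases.")
miss = flags_bilateral_infiltrates("Lungs are clear. No focal consolidation.")
```

A rule baseline like this is brittle to negation and synonymy, which is exactly why the study trains classifiers on large labeled sets instead.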
4.
Heart Rhythm ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38823670

ABSTRACT

BACKGROUND: It is unclear whether advances in management of acute coronary syndrome (ACS) and the introduction of novel oral anticoagulants have changed outcomes in patients with ACS and concomitant atrial fibrillation (AF). OBJECTIVE: This study aimed to examine the incidence of AF in patients admitted for ACS and to evaluate its association with adverse outcomes, given the recent advances in management of both diseases. METHODS: Natural language processing search algorithms identified AF in patients admitted with ACS across 13 Northwell Health hospitals from 2015 to 2021. Hierarchical generalized linear mixed modeling was used to assess the association between AF and in-hospital mortality, bleeding, and stroke outcomes; marginal Cox regression modeling was used to assess the association between AF and postdischarge mortality. RESULTS: Of 12,315 patients admitted for ACS, 3018 (24.5%) had AF, with 1609 (53.3%) newly diagnosed. AF patients more commonly received anticoagulation with an oral anticoagulant (80.4% vs 12.3%) or heparin (61.9% vs 56.9%), had longer intensive care unit stays (72 vs 49 hours), and underwent fewer percutaneous coronary interventions (31.9% vs 53.1%). In-hospital bleeding, stroke, and mortality were higher in the AF group (15.3% vs 5.0%, 7.4% vs 2.4%, and 6.9% vs 2.1%, respectively). AF was an independent risk factor for all in-hospital outcomes (odds ratios of 2.5, 2.7, and 2.0 for bleeding, stroke, and mortality, respectively) as well as for postdischarge mortality (hazard ratio, 1.3; 95% CI, 1.2-1.5). CONCLUSION: AF is present in 25% of ACS patients and increases the risk of in-hospital and postdischarge adverse outcomes. Additional data are required to direct optimal management.
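For context on how such effect sizes are computed, a crude (unadjusted) odds ratio from a 2x2 table looks like this. The counts are invented for illustration, and the study's reported ORs come from hierarchical mixed models, not from this formula:

```python
def odds_ratio(exp_event, exp_none, unexp_event, unexp_none):
    """Crude odds ratio for a 2x2 contingency table:
    (events : non-events among exposed) / (events : non-events among unexposed)."""
    return (exp_event / exp_none) / (unexp_event / unexp_none)

# Invented counts: in-hospital mortality among AF vs non-AF admissions.
crude_or = odds_ratio(exp_event=208, exp_none=2810, unexp_event=195, unexp_none=9102)
```

A crude OR computed this way is typically larger than the adjusted estimate, since AF patients are also older and sicker; the mixed model is what separates AF's contribution from those confounders.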

5.
Article in English | MEDLINE | ID: mdl-38923476

ABSTRACT

In recent times, there has been a notable rise in the utilization of Internet of Medical Things (IoMT) frameworks, particularly those based on edge computing, to enhance remote monitoring in healthcare applications. Most existing models in this field have developed temperature screening methods using RCNN, a face temperature encoder (FTE), and a combination of data from wearable sensors for predicting respiratory rate (RR) and monitoring blood pressure. These methods aim to facilitate remote screening and monitoring of Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) and COVID-19. However, these models demand considerable computing resources and are not suitable for lightweight environments. We propose a multimodal screening framework that leverages deep learning-inspired data fusion models to enhance screening results. A Variation Encoder (VEN) design is proposed to measure skin temperature using regions of interest (RoIs) identified by YOLO. Subsequently, the multi-data fusion model integrates electronic record features with data from wearable human sensors. To optimize computational efficiency, a data reduction mechanism is added to eliminate unnecessary features. Furthermore, we employ a conditional probability method to estimate distinct feature weights for each cluster, deepening our understanding of variations in thermal and sensory data to assess the prediction of abnormal COVID-19 instances. Simulation results using our lab dataset demonstrate a precision of 95.2%, surpassing state-of-the-art models thanks to the design of the multimodal data-based feature fusion model, the weight prediction factor, and the feature selection model.

6.
Sci Rep ; 14(1): 13723, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877014

ABSTRACT

This paper proposes a novel multi-hybrid algorithm named DHPN, using the best-known properties of the dwarf mongoose algorithm (DMA), honey badger algorithm (HBA), prairie dog optimizer (PDO), cuckoo search (CS), grey wolf optimizer (GWO), and naked mole rat algorithm (NMRA). It follows an iterative division for extensive exploration and incorporates major parametric enhancements for improved exploitation. To counter local optima problems, a stagnation phase using CS and GWO is added. Six new inertia weight operators have been analyzed to adapt algorithmic parameters, and the best combination of these parameters has been found. An analysis of the suitability of DHPN for population variations and higher dimensions has been performed. For performance evaluation, the CEC 2005 and CEC 2019 benchmark data sets have been used. A comparison has been performed with adaptive differential evolution with optional external archive (JADE), self-adaptive DE (SaDE), success-history-based DE (SHADE), LSHADE-SPACMA, extended GWO (GWO-E), jDE100, and others. The DHPN algorithm is also used to solve the image fusion problem for four fusion quality metrics, namely the edge-based similarity index (Q^(AB/F)), the sum of correlation difference (SCD), the structural similarity index measure (SSIM), and the artifact measure (N^(AB/F)). The average values Q^(AB/F) = 0.765508, SCD = 1.63185, SSIM = 0.726317, and N^(AB/F) = 0.006617 show the best combination of results obtained by DHPN with respect to existing algorithms such as DCH, CBF, GTF, JSR, and others. Experimental and statistical Wilcoxon and Friedman tests show that the proposed DHPN algorithm performs significantly better than the other algorithms under test.
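The abstract does not define the six inertia weight operators, but the simplest member of that family, a linearly decreasing weight, can be written down directly (the parameter values here are conventional defaults, not the paper's):

```python
def linear_inertia(w_max, w_min, t, t_max):
    """Decay the inertia weight linearly from w_max at t=0 to w_min at
    t=t_max, shifting the search from exploration toward exploitation."""
    return w_max - (w_max - w_min) * t / t_max

# Conventional defaults (0.9 -> 0.4) over a 10-iteration toy run.
schedule = [linear_inertia(0.9, 0.4, t, t_max=10) for t in range(11)]
```

Nonlinear variants (exponential, chaotic, random) follow the same pattern with a different decay curve; comparing such curves is what the paper's six-operator analysis does.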

7.
Crit Care Med ; 52(7): 1021-1031, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38563609

ABSTRACT

OBJECTIVES: Nonconventional ventilators (NCVs), defined here as transport ventilators and certain noninvasive positive pressure devices, were used extensively as crisis-time ventilators for intubated patients with COVID-19. We assessed whether there was an association between the use of NCV and higher mortality, independent of other factors. DESIGN: This is a multicenter retrospective observational study. SETTING: The sample was recruited from a single healthcare system in New York. The recruitment period spanned from March 1, 2020, to April 30, 2020. PATIENTS: The sample includes patients who were intubated for COVID-19 acute respiratory distress syndrome (ARDS). INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The primary outcome was 28-day in-hospital mortality. Multivariable logistic regression was used to derive the odds of mortality among patients managed exclusively with NCV throughout their ventilation period compared with the remainder of the sample while adjusting for other factors. A secondary analysis was also done, in which the mortality of a subset of the sample exclusively ventilated with NCV was compared with that of a propensity score-matched subset of the control group. Exclusive use of NCV was associated with a higher 28-day in-hospital mortality while adjusting for confounders in the regression analysis (odds ratio, 1.41; 95% CI, 1.07-1.86). In the propensity score matching analysis, the mortality of patients exclusively ventilated with NCV was 68.9%, and that of the control was 60.7% (p = 0.02). CONCLUSIONS: Use of NCV was associated with increased mortality among patients with COVID-19 ARDS. More lives may be saved during future ventilator shortages if more full-feature ICU ventilators, rather than NCVs, are reserved in national and local stockpiles.


Subject(s)
COVID-19, Hospital Mortality, Respiratory Distress Syndrome, Ventilators, Mechanical, Humans, COVID-19/therapy, COVID-19/mortality, Male, Female, Retrospective Studies, Middle Aged, Aged, Respiratory Distress Syndrome/therapy, Respiratory Distress Syndrome/mortality, Ventilators, Mechanical/supply & distribution, Ventilators, Mechanical/statistics & numerical data, New York/epidemiology, Respiration, Artificial/statistics & numerical data
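As a sketch of the secondary analysis's matching idea (the abstract does not give the matching specification; the caliper and scores below are invented), greedy 1:1 nearest-neighbour matching on propensity scores looks like:

```python
def greedy_match(treated, control, caliper=0.05):
    """Pair each treated unit with the closest unused control unit on the
    propensity score, skipping pairs farther apart than the caliper."""
    available = dict(enumerate(control))
    pairs = []
    for t_idx, t_score in enumerate(treated):
        if not available:
            break
        c_idx = min(available, key=lambda i: abs(available[i] - t_score))
        if abs(available[c_idx] - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            del available[c_idx]  # match without replacement
    return pairs

# Invented propensity scores: three NCV patients, four controls.
pairs = greedy_match([0.62, 0.30, 0.81], [0.33, 0.59, 0.95, 0.70])
```

The third treated unit finds no control within the caliper and is dropped, which is the usual price of caliper matching: better balance at the cost of sample size.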
8.
J Environ Manage ; 358: 120756, 2024 May.
Article in English | MEDLINE | ID: mdl-38599080

ABSTRACT

Water quality indicators (WQIs), such as chlorophyll-a (Chl-a) and dissolved oxygen (DO), are crucial for understanding and assessing the health of aquatic ecosystems. Precise prediction of these indicators is fundamental to the efficient administration of rivers, lakes, and reservoirs. This research utilized two deep learning (DL) algorithms, namely the convolutional neural network (CNN) and gated recurrent unit (GRU), alongside their combination, CNN-GRU, to precisely gauge the concentration of these indicators within a reservoir. Moreover, to optimize the outcomes of the developed hybrid model, we considered the impact of a decomposition technique, specifically the wavelet transform (WT). In addition, we implemented two machine learning (ML) algorithms, namely random forest (RF) and support vector regression (SVR), to demonstrate the superior performance of the deep learning algorithms over individual ML ones. To achieve this, we initially gathered WQIs from diverse locations and varying depths within the reservoir using an AAQ-RINKO device in the study area. It is important to highlight that, despite the use of diverse data-driven models in water quality estimation, a significant gap persists in the existing literature regarding a comprehensive hybrid algorithm that integrates the wavelet transform, CNN, and GRU methodologies to estimate WQIs accurately within a spatiotemporal framework. Subsequently, the effectiveness of the developed models was assessed using various statistical metrics, encompassing the correlation coefficient (r), root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE), throughout both the training and testing phases.
The findings demonstrated that the WT-CNN-GRU model exhibited better performance in comparison with the other algorithms by 13% (SVR), 13% (RF), 9% (CNN), and 8% (GRU) when R-squared and DO were considered as evaluation indices and WQIs, respectively.


Subject(s)
Algorithms, Neural Networks, Computer, Water Quality, Machine Learning, Environmental Monitoring/methods, Lakes, Chlorophyll A/analysis, Wavelet Analysis
9.
Sci Rep ; 14(1): 7833, 2024 04 03.
Article in English | MEDLINE | ID: mdl-38570560

ABSTRACT

Heart disease is a major global cause of mortality and a major public health problem for a large number of individuals. A major issue raised by regular clinical data analysis is the recognition of cardiovascular illnesses, including heart attacks and coronary artery disease, even though early identification of heart disease can save many lives. Accurate forecasting and decision assistance can be achieved effectively with machine learning (ML). Big Data, the vast amounts of data generated by the health sector, may assist diagnostic models by revealing hidden information or intricate patterns. This paper describes a big data analysis and visualization approach for heart disease detection using a hybrid deep learning algorithm. The proposed approach is intended for use with big data systems such as Apache Hadoop. An extensive medical data collection is first subjected to an improved k-means clustering (IKC) method to remove outliers, and the remaining class distribution is then balanced using the synthetic minority over-sampling technique (SMOTE). The next step is to forecast the disease using a bio-inspired hybrid mutation-based swarm intelligence (HMSI) method with an attention-based gated recurrent unit network (AttGRU) model, after recursive feature elimination (RFE) has determined which features are most important. In our implementation, we compare four machine learning algorithms: SAE + ANN (sparse autoencoder + artificial neural network), LR (logistic regression), KNN (k-nearest neighbour), and naïve Bayes. The experimental results indicate that the hybrid model attains a 95.42% accuracy rate for heart disease prediction, effectively outperforming the related work and addressing the research gap identified there.


Subject(s)
Coronary Artery Disease, Deep Learning, Heart Diseases, Humans, Bayes Theorem, Heart Diseases/diagnosis, Heart Diseases/genetics, Coronary Artery Disease/diagnosis, Coronary Artery Disease/genetics, Algorithms, Intelligence
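The SMOTE balancing step named in the pipeline can be illustrated with a minimal 1-D variant (real SMOTE interpolates in feature space using k nearest neighbours; this sketch collapses k to 1):

```python
import random

def smote_like(minority, n_new, seed=42):
    """Create synthetic minority samples by interpolating between a
    randomly chosen sample and its nearest minority neighbour."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        neighbour = min((m for m in minority if m != a), key=lambda m: abs(m - a))
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(a + lam * (neighbour - a))
    return synthetic

# Four minority-class values; generate four synthetic ones.
new_points = smote_like([1.0, 1.2, 2.0, 2.1], n_new=4)
```

Because every synthetic point lies on a segment between two existing minority samples, the oversampled class stays inside its original support rather than duplicating points exactly.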
10.
Eur J Intern Med ; 125: 67-73, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38458880

ABSTRACT

It is important to determine the risk of admission to the intensive care unit (ICU) in patients with COVID-19 presenting at the emergency department. Using artificial neural networks, we propose a new Data Ensemble Refinement Greedy Algorithm (DERGA) based on 15 easily accessible hematological indices. A database of 1596 patients with COVID-19 was used; it was divided into a training set of 1257 patients (80% of the database) for training the algorithms and a test set of 339 patients (20%) to check their reliability. The optimal combination of hematological indicators that gives the best prediction consists of only four indicators: neutrophil-to-lymphocyte ratio (NLR), lactate dehydrogenase, ferritin, and albumin. The best prediction corresponds to a particularly high accuracy of 97.12%. In conclusion, our novel approach provides a robust model based only on basic hematological parameters for predicting the risk of ICU admission and optimizing COVID-19 patient management in clinical practice.


Subject(s)
Algorithms, COVID-19, Intensive Care Units, Machine Learning, Severity of Illness Index, Humans, COVID-19/diagnosis, COVID-19/blood, Male, Female, Middle Aged, Prognosis, Aged, SARS-CoV-2, Ferritins/blood, Neural Networks, Computer, Neutrophils, Adult, L-Lactate Dehydrogenase/blood
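Of the four retained indicators, only the NLR is a derived quantity; assembling the per-patient feature vector is straightforward (the lab values below are invented for illustration):

```python
def hematology_features(neutrophils, lymphocytes, ldh, ferritin, albumin):
    """Build the four-indicator feature vector; NLR is the neutrophil
    count divided by the lymphocyte count (same units, e.g. 10^9/L)."""
    return {
        "NLR": neutrophils / lymphocytes,
        "LDH": ldh,
        "ferritin": ferritin,
        "albumin": albumin,
    }

# Invented values for one hypothetical patient.
feats = hematology_features(neutrophils=6.2, lymphocytes=0.8,
                            ldh=410, ferritin=880, albumin=3.1)
```

A vector like this, one per patient, is what the neural network ensemble would consume after the greedy refinement has discarded the other eleven indices.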
11.
Sci Rep ; 14(1): 6942, 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38521848

ABSTRACT

Watermarking is one of the crucial techniques in the domain of information security, preventing the exploitation of 3D mesh models in the Internet era. In 3D mesh watermark embedding, moderately perturbing the vertices is commonly required to retain them in a certain pre-arranged relationship with their neighboring vertices. This paper proposes a novel watermarking authentication method, called Nearest Centroid Discrete Gaussian and Levenberg-Marquardt (NCDG-LV), for distortion detection and recovery using salient point detection. In this method, the salient points are selected using the Nearest Centroid and Discrete Gaussian Geometric (NC-DGG) salient point detection model. Map segmentation is applied to the 3D mesh model to segment it into distinct sub-regions according to the selected salient points. Finally, the watermark is embedded by employing the multi-function barycenter in each spatially selected and segmented region. In the extraction process, the embedded watermark is extracted from each re-segmented region by means of Levenberg-Marquardt deep neural network watermark extraction. In the authentication stage, watermark bits are extracted by analyzing the geometry via Levenberg-Marquardt back-propagation. A performance evaluation shows that the proposed method exhibits high imperceptibility and tolerance against attacks such as smoothing, cropping, translation, and rotation. The experimental results further demonstrate that the proposed method is superior in terms of salient point detection time, distortion rate, true positive rate, peak signal-to-noise ratio, bit error rate, and root mean square error compared to state-of-the-art methods.

12.
J Cell Mol Med ; 28(4): e18105, 2024 02.
Article in English | MEDLINE | ID: mdl-38339761

ABSTRACT

Complement inhibition has shown promise in various disorders, including COVID-19. A prediction tool that includes complement genetic variants is vital. This study aims to identify crucial complement-related variants and determine an optimal pattern for accurate disease outcome prediction. Genetic data from 204 COVID-19 patients hospitalized between April 2020 and April 2021 at three referral centres were analysed using an artificial intelligence-based algorithm to predict disease outcome (ICU vs. non-ICU admission). A recently introduced alpha-index identified the 30 most predictive genetic variants. The DERGA algorithm, which employs multiple classification algorithms, determined the optimal pattern of these key variants, resulting in 97% accuracy for predicting disease outcome. Individual variation ranged from 40 to 161 variants per patient, with 977 total variants detected. This study demonstrates the utility of the alpha-index in ranking a substantial number of genetic variants, an approach that enables well-established classification algorithms to determine the relevance of genetic variants in predicting outcomes with high accuracy.


Subject(s)
COVID-19, Humans, COVID-19/epidemiology, COVID-19/genetics, Artificial Intelligence, Algorithms
13.
Sci Rep ; 14(1): 4816, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413614

ABSTRACT

Many real-world optimization problems, particularly engineering ones, involve constraints that make finding a feasible solution challenging. Numerous researchers have investigated this challenge for constrained single- and multi-objective optimization problems. In particular, this work extends the boundary update (BU) method proposed by Gandomi and Deb (Comput. Methods Appl. Mech. Eng. 363:112917, 2020) for constrained optimization problems. BU is an implicit constraint handling technique that aims to cut the infeasible search space over iterations so that the feasible region is found faster. In doing so, the search space is twisted, which can make the optimization problem more challenging. In response, two switching mechanisms are implemented that transform the landscape, along with the variables, back to the original problem once the feasible region is found. To achieve this, two thresholds, representing distinct switching methods, are taken into account. In the first approach, the optimization process transitions to a BU-free state when constraint violations reach zero. In the second method, the optimization process shifts to a BU-free phase when no further change is observed in the objective space. For validation, benchmark and engineering problems are solved with well-known evolutionary single- and multi-objective optimization algorithms. Herein, the proposed method is benchmarked against runs with and without the BU approach over the whole search process. The results show that the proposed method can significantly boost both convergence speed and solution quality for constrained optimization problems.
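The first switching rule can be sketched as a tiny driver loop; `step` and `violation` are hypothetical placeholders for the optimizer update and the constraint-violation measure, and the toy 1-D problem exists only to exercise the switch:

```python
def optimize_with_switch(step, violation, x0, iters):
    """Run with the BU-style transformed landscape until constraint
    violation reaches zero, then continue on the original problem."""
    x, using_bu = x0, True
    for _ in range(iters):
        x = step(x, transformed=using_bu)
        if using_bu and violation(x) == 0:
            using_bu = False  # feasible region found: undo the twist
    return x, using_bu

# Toy 1-D problem: feasible once x >= 2; the "transformed" step is bolder.
toy_step = lambda x, transformed: x + (0.5 if transformed else 0.1)
toy_violation = lambda x: max(0.0, 2.0 - x)
x_final, still_bu = optimize_with_switch(toy_step, toy_violation, 0.0, iters=10)
```

The second rule from the abstract would replace the `violation(x) == 0` test with a check that the objective value has stopped changing between iterations.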

14.
Sci Rep ; 14(1): 4877, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38418500

ABSTRACT

Differential evolution (DE) is a robust optimizer designed for solving complex research problems in the computational intelligence community. In the present work, a multi-hybrid DE (MHDE) is proposed to improve the overall working capability of the algorithm without compromising solution quality. Adaptive parameters, enhanced mutation, enhanced crossover, population reduction, iterative division, and Gaussian random sampling are the major characteristics of the proposed MHDE algorithm. First, iterative division is used for improved exploration and exploitation; then an adaptive proportional population size reduction mechanism is applied to reduce computational complexity. The algorithm also incorporates the Weibull distribution and Gaussian random sampling to mitigate premature convergence. The proposed framework is validated using the IEEE CEC benchmark suites (CEC 2005, CEC 2014, and CEC 2017). The algorithm is applied to four engineering design problems and to the weight minimization of three frame design problems. Experimental results are analysed and compared with recent hybrid algorithms such as Laplacian biogeography-based optimization, adaptive differential evolution with optional external archive (JADE), success-history-based DE, self-adaptive DE, LSHADE, MVMO, the fractional-order calculus-based flower pollination algorithm, the sine cosine crow search algorithm, and others. Statistically, the Friedman and Wilcoxon rank-sum tests show that the proposed algorithm fares better than the others.
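For readers unfamiliar with the baseline these hybrids build on, classic DE/rand/1/bin trial-vector generation (standard DE, not MHDE's enhanced operators) looks like:

```python
import random

def de_rand_1_bin(pop, f=0.5, cr=0.9, seed=7):
    """DE/rand/1/bin: for each target vector, mutate three distinct random
    donors (a + F*(b - c)) and binomially cross with the target."""
    rng = random.Random(seed)
    dim = len(pop[0])
    trials = []
    for i, target in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
        trials.append([
            a[d] + f * (b[d] - c[d]) if (rng.random() < cr or d == j_rand) else target[d]
            for d in range(dim)
        ])
    return trials

pop = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5], [2.0, 2.0]]
trials = de_rand_1_bin(pop)
```

MHDE's "enhanced mutation" and "enhanced crossover" replace the fixed `f` and `cr` here with adaptive values, and its population reduction shrinks `pop` between generations.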

15.
Sci Rep ; 14(1): 1333, 2024 01 16.
Article in English | MEDLINE | ID: mdl-38228772

ABSTRACT

In previous studies, replicated and multiple types of speech data have been used for Parkinson's disease (PD) detection. However, two main problems in these studies are limited PD detection accuracy and inappropriate validation methodologies leading to unreliable results. This study discusses the effects of the inappropriate validation methodologies used in previous studies and highlights appropriate alternative validation methods that ensure generalization. To enhance PD detection accuracy, we propose a two-stage diagnostic system that refines the extracted feature set through a [Formula: see text] regularized linear support vector machine and classifies the refined subset of features through a deep neural network. To rigorously evaluate the effectiveness of the proposed diagnostic system, experiments are performed on two different voice recording-based benchmark datasets. For both datasets, the proposed diagnostic system achieves 100% accuracy under leave-one-subject-out (LOSO) cross-validation (CV) and 97.5% accuracy under k-fold CV. The results show that the proposed system outperforms the existing methods in PD detection accuracy and suggest that it can improve non-invasive diagnostic decision support in PD.


Subject(s)
Parkinson Disease, Voice, Humans, Algorithms, Parkinson Disease/diagnosis, Support Vector Machine, Neural Networks, Computer
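The LOSO protocol the paper argues for is simple to implement; the split below shows why replicated recordings cannot leak a subject between train and test:

```python
def loso_splits(subject_ids):
    """Yield (held-out subject, train indices, test indices) so that every
    recording of the held-out subject lands on the test side."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# Hypothetical replicated recordings: three subjects, six recordings.
folds = list(loso_splits(["s1", "s1", "s2", "s2", "s2", "s3"]))
```

Plain k-fold CV, by contrast, can place one recording of a subject in the training fold and another in the test fold, letting the model recognize the speaker rather than the disease, which is the validation pitfall the study highlights.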
16.
Sci Rep ; 14(1): 534, 2024 01 04.
Article in English | MEDLINE | ID: mdl-38177156

ABSTRACT

The most widely used method for detecting Coronavirus Disease 2019 (COVID-19) is real-time polymerase chain reaction. However, this method has several drawbacks, including high cost, lengthy turnaround time for results, and the potential for false-negative results due to limited sensitivity. To address these issues, additional technologies such as computed tomography (CT) or X-rays have been employed for diagnosing the disease. Chest X-rays are more commonly used than CT scans due to the widespread availability of X-ray machines, lower ionizing radiation, and lower equipment cost. COVID-19 presents certain radiological biomarkers that can be observed in chest X-rays, making it necessary for radiologists to search for these biomarkers manually. However, this process is time-consuming and prone to errors. There is therefore a critical need for an automated system for evaluating chest X-rays, and deep learning techniques can be employed to expedite this process. In this study, a deep learning-based method called Custom Convolutional Neural Network (Custom-CNN) is proposed for identifying COVID-19 infection in chest X-rays. The Custom-CNN model consists of eight weighted layers and utilizes strategies like dropout and batch normalization to enhance performance and reduce overfitting. The proposed approach achieved a classification accuracy of 98.19% in classifying COVID-19, normal, and pneumonia samples.


Subject(s)
COVID-19, Humans, X-Rays, Radiography, COVID-19/diagnostic imaging, Neural Networks, Computer, Biomarkers
17.
Sci Rep ; 14(1): 676, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38182607

ABSTRACT

Melanoma is a severe skin cancer that involves abnormal cell development. This study aims to provide a new feature fusion framework for melanoma classification that includes a novel 'F' flag feature for early detection. This novel 'F' indicator efficiently distinguishes benign skin lesions from malignant ones (melanoma). The article proposes an architecture built on a Double Decker Convolutional Neural Network, called DDCNN feature fusion. The network's first deck, a convolutional neural network (CNN), finds difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hirsute image samples are combined to form a Baseline Separated Channel (BSC). By eliminating hair and using data augmentation techniques, the BSC is made ready for analysis. The network's second deck trains on the pre-processed BSC and generates bottleneck features. The bottleneck features are merged with features generated from the ABCDE clinical bio-indicators to promote classification accuracy. The resulting hybrid fused features, together with the novel 'F' flag feature, are fed to different types of classifiers. The proposed system was trained using the ISIC 2019 and ISIC 2020 datasets to assess its performance. The empirical findings show that the DDCNN feature fusion strategy for detecting malignant melanoma achieved a specificity of 98.4%, accuracy of 93.75%, precision of 98.56%, and an Area Under the Curve (AUC) value of 0.98. This study proposes a novel approach that can accurately identify and diagnose fatal skin cancer and outperform other state-of-the-art techniques, which is attributed to the DDCNN 'F' feature fusion framework. This research also ascertained improvements in several classifiers when utilising the 'F' indicator, with the largest specificity gain being 7.34%.


Subject(s)
Melanoma, Skin Neoplasms, Humans, Melanoma/diagnostic imaging, Skin Neoplasms/diagnostic imaging, Skin, Area Under Curve, Neural Networks, Computer
18.
iScience ; 27(1): 108709, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38269095

ABSTRACT

The increasing demand for food production due to the growing population is raising the need for more food-productive environments for plants. The genetic behavior of plant traits differs across growing environments, yet monitoring individual plant component traits manually is tedious and, at scale, impractical. Plant breeders therefore need computer vision-based plant monitoring systems to analyze the productivity and environmental suitability of different plants. Such systems enable feasible quantitative analysis, geometric analysis, and yield-rate analysis of the plants. Plant breeders have used many data collection methods according to their needs; in the presented review, most of them are discussed together with their corresponding challenges and limitations. Furthermore, the traditional approaches to segmentation and classification in plant phenotyping are also discussed. Data-limitation problems and the computer-vision solutions currently adopted for them are highlighted; these mitigate the problem only partially. The available datasets and current issues are also surveyed. The presented study covers plant phenotyping problems, suggested solutions, and current challenges from data collection through to the classification step.
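As one concrete instance of the traditional segmentation approaches such reviews cover, vegetation pixels are commonly separated from soil with a color index such as excess green (ExG = 2g - r - b on normalized channels) plus a threshold. The index choice, threshold, and pixel values below are illustrative assumptions, not details from this review:

```python
def excess_green(r, g, b):
    """Excess-green index on channel values normalized to [0, 1];
    vegetation tends to score high, soil and background low."""
    return 2.0 * g - r - b

def segment_plant(pixels, thresh=0.2):
    """Return a plant/background mask for an iterable of (r, g, b)
    pixels; True marks a likely vegetation pixel."""
    return [excess_green(r, g, b) > thresh for (r, g, b) in pixels]

pixels = [(0.2, 0.7, 0.1),   # leafy green  -> plant
          (0.5, 0.4, 0.3),   # brownish soil -> background
          (0.1, 0.6, 0.2)]   # green         -> plant
print(segment_plant(pixels))  # [True, False, True]
```

Thresholded color indices like this are exactly the kind of hand-crafted baseline that the learned segmentation methods discussed in the review aim to replace.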

19.
BMC Bioinformatics ; 25(1): 33, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38253993

ABSTRACT

Breast cancer remains a major public health challenge worldwide. The identification of accurate biomarkers is critical for the early detection and effective treatment of breast cancer. This study utilizes an integrative machine learning approach to analyze breast cancer gene expression data for superior biomarker and drug target discovery. Gene expression datasets, obtained from the GEO database, were merged after preprocessing. From the merged dataset, differential expression analysis between breast cancer and normal samples revealed 164 differentially expressed genes (DEGs). Meanwhile, a separate gene expression dataset (GSE45827) revealed 350 differentially expressed genes. Additionally, the BGWO_SA_Ens algorithm, integrating binary grey wolf optimization and simulated annealing with an ensemble classifier, was employed on the gene expression datasets to identify predictive genes including TOP2A, AKR1C3, EZH2, MMP1, EDNRB, S100B, and SPP1. From over 10,000 genes, BGWO_SA_Ens identified 1404 in the merged dataset (F1 score: 0.981, PR-AUC: 0.998, ROC-AUC: 0.995) and 1710 in the GSE45827 dataset (F1 score: 0.965, PR-AUC: 0.986, ROC-AUC: 0.972). The intersection of the DEGs and the BGWO_SA_Ens-selected genes revealed 35 superior genes that were consistently significant across methods. Enrichment analyses uncovered the involvement of these superior genes in key pathways such as AMPK, Adipocytokine, and PPAR signaling. Protein-protein interaction network analysis highlighted subnetworks and central nodes. Finally, a drug-gene interaction investigation revealed connections between superior genes and anticancer drugs. Collectively, the machine learning workflow identified a robust gene signature for breast cancer, illuminated the genes' biological roles, interactions, and therapeutic associations, and underscored the potential of computational approaches in biomarker discovery and precision oncology.
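The "superior gene" step described above, keeping only genes flagged both by differential expression analysis and by the BGWO_SA_Ens selector, amounts to a set intersection. The short gene lists here are toy stand-ins for the full 164-gene and 1404-gene results, seeded with a few gene symbols the abstract itself names:

```python
# Toy stand-ins for the two result sets (abbreviated, not real output).
degs = {"TOP2A", "AKR1C3", "EZH2", "MMP1", "EDNRB", "GATA3"}
bgwo_selected = {"TOP2A", "EZH2", "MMP1", "SPP1", "S100B", "EDNRB"}

# Genes significant under both methods are the 'superior' candidates
# passed on to enrichment and drug-gene interaction analysis.
superior = sorted(degs & bgwo_selected)
print(superior)  # ['EDNRB', 'EZH2', 'MMP1', 'TOP2A']
```

Sorting the intersection keeps the downstream enrichment input deterministic regardless of set iteration order.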


Subject(s)
Tumor Biomarkers, Breast Neoplasms, Humans, Female, Tumor Biomarkers/genetics, Precision Medicine, Algorithms, Drug Delivery Systems, Breast Neoplasms/drug therapy, Breast Neoplasms/genetics
20.
Sci Rep ; 14(1): 2215, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38278836

ABSTRACT

Detecting potholes and traffic signs is crucial for driver assistance systems and autonomous vehicles, which demand real-time, accurate recognition. In India, approximately 2500 fatalities occur annually due to accidents linked to hidden potholes and overlooked traffic signs. Existing methods often miss water-filled, brightly illuminated, or tree-shaded potholes, as well as perspective-distorted and illuminated (nighttime) traffic signs. To address these challenges, this study introduces a novel approach that pairs a cascade classifier with a Vision Transformer: the cascade classifier identifies candidate patterns associated with these elements, and the Vision Transformer conducts detailed analysis and classification. The proposed approach is trained and evaluated on the ICTS, GTSRDB, KAGGLE, and CCSAD datasets. Model performance is assessed using precision, recall, and mean Average Precision (mAP). Compared to state-of-the-art techniques such as YOLOv3, YOLOv4, Faster RCNN, and SSD, the method achieves impressive recognition with a mAP of 97.14% for traffic sign detection and 98.27% for pothole detection.
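The two-stage design above, a cheap cascade stage proposing candidate regions and a Vision Transformer verifying and labeling them, can be sketched as below. Both stages are stubbed with placeholder logic and made-up boxes, labels, and confidences, since the trained models themselves are not part of the abstract:

```python
def cascade_propose(frame):
    """Stage 1 stub: a cascade classifier would scan the frame and
    return cheap candidate boxes as (x, y, w, h) tuples."""
    return [(10, 20, 64, 64), (120, 40, 48, 48)]

def vit_classify(frame, box):
    """Stage 2 stub: a Vision Transformer would classify the crop;
    here we fake a (label, confidence) pair per candidate box."""
    fake_results = {(10, 20, 64, 64): ("pothole", 0.97),
                    (120, 40, 48, 48): ("traffic_sign", 0.94)}
    return fake_results[box]

def detect(frame, min_conf=0.5):
    """Keep only proposals the second stage accepts confidently."""
    detections = []
    for box in cascade_propose(frame):
        label, conf = vit_classify(frame, box)
        if conf >= min_conf:
            detections.append((box, label, conf))
    return detections

print(detect(frame=None))
```

The split lets the expensive transformer run only on the handful of regions the fast first stage proposes, which is what makes real-time operation plausible.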
