1.
Ann Hepatol ; 29(6): 101540, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39151891

ABSTRACT

INTRODUCTION AND OBJECTIVES: The increasing incidence of hepatocellular carcinoma (HCC) in China is an urgent issue, necessitating early diagnosis and treatment. This study aimed to develop personalized predictive models by combining machine learning (ML) technology with demographic, medical history, and noninvasive biomarker data. These models can enhance the decision-making capabilities of physicians for HCC in hepatitis B virus (HBV)-related cirrhosis patients with low serum alpha-fetoprotein (AFP) levels. PATIENTS AND METHODS: A total of 6,980 patients treated between January 2012 and December 2018 were included. Pre-treatment laboratory tests and clinical data were obtained. The significant risk factors for HCC were identified, and the relative risk of each variable affecting its diagnosis was calculated using ML and univariate regression analysis. The data set was then randomly partitioned into validation (20%) and training (80%) sets to develop the ML models. RESULTS: Twelve independent risk factors for HCC were identified using Gaussian naïve Bayes, extreme gradient boosting (XGBoost), random forest, and least absolute shrinkage and selection operator (LASSO) regression models. Multivariate analysis revealed that male sex, age >60 years, alkaline phosphatase >150 U/L, AFP >25 ng/mL, carcinoembryonic antigen >5 ng/mL, and fibrinogen >4 g/L were risk factors, whereas hypertension, calcium <2.25 mmol/L, potassium ≤3.5 mmol/L, direct bilirubin >6.8 µmol/L, hemoglobin <110 g/L, and glutamic-pyruvic transaminase >40 U/L were protective factors in HCC patients. Based on these factors, a nomogram was constructed, showing an area under the curve (AUC) of 0.746 (sensitivity = 0.710, specificity = 0.646), significantly higher than the AUC of AFP alone, 0.658 (sensitivity = 0.462, specificity = 0.766).
Compared with several ML algorithms, the XGBoost model had an AUC of 0.832 (sensitivity = 0.745, specificity = 0.766) and an independent validation AUC of 0.829 (sensitivity = 0.766, specificity = 0.737), making it the top-performing model in both sets. The external validation results confirmed the accuracy of the XGBoost model. CONCLUSIONS: The proposed XGBoost model demonstrated a promising ability for individualized prediction of HCC in HBV-related cirrhosis patients with low-level AFP.
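The AUC values compared above can be read as a rank statistic: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal illustrative sketch (not the authors' implementation):

```python
def auc(case_scores, control_scores):
    """AUC as the Mann-Whitney rank statistic: the probability that a
    random case outranks a random control (ties count as half a win)."""
    wins = 0.0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))
```

A model whose risk scores perfectly separate cases from controls yields 1.0; chance-level scoring yields 0.5, which is why the nomogram's 0.746 beats AFP's 0.658.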

2.
Clin Transl Oncol ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38902493

ABSTRACT

BACKGROUND: Colorectal cancer has a high incidence and mortality rate due to a low rate of early diagnosis. Therefore, efficient diagnostic methods are urgently needed. PURPOSE: This study assesses the diagnostic effectiveness of Carbohydrate Antigen 19-9 (CA19-9), Carcinoembryonic Antigen (CEA), Alpha-fetoprotein (AFP), and Cancer Antigen 125 (CA125) serum tumor markers for colorectal cancer (CRC) and investigates a machine learning-based diagnostic model incorporating these markers with blood biochemical indices for improved CRC detection. METHOD: Between January 2019 and December 2021, data from 800 CRC patients and 697 controls were collected; 52 patients and 63 controls attending the same hospital in 2022 were collected as an external validation set. The markers' effectiveness was analyzed individually and collectively, using metrics such as ROC curve AUC and F1 score. Variables chosen through backward regression, including demographics and blood tests, were tested on six machine learning models using these metrics. RESULT: In the case group, the levels of CEA, CA19-9, and CA125 were higher than those in the control group. Combining these with the fourth serum marker significantly improved predictive efficacy over using any single marker alone, achieving an Area Under the Curve (AUC) value of 0.801. Using stepwise regression (backward), 17 variables were selected for evaluation in six machine learning models. Among these models, the Gradient Boosting Machine (GBM) emerged as the top performer in the training set, test set, and external validation set, with an AUC value of over 0.9, indicating its superior predictive power. CONCLUSION: Machine learning models integrating tumor markers and blood indices offer superior CRC diagnostic accuracy, potentially enhancing clinical practice.
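Backward stepwise selection, as used above to reduce the candidates to 17 variables, can be sketched as greedy elimination against a model-quality score. The `score` function below is a hypothetical stand-in for whatever criterion is actually optimized (e.g., AUC or AIC of a refit model):

```python
def backward_select(variables, score):
    """Greedy backward elimination: repeatedly drop a variable whose
    removal does not hurt the score (ties favor the smaller model),
    until no removal helps."""
    current = list(variables)
    best = score(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for v in list(current):
            trial = [x for x in current if x != v]
            s = score(trial)
            if s >= best:
                best, current, improved = s, trial, True
                break
    return current, best
```

With a toy score that rewards informative markers and penalizes noise variables, the noise is eliminated first and the informative set survives.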

3.
Heliyon ; 10(10): e31152, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38784542

ABSTRACT

Image segmentation is a computer vision technique that involves dividing an image into distinct and meaningful regions or segments. The objective is to partition the image into areas that share similar visual characteristics. Noise and undesirable artifacts introduce inconsistencies and irregularities into image data. These inconsistencies severely affect the ability of most segmentation algorithms to distinguish true image features, leading to less reliable, lower-quality results. A Cellular Automaton (CA) is a computational model consisting of a grid of cells, each of which can be in one of a finite number of states. These cells evolve over discrete time steps based on a set of predefined rules that dictate how a cell's state changes according to its own state and the states of its neighboring cells. In this paper, a new segmentation approach based on the CA model is introduced. The proposed approach consists of three phases. In the first two phases, the primary objective is to eliminate noise and undesirable artifacts that can interfere with the identification of regions exhibiting similar visual characteristics. To achieve this, a set of rules is designed to modify the state value of each cell, or pixel, based on the states of its neighboring elements. In the third phase, each element is assigned a state chosen from a set of predefined states; these states directly represent the final segmentation values for the corresponding elements. The proposed method was evaluated on different images using important quality indices. The experimental results indicate that the proposed approach produces better-segmented images in terms of quality and robustness.
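A minimal sketch of the kind of local rule described in the first two phases, assuming a majority-vote update over the Moore neighborhood (the paper's actual rules may differ):

```python
def ca_step(grid):
    """One cellular-automaton step: each cell takes the majority state
    of its 3x3 Moore neighborhood (including itself), which smooths
    away isolated noisy pixels."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            votes = {}
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        votes[grid[rr][cc]] = votes.get(grid[rr][cc], 0) + 1
            out[r][c] = max(votes, key=votes.get)
    return out
```

Applying the step to a grid with a single noisy pixel removes it, illustrating how iterated local rules clean the image before states are mapped to segment labels.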

4.
Proc Natl Acad Sci U S A ; 121(14): e2316616121, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38551839

ABSTRACT

Motivated by the implementation of a SARS-CoV-2 sewer surveillance system in Chile during the COVID-19 pandemic, we propose a set of mathematical and algorithmic tools that aim to identify the location of an outbreak under uncertainty in the network structure. Given an upper bound on the number of samples we can take on any given day, our framework allows us to detect an unknown infected node by adaptively sampling different network nodes on different days. Crucially, despite the uncertainty of the network, the method allows unambiguous detection of the infected node, albeit at an extra cost in time. This framework relies on a specific and well-chosen strategy that defines new nodes to test sequentially, with a heuristic that balances the granularity of the information obtained from the samples. We extensively tested our model on real and synthetic networks, showing that the uncertainty of the underlying graph incurs only a limited increase in the number of iterations, indicating that the methodology is applicable in practice.
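The adaptive group-testing idea can be illustrated as follows, with a hypothetical `test(group)` oracle standing in for one day's sewage samples. This is a simplification of the paper's network-aware strategy (it ignores graph structure and uncertainty):

```python
def locate_infected(candidates, test, budget_per_day):
    """Each day, split the remaining candidates into up to
    budget_per_day groups, test one pooled sample per group, and keep
    only the groups that test positive. Requires budget_per_day >= 2
    to make progress."""
    days = 0
    remaining = list(candidates)
    while len(remaining) > 1:
        days += 1
        k = min(budget_per_day, len(remaining))
        groups = [remaining[i::k] for i in range(k)]
        remaining = [n for g in groups if test(g) for n in g]
    return remaining[0], days
```

With 8 candidate nodes and a budget of 2 samples/day, a single infected node is isolated in 3 days, matching the log-scale number of iterations the adaptive scheme aims for.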


Subject(s)
COVID-19 , Pandemics , Humans , Uncertainty , COVID-19/epidemiology , Disease Outbreaks , SARS-CoV-2
5.
Int J Stroke ; 19(7): 747-753, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38346937

ABSTRACT

BACKGROUND: Access to acute stroke treatment is variable worldwide, with notable gaps in low- and middle-income countries (LMICs), especially in rural areas. A standardized method for pinpointing the existing regional coverage and proposing potential sites for new stroke centers is essential to change this scenario. AIMS: To create and apply computational strategies (CSs) to determine optimal locations for new acute stroke centers (ASCs), with a pilot application in nine Latin American regions/countries. METHODS: Hospitals treating acute ischemic stroke (AIS) with intravenous thrombolysis (IVT) and meeting the minimum infrastructure requirements per structured protocols were categorized as ASCs. Hospitals with emergency departments, noncontrast computed tomography (NCCT) scanners, and 24/7 laboratories were identified as potential acute stroke centers (PASCs). Hospital geolocation data were collected and mapped using the OpenStreetMap dataset. A 45-min drive radius was considered the ideal coverage area for each hospital, based on drive speeds from the OpenRouteService database. Population data, including demographic density, were obtained from the Kontur Population datasets. The proposed CS assessed the population covered by ASCs and proposed new ASCs or artificial points (APs) placed in densely populated areas to achieve a target population coverage (TPC) of 95%. RESULTS: The observed coverage in the region presented significant disparities, ranging from 0% in the Bahamas to 73.92% in Trinidad and Tobago. No country/region reached the 95% TPC using only its current ASCs or PASCs, leading to the proposal of APs; in Rio Grande do Sul, Brazil, for example, the introduction of 132 new centers was suggested. Furthermore, most ASCs were located in major urban hubs or university hospitals, leaving rural areas largely underserved.
CONCLUSIONS: The MAPSTROKE project has the potential to provide a systematic approach to identify areas with limited access to stroke centers and propose solutions for increasing access to AIS treatment. DATA ACCESS STATEMENT: Data used for this publication are available from the authors upon reasonable request.
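Proposing artificial points until a target population coverage is reached is, in essence, a maximum-coverage problem. The greedy sketch below is an assumption about the approach, not the MAPSTROKE algorithm itself; `covers` maps each candidate site to the set of population cells inside its 45-minute isochrone:

```python
def greedy_coverage(population, covers, target=0.95):
    """Greedily add candidate sites, each time picking the one that
    covers the most not-yet-covered population, until the target share
    (e.g., the 95% TPC) is reached or no site adds coverage."""
    total = sum(population.values())
    covered, chosen = set(), []

    def pop_of(cells):
        return sum(population[c] for c in cells)

    while pop_of(covered) < target * total:
        best = max(covers, key=lambda s: pop_of(covers[s] - covered))
        if pop_of(covers[best] - covered) == 0:
            break  # no candidate adds coverage; target unreachable
        chosen.append(best)
        covered |= covers[best]
    return chosen, pop_of(covered) / total
```

Greedy selection gives the classical (1 - 1/e) approximation guarantee for maximum coverage, which is why it is a common choice for facility-placement problems like this one.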


Subject(s)
Health Services Accessibility , Thrombolytic Therapy , Humans , Thrombolytic Therapy/methods , Stroke/therapy , Latin America , Ischemic Stroke/therapy
6.
MethodsX ; 12: 102575, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38313697

ABSTRACT

The Ordered Weighted Averaging (OWA) operator is a multicriteria method that has gained considerable traction among researchers in the composite indicators field. Typically, OWA operator weights are defined by the decision maker. This type of weighting is highly criticized, as decision makers are susceptible to errors and bias in judgment. Some methods have been used to define OWA operator weights objectively. However, none of them addresses the quality of the composite indicator. This paper introduces a method that defines the weights of the OWA operator based on two quality parameters of the composite indicator: the ability to capture the concept of the multidimensional phenomenon and the informational loss. The method can be implemented in Microsoft Excel Solver and offers a high degree of flexibility and applicability in problems of a multidimensional nature, as well as a high degree of appropriation by researchers and practitioners in the area.
•Defines weights that maximize the ability of the composite indicator to capture the concept of the multidimensional phenomenon.
•Considers restrictions to limit the informational loss of the composite indicator or emphasize positive or negative aspects of the multidimensional phenomenon.
•Offers flexibility in setting the objective and constraints of the optimization algorithm.
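For reference, the OWA operator itself aggregates by applying weights to the sorted values rather than to fixed criteria, which is what distinguishes it from a plain weighted average:

```python
def owa(values, weights):
    """Ordered Weighted Averaging: the i-th weight applies to the i-th
    largest value, not to a specific criterion. Weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

With weights (0.5, 0.3, 0.2), the largest criterion value always receives weight 0.5, so the same weight vector can emphasize the best (or, reversed, the worst) aspects of the multidimensional phenomenon.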

7.
Gigascience ; 13, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38206589

ABSTRACT

BACKGROUND: Structural variants (SVs) are genomic polymorphisms defined by their length (>50 bp). The usual types of SVs are deletions, insertions, translocations, inversions, and copy number variants. SV detection and genotyping is fundamental given the role of SVs in phenomena such as phenotypic variation and evolutionary events. Thus, methods to identify SVs using long-read sequencing data have recently been developed. FINDINGS: We present an accurate and efficient algorithm to predict germline SVs from long-read sequencing data. The algorithm starts by collecting evidence (signatures) of SVs from read alignments. Signatures are then clustered based on a Euclidean graph with coordinates calculated from lengths and genomic positions. Clustering is performed by the DBSCAN algorithm, which provides the advantage of delimiting clusters with high resolution. Clusters are transformed into SVs, and a Bayesian model allows precise genotyping of SVs based on their supporting evidence. This algorithm is integrated into the single-sample variants detector of the Next Generation Sequencing Experience Platform, which facilitates integration with other functionalities for genomics analysis. We performed multiple benchmark experiments, including simulated and real data, representing different genome profiles, sequencing technologies (PacBio HiFi, ONT), and read depths. CONCLUSION: The results show that our approach outperformed state-of-the-art tools on germline SV calling and genotyping, especially at low depths and in error-prone repetitive regions. We believe this work significantly contributes to the development of bioinformatic strategies to maximize the use of long-read sequencing technologies.
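The clustering step can be illustrated with a compact, pure-Python DBSCAN over (genomic position, length) signature coordinates. This is a didactic sketch, not the platform's implementation:

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: points within eps of at least min_pts points
    (themselves included) are core points; clusters grow from cores.
    Returns one label per point; -1 marks noise."""
    labels = {}
    cid = 0

    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if i in labels:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        cid += 1
        labels[i] = cid
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels.get(j) == -1:
                labels[j] = cid  # noise reached from a core: border point
                continue
            if j in labels:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:
                queue.extend(q for q in jn if q not in labels)
    return [labels[i] for i in range(len(points))]
```

Three signatures agreeing on position and length form one cluster (one SV candidate), while an isolated signature is flagged as noise rather than forced into a call.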


Subject(s)
Algorithms , Benchmarking , Bayes Theorem , Genotype , Cluster Analysis
8.
Bioengineering (Basel) ; 11(1), 2024 Jan 13.
Article in English | MEDLINE | ID: mdl-38247954

ABSTRACT

Accurate classification of electromyographic (EMG) signals is vital in biomedical applications. This study evaluates different architectures of recurrent neural networks (RNNs) for the classification of EMG signals associated with five movements of the right upper extremity. A Butterworth filter was implemented for signal preprocessing, followed by segmentation into 250 ms windows with an overlap of 190 ms. The resulting dataset was divided into training, validation, and testing subsets. The Grey Wolf Optimization algorithm was applied to the gated recurrent unit (GRU), long short-term memory (LSTM), and bidirectional recurrent neural network architectures. In parallel, a performance comparison with support vector machines (SVMs) was performed. The results obtained in the first experimental phase revealed that all the RNNs evaluated reached 100% accuracy, above the 93% achieved by the SVM. Regarding classification speed, LSTM ranked as the fastest architecture, recording a time of 0.12 ms, followed by GRU with 0.134 ms. Bidirectional recurrent neural networks showed a response time of 0.2 ms, while the SVM had the longest time at 2.7 ms. In the second experimental phase, a slight decrease in the accuracy of the RNN models was observed, standing at 98.46% for LSTM, 96.38% for GRU, and 97.63% for the bidirectional network. The findings of this study highlight the effectiveness and speed of recurrent neural networks in the EMG signal classification task.
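The segmentation into 250 ms windows with 190 ms overlap amounts to a sliding window with a 60 ms step. A small sketch, assuming a sampling rate `fs` in Hz:

```python
def sliding_windows(signal, fs, win_ms=250, overlap_ms=190):
    """Cut a 1-D signal into fixed-length windows: window length and
    step are converted from milliseconds to samples; overlap = win - step."""
    win = int(fs * win_ms / 1000)
    step = int(fs * (win_ms - overlap_ms) / 1000)
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]
```

At 1 kHz, one second of signal yields 13 windows of 250 samples starting 60 samples apart; each window then becomes one classification example for the RNN.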

9.
BMC Health Serv Res ; 24(1): 37, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38183029

ABSTRACT

BACKGROUND: No-show to medical appointments has significant adverse effects on healthcare systems and their clients. Using machine learning to predict no-shows allows managers to implement strategies such as overbooking and reminders targeting patients most likely to miss appointments, optimizing the use of resources. METHODS: In this study, we proposed a detailed analytical framework for predicting no-shows while addressing imbalanced datasets. The framework includes a novel use of z-fold cross-validation performed twice during the modeling process to improve model robustness and generalization. We also introduce Symbolic Regression (SR) as a classification algorithm and Instance Hardness Threshold (IHT) as a resampling technique and compared their performance with that of other classification algorithms, such as K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), and resampling techniques, such as Random under Sampling (RUS), Synthetic Minority Oversampling Technique (SMOTE) and NearMiss-1. We validated the framework using two attendance datasets from Brazilian hospitals with no-show rates of 6.65% and 19.03%. RESULTS: From the academic perspective, our study is the first to propose using SR and IHT to predict the no-show of patients. Our findings indicate that SR and IHT presented superior performances compared to other techniques, particularly IHT, which excelled when combined with all classification algorithms and led to low variability in performance metrics results. Our results also outperformed sensitivity outcomes reported in the literature, with values above 0.94 for both datasets. CONCLUSION: This is the first study to use SR and IHT methods to predict patient no-shows and the first to propose performing z-fold cross-validation twice. 
Our study highlights the importance of not relying on only a few validation runs for imbalanced datasets, as doing so may lead to biased results and an inadequate analysis of the generalization and stability of the models obtained during the training stage.
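The Instance Hardness Threshold idea, dropping the majority-class samples a classifier finds hardest, can be sketched as below. Here `p_own_class` is a hypothetical per-sample estimate of the probability of the sample's true class; in practice it would come from a fitted classifier, as in the imbalanced-learn implementation:

```python
from collections import Counter

def iht_resample(X, y, p_own_class, keep_ratio):
    """Instance Hardness Threshold (sketch): rank majority-class
    samples by how easy they are (probability of their own class),
    keep only the easiest keep_ratio share, and keep every
    minority-class sample."""
    majority = Counter(y).most_common(1)[0][0]
    maj = sorted((i for i, lab in enumerate(y) if lab == majority),
                 key=lambda i: p_own_class[i], reverse=True)
    keep = set(maj[: int(len(maj) * keep_ratio)])
    keep |= {i for i, lab in enumerate(y) if lab != majority}
    idx = sorted(keep)
    return [X[i] for i in idx], [y[i] for i in idx]
```

Removing the ambiguous majority samples near the decision boundary is what lets downstream classifiers recover the high sensitivity on the no-show (minority) class reported above.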


Subject(s)
Algorithms , Benchmarking , Humans , Brazil , Machine Learning , Decision Support Techniques
10.
PeerJ Comput Sci ; 10: e1773, 2024.
Article in English | MEDLINE | ID: mdl-38259892

ABSTRACT

This article proposes an evolutionary algorithm integrating Erdős–Rényi complex networks to regulate population crossovers, enhancing candidate solution refinement across generations. In this context, the population is conceptualized as a set of interrelated solutions, resembling a complex network. The algorithm enhances solutions by introducing new connections between them, thereby influencing population dynamics and optimizing the problem-solving process. The study conducts experiments on four instances of the classic Traveling Salesman Problem (TSP), comparing the traditional evolutionary algorithm, alternative algorithms utilizing different types of complex networks, and the proposed algorithm. The findings suggest that the approach guided by an Erdős–Rényi dynamic network surpasses the performance of the other algorithms, exhibiting improved convergence rates and shorter execution times. Strategies based on complex networks thus show that network characteristics provide valuable information for solving optimization problems, and that the network structure adds value by guiding the algorithm's decision-making during the search.
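The core ingredient, an Erdős–Rényi graph G(n, p) over the population whose edges determine which individuals are crossed over, can be sampled directly. Treating each edge as a crossover pair is our reading of the approach, not a transcription of the paper's algorithm:

```python
import random

def erdos_renyi_pairs(n, p, rng):
    """Sample an Erdős–Rényi graph G(n, p) over n individuals: each of
    the n*(n-1)/2 possible edges appears independently with
    probability p. Returned edges serve as crossover pairs."""
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]
```

Resampling the graph each generation makes the mating structure dynamic: raising p densifies mixing across the population, while lowering it keeps recombination local.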

11.
J Diabetes Sci Technol ; 18(2): 287-301, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38047451

ABSTRACT

BACKGROUND: The use of machine learning and deep learning techniques in the research on diabetes has garnered attention in recent times. Nonetheless, few studies offer a thorough picture of the knowledge generation landscape in this field. To address this, a bibliometric analysis of scientific articles published from 2000 to 2022 was conducted to discover global research trends and networks and to emphasize the most prominent countries, institutions, journals, articles, and key topics in this domain. METHODS: The Scopus database was used to identify and retrieve high-quality scientific documents. The results were classified into categories of detection (covering diagnosis, screening, identification, segmentation, among others), prediction (prognosis, forecasting, estimation), and management (treatment, control, monitoring, education, telemedicine integration). Biblioshiny and RStudio were used to analyze the data. RESULTS: A total of 1773 articles were collected and analyzed. The number of publications and citations increased substantially since 2012, with a notable increase in the last 3 years. Of the 3 categories considered, detection was the most dominant, followed by prediction and management. Around 53.2% of the total journals started disseminating articles on this subject in 2020. China, India, and the United States were the most productive countries. Although no evidence of outstanding leadership by specific authors was found, the University of California emerged as the most influential institution for the development of scientific production. CONCLUSION: This is an evolving field that has experienced a rapid increase in productivity, especially over the last years with exponential growth. This trend is expected to continue in the coming years.


Subject(s)
Deep Learning , Diabetes Mellitus , Humans , Bibliometrics , Diabetes Mellitus/diagnosis , Diabetes Mellitus/therapy , Machine Learning , China
12.
Eur Radiol ; 34(3): 2024-2035, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37650967

ABSTRACT

OBJECTIVES: Evaluate the performance of a deep learning (DL)-based model for multiple sclerosis (MS) lesion segmentation and compare it to other DL and non-DL algorithms. METHODS: This ambispective, multicenter study assessed the performance of a DL-based model for MS lesion segmentation and compared it to alternative DL- and non-DL-based methods. Models were tested on internal (n = 20) and external (n = 18) datasets from Latin America, and on an external dataset from Europe (n = 49). We also examined robustness by rescanning six patients (n = 6) from our MS clinical cohort. Moreover, we studied inter-human annotator agreement and discussed our findings in light of these results. Performance and robustness were assessed using intraclass correlation coefficient (ICC), Dice coefficient (DC), and coefficient of variation (CV). RESULTS: Inter-human ICC ranged from 0.89 to 0.95, while spatial agreement among annotators showed a median DC of 0.63. Using expert manual segmentations as ground truth, our DL model achieved a median DC of 0.73 on the internal, 0.66 on the external, and 0.70 on the challenge datasets. The performance of our DL model exceeded that of the alternative algorithms on all datasets. In the robustness experiment, our DL model also achieved higher DC (ranging from 0.82 to 0.90) and lower CV (ranging from 0.7 to 7.9%) when compared to the alternative methods. CONCLUSION: Our DL-based model outperformed alternative methods for brain MS lesion segmentation. The model also proved to generalize well on unseen data and has a robust performance and low processing times both on real-world and challenge-based data. CLINICAL RELEVANCE STATEMENT: Our DL-based model demonstrated superior performance in accurately segmenting brain MS lesions compared to alternative methods, indicating its potential for clinical application with improved accuracy, robustness, and efficiency. 
KEY POINTS: • Automated lesion load quantification in MS patients is valuable; however, more accurate methods are still necessary. • A novel deep learning model outperformed alternative MS lesion segmentation methods on multisite datasets. • Deep learning models are particularly suitable for MS lesion segmentation in clinical scenarios.
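The Dice coefficient (DC) used throughout measures spatial overlap between two binary masks, DC = 2|A∩B| / (|A| + |B|):

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as flat 0/1
    sequences of equal length: twice the intersection over the sum of
    the mask sizes. Two empty masks are defined as perfect agreement."""
    inter = sum(x and y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 1.0 if size == 0 else 2 * inter / size
```

A DC of 1.0 means identical segmentations; the inter-annotator median of 0.63 reported above shows why values around 0.7 for automated MS lesion masks are considered strong.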


Subject(s)
Magnetic Resonance Imaging , Multiple Sclerosis , Humans , Magnetic Resonance Imaging/methods , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Neural Networks, Computer , Algorithms , Brain/diagnostic imaging , Brain/pathology
13.
Arq Bras Oftalmol ; 87(3): e2022, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1520228

ABSTRACT

ABSTRACT Purpose: The emergency medical service is a fundamental part of healthcare, yet crowded emergency rooms lead to delayed, lower-quality assistance in genuinely urgent cases. Machine-learning algorithms can provide an effective estimation of emergency patient volume; building such models was previously restricted to artificial intelligence (AI) experts in coding and computer science but is now feasible for anyone, without coding experience, through automated machine learning (AutoML). This study aimed to create a machine-learning model, designed by an ophthalmologist without any coding experience using AutoML, to predict the influx of patients in the emergency department and the number of trauma cases. Methods: A dataset of 356,611 visits to Hospital da Universidade Federal de São Paulo from January 01, 2014 to December 31, 2019 was included in the model training, comprising visits/day and the international classification of disease code. Training and prediction were performed with Amazon Forecast by 2 ophthalmologists with no prior coding experience. Results: The forecast period predicted a mean emergency patient volume of 216.27/day at p90, 180.75/day at p50, and 140.35/day at p10, and a mean of 7.42 trauma cases/day at p90, 3.99/day at p50, and 0.56/day at p10. In January 2020, there were a total of 6,604 patient visits and a mean of 206.37 patients/day, 13.5% less than the p50 prediction. This period involved a total of 199 trauma cases and a mean of 6.21 cases/day, 55.77% more traumas than the p50 prediction. Conclusions: The development of such models was previously restricted to data scientists with coding expertise, but AutoML has enabled AI development by people with no coding experience. This study's model showed values close to the actual January 2020 visits; the only factors that may have influenced the difference between the two approaches are holidays and dataset size.
This is the first study to apply AutoML to hospital-visit forecasting, showing a close prediction of the actual hospital influx.



14.
Bull Environ Contam Toxicol ; 112(1): 6, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38063862

ABSTRACT

The aim of this study is to assess and identify the most suitable geospatial interpolation algorithm for environmental sciences. The research focuses on evaluating six different interpolation methods using annual average PM10 concentrations as a reference dataset. The dataset includes measurements obtained from a target air quality network (scenery 1) and a sub-dataset derived from a partitive clustering technique (scenery 2). By comparing the performance of each interpolation algorithm using various indicators, the study aims to determine the most reliable method. The findings reveal that the kriging method demonstrates the highest performance, with a spatial similarity of approximately 70% between the two scenery datasets. The performance indicators for the kriging method, including RMSE (root mean square error), MAE (mean absolute error), and MAPE (mean absolute percentage error), are measured at 3.2 µg/m3, 10.2 µg/m3, and 7.3%, respectively. This study addresses the existing gap in scientific knowledge regarding the comparison of geospatial interpolation techniques. The findings provide valuable insights for environmental managers and decision-makers, enabling them to implement effective control and mitigation strategies based on reliable geospatial information and data, and they carry practical implications for environmental management and planning.
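The three performance indicators are standard point-wise error measures; given observed and interpolated values, they can be computed as:

```python
from math import sqrt

def errors(obs, pred):
    """RMSE, MAE, and MAPE (in percent) between observed and
    interpolated values; MAPE assumes no observed value is zero."""
    n = len(obs)
    rmse = sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    mape = 100 * sum(abs(o - p) / o for o, p in zip(obs, pred)) / n
    return rmse, mae, mape
```

RMSE penalizes large deviations more heavily than MAE, while MAPE expresses error relative to the measured concentration, which is why all three are reported together when comparing interpolators.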


Subject(s)
Air Pollution , Environmental Science , Environmental Monitoring/methods , Algorithms , Spatial Analysis
15.
Estima (Online) ; 21(1): e1311, Jan-Dec 2023.
Article in English, Portuguese | LILACS, BDENF - Nursing | ID: biblio-1443204

ABSTRACT



Objective: To report the experience of a team of enterostomal therapists in the construction of an algorithm for the indication of collecting equipment for elimination stomas. Method: Experience report, covering January 2018 to September 2019, on the process of building an algorithm to indicate collecting equipment for elimination stomas. Results: Based on certain clinical characteristics (assessment parameters) and the categorization of collecting equipment (solution), an algorithm was developed to indicate collecting equipment for elimination stomas. Conclusion: It is expected that this instrument can help nurses in their professional practice regarding the choice of collecting equipment and in the construction of clinical protocols.




Subject(s)
Humans , Algorithms , Ostomy/instrumentation , Ostomy/nursing , Nurse Specialists , Enterostomal Therapy
16.
Arch Cardiol Mex ; 93(Supl): 1-12, 2023.
Article in English | MEDLINE | ID: mdl-37913795

ABSTRACT

OBJECTIVE: To generate recommendations for the diagnosis, management, and follow-up of chronic hyperkalemia. METHOD: This consensus was developed by nephrologists and cardiologists following the GRADE methodology. RESULTS: Chronic hyperkalemia can be defined as a biochemical condition, with or without clinical manifestations, characterized by a recurrent elevation of serum potassium levels that may require pharmacological and/or non-pharmacological intervention. It can be classified as mild (K+ 5.0 to < 5.5 mEq/L), moderate (K+ 5.5 to 6.0 mEq/L), or severe (K+ > 6.0 mEq/L). Its incidence and prevalence have yet to be determined. Risk factors include chronic kidney disease, chronic heart failure, diabetes mellitus, age ≥ 65 years, hypertension, and drugs that inhibit the renin-angiotensin-aldosterone system (RAASi), among others. There is no consensus on the management of chronic hyperkalemia. The suggested approach is to identify and eliminate or control risk factors, provide advice on potassium intake and, for patients in whom it is indicated, optimize RAASi therapy, administer oral potassium binders, and correct metabolic acidosis. CONCLUSIONS: The recommendation is to pay attention to the diagnosis, management, and follow-up of chronic hyperkalemia, especially in patients with risk factors.
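The severity grading reduces to simple thresholds on serum potassium; a direct transcription of the consensus cut-offs:

```python
def classify_hyperkalemia(k_meq_per_l):
    """Grade chronic hyperkalemia per the consensus thresholds:
    mild (5.0 to < 5.5), moderate (5.5 to 6.0), severe (> 6.0 mEq/L)."""
    if k_meq_per_l > 6.0:
        return "severe"
    if k_meq_per_l >= 5.5:
        return "moderate"
    if k_meq_per_l >= 5.0:
        return "mild"
    return "normokalemia"
```

Note that a value of exactly 6.0 mEq/L falls in the moderate band, since "severe" requires K+ strictly above 6.0.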




Subject(s)
Heart Failure , Hyperkalemia , Humans , Aged , Hyperkalemia/diagnosis , Hyperkalemia/etiology , Hyperkalemia/therapy , Angiotensin-Converting Enzyme Inhibitors/therapeutic use , Colombia , Consensus , Potassium/therapeutic use , Heart Failure/drug therapy
17.
Antibiotics (Basel) ; 12(10)2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37887203

ABSTRACT

FTIR (Fourier transform infrared spectroscopy) is an analytical technique based on the absorption of infrared radiation. FTIR can also be used to characterize the profiles of biomolecules in bacterial cells, which is useful for differentiating bacteria: because bacterial species differ in molecular composition, each species, and even each strain, yields a unique FTIR spectrum. Building on this tool, we developed a methodology to refine the analysis and classification of FTIR absorption spectra obtained from samples of Staphylococcus aureus, implemented with machine learning algorithms. In the first stage, the system was analyzed across four specified sample groups: Control, amoxicillin-induced (AMO), gentamicin-induced (GEN), and erythromycin-induced (ERY). In the second stage, five hidden samples were identified and correctly classified as with or without resistance to the inducing antibiotics. In total, five hundred spectra were analyzed in three windows: carbohydrates, fatty acids, and proteins. The protocol for acquiring spectral data from antibiotic-resistant bacteria via FTIR spectroscopy developed by Soares et al. was implemented here because it has demonstrated high accuracy and sensitivity. The present study focuses on predicting antibiotic-induced samples through hierarchical cluster analysis (HCA), the principal component analysis (PCA) algorithm, and the calculation of confusion matrices (CMs) applied to the FTIR absorption spectra. The main objective of the data analysis process developed here is to characterize the intrinsic behavior of S. aureus samples within the analyzed regions of the FTIR absorption spectra. The results yielded accuracy values from 0.7 to 1, with high sensitivity and specificity for species identification in the CM calculations.
These results provide important information on antibiotic resistance in samples of S. aureus bacteria, with potential application in detecting antibiotic resistance in clinical use.
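The PCA-then-HCA pipeline described in this abstract can be sketched in a few lines. The data below are synthetic stand-ins (random spectra for two groups), not the study's FTIR measurements, and the scikit-learn/SciPy tooling is an assumed implementation choice:

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in for FTIR absorbance spectra: rows = samples,
# columns = wavenumber points within one analysis window.
rng = np.random.default_rng(0)
control = rng.normal(0.0, 0.05, size=(20, 300))   # e.g. Control group
induced = rng.normal(0.3, 0.05, size=(20, 300))   # e.g. antibiotic-induced group
spectra = np.vstack([control, induced])

# Reduce the spectra with PCA, then cluster the PCA scores with
# hierarchical cluster analysis (Ward linkage), cut into two clusters.
scores = PCA(n_components=2).fit_transform(spectra)
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
```

With real spectra, the cluster assignments would then be compared against the known group memberships to build the confusion matrices from which accuracy, sensitivity, and specificity are computed.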

18.
Plants (Basel) ; 12(19)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836163

ABSTRACT

Reflectance hyperspectroscopy is recognised for its potential to elucidate biochemical changes, thereby enhancing the understanding of plant biochemistry. This study used the UV-VIS-NIR-SWIR spectral range to identify the different biochemical constituents in Hibiscus and Geranium plants. Hyperspectral vegetation indices (HVIs), principal component analysis (PCA), and correlation matrices provided in-depth insights into spectral differences. Through the application of advanced algorithms (PLS, VIP, iPLS-VIP, GA, RF, and CARS), the most responsive wavelengths were discerned. PLSR models consistently achieved R2 values above 0.75, with noteworthy predictions of 0.86 for DPPH and 0.89 for lignin. The red-edge and SWIR bands displayed strong associations with pivotal plant pigments and structural molecules, expanding perspectives on leaf spectral dynamics. These findings highlight the efficacy of spectroscopy coupled with multivariate analysis in evaluating biochemical compounds. A technique was introduced to measure photosynthetic pigments and structural compounds via hyperspectroscopy across UV-VIS-NIR-SWIR, underpinned by rapid multivariate PLSR. Collectively, our results underscore the growing potential of hyperspectroscopy in precision agriculture, indicating a promising shift in plant phenotyping and biochemical evaluation.

19.
Acta Otorhinolaryngol Ital ; 43(6): 409-416, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37814975

ABSTRACT

Purpose: To evaluate the correlation between several presumed candidate genes for obstructive sleep apnoea (OSA) and clinical OSA phenotypes, and to propose a comprehensive predictive model for the diagnosis of OSA. Methods: This case-control study compared polysomnographic patterns, clinical data, morbidities, dental factors and genetic data for polymorphisms in PER3, BDNF, NRXN3, APOE, HCRTR2 and MC4R between confirmed OSA cases and ethnically matched, clinically unaffected controls. A logistic regression model was developed to predict OSA using the combined data. Results: The cohort consisted of 161 OSA cases and 81 controls. Mean age of cases was 53.5 ± 14.0 years; most were male (57%), with a mean body mass index (BMI) of 27.5 ± 4.3 kg/m2. None of the genotyped markers showed a statistically significant association with OSA after adjusting for age and BMI. A predictive algorithm included the variables gender, age, snoring, hypertension, mouth breathing and number of T alleles of PER3 (rs228729), presenting 76.5% specificity and 71.6% sensitivity. Conclusions: No genetic variant tested showed a statistically significant association with OSA phenotype. Logistic regression analysis resulted in a predictive model for diagnosing OSA that, if validated by larger prospective studies, could be applied clinically to allow risk stratification for OSA.
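A logistic regression model over the predictors this abstract names can be sketched as follows. The data here are simulated stand-ins (the coefficients and outcome rule are invented for illustration, not the study's fitted model), and scikit-learn is an assumed tool:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the predictors in the abstract: gender, age, snoring,
# hypertension, mouth breathing, and PER3 rs228729 T-allele count.
rng = np.random.default_rng(2)
n = 400
X = np.column_stack([
    rng.integers(0, 2, n),         # male sex (0/1)
    rng.normal(50, 12, n),         # age in years
    rng.integers(0, 2, n),         # habitual snoring (0/1)
    rng.integers(0, 2, n),         # hypertension (0/1)
    rng.integers(0, 2, n),         # mouth breathing (0/1)
    rng.integers(0, 3, n),         # PER3 T-allele count (0, 1, or 2)
])
# Simulated outcome loosely tied to age and snoring, for illustration only.
logit = 0.05 * (X[:, 1] - 50) + 1.2 * X[:, 2] - 0.6
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]   # estimated OSA risk per subject
```

In practice, the sensitivity/specificity pair the study reports (71.6%/76.5%) corresponds to choosing a probability threshold on `probs` and evaluating the resulting classifications against confirmed diagnoses.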


Subject(s)
Sleep Apnea, Obstructive , Male , Humans , Adult , Middle Aged , Aged , Female , Case-Control Studies , Prospective Studies , Sleep Apnea, Obstructive/diagnosis , Body Mass Index , Phenotype
20.
Bioorg Med Chem ; 94: 117475, 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37741120

ABSTRACT

The emergence of artificial intelligence (AI) tools has transformed the landscape of drug discovery, providing unprecedented speed, efficiency, and cost-effectiveness in the search for new therapeutics. From target identification to drug formulation and delivery, AI-driven algorithms have revolutionized various aspects of medicinal chemistry, significantly accelerating the drug design process. Despite the transformative power of AI, this perspective article emphasizes the limitations of AI tools in drug discovery, which still demand the inventive skills of medicinal chemists. The article therefore highlights the need for a harmonious integration of AI-based tools and human expertise in drug discovery. Such a synergistic approach promises to lead to groundbreaking therapies that address unmet medical needs and benefit humankind. As the world evolves technologically, the question remains: when will AI tools effectively design and develop drugs? The answer may lie in the seamless collaboration between AI and human researchers, unlocking transformative therapies that combat diseases effectively.
