Results 1 - 20 of 316,517
1.
Rev Esp Patol; 57(2): 77-83, Apr-Jun 2024. tab, illus
Article in Spanish | IBECS | ID: ibc-232410

ABSTRACT



Introduction: In an anatomic pathology department, the workload in medical time is analyzed based on the complexity of the samples received, and its distribution among pathologists is assessed, presenting a new computer algorithm that favors an equitable distribution. Methods: Following the second edition of the Spanish guidelines for the estimation of workload in cytopathology and histopathology (medical time) according to the Spanish Pathology Society-International Academy of Pathology (SEAP-IAP) catalog of samples and procedures, we determined the workload units (UCL) per pathologist and the overall UCL of the service, the average workload of the service (MU factor), the time dedicated by each pathologist to healthcare activity, and the optimal number of pathologists according to the workload of the service. Results: We determined 12,197 total annual UCL for the chief pathologist and 14,702 and 13,842 UCL for the associate pathologists, with an overall total of 40,742 UCL for the whole service. The calculated MU factor is 4.97. The chief pathologist devoted 72.25% of his working day to healthcare activity, while the associate pathologists dedicated 87.09% and 82.01% of their working hours. The optimal number of pathologists for the service is 3.55. Conclusions: The results demonstrate medical work overload and a non-equitable distribution of UCL among pathologists. We propose a computer algorithm, linked to the laboratory information system, capable of distributing the workload equitably by taking into account the type of specimen, its complexity, and each pathologist's dedication to healthcare activity.
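
The service-level arithmetic described above can be sketched in a few lines. A minimal illustration follows; the reference annual UCL capacity per full-time pathologist is an assumption back-solved from the reported figures (40,742 UCL and 3.55 FTE), since the actual value is defined by the SEAP-IAP guideline, and a real allocation would further weight each pathologist's share by their healthcare dedication fraction.

```python
# Minimal sketch of the service-level workload arithmetic (not the SEAP-IAP tool).
# REF_UCL_PER_FTE is an assumption back-solved from the reported totals.
ucl_per_pathologist = {"chief": 12_197, "associate_1": 14_702, "associate_2": 13_842}

global_ucl = 40_742                    # reported overall UCL of the service
REF_UCL_PER_FTE = global_ucl / 3.55    # assumed annual capacity per full-time pathologist

optimal_fte = global_ucl / REF_UCL_PER_FTE            # reproduces the reported 3.55
equitable_share = global_ucl / len(ucl_per_pathologist)

for name, ucl in ucl_per_pathologist.items():
    print(f"{name}: {ucl} UCL ({ucl - equitable_share:+.0f} vs an equitable share)")
print(f"optimal staffing: {optimal_fte:.2f} FTE vs {len(ucl_per_pathologist)} pathologists")
```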


Subjects
Humans, Male, Female, Pathology, Workload, Pathologists, Pathology Department, Hospital, Algorithms
2.
Proc Natl Acad Sci U S A; 121(19): e2403384121, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38691585

ABSTRACT

Macromolecular complexes are often composed of diverse subunits. The self-assembly of these subunits is inherently nonequilibrium and must avoid kinetic traps to achieve high yield over feasible timescales. We show how the kinetics of self-assembly benefits from diversity in subunits because it generates an expansive parameter space that naturally improves the "expressivity" of self-assembly, much like a deeper neural network. By using automatic differentiation algorithms commonly used in deep learning, we searched the parameter spaces of mass-action kinetic models to identify classes of kinetic protocols that mimic biological solutions for productive self-assembly. Our results reveal how high-yield complexes that easily become kinetically trapped in incomplete intermediates can instead be steered by internal design of rate-constants or external and active control of subunits to efficiently assemble. Internal design of a hierarchy of subunit binding rates generates self-assembly that can robustly avoid kinetic traps for all concentrations and energetics, but it places strict constraints on selection of relative rates. External control via subunit titration is more versatile, avoiding kinetic traps for any system without requiring molecular engineering of binding rates, albeit less efficiently and robustly. We derive theoretical expressions for the timescales of kinetic traps, and we demonstrate our optimization method applies not just for design but inference, extracting intersubunit binding rates from observations of yield-vs.-time for a heterotetramer. Overall, we identify optimal kinetic protocols for self-assembly as a powerful mechanism to achieve efficient and high-yield assembly in synthetic systems whether robustness or ease of "designability" is preferred.
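
The optimization loop described above can be illustrated compactly: because a mass-action integrator is just a differentiable program, automatic differentiation yields gradients of final yield with respect to the rate constants. The sketch below is a toy heterotrimer (A + B -> AB, AB + C -> ABC) with explicit Euler integration in JAX, not the paper's model; it omits off-rates and trap dynamics and simply demonstrates differentiating through the kinetics.

```python
# Toy sketch (not the paper's code): differentiate through a mass-action model of
# heterotrimer assembly and tune the two on-rates by gradient ascent on yield.
import jax
import jax.numpy as jnp

def final_yield(log_rates, dt=0.02, steps=500):
    k1, k2 = jnp.exp(log_rates)                # binding rate constants
    def step(y, _):
        a, b, c, ab, abc = y
        r1 = k1 * a * b                        # A + B -> AB
        r2 = k2 * ab * c                       # AB + C -> ABC
        return jnp.array([a - dt * r1,
                          b - dt * r1,
                          c - dt * r2,
                          ab + dt * (r1 - r2),
                          abc + dt * r2]), None
    y0 = jnp.array([1.0, 1.0, 1.0, 0.0, 0.0])  # initial concentrations
    yT, _ = jax.lax.scan(step, y0, None, length=steps)
    return yT[4]                               # yield of the complete ABC complex

grad_fn = jax.jit(jax.grad(final_yield))
log_rates = jnp.zeros(2)
for _ in range(200):                           # gradient ascent on final yield
    log_rates += 0.1 * grad_fn(log_rates)
print(jnp.exp(log_rates), final_yield(log_rates))
```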


Subjects
Algorithms, Kinetics, Macromolecular Substances/chemistry, Macromolecular Substances/metabolism
3.
Microb Genom; 10(5), 2024 May.
Article in English | MEDLINE | ID: mdl-38717808

ABSTRACT

Improvements in the accuracy and availability of long-read sequencing mean that complete bacterial genomes are now routinely reconstructed using hybrid (i.e., short- and long-read) assembly approaches. Complete genomes allow a deeper understanding of bacterial evolution and genomic variation beyond single-nucleotide variants. They are also crucial for identifying plasmids, which often carry medically significant antimicrobial resistance genes. However, small plasmids are often missed or misassembled by long-read assembly algorithms. Here, we present Hybracter, which allows for the fast, automatic, and scalable recovery of near-perfect complete bacterial genomes using a long-read-first assembly approach. Hybracter can be run either as a hybrid assembler or as a long-read-only assembler. We compared Hybracter to existing automated hybrid and long-read-only assembly tools using a diverse panel of samples of varying levels of long-read accuracy with manually curated ground-truth reference genomes. We demonstrate that Hybracter as a hybrid assembler is more accurate and faster than the existing gold-standard automated hybrid assembler Unicycler. We also show that Hybracter with long reads only is the most accurate long-read-only assembler and is comparable to hybrid methods in accurately recovering small plasmids.


Subjects
Algorithms, Genome, Bacterial, Software, Plasmids/genetics, Sequence Analysis, DNA/methods, Genomics/methods, High-Throughput Nucleotide Sequencing/methods, Bacteria/genetics, Bacteria/classification
4.
Article in English | MEDLINE | ID: mdl-38722721

ABSTRACT

Advancements in network science have facilitated the study of brain communication networks. Existing techniques for identifying event-related brain functional networks (BFNs) often result in fully connected networks. However, determining the optimal and most significant network representation for event-related BFNs is crucial for understanding complex brain networks. The presence of both false and genuine connections in the fully connected network requires network thresholding to eliminate false connections. However, a generalized framework for thresholding in network neuroscience is currently lacking. To address this, we propose four novel methods that leverage network properties, energy, and efficiency to select a generalized threshold level. This threshold serves as the basis for identifying the optimal and most significant event-related BFN. We validate our methods on an openly available emotion dataset and demonstrate their effectiveness in identifying multiple events. Our proposed approach can serve as a versatile thresholding technique to represent the fully connected network as an event-related BFN.
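
As a concrete illustration of threshold selection on a weighted connectivity matrix, the sketch below sweeps candidate thresholds and keeps the one maximizing a global-efficiency-minus-density trade-off. This particular criterion and the random stand-in matrix are illustrative assumptions, not the four methods proposed in the paper.

```python
# Illustrative thresholding sketch: pick the level that best trades off
# network efficiency against wiring cost (edge density).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
w = np.abs(rng.normal(size=(32, 32)))      # stand-in for an EEG connectivity matrix
w = (w + w.T) / 2                          # symmetrize
np.fill_diagonal(w, 0)

def score(threshold):
    adj = (w >= threshold).astype(int)
    g = nx.from_numpy_array(adj)
    cost = adj.sum() / (adj.size - len(adj))   # edge density in [0, 1]
    return nx.global_efficiency(g) - cost      # efficiency vs. sparsity trade-off

upper = w[np.triu_indices_from(w, k=1)]
levels = np.quantile(upper, np.linspace(0.5, 0.99, 25))  # candidate thresholds
best = max(levels, key=score)
print(f"selected threshold: {best:.3f}")
```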


Subjects
Algorithms, Brain, Electroencephalography, Emotions, Nerve Net, Humans, Nerve Net/physiology, Electroencephalography/methods, Brain/physiology, Emotions/physiology, Reproducibility of Results, Male, Brain Mapping/methods, Adult, Female
5.
Aquat Toxicol; 271: 106936, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38723470

ABSTRACT

In recent years, with the rapid development of society, organic compounds have been released into aquatic environments in various forms, posing a significant threat to the survival of aquatic organisms. The assessment of developmental toxicity is an important part of environmental safety risk systems, helping to identify the potential impacts of organic compounds on the embryonic development of aquatic organisms and enabling early detection and warning of potential ecological risks. Binary classification models, however, cannot resolve degrees of developmental toxicity among organic compounds, so it is crucial to construct a multiclassification model for predicting their developmental toxicity. In this study, binary and multiclassification models were developed based on the ToxCast™ Phase I chemical library and literature data. The random forest, support vector machine, extreme gradient boosting, adaptive gradient boosting, and C5.0 decision tree algorithms, together with 8 types of molecular fingerprints, were used to establish a multiclassification base model for predicting developmental toxicity through 5-fold cross-validation and external validation. Ultimately, a multiclassification ensemble model was derived through a voting method. The performance of the binary ensemble model, as measured by balanced accuracy, was 0.918, while that of the multiclassification model was 0.819. The developmental toxicity voting ensemble model (DT-VEM) achieved accuracies of 0.804, 0.834, and 0.855. Furthermore, by utilizing the XGBoost machine learning algorithm to construct separate models for molecular descriptors and substructure molecular fingerprints, we identified several substructures and physical properties related to developmental toxicity. Our research contributes to a more detailed classification of developmental toxicity, providing a new and valuable tool for predicting the developmental toxicity of unknown compounds and addressing the limitations of previous tools by better predicting potential developmental toxicity in novel compounds.
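
A minimal sketch of the voting step follows, with a random bit matrix standing in for the eight fingerprint types and gradient boosting standing in for XGBoost/AdaBoost/C5.0; it shows the soft-voting mechanics, not the published DT-VEM.

```python
# Hedged sketch of a soft-voting multiclass ensemble over fingerprint features.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 1024))   # mock 1024-bit molecular fingerprints
y = rng.integers(0, 3, size=300)           # three developmental-toxicity classes

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # stand-in for XGBoost
    ],
    voting="soft",                         # average predicted class probabilities
)
scores = cross_val_score(ensemble, X, y, cv=5, scoring="balanced_accuracy")
print(scores.mean())
```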


Assuntos
Poluentes Químicos da Água , Peixe-Zebra , Animais , Poluentes Químicos da Água/toxicidade , Embrião não Mamífero/efeitos dos fármacos , Testes de Toxicidade , Desenvolvimento Embrionário/efeitos dos fármacos , Modelos Biológicos , Algoritmos , Máquina de Vetores de Suporte , Compostos Orgânicos/toxicidade
6.
Nat Commun; 15(1): 3909, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724493

ABSTRACT

Aberrant signaling pathway activity is a hallmark of tumorigenesis and progression, and it has guided targeted inhibitor design for over 30 years. Yet adaptive resistance mechanisms, induced by rapid, context-specific signaling network rewiring, continue to challenge therapeutic efficacy. Leveraging progress in proteomic technologies and network-based methodologies, we introduce Virtual Enrichment-based Signaling Protein-activity Analysis (VESPA), an algorithm designed to elucidate mechanisms of cell response and adaptation to drug perturbations, and use it to analyze 7-point phosphoproteomic time series from colorectal cancer cells treated with clinically relevant inhibitors and control media. Interrogating tumor-specific enzyme/substrate interactions accurately infers kinase and phosphatase activity from the phosphorylation state of their substrates, effectively accounting for signal crosstalk and sparse phosphoproteome coverage. The analysis elucidates the time-dependent signaling pathway response to each drug perturbation and, more importantly, cell adaptive response and rewiring, experimentally confirmed by CRISPR knock-out assays, suggesting broad applicability to cancer and other diseases.
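
The core idea of reading enzyme activity from substrate state can be shown in toy form. The sketch below scores a kinase's activity per time point as the mean z-scored phosphorylation change of its annotated substrates; the site indices are hypothetical, and VESPA's actual enrichment-based inference is considerably more involved.

```python
# Toy sketch (not VESPA): infer kinase activity from substrate phosphorylation.
import numpy as np

# rows: phosphosites, columns: 7 time points (log fold-change vs. control)
phospho = np.random.default_rng(2).normal(size=(500, 7))
substrates_of = {"KIN1": [3, 41, 77, 120], "KIN2": [8, 9, 250]}  # hypothetical indices

def activity(kinase):
    sites = phospho[substrates_of[kinase]]
    z = (sites - sites.mean(axis=1, keepdims=True)) / sites.std(axis=1, keepdims=True)
    return z.mean(axis=0)          # one activity estimate per time point

for k in substrates_of:
    print(k, np.round(activity(k), 2))
```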


Assuntos
Neoplasias do Colo , Resistencia a Medicamentos Antineoplásicos , Fosfoproteínas , Proteômica , Transdução de Sinais , Humanos , Resistencia a Medicamentos Antineoplásicos/genética , Resistencia a Medicamentos Antineoplásicos/efeitos dos fármacos , Proteômica/métodos , Fosfoproteínas/metabolismo , Transdução de Sinais/efeitos dos fármacos , Neoplasias do Colo/tratamento farmacológico , Neoplasias do Colo/metabolismo , Neoplasias do Colo/genética , Linhagem Celular Tumoral , Fosforilação , Algoritmos , Proteoma/metabolismo , Antineoplásicos/farmacologia , Antineoplásicos/uso terapêutico
7.
Brief Bioinform; 25(3), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38725155

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) experiments have become instrumental in developmental and differentiation studies, enabling the profiling of cells at a single or multiple time-points to uncover subtle variations in expression profiles reflecting underlying biological processes. Benchmarking studies have compared many of the computational methods used to reconstruct cellular dynamics; however, researchers still encounter challenges in their analysis due to uncertainty with respect to selecting the most appropriate methods and parameters. Even among universal data processing steps used by trajectory inference methods such as feature selection and dimension reduction, trajectory methods' performances are highly dataset-specific. To address these challenges, we developed Escort, a novel framework for evaluating a dataset's suitability for trajectory inference and quantifying trajectory properties influenced by analysis decisions. Escort evaluates the suitability of trajectory analysis and the combined effects of processing choices using trajectory-specific metrics. Escort navigates single-cell trajectory analysis through these data-driven assessments, reducing uncertainty and much of the decision burden inherent to trajectory inference analyses. Escort is implemented in an accessible R package and R/Shiny application, providing researchers with the necessary tools to make informed decisions during trajectory analysis and enabling new insights into dynamic biological processes at single-cell resolution.


Subjects
RNA-Seq, Single-Cell Analysis, Single-Cell Analysis/methods, RNA-Seq/methods, Humans, Computational Biology/methods, Sequence Analysis, RNA/methods, Software, Algorithms, Gene Expression Profiling/methods, Single-Cell Gene Expression Analysis
8.
Article in English | MEDLINE | ID: mdl-38743552

ABSTRACT

Physical therapists play a crucial role in guiding patients through effective and safe rehabilitation processes according to medical guidelines. However, due to the therapist-patient imbalance, it is neither economical nor feasible for therapists to provide guidance to every patient during recovery sessions. Automated assessment of physical rehabilitation can help with this problem, but accurately quantifying patients' training movements and providing meaningful feedback poses a challenge. In this paper, an Expert-knowledge-based Graph Convolutional approach is proposed to automate the assessment of the quality of physical rehabilitation exercises. This approach utilizes experts' knowledge to improve the spatial feature extraction ability of the Graph Convolutional module and a Gated pooling module for feature aggregation. Additionally, a Transformer module is employed to capture long-range temporal dependencies in the movements. The attention scores and weight matrix obtained through this approach can serve as interpretability tools to help therapists understand the assessment model and assist patients in improving their exercises. The effectiveness of the proposed method is verified on the KIMORE dataset, achieving state-of-the-art performance compared to existing models. Experimental results also illustrate the interpretability of the method in both spatial and temporal dimensions.


Subjects
Algorithms, Exercise Therapy, Neural Networks, Computer, Humans, Exercise Therapy/methods, Male, Rehabilitation/methods, Knowledge Bases, Movement/physiology, Intelligent Systems, Female, Adult
9.
Sci Rep; 14(1): 11233, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755269

ABSTRACT

Automated disease diagnosis and prediction, powered by AI, play a crucial role in enabling medical professionals to deliver effective care to patients. While such predictive tools have been extensively explored in resource-rich languages like English, this manuscript focuses on predicting disease categories automatically from symptoms documented in the Afaan Oromo language, employing various classification algorithms. The study covers machine learning techniques such as support vector machines, random forests, logistic regression, and Naïve Bayes, as well as deep learning approaches including LSTM, GRU, and Bi-LSTM. Due to the unavailability of a standard corpus, we prepared three datasets with different numbers of patient symptoms, arranged into 10 categories. Two feature representations, TF-IDF and word embeddings, were employed. The performance of the proposed methodology was evaluated using accuracy, recall, precision, and F1 score. The experimental results show that, among the machine learning models, the SVM with TF-IDF had the highest accuracy and F1 score of 94.7%, while among the deep learning models the LSTM with word2vec embeddings achieved an accuracy of 95.7% and an F1 score of 96.0%. Several hyper-parameter tuning settings were used to bring out each model's optimal performance. Overall, the LSTM model proved to be the best of all the models over the entire dataset.
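
For the best-performing classical configuration (TF-IDF features with an SVM), a minimal scikit-learn pipeline looks like the following; the symptom strings and labels are placeholders rather than the Afaan Oromo corpus.

```python
# Minimal sketch of the TF-IDF + SVM pipeline; data are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

symptoms = ["placeholder symptom text one", "placeholder symptom text two"]
labels = ["category_a", "category_b"]      # the study used 10 disease categories

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(symptoms, labels)
print(model.predict(["placeholder symptom text one"]))
```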


Assuntos
Aprendizado Profundo , Humanos , Etiópia , Máquina de Vetores de Suporte , Idioma , Algoritmos , Aprendizado de Máquina , Teorema de Bayes , Inteligência Artificial
10.
PLoS One; 19(5): e0302741, 2024.
Article in English | MEDLINE | ID: mdl-38758774

ABSTRACT

In the context of integrating the sports and medicine domains, effective data clustering algorithms are urgently needed for elderly health supervision. This paper introduces a novel higher-order hybrid clustering algorithm that combines density values with the particle swarm optimization (PSO) algorithm. First, the traditional PSO algorithm is enhanced by integrating a Global Evolution Dynamic Model (GEDM) into an Estimation of Distribution Algorithm (EDA), constructing a weighted covariance matrix-based GEDM. The adapted PSO algorithm dynamically selects between the GEDM and the standard PSO update to refresh population information, significantly enhancing convergence speed while mitigating the risk of entrapment in local optima. Subsequently, the higher-order hybrid clustering algorithm is formulated from the density values and the refined PSO algorithm: PSO clustering produces class clusters after a finite number of iterations, the density-peak search algorithm then identifies candidate centroids, and the final centroids are determined by fusing the initial class clusters with the candidate centroids. The results show marked improvements: 99.13%, 82.22%, and 99.22% for F-measure, recall, and precision on dataset S1, and 75.22%, 64.0%, and 64.4% on dataset CMC. Notably, the proposed algorithm yields 75.22%, 64.4%, and 64.6% on dataset S, significantly surpassing the comparative schemes. Moreover, employing the text vector representation of the LDA topic model underscores the efficacy of the higher-order hybrid clustering algorithm in efficiently clustering text information. This approach facilitates swift and accurate clustering of elderly health data from the perspective of sports-medicine integration, enabling the identification of patterns and regularities within the data, the formulation of personalized health management strategies, and the detection of latent health concerns among the elderly population.


Assuntos
Algoritmos , Humanos , Análise por Conglomerados , Idoso , Gestão da Informação em Saúde/métodos , Medicina Esportiva/métodos , Esportes
11.
PLoS One; 19(5): e0303076, 2024.
Article in English | MEDLINE | ID: mdl-38758825

ABSTRACT

STUDY OBJECTIVE: This study aimed to prospectively validate the performance of an artificially augmented home sleep apnea testing device (WVU-device) and its patented technology. METHODOLOGY: The WVU-device, utilizing patent-pending (US 20210001122A) technology and an algorithm derived from cardio-pulmonary physiological parameters, comorbidities, and anthropological information, was prospectively compared with a commercially available, Center for Medicare and Medicaid Services (CMS)-approved home sleep apnea testing (HSAT) device. The WVU-device and the HSAT device were worn on separate hands of the patient during a single-night study. The oxygen desaturation index (ODI) obtained from the WVU-device was compared to the respiratory event index (REI) derived from the HSAT device. RESULTS: A total of 78 consecutive patients were included in the prospective study. Of the 78 patients, 38 (48%) were women and 9 (12%) had a Fitzpatrick score of 3 or higher. The ODI obtained from the WVU-device correlated well with the REI from the HSAT device, and no significant bias was observed in the Bland-Altman plot. The accuracy for ODI ≥ 5 vs REI ≥ 5 was 87%; for ODI ≥ 15 vs REI ≥ 15 it was 89%; and for ODI ≥ 30 vs REI ≥ 30 it was 95%. The sensitivity and specificity for these ODI/REI cut-offs were 0.92 and 0.78, 0.91 and 0.86, and 0.94 and 0.95, respectively. CONCLUSION: The WVU-device demonstrated good accuracy in predicting REI when compared to an approved HSAT device, even in patients with darker skin tones.
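
The agreement statistics reported above follow standard Bland-Altman arithmetic; a sketch with placeholder arrays in place of the 78 paired per-patient values:

```python
# Bland-Altman bias/limits of agreement plus accuracy at a clinical cutoff.
import numpy as np

odi = np.array([4.0, 12.5, 33.1, 7.2])   # WVU-device, events/hour (placeholder)
rei = np.array([5.1, 11.0, 30.8, 6.5])   # HSAT reference (placeholder)

diff = odi - rei
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement
print(f"bias {bias:+.2f}, limits of agreement [{bias - loa:.2f}, {bias + loa:.2f}]")

cutoff = 15                              # e.g. moderate sleep apnea threshold
accuracy = np.mean((odi >= cutoff) == (rei >= cutoff))
print(f"accuracy at >= {cutoff}: {accuracy:.0%}")
```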


Assuntos
Inteligência Artificial , Síndromes da Apneia do Sono , Humanos , Feminino , Masculino , Pessoa de Meia-Idade , Estudos Prospectivos , Síndromes da Apneia do Sono/diagnóstico , Síndromes da Apneia do Sono/fisiopatologia , Idoso , Polissonografia/instrumentação , Polissonografia/métodos , Algoritmos , Adulto
12.
PLoS One; 19(5): e0300961, 2024.
Article in English | MEDLINE | ID: mdl-38758938

ABSTRACT

The stable and site-specific operation of transmission lines is a crucial safeguard for grid functionality. This study introduces a comprehensive optimization design method for transmission line crossing frame structures based on the Biogeography-Based Optimization (BBO) algorithm, which integrates size, shape, and topology optimization. By utilizing the BBO algorithm to optimize the truss structure's design variables, the method ensures the structure's economic and practical viability while enhancing its performance. The optimization process is validated through finite element analysis, confirming the optimized structure's compliance with strength, stiffness, and stability requirements. The results demonstrate that the integrated design of size, shape, and topology optimization, as opposed to individual optimizations of size or shape and topology, yields the lightest structure mass and a maximum stress of 151.4 MPa under construction conditions. These findings also satisfy the criteria for strength, stiffness, and stability, verifying the method's feasibility, effectiveness, and practicality. This approach surpasses traditional optimization methods, offering a more effective solution for complex structural optimization challenges, thereby enhancing the sustainable utilization of structures.


Subjects
Algorithms, Finite Element Analysis
13.
PLoS One; 19(5): e0298572, 2024.
Article in English | MEDLINE | ID: mdl-38758947

ABSTRACT

Aiming at the problems of increased distribution-network load and low owner satisfaction caused by the disorderly charging of electric vehicles, an optimal scheduling model for electric vehicles that considers the comprehensive satisfaction of vehicle owners is proposed. In this model, the dynamic electricity price and the charging and discharging states of electric vehicles are the decision variables, while the income of electric vehicle charging stations, the comprehensive satisfaction of vehicle owners (accounting for economic benefits), and the load fluctuation caused by electric vehicles are the optimization objectives. An improved NSGA-III algorithm (DJM-NSGA-III), based on a dynamic opposition-based learning strategy, the Jaya algorithm, and Manhattan distance, is used to address the problems of low initial population quality, a tendency to fall into local optima, and the neglect of potentially optimal solutions that arise when the standard NSGA-III algorithm is applied to this multi-objective, high-dimensional scheduling model. The experimental results show that the proposed method can improve owner satisfaction while increasing the charging station's income, effectively alleviating the conflict of interest between the two and maintaining the safe and stable operation of the distribution network.
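
As a baseline for this kind of formulation, the sketch below runs the standard NSGA-III from the pymoo library on a toy three-objective charging problem (cost, unmet-demand dissatisfaction, and load variance, all assumed stand-ins for the paper's objectives); DJM-NSGA-III layers dynamic opposition-based learning, Jaya moves, and Manhattan-distance handling on top of such a baseline.

```python
# Illustrative NSGA-III baseline with pymoo; objectives and bounds are assumptions.
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions

PRICE = np.random.default_rng(3).uniform(0.1, 0.5, 24)   # hourly tariff (placeholder)

class EvCharging(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=24, n_obj=3, xl=0.0, xu=7.0)   # kW drawn per hour
    def _evaluate(self, x, out, **kwargs):
        cost = float(PRICE @ x)                 # owner's charging cost
        dissatisfaction = abs(40.0 - x.sum())   # distance from a 40 kWh demand
        load_var = float(np.var(x))             # grid load fluctuation
        out["F"] = [cost, dissatisfaction, load_var]

ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
res = minimize(EvCharging(), NSGA3(ref_dirs=ref_dirs), ("n_gen", 60),
               seed=1, verbose=False)
print(res.F.shape)   # Pareto front of trade-off schedules
```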


Assuntos
Algoritmos , Eletricidade , Automóveis , Humanos , Modelos Teóricos
14.
J Environ Manage; 359: 120968, 2024 May.
Article in English | MEDLINE | ID: mdl-38703643

ABSTRACT

Planning under complex uncertainty often asks for plans that can adapt to changing future conditions. To inform plan development during this process, exploration methods have been used to explore the performance of candidate policies given uncertainties. Nevertheless, these methods hardly enable adaptation by themselves, so extra efforts are required to develop the final adaptive plans, hence compromising the overall decision-making efficiency. This paper introduces Reinforcement Learning (RL) that employs closed-loop control as a new exploration method that enables automated adaptive policy-making for planning under uncertainty. To investigate its performance, we compare RL with a widely-used exploration method, Multi-Objective Evolutionary Algorithm (MOEA), in two hypothetical problems via computational experiments. Our results indicate the complementarity of the two methods. RL makes better use of its exploration history, hence always providing higher efficiency and providing better policy robustness in the presence of parameter uncertainty. MOEA quantifies objective uncertainty in a more intuitive way, hence providing better robustness to objective uncertainty. These findings will help researchers choose appropriate methods in different applications.


Subjects
Algorithms, Decision Making, Uncertainty, Reinforcement, Psychology
15.
Chemosphere; 358: 142232, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38714244

ABSTRACT

The Virtual Extensive Read-Across software (VERA) is a new tool for read-across that uses a global similarity score, molecular groups, and structural alerts to find clusters of similar substances; these clusters are then used to identify suitable similar substances and make an assessment for the target substance. A beta version of the VERA GUI is freely available at vegahub.eu; the source code of the VERA algorithm is available on GitHub. We previously described its use to assess carcinogenicity, a classification endpoint. The aim here is to extend the automated read-across approach to continuous endpoints as well; we addressed acute fish toxicity. VERA was evaluated on the acute fish toxicity endpoint using a dataset of general substances (pesticides, industrial products, biocides, etc.), obtaining an overall R2 of 0.68. We also applied the VERA algorithm to active pharmaceutical ingredients (APIs), including a portion of the APIs in the training dataset, and successfully achieved an overall R2 of 0.63. VERA also evaluates the assessment's reliability; for high-reliability predictions we reached an R2 of 0.78 and a root mean square error (RMSE) of 0.44.
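
The read-across principle VERA automates can be illustrated with a similarity-weighted neighbor average; the sketch below uses Tanimoto similarity on Morgan fingerprints via RDKit, with placeholder SMILES and toxicity values, and is not VERA's global similarity score.

```python
# Toy read-across sketch (not VERA): predict a continuous endpoint for a target
# as the similarity-weighted mean of its most similar training substances.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

train = {"CCO": 2.1, "CCCO": 2.4, "c1ccccc1": 3.8}   # SMILES -> log10 LC50 (placeholder)

def fp(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def read_across(target_smiles, k=2):
    target = fp(target_smiles)
    sims = sorted(((DataStructs.TanimotoSimilarity(target, fp(s)), v)
                   for s, v in train.items()), reverse=True)[:k]
    return sum(s * v for s, v in sims) / sum(s for s, _ in sims)

print(read_across("CCCCO"))
```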


Assuntos
Algoritmos , Peixes , Software , Animais , Testes de Toxicidade Aguda/métodos , Poluentes Químicos da Água/toxicidade , Preparações Farmacêuticas/química , Reprodutibilidade dos Testes
16.
J Environ Manage; 359: 121040, 2024 May.
Article in English | MEDLINE | ID: mdl-38718609

ABSTRACT

This study comprehensively analyzes the impact on environmental performance of different economic and demographic factors that affect economic development. In this context, the study considers the Environmental Performance Index as the response variable; uses GDP per capita, tariff rate, tax burden, government expenditure, inflation, unemployment, population, income tax rate, public debt, FDI inflow, and corporate tax rate as the explanatory variables; examines 181 countries; applies a novel Super Learner (SL) algorithm that combines a total of six machine learning (ML) algorithms; and uses data for the years 2018, 2020, and 2022. The results demonstrate that (i) the SL algorithm has superior predictive capacity relative to the individual ML algorithms; (ii) gross domestic product per capita is the most important factor for environmental performance, followed by tariff rate, tax burden, government expenditure, and inflation, in that order; (iii) the corporate tax rate has the lowest importance for environmental performance, followed by foreign direct investment, public debt, income tax rate, population, and unemployment; and (iv) there are critical thresholds at which the impact of these factors on environmental performance changes. Overall, the study reveals the nonlinear impact of the variables on environmental performance as well as their relative importance and critical thresholds. The study thus provides policymakers with valuable insights for re-formulating their environmental policies to improve environmental performance, and various policy options are discussed.
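
A super learner is essentially stacked generalization: out-of-fold predictions from a library of base learners are combined by a meta-learner. The sketch below uses scikit-learn's StackingRegressor with stand-in base learners (the abstract does not itemize the six algorithms used) and mock data shaped like the study's 181 countries and 11 predictors.

```python
# Hedged super-learner sketch via stacking; base learners are stand-ins.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import ElasticNet, RidgeCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=181, n_features=11, noise=5.0, random_state=0)

super_learner = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("svr", SVR()),
        ("knn", KNeighborsRegressor()),
        ("enet", ElasticNet(random_state=0)),
    ],
    final_estimator=RidgeCV(),   # meta-learner combines out-of-fold predictions
    cv=5,
)
super_learner.fit(X, y)
print(super_learner.score(X, y))
```

Relative variable importance comparable to the reported rankings could then be estimated with permutation importance on held-out data.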


Assuntos
Algoritmos , Aprendizado de Máquina , Meio Ambiente , Desenvolvimento Econômico , Produto Interno Bruto
17.
Biomed Phys Eng Express; 10(4), 2024 May 21.
Article in English | MEDLINE | ID: mdl-38718764

ABSTRACT

Evaluation of skin recovery is an important step in the treatment of burns. However, conventional methods only observe the surface of the skin and cannot quantify the injury volume. Optical coherence tomography (OCT) is a non-invasive, non-contact, real-time technique. Swept-source OCT uses near-infrared light and analyzes the intensity of the light echo at different depths to generate images from optical interference signals. To quantify the dynamic recovery of skin burns over time, laser-induced skin burns in mice were evaluated using deep learning on swept-source OCT images. A laser-induced mouse skin thermal injury model was established in thirty Kunming mice, and OCT images of normal and burned areas of mouse skin were acquired at days 0, 1, 3, 7, and 14 after laser irradiation. This resulted in 7000 normal and 1400 burn B-scan images, which were divided into training, validation, and test sets in an 8:1.5:0.5 ratio for the normal data and 8:1:1 for the burn data. Normal images were manually annotated, and the deep learning U-Net model (verified against PSPNet and HRNet models) was used to segment the skin into three layers: the dermal-epidermal layer, the subcutaneous fat layer, and the muscle layer. For the burn images, the models were trained to segment just the damaged area. Three-dimensional reconstruction was then used to reconstruct the damaged tissue and calculate its volume. The average IoU and F-score of the normal-tissue U-Net segmentation model were 0.876 and 0.934, respectively. The IoU of the burn-area segmentation model reached 0.907, and its F-score reached 0.951. Compared with manual labeling, the U-Net model was faster with higher accuracy for skin stratification. OCT and U-Net segmentation can provide rapid and accurate analysis of tissue changes and clinical guidance in the treatment of burns.
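
The final quantification step reduces to counting segmented voxels and scaling by the scanner's voxel size. A sketch, with assumed voxel spacing in place of the OCT system's calibration:

```python
# Sketch: stack per-B-scan binary masks and convert voxel counts to a volume.
import numpy as np

masks = np.random.default_rng(4).random((200, 512, 512)) > 0.95   # 200 B-scans (mock)
voxel_mm = (0.02, 0.01, 0.01)      # (between-scan, axial, lateral) spacing, assumed

voxel_volume = np.prod(voxel_mm)   # mm^3 per voxel
burn_volume = masks.sum() * voxel_volume
print(f"estimated burn volume: {burn_volume:.2f} mm^3")
```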


Assuntos
Queimaduras , Aprendizado Profundo , Processamento de Imagem Assistida por Computador , Lasers , Pele , Tomografia de Coerência Óptica , Tomografia de Coerência Óptica/métodos , Animais , Queimaduras/diagnóstico por imagem , Camundongos , Pele/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Algoritmos
18.
Physiol Meas; 45(5), 2024 May 21.
Article in English | MEDLINE | ID: mdl-38722552

ABSTRACT

Objective. Perinatal asphyxia poses a significant risk to neonatal health, necessitating accurate fetal heart rate monitoring for effective detection and management. The current gold standard, cardiotocography, has inherent limitations, highlighting the need for alternative approaches. The emerging technology of non-invasive fetal electrocardiography shows promise as a new sensing technology for fetal cardiac activity, offering potential advancements in the detection and management of perinatal asphyxia. Although algorithms for fetal QRS detection have been developed in the past, only a few of them demonstrate accurate performance in the presence of noise and artifacts. Approach. In this work, we propose Power-MF, a new algorithm for fetal QRS detection combining power spectral density and matched filter techniques. We benchmark Power-MF against three open-source algorithms on two recently published datasets (Abdominal and Direct Fetal ECG Database: ADFECG, subsets B1 Pregnancy and B2 Labour; Non-invasive Multimodal Foetal ECG-Doppler Dataset for Antenatal Cardiology Research: NInFEA). Main results. Our results show that Power-MF outperforms state-of-the-art algorithms on ADFECG (B1 Pregnancy: 99.5% ± 0.5% F1-score; B2 Labour: 98.0% ± 3.0% F1-score) and on NInFEA in three of six electrode configurations by being more robust against noise. Significance. Through this work, we contribute to improving the accuracy and reliability of fetal cardiac monitoring, an essential step toward early detection of perinatal asphyxia with the long-term goal of reducing costs and making prenatal care more accessible.
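
The matched-filter half of the approach can be sketched as template correlation followed by peak picking with a refractory period; the template, threshold, and refractory value below are assumptions, and the power-spectral-density half of Power-MF is omitted.

```python
# Conceptual matched-filter sketch (implementation details are assumptions).
import numpy as np
from scipy.signal import find_peaks

fs = 1000                                        # sampling rate, Hz
rng = np.random.default_rng(5)
signal = rng.normal(scale=0.1, size=10 * fs)     # placeholder abdominal ECG
template = np.hanning(80)                        # placeholder fetal QRS template

t = (template - template.mean()) / np.linalg.norm(template)
response = np.correlate(signal, t, mode="same")  # matched-filter output

min_rr = int(0.25 * fs)                          # >= 250 ms between fetal beats
peaks, _ = find_peaks(response, distance=min_rr,
                      height=3 * np.median(np.abs(response)))
print(f"{len(peaks)} candidate fetal QRS complexes")
```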


Assuntos
Algoritmos , Eletrocardiografia , Processamento de Sinais Assistido por Computador , Humanos , Eletrocardiografia/métodos , Feminino , Gravidez , Monitorização Fetal/métodos , Feto/fisiologia
19.
Article in English | MEDLINE | ID: mdl-38722723

ABSTRACT

Quantifying muscle strength is an important measure in clinical settings; however, there is a lack of practical tools that can be deployed for routine assessment. The purpose of this study is to propose a deep learning model for ankle plantar flexion torque prediction from time-series mechanomyogram (MMG) signals recorded during isometric contractions (i.e., a form similar to the manual muscle testing procedure used in clinical practice) and to evaluate its performance. Four deep learning models, all based on stacked bidirectional long short-term memory (LSTM) and dense layers, were designed with different combinations of the number of units (from 32 to 512) and dropout ratio (from 0.0 to 0.8), and their prediction performance was evaluated by leave-one-subject-out cross-validation on the 10-subject dataset. The models explained more variance in the untrained test dataset as the error metrics (e.g., root-mean-square error) decreased and as the slope of the relationship between the measured and predicted joint torques approached 1.0. Although the slope estimates appear to be sensitive to the individual dataset, more than 70% of the variance in nine out of 10 datasets was explained by the optimal model. These results demonstrate the feasibility of the proposed model as a potential tool to quantify average joint torque during a sustained isometric contraction.
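
A representative member of the model family described above (stacked bidirectional LSTM plus dense layers) can be written in a few lines of PyTorch; the units, dropout, window length, and channel count below are taken from the abstract's search ranges or assumed, not the selected optimum.

```python
# Hedged sketch of a stacked BiLSTM regressor for MMG-to-torque prediction.
import torch
from torch import nn

class TorqueRegressor(nn.Module):
    def __init__(self, n_channels=4, units=128, dropout=0.4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, units, num_layers=2, batch_first=True,
                            bidirectional=True, dropout=dropout)
        self.head = nn.Sequential(nn.Linear(2 * units, 64), nn.ReLU(),
                                  nn.Dropout(dropout), nn.Linear(64, 1))

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # regress torque from the last time step

model = TorqueRegressor()
mmg = torch.randn(8, 500, 4)              # 8 windows of 500 samples, 4 MMG channels
print(model(mmg).shape)                    # -> torch.Size([8, 1])
```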


Assuntos
Articulação do Tornozelo , Contração Isométrica , Torque , Humanos , Contração Isométrica/fisiologia , Masculino , Adulto , Articulação do Tornozelo/fisiologia , Adulto Jovem , Estudo de Prova de Conceito , Aprendizado Profundo , Algoritmos , Miografia/métodos , Força Muscular/fisiologia , Feminino , Músculo Esquelético/fisiologia , Redes Neurais de Computação , Reprodutibilidade dos Testes , Fenômenos Biomecânicos
20.
Article in English | MEDLINE | ID: mdl-38722724

ABSTRACT

The olfactory system enables humans to smell different odors, which are closely related to emotions. The high temporal resolution and non-invasiveness of electroencephalography (EEG) make it suitable for objectively studying human preferences for odors. Effectively learning the temporal dynamics and spatial information from EEG is crucial for detecting odor-induced emotional valence. In this paper, we propose a deep learning architecture called Temporal Attention with Spatial Autoencoder Network (TASA) for predicting odor-induced emotions using EEG. TASA consists of a filter-bank layer, a spatial encoder, a time-segmentation layer, a Long Short-Term Memory (LSTM) module, a multi-head self-attention (MSA) layer, and a fully connected layer. We improve upon previous work by utilizing a two-phase learning framework, using the autoencoder module to learn the spatial information among electrodes by reconstructing the given input with a latent representation in the spatial dimension, which aims to minimize information loss compared to spatial filtering with a CNN. The second improvement is inspired by the continuous nature of the olfactory process; we propose to use LSTM-MSA in TASA to capture its temporal dynamics by learning the intercorrelation among the time segments of the EEG. TASA is evaluated on an existing olfactory EEG dataset and compared with several existing deep learning architectures to demonstrate its effectiveness in predicting olfactory-triggered emotional responses. Interpretability analyses with DeepLIFT also suggest that TASA learns spatial-spectral features that are relevant to olfactory-induced emotion recognition.


Subjects
Algorithms, Attention, Deep Learning, Electroencephalography, Emotions, Neural Networks, Computer, Odorants, Humans, Electroencephalography/methods, Emotions/physiology, Attention/physiology, Male, Adult, Female, Smell/physiology, Memory, Short-Term/physiology, Young Adult