Results 1 - 20 of 279
1.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38711370

ABSTRACT

Across many scientific disciplines, the development of computational models and algorithms for generating artificial or synthetic data is gaining momentum. In biology, there is a great opportunity to explore this further, as ever more big data are being generated at the multi-omics level. In this opinion piece, we discuss the latest trends in biological applications from both process-driven and data-driven perspectives. Moving ahead, we believe these methodologies can help shape novel multi-omics-scale cellular inferences.


Subject(s)
Algorithms , Computational Biology , Computational Biology/methods , Genomics/methods , Humans , Big Data , Proteomics/methods , Multiomics
2.
Stat Appl Genet Mol Biol ; 23(1)2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38563699

ABSTRACT

Simulation frameworks are useful for stress-testing predictive models when data are scarce, or for assessing model sensitivity to specific data distributions. Such frameworks often need to recapitulate several layers of data complexity, including emergent properties that arise implicitly from the interaction between simulation components. Antibody-antigen binding is a complex mechanism by which an antibody sequence wraps itself around an antigen with high affinity. In this study, we use a synthetic simulation framework for antibody-antigen folding and binding on a 3D lattice that includes full details of the spatial conformation of both molecules. We investigate how emergent properties arise in this framework, in particular the physical proximity of amino acids, their presence on the binding interface, and the binding status of a sequence, and relate these to the individual and pairwise contributions of amino acids in statistical models for binding prediction. We show that weights learnt from a simple logistic regression model align with some, but not all, features of amino acids involved in the binding, and that predictive sequence binding patterns can be enriched. In particular, main effects correlated with the capacity of a sequence to bind any antigen, whereas statistical interactions were related to sequence specificity.
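As a minimal illustration of the statistical modelling described above (not the paper's 3D-lattice framework, which is not reproduced here), the following sketch fits a logistic regression with one-hot main-effect features plus pairwise position-interaction features on toy integer-encoded sequences; the toy binding rule and all parameter values are assumptions for demonstration only.

```python
# A toy sketch (not the paper's lattice framework): logistic regression with
# one-hot main effects and pairwise position-interaction features.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

AA = list("ACDEFGHIKLMNPQRSTVWY")             # the 20 amino acids
rng = np.random.default_rng(0)
n, L = 500, 6                                  # assumed toy dataset size / length
seqs = rng.integers(0, 20, size=(n, L))        # integer-encoded sequences
# Assumed toy "binder" rule: at least two aromatic residues (W or Y).
y = (np.isin(seqs, [AA.index("W"), AA.index("Y")]).sum(axis=1) >= 2).astype(int)

def one_hot(s):
    X = np.zeros((len(s), L * 20))
    X[np.arange(len(s))[:, None], np.arange(L) * 20 + s] = 1.0
    return X

X_main = one_hot(seqs)                         # main-effect features
blocks = [X_main[:, i * 20:(i + 1) * 20] for i in range(L)]
# Interaction features: products of one-hot blocks for each position pair.
X_int = np.hstack([(blocks[i][:, :, None] * blocks[j][:, None, :]).reshape(n, -1)
                   for i, j in combinations(range(L), 2)])

model = LogisticRegression(max_iter=5000).fit(np.hstack([X_main, X_int]), y)
w_main = model.coef_[0, :X_main.shape[1]]      # "capacity to bind" weights
w_int = model.coef_[0, X_main.shape[1]:]       # "specificity" weights
print("top main-effect residue:", AA[int(np.argmax(np.abs(w_main))) % 20])
```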


Subject(s)
Antibodies , Antifibrinolytic Agents , Feasibility Studies , Vaccines, Synthetic , Amino Acids
3.
BMC Bioinformatics ; 25(1): 175, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702609

ABSTRACT

BACKGROUND: Modelling discrete-time cause-specific hazards in the presence of competing events and non-proportional hazards is a challenging task in many domains. Survival analysis in longitudinal cohorts often requires such models, notably when the data are gathered at discrete points in time and the predicted events display complex dynamics. Current models often rely on strong proportional-hazards assumptions that are rarely verified in practice, or do not handle sequential data in a meaningful way. This study proposes a Transformer architecture for the prediction of cause-specific hazards in discrete-time competing risks. Unlike the multilayer perceptrons already used for this task (DeepHit), the Transformer architecture is especially suited to handling complex relationships in sequential data, having displayed state-of-the-art performance in numerous tasks with few underlying assumptions about the task at hand. RESULTS: Using synthetic datasets of 2000-50,000 patients, we showed that our Transformer model surpassed the CoxPH, PyDTS, and DeepHit models for the prediction of cause-specific hazards, especially when the proportional-hazards assumption did not hold. The error along simulated time highlighted the ability of our model to anticipate the evolution of cause-specific hazards at later time steps, where few events are observed. It was also superior to current models for the prediction of dementia and other psychiatric conditions in the English Longitudinal Study of Ageing cohort, using the integrated Brier score and the time-dependent concordance index. We also demonstrated the explainability of our model's predictions using the integrated gradients method. CONCLUSIONS: Our model provided state-of-the-art prediction of cause-specific hazards without adopting prior parametric assumptions on the hazard rates. It outperformed other models in non-proportional hazards settings for both the synthetic dataset and the longitudinal cohort study. We also observed that basic models such as CoxPH were better suited to extremely simple settings than deep learning models. Our model is therefore especially suited to survival analysis on longitudinal cohorts with complex dynamics of the covariate-to-outcome relationship, which are common in clinical practice. The integrated gradients provided importance scores for the input variables, indicating which variables guided the model in its predictions. This model is ready to be used for time-to-event prediction in longitudinal cohorts.
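For readers unfamiliar with discrete-time competing risks, the sketch below shows one way to write the cause-specific hazard likelihood that such a model optimizes; the multinomial-logit link, tensor shapes, and random logits standing in for a Transformer backbone are assumptions, not the paper's implementation.

```python
# Sketch of the discrete-time cause-specific hazard likelihood that such a
# model minimizes; random logits stand in for a Transformer backbone.
import torch

def competing_risk_nll(logits, event_time, event_cause):
    """logits: (batch, T, K); event_cause: 0 = censored, 1..K = cause;
    event_time: discrete step index in [0, T)."""
    # Multinomial-logit link with a reference "no event" category, so the K
    # cause-specific hazards at each step sum to less than 1.
    ref = torch.zeros(*logits.shape[:2], 1)
    hazards = torch.softmax(torch.cat([ref, logits], dim=-1), dim=-1)[..., 1:]
    surv = 1.0 - hazards.sum(-1)               # P(no event at step t | at risk)
    nll = torch.zeros(())
    for i in range(logits.shape[0]):
        t, k = int(event_time[i]), int(event_cause[i])
        nll = nll - torch.log(surv[i, :t] + 1e-8).sum()   # event-free steps
        if k > 0:
            nll = nll - torch.log(hazards[i, t, k - 1] + 1e-8)  # event of cause k
        else:
            nll = nll - torch.log(surv[i, t] + 1e-8)            # censored at t
    return nll / logits.shape[0]

logits = torch.randn(4, 10, 2, requires_grad=True)  # 4 patients, 10 steps, 2 causes
loss = competing_risk_nll(logits, torch.tensor([3, 7, 2, 9]), torch.tensor([1, 0, 2, 1]))
loss.backward()                                     # gradients flow to the backbone
```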


Subject(s)
Proportional Hazards Models , Humans , Survival Analysis
4.
Pflugers Arch ; 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39377960

ABSTRACT

Recently, deep generative modelling has become an increasingly powerful tool, with seminal work across a myriad of disciplines. This modelling approach has the potential not only to solve current problems in the medical field but also to enable personalised precision medicine and to revolutionise healthcare through applications such as digital twins of patients. Here, the core concepts of generative modelling and popular modelling approaches are first introduced, focusing on their methodological basis for generating synthetic data and learning representations of observed data. This potential is then reviewed through current applications in neuroimaging for data synthesis and disease decomposition in Alzheimer's disease and multiple sclerosis. Finally, challenges for further research and applications are discussed, including computational and data requirements, model evaluation, and potential privacy risks.

5.
Lab Invest ; 104(8): 102095, 2024 08.
Article in English | MEDLINE | ID: mdl-38925488

ABSTRACT

In our rapidly expanding landscape of artificial intelligence, synthetic data have become a topic of great promise and also some concern. This review aims to provide pathologists and laboratory professionals with a primer on the general concept of synthetic data and how it may soon shape the landscape within our field. Using synthetic data presents many advantages but also introduces a host of new obstacles and limitations. By leveraging synthetic data, we can help accelerate the development of machine learning models and support medical education, research, and quality studies. This review explores methods for generating synthetic data, including rule-based, machine learning model-based, and hybrid approaches, as they apply within pathology and laboratory medicine. We also discuss the limitations and challenges associated with such data, including data quality, malicious use, and ethical concerns around bias. By understanding both the potential benefits (e.g., medical education, training artificial intelligence programs, and proficiency testing) and the limitations of this new data realm, we can harness its power to improve patient outcomes, advance research, and enhance the practice of pathology, while remaining aware of its intrinsic limitations.


Subject(s)
Machine Learning , Humans , Pathology , Artificial Intelligence
6.
Magn Reson Med ; 92(3): 1205-1218, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38623911

ABSTRACT

PURPOSE: To propose the simulation-based physics-informed neural network for deconvolution of dynamic susceptibility contrast (DSC) MRI (SPINNED) as a more robust and accurate alternative to existing deconvolution methods. METHODS: The SPINNED method was developed by generating synthetic tissue residue functions and arterial input functions through mathematical simulations and using them to create synthetic DSC MRI time series. The SPINNED model was trained on these simulated data to learn the underlying physical relation (deconvolution) between the DSC MRI time series and the arterial input functions. The accuracy and robustness of the proposed SPINNED method were assessed by comparing it with two common deconvolution methods in DSC MRI data analysis, circulant singular value decomposition and Volterra singular value decomposition, using both simulated data and real patient data. RESULTS: The proposed SPINNED method was more accurate than the conventional methods across all SNR levels and showed better robustness against noise in both simulated and real patient data. The SPINNED method also showed much faster processing speed than the conventional methods. CONCLUSION: These results support the proposed SPINNED method as a good alternative to existing methods for solving the deconvolution problem in DSC MRI. The proposed method does not require any separate ground-truth measurement for training and offers the additional benefits of quick processing time and coverage of diverse clinical scenarios. Consequently, it should contribute to more reliable, accurate, and rapid diagnoses in clinical applications compared with previous methods, including those based on supervised learning.
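The data-generation step described in METHODS can be illustrated with the standard DSC forward model: a tissue concentration curve is the convolution of an arterial input function with a flow-scaled residue function. The gamma-variate AIF, mono-exponential residue function, and all parameter values below are illustrative assumptions, not the paper's simulation settings.

```python
# Sketch of the DSC forward model used to build synthetic training pairs:
# C(t) = CBF * (AIF ⊛ R)(t). All parameter values are illustrative.
import numpy as np

dt = 1.0                                       # assumed sampling interval (s)
t = np.arange(0, 60, dt)

def gamma_variate_aif(t, t0=10.0, alpha=3.0, beta=1.5):
    s = np.clip(t - t0, 0, None)               # bolus arrival at t0
    return (s ** alpha) * np.exp(-s / beta)

aif = gamma_variate_aif(t)
residue = np.exp(-t / 4.0)                     # mono-exponential R(t), MTT = 4 s
cbf = 0.6                                      # arbitrary flow scaling
conc = cbf * dt * np.convolve(aif, residue)[:len(t)]
noisy = conc + np.random.default_rng(0).normal(0.0, 0.05 * conc.max(), len(t))
# (aif, noisy) is one network input pair; cbf * residue is the target.
```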


Subject(s)
Algorithms , Computer Simulation , Contrast Media , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Contrast Media/chemistry , Brain/diagnostic imaging , Signal-To-Noise Ratio
7.
BMC Med Res Methodol ; 24(1): 198, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251921

ABSTRACT

In settings requiring synthetic data generation based on a clinical cohort, e.g., due to data protection regulations, heterogeneity across individuals might be a nuisance that needs to be controlled or faithfully preserved. The sources of such heterogeneity might be known, e.g., indicated by sub-group labels, or might be unknown and thus reflected only in properties of distributions, such as bimodality or skewness. We investigate how such heterogeneity can be preserved and controlled when obtaining synthetic data from variational autoencoders (VAEs), i.e., a generative deep learning technique that utilizes a low-dimensional latent representation. To faithfully reproduce unknown heterogeneity reflected in marginal distributions, we propose combining VAEs with pre-transformations. For dealing with known heterogeneity due to sub-groups, we complement VAEs with models for group membership, specifically propensity score regression. The evaluation is performed with a realistic simulation design featuring sub-groups and challenging marginal distributions. The proposed approach faithfully recovers the latter, compared with synthetic data approaches that focus purely on marginal distributions. Propensity scores add complementary information, e.g., when visualized in the latent space, and enable sampling of synthetic data with or without sub-group-specific characteristics. We also illustrate the proposed approach with real data from an international stroke trial that exhibits considerable distribution differences between study sites, in addition to bimodality. These results indicate that describing heterogeneity with statistical approaches, such as propensity score regression, may be more generally useful for complementing generative deep learning to obtain synthetic data that faithfully reflects the structure of clinical cohorts.
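A minimal sketch of the pre-transformation idea follows: Gaussianize skewed or bimodal marginals before fitting the generative model, then invert the transform on generated samples. A multivariate Gaussian stands in for the VAE to keep the example short, and the two toy covariates are assumptions.

```python
# Sketch of the pre-transformation idea; a multivariate Gaussian stands in
# for the VAE, and the two toy covariates are assumptions.
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(0)
real = np.column_stack([
    rng.lognormal(0.0, 0.8, 500),                                        # skewed
    np.concatenate([rng.normal(-2, 0.5, 250), rng.normal(2, 0.5, 250)])  # bimodal
])

pt = QuantileTransformer(output_distribution="normal", n_quantiles=200)
z = pt.fit_transform(real)                     # marginals ~ N(0, 1) afterwards

mu, cov = z.mean(axis=0), np.cov(z.T)          # "generative model" stand-in
synthetic_z = rng.multivariate_normal(mu, cov, size=500)
synthetic = pt.inverse_transform(synthetic_z)  # skewness/bimodality restored
```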


Subject(s)
Propensity Score , Humans , Deep Learning , Algorithms , Computer Simulation
8.
BMC Med Res Methodol ; 24(1): 181, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39143466

ABSTRACT

BACKGROUND: Synthetic electronic health records (EHRs) are becoming increasingly popular as a privacy-enhancing technology. However, for longitudinal EHRs specifically, little research has been done on how to properly evaluate synthetically generated samples. In this article, we provide a discussion of existing methods and recommendations for evaluating the quality of synthetic longitudinal EHRs. METHODS: We recommend assessing synthetic EHR quality through similarity to real EHRs in low-dimensional projections, the accuracy of a classifier discriminating synthetic from real samples, the performance of algorithms trained on synthetic versus real data in clinical tasks, and privacy risk through the risk of attribute inference. For each metric we discuss strengths and weaknesses, and show how it can be applied to a longitudinal dataset. RESULTS: To support the discussion on evaluation metrics, we apply the discussed metrics to a dataset of synthetic EHRs generated from the Medical Information Mart for Intensive Care-IV (MIMIC-IV) repository. CONCLUSIONS: The discussion of evaluation metrics provides guidance for researchers on how to use and interpret different metrics when evaluating the quality of synthetic longitudinal EHRs.
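The second recommended metric, a classifier discriminating synthetic from real samples, can be sketched as follows; the random forest, cross-validation setup, and toy data are assumptions, with an AUC near 0.5 indicating that the classifier cannot separate the two sources.

```python
# Sketch of the real-vs-synthetic discriminator metric: AUC near 0.5 means
# the classifier cannot separate synthetic from real records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def discriminator_auc(real, synthetic, seed=0):
    X = np.vstack([real, synthetic])
    y = np.r_[np.ones(len(real)), np.zeros(len(synthetic))].astype(int)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, proba)

rng = np.random.default_rng(0)
real = rng.normal(size=(300, 10))                 # toy tabular EHR features
synthetic = rng.normal(0.1, 1.0, size=(300, 10))  # toy generator output
print(f"discriminator AUC: {discriminator_auc(real, synthetic):.2f}")
```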


Subject(s)
Algorithms , Electronic Health Records , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Humans , Longitudinal Studies , Privacy
9.
BMC Med Res Methodol ; 24(1): 136, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909216

ABSTRACT

BACKGROUND: Generating synthetic patient data is crucial for medical research, but common approaches build on black-box models that do not allow for expert verification or intervention. We propose a highly available method that enables synthetic data generation from real patient records in a privacy-preserving and compliant fashion, is interpretable, and allows for expert intervention. METHODS: Our approach ties together two established tools in medical informatics, namely OMOP as a data standard for electronic health records and Synthea as a data synthetization method. For this study, data pipelines were built that extract data from OMOP, convert it into time series format, learn temporal rules using two statistical algorithms (Markov chain, TARM) and three causal discovery algorithms (DYNOTEARS, J-PCMCI+, LiNGAM), and map the outputs into Synthea graphs. The graphs are evaluated quantitatively by their individual and relative complexity and qualitatively by medical experts. RESULTS: The algorithms were found to learn qualitatively and quantitatively different graph representations. Whereas the Markov chain results in extremely large graphs, TARM, DYNOTEARS, and J-PCMCI+ were found to reduce the data dimension during learning. The MultiGroupDirect LiNGAM algorithm was found not to be applicable to the problem statement at hand. CONCLUSION: Only TARM and DYNOTEARS are practical algorithms for real-world data in this use case. As causal discovery is a method for debiasing purely statistical relationships, the gradient-based causal discovery algorithm DYNOTEARS was found to be most suitable.
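As a sketch of the simplest of the listed algorithms, the following estimates a first-order Markov chain of clinical events from per-patient sequences, a stand-in for the OMOP-derived time series; the event names and sequences are hypothetical.

```python
# Sketch: estimating a first-order Markov chain of clinical events from
# per-patient sequences; event names and sequences are hypothetical.
import numpy as np

events = ["visit", "diagnosis", "drug", "measurement"]
idx = {e: i for i, e in enumerate(events)}
sequences = [["visit", "diagnosis", "drug"],
             ["visit", "measurement", "diagnosis", "drug"],
             ["visit", "diagnosis", "measurement"]]

counts = np.zeros((len(events), len(events)))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):             # consecutive event pairs
        counts[idx[a], idx[b]] += 1
transition = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
# Row i holds P(next event | current event i); these temporal rules are what
# would be mapped onto Synthea's state-transition graphs.
print(np.round(transition, 2))
```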


Subject(s)
Algorithms , Electronic Health Records , Humans , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Markov Chains , Medical Informatics/methods , Medical Informatics/statistics & numerical data
10.
Pharmacoepidemiol Drug Saf ; 33(10): e70019, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39375947

ABSTRACT

PURPOSE: To assess the validity of privacy-preserving synthetic data by comparing results from synthetic versus original EHR data analysis. METHODS: A published retrospective cohort study on the real-world effectiveness of COVID-19 vaccines by Maccabi Healthcare Services in Israel was replicated using synthetic data generated from the same source, and the results were compared between the synthetic and original datasets. The endpoints included COVID-19 infection, symptomatic COVID-19 infection, and hospitalization due to infection, and were also assessed in several demographic and clinical subgroups. In comparing synthetic versus original data estimates, several metrics were used: standardized mean differences (SMD), decision agreement, estimate agreement, confidence interval overlap, and the Wald test. Synthetic data were generated five times to assess the stability of the results. RESULTS: The distributions of demographic and clinical characteristics demonstrated very small differences (SMD < 0.01). In the comparison of vaccine effectiveness, assessed as relative risk reduction, between synthetic and original data, there was 100% decision agreement, 100% estimate agreement, and a high level of confidence interval overlap (88.7%-99.7%) in all five replicates across all subgroups. Similar findings were obtained in the assessment of vaccine effectiveness against symptomatic COVID-19 infection. In the comparison of hazard ratios for COVID-19-related hospitalization and odds ratios for symptomatic COVID-19 infection, the Wald tests suggested no significant difference between the respective effect estimates in all five replicates for all patient subgroups, but there were disagreements in the estimate and decision metrics in some subgroups and replicates. CONCLUSIONS: Overall, the comparison of synthetic versus original real-world data demonstrated good validity and reliability. Transparency regarding the process of generating high-fidelity synthetic data and assurances of patient privacy are warranted.
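Two of the agreement metrics are easy to make concrete. The sketch below computes a standardized mean difference for a covariate and the overlap between two effect-estimate confidence intervals; the formulas follow common conventions and the numbers are illustrative, not the study's.

```python
# Sketch of two agreement metrics: standardized mean difference and the
# fraction of one confidence interval covered by another. Numbers are toys.
import numpy as np

def smd(x_orig, x_synth):
    pooled_sd = np.sqrt((x_orig.var(ddof=1) + x_synth.var(ddof=1)) / 2)
    return (x_orig.mean() - x_synth.mean()) / pooled_sd

def ci_overlap(ci_a, ci_b):
    lo, hi = max(ci_a[0], ci_b[0]), min(ci_a[1], ci_b[1])
    return max(0.0, hi - lo) / (ci_a[1] - ci_a[0])

rng = np.random.default_rng(0)
age_orig = rng.normal(45.0, 12.0, 1000)        # toy original covariate
age_synth = rng.normal(45.2, 12.0, 1000)       # toy synthetic covariate
print(f"SMD: {smd(age_orig, age_synth):.3f}")  # < 0.01 counts as "very small"
print(f"CI overlap: {ci_overlap((0.80, 0.92), (0.82, 0.95)):.1%}")
```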


Subject(s)
COVID-19 Vaccines , COVID-19 , Electronic Health Records , Humans , COVID-19/prevention & control , COVID-19/epidemiology , COVID-19 Vaccines/administration & dosage , Israel/epidemiology , Retrospective Studies , Male , Female , Vaccine Efficacy , Middle Aged , Hospitalization/statistics & numerical data , Reproducibility of Results , Adult , Aged , Privacy , Cohort Studies
11.
BMC Pregnancy Childbirth ; 24(1): 628, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354367

ABSTRACT

OBJECTIVE: This study introduces the complete blood count (CBC), a standard prenatal screening test, as a biomarker for diagnosing preeclampsia with severe features (sPE), employing machine learning models. METHODS: We used a boosting machine learning model fed with synthetic data generated through a new methodology called DAS (Data Augmentation and Smoothing). Using data from a Brazilian study including 132 pregnant women, we generated 3,552 synthetic samples for model training. To improve interpretability, we also provide a ridge regression model. RESULTS: Our boosting model obtained an AUROC of 0.90±0.10, a sensitivity of 0.95, and a specificity of 0.79 for differentiating sPE and non-PE pregnant women, using the CBC parameters of neutrophil count, mean corpuscular hemoglobin (MCH), and the aggregate index of systemic inflammation (AISI). In addition, we provide a ridge regression equation using the same three CBC parameters, which is fully interpretable and achieved an AUROC of 0.79±0.10 for differentiating the two groups. We also showed that a monocyte count lower than 490/mm³ yielded a sensitivity of 0.71 and a specificity of 0.72. CONCLUSION: Our study showed that the ML-powered CBC can be used to support sPE diagnosis. In addition, we showed that a low monocyte count alone can be an indicator of sPE. SIGNIFICANCE: Although preeclampsia has been extensively studied, no laboratory biomarker with favorable cost-effectiveness has been proposed. Using artificial intelligence, we propose the CBC, a low-cost, fast, and widely available blood test, as a biomarker for sPE.
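A sketch of the interpretable ridge model on the three CBC parameters is given below. The cohort data are not public, so the covariate distributions, the toy label rule, and the use of scikit-learn's RidgeClassifier are assumptions for illustration only.

```python
# Sketch of the interpretable ridge model on simulated CBC data; the label
# rule and covariate distributions are assumptions, not the cohort's.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(7.0, 2.0, n),                   # neutrophil count (10^3/mm^3)
    rng.normal(30.0, 2.5, n),                  # MCH (pg)
    rng.lognormal(6.0, 0.5, n),                # AISI (toy scale)
])
y = (0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.002 * X[:, 2]
     + rng.normal(0, 1, n) > 0.5).astype(int)  # toy sPE labels

model = RidgeClassifier(alpha=1.0).fit(X, y)
print("coefficients:", np.round(model.coef_[0], 3))
print(f"AUROC (toy data): {roc_auc_score(y, model.decision_function(X)):.2f}")
```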


Subject(s)
Biomarkers , Machine Learning , Pre-Eclampsia , Humans , Pre-Eclampsia/diagnosis , Pre-Eclampsia/blood , Female , Pregnancy , Biomarkers/blood , Blood Cell Count/methods , Adult , Sensitivity and Specificity , Brazil , Severity of Illness Index , ROC Curve , Prenatal Diagnosis/methods
12.
MAGMA ; 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39207581

ABSTRACT

OBJECT: Deep learning has shown great promise for fast reconstruction of accelerated MRI acquisitions by learning from large amounts of raw data. However, raw data are not always available in sufficient quantities. This study investigates synthetic data generation to complement small datasets and improve reconstruction quality. MATERIALS AND METHODS: An adversarial auto-encoder was trained to generate phase and coil-sensitivity maps from magnitude images, which were combined into synthetic raw data. On a fourfold-accelerated MR reconstruction task, deep-learning-based reconstruction networks were trained with varying amounts of training data (20 to 160 scans). Test-set performance was compared between baseline experiments and experiments that incorporated synthetic training data. RESULTS: Training with synthetic raw data showed decreasing reconstruction errors with increasing amounts of training data; importantly, this required only magnitude data rather than real raw data. For small training sets, training with synthetic data decreased the mean absolute error (MAE) by up to 7.5%, whereas for larger training sets the MAE increased by up to 2.6%. DISCUSSION: Synthetic raw data generation improved reconstruction quality in scenarios with limited training data. A major advantage of synthetic data generation is that it allows for the reuse of magnitude-only datasets, which are more readily available than raw datasets.
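The synthesis step can be sketched as follows: combine a magnitude image with generated phase and coil-sensitivity maps into undersampled multi-coil k-space. Smooth random maps stand in here for the adversarial auto-encoder's outputs, and the image size, coil count, and sampling mask are assumptions.

```python
# Sketch of the synthesis step: magnitude image + generated phase and coil
# maps -> undersampled multi-coil k-space. Smooth random maps stand in for
# the auto-encoder's outputs; size, coil count, and mask are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H = W = 64
magnitude = gaussian_filter(rng.random((H, W)), 4)        # toy anatomy
phase = gaussian_filter(rng.standard_normal((H, W)), 8) * np.pi
coils = np.stack([
    gaussian_filter(rng.random((H, W)), 16) *
    np.exp(1j * gaussian_filter(rng.standard_normal((H, W)), 16))
    for _ in range(4)                                     # 4 coils, assumed
])

image = magnitude * np.exp(1j * phase)                    # complex image
kspace = np.fft.fftshift(np.fft.fft2(coils * image, axes=(-2, -1)),
                         axes=(-2, -1))
mask = np.zeros(W)
mask[::4] = 1                                             # fourfold undersampling
mask[W // 2 - 4:W // 2 + 4] = 1                           # fully sampled center
synthetic_raw = kspace * mask                             # network training input
```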

13.
Proc Natl Acad Sci U S A ; 118(27)2021 07 06.
Article in English | MEDLINE | ID: mdl-34210794

ABSTRACT

As it becomes possible to simulate increasingly complex neural networks, it becomes correspondingly important to model the sensory information that animals actively acquire: the biomechanics of sensory acquisition directly determines the sensory input and therefore neural processing. Here, we exploit the tractable mechanics of the well-studied rodent vibrissal ("whisker") system to present a model that can simulate the signals acquired by a full sensor array actively sampling the environment. Rodents actively "whisk" ∼60 vibrissae (whiskers) to obtain tactile information, and this system is therefore ideal to study closed-loop sensorimotor processing. The simulation framework presented here, WHISKiT Physics, incorporates realistic morphology of the rat whisker array to predict the time-varying mechanical signals generated at each whisker base during sensory acquisition. Single-whisker dynamics were optimized based on experimental data and then validated against free tip oscillations and dynamic responses to collisions. The model is then extrapolated to include all whiskers in the array, incorporating each whisker's individual geometry. Simulation examples in laboratory and natural environments demonstrate that WHISKiT Physics can predict input signals during various behaviors, currently impossible in the biological animal. In one exemplary use of the model, the results suggest that active whisking increases in-plane whisker bending compared to passive stimulation and that principal component analysis can reveal the relative contributions of whisker identity and mechanics at each whisker base to the vibrissotactile response. These results highlight how interactions between array morphology and individual whisker geometry and dynamics shape the signals that the brain must process.


Subject(s)
Behavior, Animal/physiology , Models, Neurological , Touch/physiology , Animals , Physical Stimulation , Rats , Signal Transduction , Time Factors , Vibrissae/physiology
14.
BMC Med Inform Decis Mak ; 24(1): 90, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38549123

ABSTRACT

Class imbalance remains a major problem in high-throughput omics analyses, causing bias towards the over-represented class when training machine learning-based classifiers. Oversampling is a common method used to balance classes, allowing for better generalization of the training data. More naive approaches, however, can introduce other biases into the data and are especially sensitive to inaccuracies in the training data, a problem given the characteristically noisy data obtained in healthcare; this is particularly acute for high-dimensional data. A generative adversarial network-based method is proposed for creating synthetic samples from small, high-dimensional datasets, improving upon more naive generative approaches. The method was compared with the synthetic minority over-sampling technique (SMOTE) and random oversampling (RO). The generative methods were validated by training classifiers on the balanced data.
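For comparison baselines such as those named above, a minimal oversampling protocol might look like the following, assuming the imbalanced-learn package; the dataset, classifier, and split are toy stand-ins, and the key point is that oversampling is applied to the training split only.

```python
# Sketch of the comparison protocol, assuming the imbalanced-learn package:
# oversample the training split only, then evaluate on an untouched test set.
import numpy as np
from imblearn.over_sampling import SMOTE, RandomOverSampler
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=50, weights=[0.9, 0.1],
                           random_state=0)     # toy imbalanced omics-like data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("SMOTE", SMOTE(random_state=0)),
                      ("RO", RandomOverSampler(random_state=0))]:
    X_bal, y_bal = sampler.fit_resample(X_tr, y_tr)
    clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
    print(name, f"F1 = {f1_score(y_te, clf.predict(X_te)):.2f}")
```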


Subject(s)
Machine Learning , Bias
15.
BMC Med Inform Decis Mak ; 24(1): 167, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877563

ABSTRACT

BACKGROUND: Consider a setting where multiple parties holding sensitive data aim to collaboratively learn population-level statistics, but pooling the sensitive datasets is not possible due to privacy concerns and the parties are unable to engage in centrally coordinated joint computation. We study the feasibility of combining privacy-preserving synthetic datasets in place of the original data for collaborative learning on real-world health data from the UK Biobank. METHODS: We perform an empirical evaluation based on an existing prospective cohort study from the literature. Multiple parties were simulated by splitting the UK Biobank cohort along assessment centers, for which we generated synthetic data using differentially private generative modelling techniques. We then applied the original study's Poisson regression analysis to the combined synthetic datasets and evaluated the effects of (1) the size of the local dataset, (2) the number of participating parties, and (3) local shifts in distributions on the obtained likelihood scores. RESULTS: We found that parties engaging in collaborative learning via shared synthetic data obtain more accurate estimates of the regression parameters than when using only their local data. This finding extends to the difficult case of small, heterogeneous datasets. Furthermore, the more parties participate, the larger and more consistent the improvements become, up to a certain limit. Finally, we found that data sharing can especially help parties whose data contain underrepresented groups to perform better-adjusted analyses for those groups. CONCLUSIONS: Based on our results, we conclude that sharing synthetic data is a viable method for enabling learning from sensitive data without violating privacy constraints, even if individual datasets are small or do not represent the overall population well. Lack of access to distributed sensitive data is often a bottleneck in biomedical research, which our study shows can be alleviated with privacy-preserving collaborative learning methods.
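A sketch of the combined analysis step: pool per-party synthetic datasets and fit a Poisson regression with statsmodels. The per-party generator below is a placeholder for a differentially private generative model, and the covariates and coefficients are assumptions.

```python
# Sketch of the pooled analysis: concatenate per-party synthetic datasets and
# fit a Poisson regression with statsmodels. The generator is a placeholder
# for a differentially private model; covariates/coefficients are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def party_synthetic_data(n):                   # stand-in for a DP generator
    x = rng.normal(size=(n, 2))
    y = rng.poisson(np.exp(0.3 * x[:, 0] - 0.2 * x[:, 1] + 0.1))
    return x, y

parties = [party_synthetic_data(n) for n in (200, 350, 150)]  # three parties
X = np.vstack([x for x, _ in parties])
y = np.concatenate([yy for _, yy in parties])

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(model.params)                            # compare with local-only fits
```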


Subject(s)
Information Dissemination , Humans , United Kingdom , Cooperative Behavior , Confidentiality/standards , Privacy , Biological Specimen Banks , Prospective Studies
16.
BMC Med Inform Decis Mak ; 24(1): 27, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38291386

ABSTRACT

BACKGROUND: Synthetic data is an emerging approach for addressing legal and regulatory concerns in biomedical research that deals with personal and clinical data, whether as a single tool or through its combination with other privacy enhancing technologies. Generating uncompromised synthetic data could significantly benefit external researchers performing secondary analyses by providing unlimited access to information while fulfilling pertinent regulations. However, the original data to be synthesized (e.g., data acquired in Living Labs) may consist of subjects' metadata (static) and a longitudinal component (set of time-dependent measurements), making it challenging to produce coherent synthetic counterparts. METHODS: Three synthetic time series generation approaches were defined and compared in this work: only generating the metadata and coupling it with the real time series from the original data (A1), generating both metadata and time series separately to join them afterwards (A2), and jointly generating both metadata and time series (A3). The comparative assessment of the three approaches was carried out using two different synthetic data generation models: the Wasserstein GAN with Gradient Penalty (WGAN-GP) and the DöppelGANger (DGAN). The experiments were performed with three different healthcare-related longitudinal datasets: Treadmill Maximal Effort Test (TMET) measurements from the University of Malaga (1), a hypotension subset derived from the MIMIC-III v1.4 database (2), and a lifelogging dataset named PMData (3). RESULTS: Three pivotal dimensions were assessed on the generated synthetic data: resemblance to the original data (1), utility (2), and privacy level (3). The optimal approach fluctuates based on the assessed dimension and metric. CONCLUSION: The initial characteristics of the datasets to be synthesized play a crucial role in determining the best approach. Coupling synthetic metadata with real time series (A1), as well as jointly generating synthetic time series and metadata (A3), are both competitive methods, while separately generating time series and metadata (A2) appears to perform more poorly overall.


Subject(s)
Metadata , Privacy , Humans , Time Factors , Databases, Factual
17.
J Nurs Scholarsh ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38961517

ABSTRACT

BACKGROUND: Identifying health problems in audio-recorded patient-nurse communication is important for improving outcomes in home healthcare patients who have complex conditions with increased risks of hospital utilization. Training machine learning classifiers to identify problems requires resource-intensive human annotation. OBJECTIVE: To generate synthetic patient-nurse communication, automatically annotated for common health problems encountered in home healthcare settings, using GPT-4. We also examined whether augmenting real-world patient-nurse communication with synthetic data can improve the performance of machine learning in identifying health problems. DESIGN: Secondary analysis of patient-nurse verbal communication data from home healthcare settings. METHODS: The data were collected from one of the largest home healthcare organizations in the United States. We used 23 audio recordings of patient-nurse communications from 15 patients. The audio recordings were transcribed verbatim and manually annotated for health problems (e.g., circulation, skin, pain) indicated in the Omaha System classification scheme. Synthetic patient-nurse communication data were generated using the in-context learning prompting method, enhanced by chain-of-thought prompting to improve automatic annotation performance. Machine learning classifiers were applied to three training datasets: real-world communication, synthetic communication, and real-world communication augmented with synthetic communication. RESULTS: Average F1 scores improved from 0.62 to 0.63 after the training data were augmented with synthetic communication. The largest increase was observed with the XGBoost classifier, where F1 scores improved from 0.61 to 0.64 (about a 5% improvement). When trained solely on either real-world or synthetic communication, the classifiers showed comparable F1 scores of 0.62 and 0.61, respectively. CONCLUSION: Integrating synthetic data improves machine learning classifiers' ability to identify health problems in home healthcare, with performance comparable to training on real-world data alone, highlighting the potential of synthetic data in healthcare analytics. CLINICAL RELEVANCE: This study demonstrates the clinical relevance of leveraging synthetic patient-nurse communication data to enhance machine learning classifier performance in identifying health problems in home healthcare settings, contributing to more accurate and efficient problem identification and detection in home healthcare patients with complex health conditions.
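The augmentation experiment can be sketched as follows: train once on real transcripts and once on real plus synthetic transcripts, then compare F1 on a held-out real test set. TF-IDF with logistic regression stands in for the study's classifiers, and all texts below are fabricated toy examples, not study data.

```python
# Sketch of the augmentation experiment with fabricated toy texts; TF-IDF +
# logistic regression stands in for the study's classifiers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

real_train = [("patient reports leg swelling and pain", 1),
              ("discussed medication schedule, no complaints", 0)] * 20
synthetic = [("nurse notes redness and circulation problems", 1),
             ("routine check-in, patient feeling well", 0)] * 20  # e.g., GPT-4
test = [("pain in both legs worsening", 1),
        ("no issues reported today", 0)] * 10

def evaluate(train):
    texts, labels = zip(*train)
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(list(texts), list(labels))
    X_te, y_te = zip(*test)
    return f1_score(list(y_te), clf.predict(list(X_te)))

print(f"real only:        F1 = {evaluate(real_train):.2f}")
print(f"real + synthetic: F1 = {evaluate(real_train + synthetic):.2f}")
```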

18.
Sensors (Basel) ; 24(14)2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39066062

ABSTRACT

Marker-less hand-eye calibration permits the acquisition of an accurate transformation between an optical sensor and a robot in unstructured environments. Single monocular cameras, despite their low cost and modest computation requirements, present difficulties for this purpose due to their incomplete correspondence of projected coordinates. In this work, we introduce a hand-eye calibration procedure based on the rotation representations inferred by an augmented autoencoder neural network. Learning-based models that attempt to directly regress the spatial transform of objects such as the links of robotic manipulators perform poorly in the orientation domain, but this can be overcome through the analysis of the latent space vectors constructed in the autoencoding process. This technique is computationally inexpensive and can be run in real time in markedly varied lighting and occlusion conditions. To evaluate the procedure, we use a color-depth camera and perform a registration step between the predicted and the captured point clouds to measure translation and orientation errors and compare the results to a baseline based on traditional checkerboard markers.

19.
Sensors (Basel) ; 24(12)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38931658

ABSTRACT

This article describes a novel fusion of a generative formal model for three-dimensional (3D) shapes with deep learning (DL) methods to understand the geometric structure of 3D objects and the relationships between their components, given a collection of unorganized point cloud measurements. Formal 3D shape models are implemented as shape grammar programs written in the Procedural Shape Modeling Language (PSML). Users write PSML programs that describe complex objects and enforce the fundamental rules defining an object class, encoding object attributes such as shapes, components, size, and position into a parametric representation; DL networks then estimate the configured free parameters of the program to generate 3D shapes. This fusion of the generative model with DL offers artificial intelligence (AI) models an opportunity to better understand the geometric organization of objects in terms of their components and their relationships to other objects. The approach allows human-in-the-loop control over DL estimates by specifying lists of candidate objects, the shape variations that each object can exhibit, and the level of detail or, equivalently, the dimension of the latent representation of the shape. The results demonstrate the advantages of the proposed method over competing approaches.

20.
Sensors (Basel) ; 24(19)2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39409465

ABSTRACT

Air pollution poses significant public health risks, necessitating accurate and efficient monitoring of particulate matter (PM). These particles may be released from natural sources, such as trees and vegetation, as well as from anthropogenic (human-made) sources, including industrial activities and motor vehicle emissions. Measuring PM concentrations is therefore paramount to understanding people's exposure levels to these pollutants. This paper introduces a novel image processing technique that uses photographs of Do-it-Yourself (DiY) sensors for the detection and quantification of PM10 particles, enhancing community involvement and data collection accuracy in Citizen Science (CS) projects. A synthetic data generation algorithm was developed to overcome the challenge of data scarcity commonly associated with citizen-based data collection and to validate the image processing technique. This algorithm generates images by precisely defining parameters such as image resolution, image dimensions, and airborne PM particle density. To ensure these synthetic images mimic real-world conditions, variations such as Gaussian noise, focus blur, and white-balance adjustments, alone and in combination, were introduced, simulating the environmental and technical factors affecting image quality in typical smartphone digital cameras. The detection algorithm for PM10 particles demonstrates robust performance across varying levels of noise, maintaining effectiveness in realistic mobile imaging conditions. The methodology therefore retains sufficient accuracy, suggesting its practical applicability for environmental monitoring in diverse real-world conditions using mobile devices.
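A sketch of such a synthetic-image generator appears below: dark PM10-like blobs are rendered at a chosen density, and the image is then degraded with focus blur, Gaussian noise, and a crude white-balance shift. All parameter values are illustrative assumptions, not those of the paper.

```python
# Sketch of a synthetic PM10 image generator: render dark particle blobs at a
# chosen density, then degrade the image as a phone camera might.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H, W, density = 480, 640, 150                 # assumed size and particle count
img = np.full((H, W), 0.9)                    # light background (e.g., filter)
rr, cc = np.ogrid[:H, :W]
for y, x in zip(rng.integers(0, H, density), rng.integers(0, W, density)):
    img[(rr - y) ** 2 + (cc - x) ** 2 <= rng.integers(2, 10)] = 0.2

img = gaussian_filter(img, sigma=1.2)                   # focus blur
img = np.clip(img + rng.normal(0, 0.03, (H, W)), 0, 1)  # sensor noise
img = img * 0.95                                        # crude white-balance shift
# Counting connected dark regions in img recovers the ground-truth density.
```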


Subject(s)
Air Pollutants , Air Pollution , Algorithms , Citizen Science , Environmental Monitoring , Image Processing, Computer-Assisted , Particulate Matter , Particulate Matter/analysis , Environmental Monitoring/methods , Environmental Monitoring/instrumentation , Humans , Air Pollution/analysis , Image Processing, Computer-Assisted/methods , Air Pollutants/analysis , Smartphone