1.
Mol Psychiatry ; 28(2): 553-563, 2023 02.
Article in English | MEDLINE | ID: mdl-35701598

ABSTRACT

People who have recovered from COVID-19 may still present complications, including respiratory and neurological sequelae. In other viral infections, cognitive impairment occurs due to brain damage or dysfunction caused by vascular lesions and inflammatory processes. Persistent cognitive impairment compromises daily activities and psychosocial adaptation. Some level of neurological and psychiatric consequences was expected and has been described in severe cases of COVID-19. However, it is debatable whether neuropsychiatric complications are related to COVID-19 itself or are consequences of a severe infection. Nevertheless, the majority of cases recorded worldwide were mild to moderate, self-limited illness in non-hospitalized people. Thus, it is important to understand the implications of mild COVID-19, which constitutes the largest and most understudied pool of COVID-19 cases. We investigated adults at least four months after recovery from mild COVID-19, who were assessed by neuropsychological, ocular and neurological tests, immune marker assays, and structural MRI and 18FDG-PET neuroimaging to shed light on putative brain changes and clinical correlations. In approximately one-quarter of mild-COVID-19 individuals, we detected a specific visuoconstructive deficit, which was associated with changes in molecular and structural brain imaging and correlated with upregulation of peripheral immune markers. Our findings provide evidence of a neuroinflammatory burden causing cognitive deficits in an already large and growing fraction of the world population. While living with a multitude of mild COVID-19 cases, action is required for more comprehensive assessment and follow-up of the cognitive impairment, allowing a better understanding of symptom persistence and of the need for rehabilitation of affected individuals.


Subject(s)
COVID-19, Cognitive Dysfunction, Adult, Humans, COVID-19/complications, Neuroimaging, Brain/diagnostic imaging, Cognitive Dysfunction/diagnosis, Magnetic Resonance Imaging
2.
J Med Internet Res ; 25: e44209, 2023 03 16.
Article in English | MEDLINE | ID: mdl-36787223

ABSTRACT

BACKGROUND: During the COVID-19 pandemic, telehealth was expanded without the opportunity to extensively evaluate the adopted technology's usability. OBJECTIVE: We aimed to synthesize evidence on health professionals' perceptions of the usability of telehealth systems in the primary care of individuals with noncommunicable diseases (NCDs; hypertension and diabetes) from the COVID-19 pandemic onward. METHODS: A systematic review was performed of clinical trials, prospective cohort studies, retrospective observational studies, and studies that used qualitative data collection and analysis methods published in English, Spanish, and Portuguese from March 2020 onward. The databases queried were MEDLINE, Embase, BIREME, IEEE Xplore, BVS, Google Scholar, and grey literature. Studies involving health professionals who used telehealth systems in primary care and managed patients with NCDs from the COVID-19 pandemic onward were considered eligible. Titles, abstracts, and full texts were reviewed. Data were extracted to provide a narrative qualitative evidence synthesis of the included articles. The risk of bias and methodological quality of the included studies were analyzed. The primary outcome was the usability of telehealth systems, while the secondary outcomes were satisfaction and the contexts in which the telehealth system was used. RESULTS: We included 11 of 417 retrieved studies, which had data from 248 health care professionals. These health care professionals were mostly doctors and nurses with prior experience in telehealth in high- and middle-income countries. Overall, 9 studies (82%) were qualitative studies and 2 (18%) were quasi-experimental or multisite trial studies. Moreover, 7 studies (64%) addressed diabetes, 1 (9%) addressed diabetes and hypertension, and 3 (27%) addressed chronic diseases. Most studies used a survey to assess usability. With a moderate confidence level, we concluded that health professionals considered the usability of telehealth systems to be good and felt comfortable and satisfied. Patients felt satisfied using telehealth. The most important predictor of using digital health technologies was ease of use. The main barriers were technological challenges, connectivity issues, low computer literacy, inability to perform a complete physical examination, and lack of training. Although the usability of telehealth systems was considered good, there is a need for research that investigates factors that may influence perceptions of telehealth usability, such as differences between private and public services; differences in the level of experience of professionals, including professional experience and experience with digital tools; and differences in gender, age groups, occupations, and settings. CONCLUSIONS: The COVID-19 pandemic generated an enormous demand for virtual care. Professionals' favorable perceptions of the usability of telehealth indicate that it can facilitate access to quality care. Although challenges to telehealth remain, the most frequently reported ones were related less to infrastructure and more to empowering people for digital health. TRIAL REGISTRATION: PROSPERO International Prospective Register of Systematic Reviews CRD42021296887; https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=296887. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.21801/ppcrj.2022.82.6.


Subject(s)
COVID-19, Noncommunicable Diseases, Telemedicine, Humans, COVID-19/epidemiology, Pandemics, Primary Health Care, Prospective Studies, Retrospective Studies, Telemedicine/methods
3.
PLoS Comput Biol ; 12(6): e1005001, 2016 06.
Article in English | MEDLINE | ID: mdl-27348631

ABSTRACT

As more and more genomes are sequenced, the vast majority of proteins can only be annotated computationally, given that experimental investigation is extremely costly. This highlights the need for computational methods to determine protein functions quickly and reliably. We believe that dividing a protein family into subtypes that share specific functions uncommon to the whole family reduces the complexity of the function annotation problem. Hence, the purpose of this work is to detect isofunctional subfamilies inside a family of unknown function, while identifying differentiating residues. Similarity between protein pairs according to various properties is interpreted as evidence of functional similarity. Data are integrated using genetic programming and provided to a spectral clustering algorithm, which creates clusters of similar proteins. The proposed framework was applied to well-known protein families and to a family of unknown function, and then compared with ASMC. Results showed that our fully automated technique obtained better clusters than ASMC for two families and equivalent results for the other two, including one whose clusters were manually defined. Clusters produced by our framework showed close correspondence with the known subfamilies, besides being more clearly separated than those produced by ASMC. Additionally, for the families whose specificity-determining positions are known, such residues were among those our technique considered most important to differentiate a given group. When run on the crotonase and enolase SFLD superfamilies, the results showed strong agreement with this gold standard. The best results consistently involved multiple data types, confirming our hypothesis that similarities according to different knowledge domains may be used as evidence of functional similarity. Our main contributions are the proposed strategy for selecting and integrating data types, along with the ability to work with noisy and incomplete data; the use of domain knowledge to detect subfamilies with different specificities within a family, thus reducing the complexity of the experimental function characterization problem; and the identification of residues responsible for specificity.
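
A minimal sketch of the clustering step described above, assuming each data type has already been turned into an n x n similarity matrix; a fixed weighted sum stands in for the genetic-programming combination used by the authors, and all names and parameters are illustrative.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def cluster_subfamilies(similarity_matrices, weights, n_subfamilies):
        """similarity_matrices: list of (n x n) arrays, one per data type."""
        combined = sum(w * s for w, s in zip(weights, similarity_matrices))
        combined = np.clip(combined, 0.0, None)  # affinities must be non-negative
        model = SpectralClustering(n_clusters=n_subfamilies,
                                   affinity="precomputed", random_state=0)
        return model.fit_predict(combined)       # one subfamily label per protein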


Subject(s)
Computational Biology/methods, Proteins/classification, Proteins/physiology, Sequence Analysis, Protein/methods, Algorithms, Amino Acid Sequence, Cluster Analysis, Databases, Protein, Proteins/analysis, Sequence Alignment
4.
J Med Internet Res ; 19(1): e17, 2017 01 16.
Article in English | MEDLINE | ID: mdl-28093378

ABSTRACT

BACKGROUND: Recent research has shown that of the 72% of American Internet users who have looked for health information online, 22% have searched for help to lose or control weight. This demand for information has given rise to many online weight management communities, where users support one another throughout their weight loss process. Whether and how user engagement in online communities relates to weight change is not totally understood. OBJECTIVE: We investigated the activity behavior and analyzed the semantic content of the messages of active users in LoseIt (r/loseit), a weight management community of the online social network Reddit. We then explored whether these features are associated with weight loss in this online social network. METHODS: A data collection tool was used to collect English posts, comments, and other public metadata of active users (i.e., users with at least one post or comment) on LoseIt from August 2010 to November 2014. Analyses of the frequency and intensity of user interaction in the community were performed together with a semantic analysis of the messages, done with a latent Dirichlet allocation method. The association between weight loss and online user activity patterns, the semantics of the messages, and real-world variables was assessed with a linear regression model using 30-day weight change as the dependent variable. RESULTS: We collected posts and comments of 107,886 unique users. Among these, 101,003 (93.62%) wrote at least one comment and 38,981 (36.13%) wrote at least one post. The median percentage of days online was 3.81 (IQR 9.51). The 10 most-discussed semantic topics in posts were related to healthy food, clothing, calorie counting, workouts, looks, habits, support, and unhealthy food. In the subset of 754 users who had gender, age, and 30-day weight change data available, women were predominant and 92.9% (701/754) lost weight. Female gender, body mass index (BMI) at baseline, high levels of online activity, the number of upvotes received per post, and topics discussed within the community were independently associated with weight change. CONCLUSIONS: Our findings suggest that among active users of a weight management community, self-declaration of higher BMI levels (which may represent greater dissatisfaction with excess weight), high online activity, and engagement in discussions that might provide social support are associated with greater weight loss. These findings have the potential to help health professionals assist patients in online interventions by focusing efforts on increasing engagement and/or starting discussions on topics with a higher impact on weight change.
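
An illustrative sketch (not the authors' code) of the analysis pipeline described above: fit LDA topics on per-user text, then regress 30-day weight change on topic weights plus activity features; all variable names and parameters are assumptions.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LinearRegression

    def topic_regression(user_texts, activity_features, weight_change_30d, n_topics=10):
        """user_texts: one concatenated document per user; activity_features: 2D array (users x features)."""
        counts = CountVectorizer(stop_words="english", max_features=5000).fit_transform(user_texts)
        topics = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit_transform(counts)
        X = np.hstack([topics, activity_features])            # topic mix + online activity per user
        model = LinearRegression().fit(X, weight_change_30d)
        return model.coef_                                    # association of each feature with weight change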


Subject(s)
Internet, Obesity/psychology, Obesity/therapy, Social Media, Adult, Female, Humans, Male, Social Support, Weight Loss
5.
Bioinformatics ; 31(17): 2894-6, 2015 Sep 01.
Article in English | MEDLINE | ID: mdl-25910698

ABSTRACT

UNLABELLED: PDBest (PDB Enhanced Structures Toolkit) is a user-friendly, freely available platform for acquiring, manipulating and normalizing protein structures in a high-throughput and seamless fashion. With an intuitive graphical interface, it allows users with no programming background to download and manipulate their files. The platform also exports protocols, enabling users to easily share PDB searching and filtering criteria, enhancing analysis reproducibility. AVAILABILITY AND IMPLEMENTATION: PDBest installation packages are freely available for several platforms at http://www.pdbest.dcc.ufmg.br. CONTACT: wellisson@dcc.ufmg.br, dpires@dcc.ufmg.br, raquelcm@dcc.ufmg.br. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Databases, Protein, Proteins/chemistry, Software, User-Computer Interface, Computer Graphics, Humans, Protein Conformation, Reproducibility of Results
6.
Bioinformatics ; 29(7): 855-61, 2013 Apr 01.
Article in English | MEDLINE | ID: mdl-23396119

ABSTRACT

MOTIVATION: Receptor-ligand interactions are a central phenomenon in most biological systems. They are characterized by molecular recognition, a complex process mainly driven by the physicochemical and structural properties of both receptor and ligand. Understanding and predicting these interactions are major steps towards protein-ligand prediction, target identification, lead discovery and drug design. RESULTS: We propose a novel graph-based binding-pocket signature called aCSM, which proved to be efficient and effective in handling large-scale protein-ligand prediction tasks. We compare our results with those described in the literature and demonstrate that our algorithm outperforms competing techniques. Finally, we predict novel ligands for proteins from Trypanosoma cruzi, the parasite responsible for Chagas disease, and validate them in silico via a docking protocol, showing the applicability of the method for suggesting ligands for pockets in a real-world scenario. AVAILABILITY AND IMPLEMENTATION: Datasets and the source code are available at http://www.dcc.ufmg.br/∼dpires/acsm. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
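
A rough sketch in the spirit of a cutoff-scanning pocket signature, assuming pocket atom coordinates are already extracted; the real aCSM additionally labels atoms by pharmacophore class, which is omitted here, and the cutoff range is illustrative.

    import numpy as np
    from scipy.spatial.distance import pdist

    def pocket_signature(coords, cutoffs=np.arange(1.0, 10.01, 0.2)):
        """coords: (n_atoms x 3) array of pocket atom coordinates, in angstroms."""
        distances = pdist(coords)                             # all pairwise atom distances
        return np.array([(distances <= c).sum() for c in cutoffs])  # cumulative pair counts per cutoff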


Subject(s)
Algorithms, Ligands, Proteins/chemistry, Binding Sites, Enzymes/chemistry, Enzymes/metabolism, Humans, Models, Molecular, Molecular Conformation, Molecular Docking Simulation, Protein Binding, Protein Conformation, Proteins/metabolism, Protozoan Proteins/chemistry, Protozoan Proteins/metabolism, Trypanosoma cruzi
7.
Scientometrics ; 127(8): 5005-5026, 2022.
Article in English | MEDLINE | ID: mdl-35844248

ABSTRACT

Recent efforts have focused on identifying multidisciplinary teams and detecting co-authorship networks by exploring topic modeling to identify researchers' expertise. Though promising, none of these efforts performs a real-life evaluation of the quality of the built topics. This paper proposes a Semantic Academic Profiler (SAP) framework that summarizes articles written by researchers to automatically build research profiles and perform online evaluations of these built profiles. SAP exploits and extends state-of-the-art topic modeling strategies based on CluWords with n-grams and introduces a new visual interface able to highlight the main topics related to articles, researchers, and institutions. To evaluate SAP's capability of summarizing the profiles of such entities, as well as its usefulness for supporting online assessments of topic quality, we perform and contrast two types of evaluation over an extensive repository of Brazilian curricula vitae: (1) an offline evaluation, in which we exploit a traditional metric (NPMI) to measure the quality of several data representation strategies, including (i) TFIDF, (ii) TFIDF with bi-grams, (iii) CluWords, and (iv) CluWords with bi-grams; and (2) an online evaluation through an A/B test in which researchers evaluate their own built profiles. We also perform an online assessment of the SAP user interface through a usability test following the SUS methodology. Our experiments indicate that CluWords with bi-grams is the best solution and that the SAP interface is very useful. We also observed essential differences between the online and offline assessments, indicating that using both together is very important for a comprehensive quality evaluation. Studies of this type are scarce in the literature, and our findings open space for new lines of investigation in the topic modeling area.
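
A minimal sketch of the NPMI coherence used in the offline evaluation, assuming word and word-pair document probabilities have been precomputed from a reference corpus; function names and smoothing are illustrative.

    import math
    from itertools import combinations

    def npmi(p_i, p_j, p_ij, eps=1e-12):
        """Normalized pointwise mutual information of two topic words."""
        pmi = math.log((p_ij + eps) / (p_i * p_j + eps))
        return pmi / (-math.log(p_ij + eps))

    def topic_coherence(top_words, word_prob, pair_prob):
        """word_prob[w] and pair_prob[(w1, w2)]: document-level occurrence probabilities."""
        pairs = list(combinations(top_words, 2))
        return sum(npmi(word_prob[a], word_prob[b], pair_prob[(a, b)]) for a, b in pairs) / len(pairs)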

8.
Soc Netw Anal Min ; 12(1): 140, 2022.
Article in English | MEDLINE | ID: mdl-36187717

ABSTRACT

The debate over the COVID-19 pandemic has been constantly trending in online conversations since its beginning in 2019. The discussions on many social media platforms relate not only to health aspects of the disease but also to public policies and non-pharmacological measures to mitigate the spread of the virus, as well as to proposed alternative treatments. Divergent opinions regarding these measures lead to heated discussions and polarization. Particularly in highly politically polarized countries, users tend to be divided into those in favor of and those against government policies. In this work we present a computational method to analyze Twitter data and: (i) identify users with a high probability of being bots using only COVID-19-related messages; (ii) quantify the political polarization of the Brazilian general public in the context of the COVID-19 pandemic; (iii) analyze how bots tweet and affect political polarization. We collected over 100 million tweets from 26 April 2020 to 3 January 2021 and observed, in general, a highly polarized population (with a polarization index varying from 0.57 to 0.86), which focused on very different topics of discussion over the most polarized weeks, though all related to government and health-related events.
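
An illustrative polarization index (an assumption for exposition, not necessarily the exact metric used in the study): users carry opinion scores in [-1, 1], and the index grows when the two opinion poles are far apart and of similar size.

    import numpy as np

    def polarization_index(opinions):
        """opinions: per-user scores in [-1, 1] (e.g., pro- vs anti-government leaning)."""
        opinions = np.asarray(opinions, dtype=float)
        pos, neg = opinions[opinions > 0], opinions[opinions < 0]
        if len(pos) == 0 or len(neg) == 0:
            return 0.0                                        # a single pole is not polarized
        size_imbalance = abs(len(pos) - len(neg)) / len(opinions)
        pole_distance = (pos.mean() - neg.mean()) / 2.0       # normalized to [0, 1]
        return (1.0 - size_imbalance) * pole_distance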

9.
Data Min Knowl Discov ; 36(2): 811-840, 2022.
Article in English | MEDLINE | ID: mdl-35125931

ABSTRACT

This paper deals with the problem of modeling counterfactual reasoning in scenarios where, apart from the observed endogenous variables, we have a latent variable that affects the outcomes and, consequently, the results of counterfactual queries. This is a common setup in healthcare problems, including mental health. We propose a new framework in which the aforementioned problem is modeled as a multivariate regression and the counterfactual model accounts for both observed and latent variables, where the latter represents what we call the patient individuality factor (φ). In mental health, focusing on individuals is paramount, as past experiences can change how people see or deal with situations, but individuality cannot be directly measured. To the best of our knowledge, this is the first counterfactual approach that considers both observational and latent variables to provide deterministic answers to counterfactual queries, such as: if I change the social support of a patient, to what extent can I change his or her anxiety? The framework combines concepts from deep representation learning and causal inference to infer the value of φ and capture both non-linear and multiplicative effects of causal variables. Experiments are performed with both synthetic and real-world datasets, where we predict how changes in people's actions may lead to different outcomes in terms of symptoms of mental illness and quality of life. Results show that the model learns the individuality factor with errors lower than 0.05 and answers counterfactual queries that are supported by the medical literature. The model has the potential to recommend small changes in people's lives that may completely change their relationship with mental illness.
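
A simplified sketch of the abduction-action-prediction recipe such a framework builds on: fit an outcome model, recover a patient-specific residual as a stand-in for the individuality factor φ, then replay the model with the intervened inputs. Additivity of φ is an assumption for illustration only; the paper's model is a deep, non-additive one.

    from sklearn.ensemble import GradientBoostingRegressor

    def counterfactual_outcome(X_train, y_train, x_observed, y_observed, x_counterfactual):
        f = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
        phi = y_observed - f.predict([x_observed])[0]         # abduction: infer the individual factor
        return f.predict([x_counterfactual])[0] + phi         # action + prediction under the intervention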

10.
JMIR Public Health Surveill ; 8(6): e34020, 2022 06 15.
Article in English | MEDLINE | ID: mdl-35704360

ABSTRACT

BACKGROUND: Human behavior is crucial in health outcomes. In particular, individual behavior is a determinant of the success of measures to overcome critical conditions, such as a pandemic. In addition to the intrinsic public health challenges associated with COVID-19, in many countries some individuals decided not to get vaccinated, streets were crowded, parties were happening, and businesses struggling to survive were partially open, despite lockdown or stay-at-home instructions. These behaviors contrast with the guidance on the potential benefits of social distancing, use of masks, and vaccination for managing collective and individual risks. OBJECTIVE: Considering that human behavior is a result of individuals' social and economic conditions, we investigated the social and working characteristics associated with reports of appropriate protective behavior in Brazil. METHODS: We analyzed data from a large web survey of individuals reporting their behavior during the pandemic. We selected 3 common self-care measures: use of protective masks, distancing of at least 1 m when out of the house, and handwashing or use of alcohol, combined with assessment of the social context of respondents. We measured the frequency of the use of these self-protective measures. Using a frequent pattern-mining perspective, we generated association rules from sets of answers to questions that co-occur with at least a given frequency, identifying the pattern of characteristics of the groups divided according to protective behavior reports. RESULTS: The rationale was to identify a pool of working and social characteristics associated with better adherence to protective behaviors and self-care measures, showing that these are more socially determined than previously thought. We identified common patterns of socioeconomic and working determinants of compliance with protective self-care measures. Data mining showed that social determinants might be important in shaping behavior in different stages of the pandemic. CONCLUSIONS: Identification of context determinants might help identify unexpected facilitators of, and constraints on, fully following public policies. The context of diseases contributes to psychological and physical health outcomes, and understanding that context might change the approach to a disease. Hidden social determinants might change protective behavior, and the social determinants of protective behavior related to COVID-19 are related to work and economic conditions. TRIAL REGISTRATION: Not applicable.
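
A sketch of the frequent-pattern step using the mlxtend library (one possible tool; the support and confidence thresholds are illustrative, and the column names are assumptions).

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    def protective_behavior_rules(answers: pd.DataFrame):
        """answers: one-hot encoded survey items, e.g. 'works_from_home', 'always_wears_mask'."""
        frequent = apriori(answers.astype(bool), min_support=0.10, use_colnames=True)
        rules = association_rules(frequent, metric="confidence", min_threshold=0.70)
        return rules.sort_values("lift", ascending=False)     # strongest co-occurring characteristics first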


Subject(s)
COVID-19, COVID-19/epidemiology, COVID-19/prevention & control, Communicable Disease Control, Humans, Pandemics/prevention & control, SARS-CoV-2, Social Determinants of Health
11.
BMC Genomics ; 12 Suppl 4: S12, 2011 Dec 22.
Article in English | MEDLINE | ID: mdl-22369665

ABSTRACT

BACKGROUND: The unforgiving pace of growth of available biological data has increased the demand for efficient and scalable paradigms, models and methodologies for automatic annotation. In this paper, we present a novel structure-based protein function prediction and structural classification method: Cutoff Scanning Matrix (CSM). CSM generates feature vectors that represent distance patterns between protein residues. These feature vectors are then used as evidence for classification. Singular value decomposition is used as a preprocessing step to reduce dimensionality and noise. The aspect of protein function considered in the present work is enzyme activity. A series of experiments was performed on datasets based on Enzyme Commission (EC) numbers and mechanistically different enzyme superfamilies, as well as on other datasets derived from SCOP release 1.75. RESULTS: CSM was able to achieve a precision of up to 99% after SVD preprocessing for a database derived from manually curated protein superfamilies and up to 95% for a dataset of the 950 most-populated EC numbers. Moreover, we conducted experiments to verify our ability to assign SCOP class, superfamily, family and fold to protein domains. An experiment using the whole set of domains found in the latest SCOP release yielded high levels of precision and recall (up to 95%). Finally, we compared our structural classification results with those in the literature to place this work in context. Our method was capable of significantly improving the recall of a previous study while preserving a comparable precision level. CONCLUSIONS: We showed that the patterns derived from CSMs can effectively be used to predict protein function and thus help with automatic function annotation. We also demonstrated that our method is effective in structural classification tasks. These facts reinforce the idea that the pattern of inter-residue distances is an important component of family structural signatures. Furthermore, singular value decomposition provided a consistent increase in precision and recall, which makes it an important preprocessing step when dealing with noisy data.
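
A sketch of the cutoff-scanning idea described above, assuming C-alpha coordinates per structure: count inter-residue distances falling in each shell of a scanned cutoff range, then reduce the resulting matrix with SVD before classification. The distance range, step, and number of components are illustrative.

    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.decomposition import TruncatedSVD

    def csm_vector(ca_coords, cutoffs=np.arange(0.0, 30.01, 0.2)):
        d = pdist(ca_coords)                                  # inter-residue (C-alpha) distances
        return np.array([((d > lo) & (d <= hi)).sum()         # pair count per distance shell
                         for lo, hi in zip(cutoffs[:-1], cutoffs[1:])])

    def csm_features(structures_ca_coords, n_components=50):
        matrix = np.vstack([csm_vector(c) for c in structures_ca_coords])
        return TruncatedSVD(n_components=n_components, random_state=0).fit_transform(matrix)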


Subject(s)
Enzymes/metabolism, Software, Databases, Protein, Enzymes/chemistry, Enzymes/classification, Protein Folding, Protein Structure, Tertiary
12.
J Am Med Inform Assoc ; 28(9): 1834-1842, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34279636

ABSTRACT

OBJECTIVE: Rheumatic heart disease (RHD) affects an estimated 39 million people worldwide and is the most common acquired heart disease in children and young adults. Echocardiograms are the gold standard for diagnosis of RHD, but there is a shortage of skilled experts to allow widespread screening for early detection and prevention of disease progression. We propose an automated RHD diagnosis system that can help bridge this gap. MATERIALS AND METHODS: Experiments were conducted on a dataset with 11 646 echocardiography videos from 912 exams, obtained during screenings in underdeveloped areas of Brazil and Uganda. We address the challenges of RHD identification with a 3D convolutional neural network (C3D), comparing its performance with that of a 2D convolutional neural network (VGG16) commonly used in the echocardiogram literature. We also propose a supervised aggregation technique to combine video predictions into a single exam diagnosis. RESULTS: The proposed approach obtained an accuracy of 72.77% for exam diagnosis. The results for the C3D were significantly better than those obtained by the VGG16 network for videos, showing the importance of considering temporal information during diagnosis. The proposed aggregation model showed significantly better accuracy than the majority-voting strategy and also appears capable of capturing underlying biases in the neural network output distribution, balancing them for a more correct diagnosis. CONCLUSION: Automatic diagnosis of echo-detected RHD is feasible and, with further research, has the potential to reduce the workload of experts, enabling the implementation of more widespread screening programs worldwide.
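
A hedged sketch of a supervised aggregation step: summarize the per-video probabilities of an exam into a few statistics and train a classifier on them to produce the exam-level diagnosis. The authors' exact aggregation model may differ; the feature choices here are assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def exam_features(video_probs):
        p = np.asarray(video_probs, dtype=float)              # RHD probability for each video of one exam
        return [p.mean(), p.max(), p.min(), p.std(), (p > 0.5).mean()]

    def fit_exam_aggregator(exams_video_probs, exam_labels):
        X = np.array([exam_features(p) for p in exams_video_probs])
        return LogisticRegression().fit(X, exam_labels)       # predicts one diagnosis per exam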


Subject(s)
Deep Learning, Rheumatic Heart Disease, Child, Early Diagnosis, Echocardiography, Humans, Mass Screening, Rheumatic Heart Disease/diagnostic imaging, Young Adult
13.
Nat Commun ; 12(1): 5117, 2021 08 25.
Article in English | MEDLINE | ID: mdl-34433816

ABSTRACT

The electrocardiogram (ECG) is the most commonly used exam for the evaluation of cardiovascular diseases. Here we propose that the age predicted by artificial intelligence (AI) from the raw ECG (ECG-age) can be a measure of cardiovascular health. A deep neural network is trained to predict a patient's age from the 12-lead ECG in the CODE study cohort (n = 1,558,415 patients). On a 15% hold-out split, patients with an ECG-age more than 8 years greater than their chronological age have a higher mortality rate (hazard ratio (HR) 1.79, p < 0.001), whereas those with an ECG-age more than 8 years lower have a lower mortality rate (HR 0.78, p < 0.001). Similar results are obtained in the external cohorts ELSA-Brasil (n = 14,236) and SaMi-Trop (n = 1,631). Moreover, even for apparently normal ECGs, the gap between predicted ECG-age and chronological age remains a statistically significant risk predictor. These results show that AI-enabled analysis of the ECG can add prognostic information.
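
A sketch of the kind of survival analysis behind the reported hazard ratios, using the lifelines library; the column names are assumptions, and covariate adjustment is omitted.

    import pandas as pd
    from lifelines import CoxPHFitter

    def ecg_age_hazard(df: pd.DataFrame):
        """df columns (assumed): follow_up_years, died (0/1), ecg_age_gap_gt8
        (indicator that ECG-age exceeds chronological age by more than 8 years)."""
        cph = CoxPHFitter()
        cph.fit(df, duration_col="follow_up_years", event_col="died")
        return cph.summary                                    # hazard ratios (exp(coef)) and p-values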


Subject(s)
Cardiovascular Diseases/mortality, Neural Networks, Computer, Adolescent, Adult, Age Factors, Aged, Cardiovascular Diseases/diagnosis, Child, Cohort Studies, Electrocardiography, Female, Humans, Male, Middle Aged, Young Adult
14.
Nutrition ; 79-80: 110961, 2020.
Article in English | MEDLINE | ID: mdl-32919184

ABSTRACT

OBJECTIVES: The Global Leadership Initiative on Malnutrition (GLIM) was proposed to provide a common malnutrition diagnostic framework. The aims of this study were to evaluate the applicability and validity of the GLIM and to use machine-learning techniques to help identify the best malnutrition-related variables and combinations to predict complications in patients undergoing gastrointestinal (GI) surgeries. METHODS: This was a prospective cohort study enrolling surgical patients with GI diseases. Malnutrition prevalence was classified by the GLIM, subjective global assessment (SGA), and various anthropometric parameters. The various combinations of the phenotypic criteria generated 10 different models. Sensitivity (SE) and specificity (SP) were calculated using SGA as the reference criterion. Machine-learning approaches were used to predict complications. P < 0.05 was set as statistically significant. RESULTS: We evaluated 206 patients. Half of the patients were malnourished according to SGA, and 16.5% had postoperative complications. The prevalence of malnutrition using GLIM varied from 10.7% to 41.3% in the whole population, from 11.7% to 43.6% in the elderly, from 0 to 24% in overweight non-obese patients, and from 0 to 19.6% in obese patients. SE and SP values varied between 61.2% and 100% and between 55.3% and 98.1%, respectively, for the general population. Machine-learning models indicated that midarm circumference, one of the GLIM models, and midarm muscle area were the most relevant criteria for predicting complications. CONCLUSIONS: The various GLIM combinations provided different rates of malnutrition depending on the population. Machine-learning techniques supported the use of common single variables and of one GLIM model to predict postoperative complications.
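
A minimal sketch of the validity analysis: sensitivity and specificity of one GLIM combination against SGA as the reference standard, with per-patient boolean labels assumed as input.

    def sensitivity_specificity(glim_positive, sga_positive):
        """Both arguments: per-patient booleans, True meaning malnourished."""
        tp = sum(g and s for g, s in zip(glim_positive, sga_positive))
        tn = sum((not g) and (not s) for g, s in zip(glim_positive, sga_positive))
        fp = sum(g and (not s) for g, s in zip(glim_positive, sga_positive))
        fn = sum((not g) and s for g, s in zip(glim_positive, sga_positive))
        return tp / (tp + fn), tn / (tn + fp)                 # (sensitivity, specificity)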


Subject(s)
Leadership, Malnutrition, Aged, Anthropometry, Humans, Malnutrition/diagnosis, Malnutrition/epidemiology, Nutrition Assessment, Nutritional Status, Pilot Projects, Prospective Studies
15.
Nat Commun ; 11(1): 1760, 2020 04 09.
Article in English | MEDLINE | ID: mdl-32273514

ABSTRACT

The role of automatic electrocardiogram (ECG) analysis in clinical practice is limited by the accuracy of existing models. Deep neural networks (DNNs) are models composed of stacked transformations that learn tasks from examples. This technology has recently achieved striking success in a variety of tasks, and there are great expectations about how it might improve clinical practice. Here we present a DNN model trained on a dataset of more than 2 million labeled exams analyzed by the Telehealth Network of Minas Gerais and collected under the scope of the CODE (Clinical Outcomes in Digital Electrocardiology) study. The DNN outperforms cardiology resident medical doctors in recognizing 6 types of abnormalities in 12-lead ECG recordings, with F1 scores above 80% and specificity over 99%. These results indicate that ECG analysis based on DNNs, previously studied in a single-lead setup, generalizes well to 12-lead exams, taking the technology closer to standard clinical practice.


Subject(s)
Atrial Fibrillation/diagnosis, Cardiology/methods, Deep Learning, Electrocardiography, Neural Networks, Computer, Adolescent, Adult, Aged, Aged, 80 and over, Atrial Fibrillation/physiopathology, Humans, Middle Aged, Reproducibility of Results, Sensitivity and Specificity, Young Adult
16.
17.
Proteins ; 74(3): 727-43, 2009 Feb 15.
Article in English | MEDLINE | ID: mdl-18704933

ABSTRACT

In this study, we carried out a comparative analysis between two classical methodologies to prospect residue contacts in proteins: the traditional cutoff-dependent (CD) approach and cutoff-free Delaunay tessellation (DT). In addition, two alternative coarse-grained forms of representing residues were tested: using the alpha carbon (CA) and the side-chain geometric center (GC). A database was built comprising three top classes: all alpha, all beta, and alpha/beta. We found that the cutoff value at about 7.0 Å emerges as an important distance parameter. Up to 7.0 Å, CD and DT properties are unified, which implies that at this distance all contacts are complete and legitimate (not occluded). We also show that DT has an intrinsic missing-edges problem when mapping the first layer of neighbors. In proteins, it may produce systematic errors affecting mainly the contact network in beta chains with CA. The almost-Delaunay (AD) approach has been proposed to solve this DT problem. We found that even AD may not be an advantageous solution. As a consequence, in the strict range up to 7.0 Å, the CD approach proved to be a simpler, more complete, and more reliable technique than DT or AD. Finally, we show that coarse-grained residue representations may introduce bias into the analysis of neighbors at cutoffs up to 6.8 Å, with CA favoring alpha proteins and GC favoring beta proteins. This provides an additional argument pointing to the value of 7.0 Å as an important lower-bound cutoff to be used in contact analysis of proteins.
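
A sketch contrasting the two contact definitions compared above, for one set of residue representative coordinates (C-alpha or side-chain geometric centers); the 7.0 Å cutoff follows the text, and SciPy is one possible implementation choice.

    import numpy as np
    from itertools import combinations
    from scipy.spatial import Delaunay, cKDTree

    def cutoff_contacts(coords, cutoff=7.0):
        """coords: (n_residues x 3) array; returns residue index pairs within the cutoff (Å)."""
        return set(cKDTree(coords).query_pairs(r=cutoff))

    def delaunay_contacts(coords):
        edges = set()
        for simplex in Delaunay(coords).simplices:            # tetrahedra in 3D
            edges.update(tuple(sorted(e)) for e in combinations(simplex, 2))
        return edges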


Subject(s)
Proteins/chemistry, Binding Sites, Databases, Protein, Models, Molecular, Protein Conformation, Protein Folding, Proteins/metabolism
18.
Spat Spatiotemporal Epidemiol ; 29: 163-175, 2019 06.
Article in English | MEDLINE | ID: mdl-31128626

ABSTRACT

Typical spatial disease surveillance systems associate a single address with each reported disease case, usually the residence address. Social network data offer a unique opportunity to obtain information on the spatial movements of individuals as well as their disease status as cases or controls. This provides information to identify visit locations with a high risk of infection, even in regions where no one lives, such as parks and entertainment zones. We develop two probability models to characterize these high-risk regions. We use a large Twitter dataset from Brazilian users to search for spatial clusters through analysis of the tweets' locations and textual content. We apply our models to both real-world and simulated data, demonstrating their advantage over the usual spatial scan statistic for this type of data.


Subject(s)
Dengue/epidemiology, Population Surveillance, Social Networking, Aedes/physiology, Animals, Brazil/epidemiology, Cluster Analysis, Dengue/etiology, Dengue/prevention & control, Humans, Risk Factors, Spatial Analysis
19.
Int J Parallel Program ; 36(2): 250-266, 2008 Apr.
Article in English | MEDLINE | ID: mdl-22582009

ABSTRACT

Scientific workflow systems have been introduced in response to the demand of researchers from several domains of science who need to process and analyze increasingly larger datasets. The design of these systems is largely based on the observation that data analysis applications can be composed as pipelines or networks of computations on data. In this work, we present a runtime support system that is designed to facilitate this type of computation in distributed computing environments. Our system is optimized for data-intensive workflows, in which efficient management and retrieval of data, coordination of data processing and data movement, and check-pointing of intermediate results are critical and challenging issues. Experimental evaluation of our system shows that linear speedups can be achieved for sophisticated applications, which are implemented as a network of multiple data processing components.

20.
Article in English | MEDLINE | ID: mdl-28636811

ABSTRACT

The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have traditionally been exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore processors and graphics processing units (GPUs) with a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; and multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs and 3D simulations on a ventricular mouse mesh, i.e., a complex geometry, under sinus-rhythm and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by a factor of more than 498 for a complex cellular model in a slab geometry and by a factor of 165 in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy.


Subject(s)
Algorithms, Cardiac Electrophysiology/methods, Computer Graphics, Animals, Heart Ventricles/anatomy & histology, Heart Ventricles/diagnostic imaging, Imaging, Three-Dimensional, Magnetic Resonance Imaging, Male, Mice, Mice, Inbred C57BL