Results 1 - 20 of 33
1.
Annu Int Conf IEEE Eng Med Biol Soc; 2018: 3244-3247, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441083

ABSTRACT

There are between 6,000 and 7,000 known rare diseases today. Identifying and diagnosing a patient with a rare disease is time-consuming, cumbersome, and cost-intensive, and requires resources generally available only at large hospital centers. Furthermore, most medical doctors, especially general practitioners, will likely see only one patient with a rare disease, if at all. A cognitive assistant for differential diagnosis in rare disease provides online knowledge on all rare diseases, helps create a weighted list of diagnoses, and gives access to the evidence base on which the list was created. The system is built on knowledge graph technology that incorporates data from ICD-10, DOID, MedDRA, PubMed, Wikipedia, Orphanet, the CDC, and anonymized patient data. The final knowledge graph comprised over 500,000 nodes. The solution was tested with 101 published rare disease cases. The learning system improves over training sprints and delivers 79.5% accuracy in finding the diagnosis within the top 1% of nodes. A further learning step was taken to rank the correct result within the top 15 hits. With a reduced data pool, 51% of the 101 cases were tested, delivering the correct result within the top 3-13 hits (top 6 on average) for 74% of these cases. The results show that data curation is among the most critical aspects of delivering accurate results. The knowledge graph technology demonstrates its power to deliver cognitive solutions for differential diagnosis in rare disease that can be applied in clinical practice.
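The abstract gives no implementation details, but the core mechanism, scoring candidate diagnoses by weighted symptom-disease links in a knowledge graph, can be sketched in a few lines. Everything below (the edge list, the weights, and the rank_diagnoses helper) is an illustrative assumption, not the system described above.

```python
from collections import defaultdict

# Toy symptom->disease edges with weights, standing in for a knowledge graph
# built from sources such as Orphanet, DOID and MedDRA. All names and weights
# are invented for illustration only.
EDGES = [
    ("muscle weakness", "Pompe disease", 0.8),
    ("enlarged heart", "Pompe disease", 0.9),
    ("muscle weakness", "Duchenne muscular dystrophy", 0.7),
    ("enlarged heart", "Fabry disease", 0.4),
    ("corneal opacity", "Fabry disease", 0.6),
]

def rank_diagnoses(observed_symptoms, edges):
    """Score each disease by the summed weight of matching symptom edges."""
    scores = defaultdict(float)
    for symptom, disease, weight in edges:
        if symptom in observed_symptoms:
            scores[disease] += weight
    # Highest-scoring diseases first: a weighted differential diagnosis list.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    patient = {"muscle weakness", "enlarged heart"}
    for disease, score in rank_diagnoses(patient, EDGES):
        print(f"{disease}: {score:.2f}")
```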


Subject(s)
Cognition, Rare Diseases, Differential Diagnosis, Humans
2.
Adv Exp Med Biol; 1031: 387-404, 2017.
Article in English | MEDLINE | ID: mdl-29214584

ABSTRACT

Personalised medicine (PM) has become a reality over the last few years. The emergence of 'omics' and big data has started to revolutionize healthcare. New 'omics' technologies lead to better molecular characterization of diseases and a new understanding of their complexity. The PM approach is already applied successfully in different healthcare areas such as oncology, cardiology, nutrition, and rare diseases. However, health systems across the EU often still promote a 'one-size-fits-all' approach, even though patients are known to vary greatly in their molecular characteristics and in their response to drugs and other interventions. To exploit the full potential of PM in the years ahead, several challenges need to be addressed, such as the integration of big data, patient empowerment, translation of basic to clinical research, bringing innovation to market, and shaping sustainable healthcare systems.


Subject(s)
Genomics/methods, Precision Medicine/methods, Rare Diseases/therapy, Translational Biomedical Research/methods, Data Mining, Databases, Factual, Genetic Predisposition to Disease, Humans, Phenotype, Predictive Value of Tests, Prognosis, Rare Diseases/diagnosis, Rare Diseases/epidemiology, Rare Diseases/genetics, Registries, Risk Factors
3.
Public Health Genomics; 20(6): 312-320, 2017.
Article in English | MEDLINE | ID: mdl-29617688

ABSTRACT

Digitization is expected to radically transform healthcare. With seemingly unlimited opportunities to collect data, it will play an important role in the public health policy-making process. In this context, health data cooperatives (HDC) are a core element of public health policy-making and of efforts to exploit the potential of all existing and rapidly emerging data sources. Leveraging all these data requires overcoming the computational, algorithmic, and technological challenges that characterize today's highly heterogeneous data landscape, as well as a host of diverse regulatory, normative, governance, and policy constraints. The full potential of big data can only be realized if data are made accessible and shared. Treating research data as a public good, creating HDCs to empower citizens through citizen-owned health data, and allowing data access for research and for the development of new diagnostics, therapies, and public health policies will yield the transformative impact of digital health. The HDC model for data governance is an arrangement, based on moral codes, that encourages citizens to participate in the improvement of their own health. This in turn enables public health institutions and policymakers to monitor policy changes and evaluate their impact and risk at a population level.

4.
Public Health Genomics; 20(6): 321-331, 2017.
Article in English | MEDLINE | ID: mdl-29936514

ABSTRACT

INTRODUCTION: Currently, an abundance of highly relevant health data is locked up in data silos due to decentralized storage and data protection laws. The health data cooperative (HDC) model was established to make these valuable data available for societal purposes. The aim of this study is to analyse the HDC model and its potentials and challenges. RESULTS: An HDC is a health data bank. The HDC model has as core principles a cooperative approach, citizen-centredness, a not-for-profit structure, a data enquiry procedure, worldwide accessibility, cloud-based data storage, open source, and transparency about governance policy. HDC members have access to the HDC platform, which consists of the "core," the "app store," and the "big data." These, respectively, enable users to collect, store, manage, and share health information; to analyse personal health data; and to conduct big data analytics. Identified potentials of the HDC model are digitization of healthcare information, citizen empowerment, knowledge benefit, patient empowerment, cloud-based data storage, and reduction in healthcare expenses. Nevertheless, there are also challenges linked to this approach, including privacy and data security, citizens' restraint, disclosure of clinical results, big data, and commercial interest. LIMITATIONS AND OUTLOOK: The results of this article are not generalizable because the included studies each involved a limited number of participants. It is therefore recommended to undertake further, more elaborate research on these topics among larger and more varied groups of individuals. Additionally, more pilots of the HDC model are required before it can be fully implemented. Moreover, once the HDC model becomes operational, further research on its performance should be undertaken.

5.
Public Health Genomics; 20(5): 274-285, 2017.
Article in English | MEDLINE | ID: mdl-29353273

ABSTRACT

Sepsis, with its often devastating consequences for patients and their families, remains a major public health concern that poses an increasing financial burden. Early resuscitation, together with elucidation of the biological pathways and pathophysiological mechanisms through the use of "-omics" technologies, has started changing the clinical and research landscape in sepsis. Metabolomics (i.e., the study of the metabolome), an "-omics" technology further down the "-omics" cascade between the genome and the phenome, could be particularly fruitful in sepsis research, with the potential to alter clinical practice. Apart from its benefit for the individual patient, metabolomics has an impact on public health that extends beyond its applications in medicine. In this review, we present recent developments in metabolomics research in sepsis, with a focus on pneumonia, and we discuss the impact of metabolomics on public health, with a focus on free/libre open source software.


Subject(s)
Metabolomics, Pneumonia, Sepsis, Humans, Inventions, Metabolome, Metabolomics/methods, Metabolomics/trends, Pneumonia/complications, Pneumonia/microbiology, Sepsis/etiology, Sepsis/metabolism
6.
Genet Epidemiol; 41(1): 51-60, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27873357

ABSTRACT

The use of data analytics across the entire healthcare value chain, from drug discovery and development through epidemiology to informed clinical decisions for patients or policy-making for public health, has exploded in recent years. The increase in the quantity and variety of available data, together with improved storage capabilities and analytical tools, offers numerous possibilities to all stakeholders (manufacturers, regulators, payers, healthcare providers, decision makers, researchers); most importantly, it has the potential to improve general health outcomes if we learn to exploit it in the right way. This article looks at the different sources of data and the importance of unstructured data. It then summarizes current and potential future uses in drug discovery, development, and monitoring, as well as in public and personal healthcare, including examples of good practice and recent developments. Finally, we discuss the main practical and ethical challenges to unlocking the full potential of big data in healthcare and conclude that all stakeholders need to work together towards the common goal of making sense of the available data for the common good.


Asunto(s)
Conjuntos de Datos como Asunto/estadística & datos numéricos , Toma de Decisiones , Atención a la Salud , Descubrimiento de Drogas , Medicina de Precisión , Salud Pública , Genómica , Humanos
7.
Cancer Epidemiol Biomarkers Prev; 25(12): 1619-1624, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27539266

ABSTRACT

BACKGROUND: We have developed a genome-wide association study analysis method called DEPTH (DEPendency of association on the number of Top Hits) to identify genomic regions potentially associated with disease by considering overlapping groups of contiguous markers (e.g., SNPs) across the genome. DEPTH is a machine learning algorithm for feature ranking of ultra-high dimensional datasets, built from well-established statistical tools such as bootstrapping, penalized regression, and decision trees. Unlike marginal regression, which considers each SNP individually, the key idea behind DEPTH is to rank groups of SNPs in terms of their joint strength of association with the outcome. Our aim was to compare the performance of DEPTH with that of standard logistic regression analysis. METHODS: We selected 1,854 prostate cancer cases and 1,894 controls from the UK for whom 541,129 SNPs were measured using the Illumina Infinium HumanHap550 array. Confirmation was sought using 4,152 cases and 2,874 controls, ascertained from the UK and Australia, for whom 211,155 SNPs were measured using the iCOGS Illumina Infinium array. RESULTS: From the DEPTH analysis, we identified 14 regions associated with prostate cancer risk that had been reported previously, five of which would not have been identified by conventional logistic regression. We also identified 112 novel putative susceptibility regions. CONCLUSIONS: DEPTH can reveal new risk-associated regions that would not have been identified using a conventional logistic regression analysis of individual SNPs. IMPACT: This study demonstrates that the DEPTH algorithm could identify additional genetic susceptibility regions that merit further investigation. Cancer Epidemiol Biomarkers Prev; 25(12); 1619-24. ©2016 AACR.
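DEPTH itself is not reproduced here; the sketch below only illustrates the underlying idea of ranking overlapping windows of contiguous SNPs by their joint association strength, using bootstrapped penalized regression rather than per-SNP marginal regression. The data are synthetic, and the window size, penalty, and scoring rule are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic genotypes (individuals x SNPs, coded 0/1/2) and case/control labels;
# a weak signal is planted in SNPs 10-11 so some windows should rank highly.
n, p, window, n_boot = 200, 50, 5, 20
X = rng.integers(0, 3, size=(n, p)).astype(float)
y = (X[:, 10] + X[:, 11] + rng.normal(0, 1, n) > 3).astype(int)

def window_scores(X, y):
    """Score overlapping windows of contiguous SNPs via bootstrapped L1-penalized fits."""
    scores = np.zeros(p - window + 1)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # bootstrap resample of individuals
        for start in range(p - window + 1):
            cols = slice(start, start + window)
            model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
            model.fit(X[idx, cols], y[idx])
            # A window scores highly when the penalized fit keeps its SNPs jointly.
            scores[start] += np.count_nonzero(model.coef_)
    return scores / n_boot

scores = window_scores(X, y)
print("top-ranked windows (start SNP):", np.argsort(scores)[::-1][:3])
```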


Subject(s)
Genetic Predisposition to Disease, Genome-Wide Association Study/methods, Machine Learning, Polymorphism, Single Nucleotide, Prostatic Neoplasms/genetics, Australia, Humans, Male, Middle Aged, United Kingdom
8.
Public Health Genomics; 19(4): 211-9, 2016.
Article in English | MEDLINE | ID: mdl-27241319

ABSTRACT

BACKGROUND: Knowledge in the era of Omics and Big Data has been increasingly conceptualized as a public good. Sharing of de-identified patient data has been advocated as a means to increase confidence and public trust in the results of clinical trials. On the other hand, research has shown that the current research and development model of the biopharmaceutical industry has reached its innovation capacity. In response, the biopharmaceutical industry has adopted open innovation practices, with clinical trial data sharing among the most interesting of these. However, due to the free-rider problem, clinical trial data sharing among biopharmaceutical companies could undermine their innovativeness. METHOD: Based on the theory of public goods, we have developed a commons arrangement and devised a model that enables secure and fair clinical trial data sharing over a Virtual Knowledge Bank based on a web platform. Our model uses data as a virtual currency and treats knowledge as a club good. RESULTS: Fair sharing of clinical trial data over the Virtual Knowledge Bank has positive effects on the innovation capacity of the biopharmaceutical industry without compromising its intellectual property rights, proprietary interests, and competitiveness. CONCLUSION: The Virtual Knowledge Bank is a sustainable and self-expanding model for secure and fair clinical trial data sharing that, at the same time, increases the innovation capacity of the biopharmaceutical industry.


Subject(s)
Clinical Trials as Topic, Drug Industry/organization & administration, Information Dissemination/methods, Organizational Innovation, Biomedical Research, Humans, Intellectual Property, Models, Theoretical, Social Responsibility
9.
Health Inf Sci Syst; 3(Suppl 1 HISA Big Data in Biomedicine and Healthcare 2013 Con): S3, 2015.
Article in English | MEDLINE | ID: mdl-25870758

ABSTRACT

Genome-wide association studies (GWAS) are a common approach for the systematic discovery of single nucleotide polymorphisms (SNPs) associated with a given disease. The univariate analysis approaches commonly employed may miss important SNP associations in complex diseases that only appear through multivariate analysis. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers for this task. Based on our results, we estimate that a three-way interaction analysis of 1.1 million-SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how improvements in the hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer, "Sequoia," at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained, as our results suggest. Given that the implementation used in this study can be further optimised, these runtimes mean it is becoming feasible to carry out exhaustive analysis of higher-order interaction studies on large modern GWAS.
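The framework and test statistic used in the paper are not shown here; the sketch below only conveys the combinatorial core of exhaustive three-way interaction analysis, enumerating every SNP triple and applying an independence test to a genotype-by-phenotype contingency table, which is what explodes at the 1.1 million-SNP scale quoted above. The chi-square test and all data are stand-ins chosen for illustration.

```python
from itertools import combinations

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

# Tiny synthetic dataset: at GWAS scale (~1.1 million SNPs) the number of
# triples is on the order of 2e17, which is why a supercomputer is needed.
n_samples, n_snps = 300, 12
genotypes = rng.integers(0, 3, size=(n_samples, n_snps))   # 0/1/2 per SNP
phenotype = rng.integers(0, 2, size=n_samples)             # case/control

def three_way_scan(genotypes, phenotype):
    """Exhaustively test every SNP triple for joint association with the phenotype."""
    results = []
    for i, j, k in combinations(range(genotypes.shape[1]), 3):
        # Encode each genotype triple as one joint category (27 possible cells).
        joint = genotypes[:, i] * 9 + genotypes[:, j] * 3 + genotypes[:, k]
        table = np.zeros((27, 2))
        for cell, case in zip(joint, phenotype):
            table[cell, case] += 1
        table = table[table.sum(axis=1) > 0]   # drop empty rows for a valid test
        _, pval, _, _ = chi2_contingency(table)
        results.append(((i, j, k), pval))
    return sorted(results, key=lambda r: r[1])[:5]

print(three_way_scan(genotypes, phenotype))
```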

10.
Health Inf Sci Syst; 3(Suppl 1 HISA Big Data in Biomedicine and Healthcare 2013 Con): S7, 2015.
Article in English | MEDLINE | ID: mdl-25870761

ABSTRACT

Even with the advent of next-generation sequencing (NGS) technologies, which have revolutionised the field of bacterial genomics in recent years, a major barrier still exists to the implementation of NGS for routine microbiological use in public health and clinical microbiology laboratories. Such routine use would make a big difference to investigations of pathogen transmission and to the prevention and control of (sometimes lethal) infections. Critical requirements for routine use in a public health setting include handling the inherent complexity and high frequency of analyses of very large sets of bacterial DNA sequence data, ensuring data provenance and automatically tracking and logging all analyses for audit purposes, delivering quick and accurate results, and providing a user-friendly interface for regular, non-technical laboratory staff. No current system meets all of these requirements in an integrated manner. In this paper, we describe a system for sequence analysis and interpretation that is highly automated, tackles the issues raised above, and is designed for use in diagnostic laboratories by healthcare workers with no specialist bioinformatics knowledge.
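The system described above is not public in this abstract, so the snippet below is only a generic sketch of two of the stated requirements, data provenance and automatic audit logging: a pipeline step is wrapped so that input checksums, the exact command, and timestamps are appended to an audit log. The tool and file names in the commented example are hypothetical.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum an input file so every analysis can be traced back to the exact data."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def run_step(name, cmd, inputs, audit_log="audit_log.jsonl"):
    """Run one pipeline step and append a provenance record for audit purposes."""
    record = {
        "step": name,
        "command": cmd,
        "inputs": {str(p): sha256(Path(p)) for p in inputs},
        "started": datetime.now(timezone.utc).isoformat(),
    }
    completed = subprocess.run(cmd, capture_output=True, text=True)
    record["finished"] = datetime.now(timezone.utc).isoformat()
    record["return_code"] = completed.returncode
    with open(audit_log, "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return completed

# Hypothetical usage: log an assembly step (tool and file names are examples only).
# run_step("assembly",
#          ["spades.py", "-1", "reads_1.fq", "-2", "reads_2.fq", "-o", "asm"],
#          inputs=["reads_1.fq", "reads_2.fq"])
```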

11.
Pathogens; 3(2): 437-58, 2014 Jun 11.
Article in English | MEDLINE | ID: mdl-25437808

ABSTRACT

Recent advances in DNA sequencing technologies have the potential to transform the field of clinical and public health microbiology, and in the last few years numerous case studies have demonstrated successful applications in this context. Among other considerations, a lack of user-friendly data analysis and interpretation tools has been frequently cited as a major barrier to routine use of these techniques. Here we consider the requirements of microbiology laboratories for the analysis, clinical interpretation and management of bacterial whole-genome sequence (WGS) data. Then we discuss relevant, existing WGS analysis tools. We highlight many essential and useful features that are represented among existing tools, but find that no single tool fulfils all of the necessary requirements. We conclude that to fully realise the potential of WGS analyses for clinical and public health microbiology laboratories of all scales, we will need to develop tools specifically with the needs of these laboratories in mind.

12.
Stud Health Technol Inform; 205: 1173-7, 2014.
Article in English | MEDLINE | ID: mdl-25160374

ABSTRACT

The supplementation of medical data with environmental data offers rich new insights that can improve decision-making within health systems and the healthcare profession. In this study, we simulate disease incidence for various scenarios using a mathematical model and then visualise the spread of infectious disease in human populations over time and across geographies. We demonstrate this for malaria, one of the top three causes of mortality for children under the age of 5 years in sub-Saharan Africa, and its associated interventions within Kenya. We show how information can be collected, analysed, and presented in new ways to inform key decision makers in understanding the prevalence of disease and the response to interventions.
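The abstract does not specify the mathematical model used, so the following is a minimal compartmental (SIR-style) sketch of how incidence might be simulated under a baseline and an intervention scenario; all parameter values are invented for illustration and are not calibrated to malaria in Kenya.

```python
import numpy as np

def simulate_incidence(beta, gamma, days=365, population=100_000, initial_infected=50):
    """Discrete-time SIR-style simulation returning daily new infections."""
    s, i = population - initial_infected, initial_infected
    incidence = []
    for _ in range(days):
        new_infections = beta * s * i / population
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        incidence.append(new_infections)
    return np.array(incidence)

# Hypothetical scenarios: baseline versus an intervention (e.g., bed nets)
# that lowers the transmission rate; parameter values are illustrative only.
baseline = simulate_incidence(beta=0.30, gamma=0.10)
with_nets = simulate_incidence(beta=0.18, gamma=0.10)
print("total cases, baseline:    ", round(baseline.sum()))
print("total cases, intervention:", round(with_nets.sum()))
```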


Asunto(s)
Sistemas de Información Geográfica , Imagenología Tridimensional/métodos , Malaria/epidemiología , Malaria/prevención & control , Vigilancia de la Población/métodos , Análisis Espacio-Temporal , África del Sur del Sahara/epidemiología , Femenino , Geografía Médica , Humanos , Incidencia , Lactante , Recién Nacido , Masculino
14.
Article in English | MEDLINE | ID: mdl-23734785

ABSTRACT

We have developed the capability to rapidly simulate cardiac electrophysiological phenomena in a human heart discretised at a resolution comparable to the length of a cardiac myocyte. Previous scientific investigations have generally invoked simplified geometries or coarse-resolution hearts, with simulation durations limited to tens of heartbeats. Using state-of-the-art high-performance computing techniques coupled with one of the most powerful computers available (the 20 PFlop/s IBM BlueGene/Q at Lawrence Livermore National Laboratory), high-resolution simulation of the human heart can now be carried out over 1,200 times faster than published results in the field. We demonstrate the utility of this capability by simulating, for the first time, the formation of transmural re-entrant waves in a 3D human heart. Such wave patterns are thought to underlie Torsades de Pointes, an arrhythmia that indicates a high risk of sudden cardiac death. Our new simulation capability has the potential to impact a multitude of applications in medicine, pharmaceuticals, and implantable devices.
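The production heart code is of course not reproduced here; the sketch below shows the general shape of an explicit monodomain reaction-diffusion update on a small 2D grid, using FitzHugh-Nagumo kinetics as a deliberately simplified stand-in for the detailed human-heart model, which hints at why myocyte-scale resolution of a whole heart requires a supercomputer. Grid size, time step, and parameters are illustrative assumptions.

```python
import numpy as np

# Explicit monodomain update on a tiny 2D tissue patch with FitzHugh-Nagumo
# kinetics (a simplified stand-in, not the model used in the study).
nx = ny = 64
dt, dx, D = 0.05, 1.0, 0.1          # time step, grid spacing, diffusion coefficient
a, b, eps = 0.1, 0.5, 0.01          # FitzHugh-Nagumo parameters (illustrative)

v = np.zeros((nx, ny))              # transmembrane potential (dimensionless)
w = np.zeros((nx, ny))              # recovery variable
v[:5, :] = 1.0                      # stimulate one edge to launch a wave

def laplacian(field):
    """Five-point stencil with no-flux (Neumann) boundaries via edge padding."""
    padded = np.pad(field, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * field) / dx**2

for step in range(2000):
    dv = D * laplacian(v) + v * (1 - v) * (v - a) - w
    dw = eps * (v - b * w)
    v += dt * dv
    w += dt * dw

print("activated fraction of tissue:", float((v > 0.5).mean()))
```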


Subject(s)
Computer Simulation, Heart/physiology, Models, Cardiovascular, Arrhythmias, Cardiac/etiology, Electrocardiography, Electrophysiological Phenomena, Humans
15.
J Am Coll Cardiol; 60(21): 2182-91, 2012 Nov 20.
Article in English | MEDLINE | ID: mdl-23153844

ABSTRACT

OBJECTIVES: The study was designed to assess the ability of computer-simulated electrocardiography parameters to predict clinical outcomes and to risk-stratify patients with long QT syndrome type 1 (LQT1). BACKGROUND: Although attempts have been made to correlate mutation-specific ion channel dysfunction with patient phenotype in long QT syndrome, these have been largely unsuccessful. Systems-level computational models can be used to predict consequences of complex changes in channel function to the overall heart rhythm. METHODS: A total of 633 LQT1-genotyped subjects with 34 mutations from multinational long QT syndrome registries were studied. Cellular electrophysiology function was determined for the mutations and introduced in a 1-dimensional transmural electrocardiography computer model. The mutation effect on transmural repolarization was determined for each mutation and related to the risk for cardiac events (syncope, aborted cardiac arrest, and sudden cardiac death) among patients. RESULTS: Multivariate analysis showed that mutation-specific transmural repolarization prolongation (TRP) was associated with an increased risk for cardiac events (35% per 10-ms increment [p < 0.0001]; ≥upper quartile hazard ratio: 2.80 [p < 0.0001]) and life-threatening events (aborted cardiac arrest/sudden cardiac death: 27% per 10-ms increment [p = 0.03]; ≥upper quartile hazard ratio: 2.24 [p = 0.002]) independently of patients' individual QT interval corrected for heart rate (QTc). Subgroup analysis showed that among patients with mild to moderate QTc duration (<500 ms), the risk associated with TRP was maintained (36% per 10 ms [p < 0.0001]), whereas the patient's individual QTc was not associated with a significant risk increase after adjustment for TRP. CONCLUSIONS: These findings suggest that simulated repolarization can be used to predict clinical outcomes and to improve risk stratification in patients with LQT1, with a more pronounced effect among patients with a lower-range QTc, in whom a patient's individual QTc may provide less incremental prognostic information.
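As a reading aid for the reported effect sizes: if the hazard increases by 35% per 10-ms increment of TRP and the effect compounds log-linearly (the usual Cox-model interpretation, assumed here), the hazard ratio for an arbitrary prolongation can be computed as follows.

```python
def hazard_ratio(delta_ms, hr_per_10ms=1.35):
    """Hazard ratio implied by a per-10-ms increment, assuming log-linearity."""
    return hr_per_10ms ** (delta_ms / 10)

# e.g., a mutation prolonging transmural repolarization by 25 ms:
print(round(hazard_ratio(25), 2))   # ~2.12 under the stated assumption
```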


Subject(s)
Computer Simulation, Electrophysiologic Techniques, Cardiac, Heart Rate/genetics, Models, Cardiovascular, Risk Assessment, Romano-Ward Syndrome/physiopathology, Adolescent, Adult, DNA/analysis, Female, Follow-Up Studies, Genotype, Humans, KCNQ1 Potassium Channel/genetics, Male, Mutation, Phenotype, Predictive Value of Tests, Prognosis, Registries, Risk Factors, Romano-Ward Syndrome/genetics, Romano-Ward Syndrome/pathology, Young Adult
17.
Article in English | MEDLINE | ID: mdl-23366127

ABSTRACT

Most published GWAS do not examine SNP interactions due to the high computational complexity of computing p-values for the interaction terms. Our aim is to utilize supercomputing resources to apply complex statistical techniques to the world's accumulating GWAS, epidemiology, survival, and pathology data to uncover more information about genetic and environmental risk, biology, and aetiology. As a proof of principle, we performed the Bayesian Posterior Probability test on a pseudo data set with 500,000 single nucleotide polymorphisms and 100 samples. We carried out strong-scaling simulations on 2 to 4,096 processing cores with factor-of-2 increments in partition size. On two processing cores the run time is 317 h, i.e., almost two weeks, compared to less than 10 minutes on 4,096 processing cores. The speedup factor is 2,020, very close to the theoretical value of 2,048. This work demonstrates the feasibility of performing exhaustive higher-order analysis of GWAS using independence testing for contingency tables. We are now in a position to employ supercomputers with hundreds of thousands of threads for higher-order analysis of GWAS data using complex statistics.
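The reported speedup can be sanity-checked with a simple strong-scaling calculation. The abstract gives the 4,096-core runtime only as "less than 10 minutes", so the value used below is an assumption chosen to match the reported speedup factor of 2,020.

```python
hours_on_2_cores = 317
minutes_on_4096_cores = 9.4   # assumed; the abstract only states "less than 10 minutes"

speedup = (hours_on_2_cores * 60) / minutes_on_4096_cores   # relative to the 2-core run
ideal = 4096 / 2                                            # perfect strong scaling
efficiency = speedup / ideal

print(f"speedup ~{speedup:.0f}, ideal {ideal:.0f}, parallel efficiency ~{efficiency:.0%}")
```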


Subject(s)
Computational Biology/methods, Genome-Wide Association Study/methods, Bayes Theorem, Computer Simulation, Humans, Monte Carlo Method, Neoplasms/genetics, Phenotype, Polymorphism, Single Nucleotide
18.
IEEE Trans Biomed Eng; 58(10): 2965-9, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21768044

ABSTRACT

Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.


Subject(s)
Computing Methodologies, Models, Cardiovascular, Software, Computer Simulation, Female, Humans, Myocytes, Cardiac/cytology, Visible Human Projects
19.
Biomed Tech (Berl); 56(3): 129-45, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21657987

ABSTRACT

We present the orthogonal recursive bisection algorithm, which hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and Blue Gene/L supercomputers, for both the FHN and TT04 models, show good load balancing with almost perfect speedup factors that are close to linear in the number of cores; hence, strong scaling is demonstrated. With 32,768 cores, a 1,000-ms simulation of a full heartbeat requires about 6.5 min of wall-clock time for the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall-clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than previously reported for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce runtimes could play a critical role in enabling wider use of cardiac models in research and clinical applications.
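The partitioner used in the study is not reproduced here; the sketch below only illustrates the idea of orthogonal recursive bisection, recursively splitting the point set along its longest axis so that each core receives a similar number of tissue points, which is what produces the good load balancing reported above. The toy "voxel" data and part counts are assumptions for illustration.

```python
import numpy as np

def orthogonal_recursive_bisection(points, n_parts):
    """Recursively split points along the longest axis into n_parts balanced subvolumes."""
    if n_parts == 1:
        return [points]
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))   # longest extent
    order = np.argsort(points[:, axis])
    left_parts = n_parts // 2
    # Cut so the number of points (the load) is proportional to the parts on each side.
    cut = len(points) * left_parts // n_parts
    left, right = points[order[:cut]], points[order[cut:]]
    return (orthogonal_recursive_bisection(left, left_parts) +
            orthogonal_recursive_bisection(right, n_parts - left_parts))

# Toy anatomy: random 3D points standing in for cardiac tissue voxels.
rng = np.random.default_rng(0)
voxels = rng.random((10_000, 3))
parts = orthogonal_recursive_bisection(voxels, 8)
print([len(p) for p in parts])   # balanced loads support good strong scaling
```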


Subject(s)
Algorithms, Computing Methodologies, Heart/anatomy & histology, Heart/physiology, Models, Anatomic, Models, Cardiovascular, Animals, Computer Simulation, Humans
20.
Sci Transl Med; 3(76): 76ra28, 2011 Mar 30.
Article in English | MEDLINE | ID: mdl-21451124

ABSTRACT

Inherited long QT syndrome (LQTS) is caused by mutations in ion channels that delay cardiac repolarization, increasing the risk of sudden death from ventricular arrhythmias. Currently, the risk of sudden death in individuals with LQTS is estimated from clinical parameters such as age, gender, and the QT interval, measured from the electrocardiogram. Even though a number of different mutations can cause LQTS, mutation-specific information is rarely used clinically. LQTS type 1 (LQT1), one of the most common forms of LQTS, is caused by mutations in the slow potassium current (I(Ks)) channel α subunit KCNQ1. We investigated whether mutation-specific changes in I(Ks) function can predict cardiac risk in LQT1. By correlating the clinical phenotype of 387 LQT1 patients with the cellular electrophysiological characteristics caused by an array of mutations in KCNQ1, we found that channels with a decreased rate of current activation are associated with increased risk of cardiac events (hazard ratio=2.02), independent of the clinical parameters usually used for risk stratification. In patients with moderate QT prolongation (a QT interval less than 500 ms), slower activation was an independent predictor for cardiac events (syncope, aborted cardiac arrest, and sudden death) (hazard ratio = 2.10), whereas the length of the QT interval itself was not. Our results indicate that genotype and biophysical phenotype analysis may be useful for risk stratification of LQT1 patients and suggest that slow channel activation is associated with an increased risk of cardiac events.


Subject(s)
Ion Channel Gating/physiology, KCNQ1 Potassium Channel/genetics, KCNQ1 Potassium Channel/metabolism, Long QT Syndrome/genetics, Long QT Syndrome/physiopathology, Mutation, Adolescent, Adult, Animals, Child, Child, Preschool, Computer Simulation, Electrophysiology, Genetic Predisposition to Disease, Genotype, Humans, Infant, Kaplan-Meier Estimate, Male, Models, Biological, Oocytes/cytology, Oocytes/physiology, Phenotype, Proportional Hazards Models, Registries, Risk Factors, Xenopus laevis, Young Adult