Results 1 - 20 of 33
1.
Genet Epidemiol ; 41(1): 51-60, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27873357

ABSTRACT

The use of data analytics across the entire healthcare value chain, from drug discovery and development through epidemiology to informed clinical decisions for patients or policy making for public health, has exploded in recent years. The increase in the quantity and variety of available data, together with improved storage capabilities and analytical tools, offers numerous possibilities to all stakeholders (manufacturers, regulators, payers, healthcare providers, decision makers, researchers); most importantly, it has the potential to improve general health outcomes if we learn how to exploit it in the right way. This article looks at the different sources of data and the importance of unstructured data. It goes on to summarize current and potential future uses in drug discovery, development, and monitoring, as well as in public and personal healthcare, including examples of good practice and recent developments. Finally, we discuss the main practical and ethical challenges to unlocking the full potential of big data in healthcare and conclude that all stakeholders need to work together towards the common goal of making sense of the available data for the common good.


Subjects
Datasets as Topic/statistics & numerical data, Decision Making, Delivery of Health Care, Drug Discovery, Precision Medicine, Public Health, Genomics, Humans
2.
Adv Exp Med Biol ; 1031: 387-404, 2017.
Article in English | MEDLINE | ID: mdl-29214584

ABSTRACT

Personalised medicine (PM) has become a reality in recent years. The emergence of 'omics' and big data has started to revolutionize healthcare. New 'omics' technologies lead to a better molecular characterization of diseases and a new understanding of their complexity. PM is already successfully applied in healthcare areas such as oncology, cardiology, nutrition and rare diseases. However, health systems across the EU often still promote a 'one-size-fits-all' approach, even though patients are known to vary greatly in their molecular characteristics and in their response to drugs and other interventions. To realize the full potential of PM in the years ahead, several challenges need to be addressed, such as the integration of big data, patient empowerment, translation of basic into clinical research, bringing innovation to the market and shaping sustainable healthcare systems.


Subjects
Genomics/methods, Precision Medicine/methods, Rare Diseases/therapy, Translational Research, Biomedical/methods, Data Mining, Databases, Factual, Genetic Predisposition to Disease, Humans, Phenotype, Predictive Value of Tests, Prognosis, Rare Diseases/diagnosis, Rare Diseases/epidemiology, Rare Diseases/genetics, Registries, Risk Factors
3.
Biomed Tech (Berl) ; 54(2): 55-65, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19335121

ABSTRACT

BACKGROUND: The efficacy of cardiac resynchronization therapy through biventricular pacing (BVP) has been demonstrated by numerous studies in patients suffering from congestive heart failure. In order to achieve a guideline for optimal treatment with BVP devices, an automated non-invasive strategy based on a computer model of the heart is presented. MATERIALS AND METHODS: The presented research investigates an off-line optimization algorithm regarding electrode positioning and timing delays. The efficacy of the algorithm is demonstrated in four patients suffering from left bundle branch block (LBBB) and myocardial infarction (MI). The computer model of the heart was used to simulate the LBBB in addition to several MI locations according to the different left ventricular subdivisions introduced by the American Heart Association. Furthermore, simulations with reduced interventricular conduction velocity were performed in order to model interventricular excitation conduction delay. More than 800,000 simulations were carried out by adjusting 121 pairs of atrioventricular and interventricular delays and 36 different electrode positioning set-ups. Additionally, three different conduction velocities were examined. The optimization measures included the minimum root mean square error (E(RMS)) between physiological, pathological and therapeutic excitation, as well as the difference in QRS-complex duration. Both measures were computed automatically. RESULTS: Depending on the patient's pathology and conduction velocity, a reduction of E(RMS) between physiological and therapeutic excitation could be achieved. For each patient and pathology, an optimal pacing electrode pair was determined. The results demonstrate the importance of individually adjusting BVP parameters to the patient's anatomy and pathology. CONCLUSION: This work proposes a novel non-invasive optimization algorithm to find the best electrode positioning sites and timing delays for BVP in patients with LBBB and MI. This algorithm can be used to plan an optimal therapy for an individual patient.
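The selection step described above reduces to a grid search: simulate each pacing configuration, compare its activation-time map against the physiological reference, and keep the configuration with the smallest E(RMS). A minimal sketch follows; `simulate_activation` is a hypothetical stand-in for the heart model, and the grid sizes mirror the 36 electrode set-ups and 121 delay pairs mentioned in the abstract.

```python
import numpy as np

def rms_error(physiological, therapeutic):
    """Root mean square error between two activation-time maps (ms)."""
    return float(np.sqrt(np.mean((physiological - therapeutic) ** 2)))

def simulate_activation(electrode_pair, av_delay, vv_delay, n_nodes=1000):
    """Hypothetical stand-in for the heart model: returns an activation-time map (ms)."""
    rng = np.random.default_rng(abs(hash((electrode_pair, av_delay, vv_delay))) % 2**32)
    return 100.0 + 20.0 * rng.random(n_nodes)

# Reference (physiological) excitation to match.
physiological = simulate_activation(("ref", "ref"), 120, 0)

# Grid of candidate configurations: 36 electrode pairs x 11 AV delays x 11 VV delays.
electrode_pairs = [("RV-apex", f"LV-{i}") for i in range(1, 37)]
av_delays = range(60, 161, 10)   # ms
vv_delays = range(-50, 51, 10)   # ms

best = min(
    ((pair, av, vv) for pair in electrode_pairs
     for av in av_delays for vv in vv_delays),
    key=lambda cfg: rms_error(physiological, simulate_activation(*cfg)),
)
print("optimal configuration:", best)
```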


Subjects
Bundle-Branch Block/prevention & control, Bundle-Branch Block/physiopathology, Cardiac Pacing, Artificial/methods, Electrodes, Implanted, Models, Cardiovascular, Myocardial Infarction/prevention & control, Myocardial Infarction/physiopathology, Therapy, Computer-Assisted/methods, Bundle-Branch Block/complications, Computer Simulation, Germany, Heart Conduction System/physiopathology, Heart Ventricles/physiopathology, Humans, Myocardial Infarction/complications, Pacemaker, Artificial, Prosthesis Implantation/methods, Quality Assurance, Health Care/methods, Time Factors
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 3244-3247, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441083

ABSTRACT

There are between 6,000 and 7,000 known rare diseases today. Identifying and diagnosing a patient with a rare disease is time-consuming, cumbersome and cost-intensive, and requires resources generally available only at large hospital centers. Furthermore, most medical doctors, especially general practitioners, will likely see only one patient with a rare disease, if any at all. A cognitive assistant for differential diagnosis in rare disease provides online knowledge on all rare diseases, helps create a list of weighted diagnoses, and gives access to the evidence base on which the list was created. The system is built on knowledge graph technology that incorporates data from ICD-10, DOID, MedDRA, PubMed, Wikipedia, Orphanet, the CDC and anonymized patient data. The final knowledge graph comprised over 500,000 nodes. The solution was tested with 101 published cases of rare disease. The learning system improves over training sprints and delivers 79.5% accuracy in finding the diagnosis within the top 1% of nodes. A further learning step was taken to rank the correct result within the top 15 hits. With a reduced data pool, 51% of the 101 cases were tested, delivering the correct result in the top 3-13 (top 6 on average) for 74% of these cases. The results show that data curation is among the most critical aspects of delivering accurate results. The knowledge graph technology demonstrates its power to deliver cognitive solutions for differential diagnosis in rare disease that can be applied in clinical practice.
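A hedged sketch of the ranking idea behind such a cognitive assistant: given a patient's findings, candidate diseases are scored by the weight of the phenotype links they share in a knowledge graph. The diseases, phenotypes and weights below are invented for illustration; the published system builds a graph of more than 500,000 nodes from ICD-10, DOID, MedDRA, PubMed, Wikipedia, Orphanet, CDC and anonymized patient data and refines its weighting over training sprints.

```python
# Toy knowledge graph: disease -> {phenotype: link weight}. All entries are
# illustrative placeholders, not data from the published system.
knowledge_graph = {
    "Fabry disease":   {"acroparesthesia": 0.9, "angiokeratoma": 0.8, "proteinuria": 0.5},
    "Gaucher disease": {"splenomegaly": 0.9, "thrombocytopenia": 0.7, "bone pain": 0.6},
    "Pompe disease":   {"muscle weakness": 0.9, "respiratory failure": 0.7},
}

def differential_diagnosis(patient_findings, graph, top_n=3):
    """Rank candidate diseases by the summed weight of matching findings."""
    scores = {
        disease: sum(w for phenotype, w in links.items() if phenotype in patient_findings)
        for disease, links in graph.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(differential_diagnosis({"splenomegaly", "bone pain"}, knowledge_graph))
```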


Subjects
Cognition, Rare Diseases, Diagnosis, Differential, Humans
5.
J Electrocardiol ; 40(4): 328-34, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17336996

ABSTRACT

BACKGROUND: Multiple wavelets and rotors have been implicated in maintaining atrial fibrillation (AF). However, snake-like excitation patterns have recently been observed in AF. So far, computer models have investigated AF only in simplified anatomical models. In this work, pulmonary vein firing is simulated to investigate the initiation and maintenance of AF in a realistic anatomical model. METHODS AND RESULTS: Thirty-five ectopic foci situated around all pulmonary veins were simulated by a unidirectional conduction block. The excitation propagation was simulated by an adaptive cellular automaton on a realistic 3-dimensional atrial anatomy. Atrial fibrillation was initiated in 65.7% of the simulations. Stable excitation patterns were broken up in anatomically heterogeneous regions, creating a streak-like excitation pattern resembling snakes. Multiple wavelets and rotors could be observed in anatomically smooth areas at the roofs of the atria. CONCLUSIONS: Macroscopic anatomical structures appear to play an important role in excitation propagation during AF. The computer simulations indicate that multiple mechanisms contribute to the maintenance of AF.
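The propagation engine in such studies is an excitable-medium cellular automaton: each cell is resting, excited or refractory, and a resting cell fires when a neighbour is excited. The sketch below is a deliberately simple 2-D version with a single repetitively firing ectopic focus and periodic boundaries, not the adaptive 3-D automaton on realistic atrial anatomy used in the study.

```python
import numpy as np

# Cell states: 0 = resting, 1 = excited, 2..REFRACTORY+1 = refractory countdown.
N, STEPS, REFRACTORY, FIRING_PERIOD = 100, 300, 8, 30
grid = np.zeros((N, N), dtype=int)

for step in range(STEPS):
    excited = grid == 1
    # A resting cell becomes excited if any of its 4 neighbours is excited
    # (periodic boundaries via np.roll).
    neighbour_excited = (
        np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
        np.roll(excited, 1, 1) | np.roll(excited, -1, 1)
    )
    new = grid.copy()
    new[(grid == 0) & neighbour_excited] = 1                    # excite resting cells
    new[excited] = 2                                            # excited -> refractory
    new[grid >= 2] = (grid[grid >= 2] + 1) % (REFRACTORY + 2)   # count down, then rest
    if step % FIRING_PERIOD == 0 and new[50, 50] == 0:
        new[50, 50] = 1                                         # ectopic focus fires again
    grid = new

print("excited cells after", STEPS, "steps:", int((grid == 1).sum()))
```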


Subjects
Action Potentials, Atrial Fibrillation/physiopathology, Biological Clocks, Body Surface Potential Mapping/methods, Heart Conduction System/physiopathology, Models, Cardiovascular, Pulmonary Veins/physiopathology, Computer Simulation, Humans
6.
Med Biol Eng Comput ; 45(9): 845-54, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17657518

ABSTRACT

Optimal electrode position and atrioventricular (AV) and interventricular (VV) delays improve the success of cardiac resynchronization therapy (CRT). An optimization strategy does not yet exist. A computer model of the Visible Man and of a patient heart was used to simulate an atrioventricular block and a left bundle branch block with 0%, 20% and 40% reduction in interventricular conduction velocity, respectively. The minimum error between the physiological excitation and the pathology/therapy was computed automatically for 12 different electrode positions. AV and VV delay timing was adjusted accordingly. The results show the importance of individually adjusting the electrode position as well as the timing delays to the patient's anatomy and pathology, which is in accordance with current clinical studies. The presented methods and strategy offer the opportunity to carry out non-invasive, automatic optimization of CRT preoperatively. The model is subject to validation in future clinical studies.


Subjects
Algorithms, Atrioventricular Block/therapy, Cardiac Pacing, Artificial/methods, Computer Simulation, Atrioventricular Block/physiopathology, Electrodes, Heart/physiopathology, Humans, United States, Visible Human Projects
7.
Anadolu Kardiyol Derg ; 7 Suppl 1: 209-12, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17584727

ABSTRACT

OBJECTIVE: Optimization of cardiac resynchronization therapy (CRT) remains an unsolved problem. It has been shown that optimal electrode position and atrioventricular (AV) and interventricular (VV) delays improve the success of CRT and reduce the number of non-responders. However, no automatic, noninvasive optimization strategy exists to date. METHODS: Cardiac resynchronization therapy was simulated on the Visible Man and a patient data set including fiber orientation and ventricular heterogeneity. A cellular automaton was used for fast computation of ventricular excitation. An AV block and a left bundle branch block were simulated with 100%, 80% and 60% interventricular conduction velocity. A right apical and 12 left ventricular lead positions were set. Sequential optimization and optimization with the downhill simplex algorithm (DSA) were carried out. The minimal error between the isochrones of the physiological excitation and the therapy was computed automatically, leading to an optimal lead position and timing. RESULTS: Up to 1,512 simulations were carried out per pathology per patient. One simulation took 4 minutes on an Apple Macintosh 2 GHz PowerPC G5. For each electrode pair an optimal pacemaker delay was found. The DSA reduced the number of simulations by an order of magnitude, and the AV and VV delays were determined with a much higher resolution. The findings compare well with clinical studies. CONCLUSION: The presented computer model of CRT automatically evaluates an optimal lead position and AV and VV delays, which can be used to noninvasively plan an optimal therapy for an individual patient. The application of the DSA reduces the simulation time so that the strategy is suitable for pre-operative planning in clinical routine. Future work will focus on clinical evaluation of the computer models and integration of patient data for individualized therapy planning and optimization.
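The downhill simplex (Nelder-Mead) step can be illustrated with SciPy: for a fixed lead position, search the AV/VV delay plane for the minimum of an error function comparing paced and physiological isochrones. The quadratic `excitation_error` below is a hypothetical stand-in for one run of the cellular-automaton heart model, so only the search procedure, not the error surface, reflects the study.

```python
import numpy as np
from scipy.optimize import minimize

def excitation_error(delays):
    """Hypothetical error between physiological and paced isochrones (arbitrary units).
    Stands in for one 4-minute run of the cellular-automaton heart model."""
    av, vv = delays
    return (av - 120.0) ** 2 / 400.0 + (vv - 20.0) ** 2 / 100.0 + 5.0

# Downhill simplex (Nelder-Mead) over the AV/VV delay plane, started from a
# typical pacemaker default; one such search would run per candidate lead position.
result = minimize(excitation_error, x0=np.array([140.0, 0.0]), method="Nelder-Mead")
print("optimal AV/VV delay (ms):", result.x, "error:", result.fun)
```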


Subjects
Bundle-Branch Block/therapy, Cardiac Pacing, Artificial, Computer Simulation, Defibrillators, Implantable, Humans
8.
Public Health Genomics ; 20(6): 321-331, 2017.
Article in English | MEDLINE | ID: mdl-29936514

ABSTRACT

INTRODUCTION: Currently, an abundance of highly relevant health data is locked up in data silos due to decentralized storage and data protection laws. The health data cooperative (HDC) model has been established to make these valuable data available for societal purposes. The aim of this study is to analyse the HDC model and its potentials and challenges. RESULTS: An HDC is a health data bank. Its core principles are a cooperative approach, citizen-centredness, a not-for-profit structure, a data enquiry procedure, worldwide accessibility, cloud-based data storage, open source, and transparency about governance policy. HDC members have access to the HDC platform, which consists of the "core," the "app store," and the "big data." These, respectively, enable users to collect, store, manage, and share health information, to analyse personal health data, and to conduct big data analytics. Identified potentials of the HDC model are the digitization of healthcare information, citizen empowerment, knowledge benefit, patient empowerment, cloud-based data storage, and a reduction in healthcare expenses. Nevertheless, there are also challenges linked with this approach, including privacy and data security, citizens' restraint, disclosure of clinical results, big data, and commercial interest. LIMITATIONS AND OUTLOOK: The results of this article are not generalizable because multiple studies with limited numbers of participants are included. Therefore, further elaborate research on these topics among larger and more varied groups of individuals is recommended. Additionally, more pilots of the HDC model are required before it can be fully implemented. Moreover, once the HDC model becomes operational, further research on its performance should be undertaken.

9.
Public Health Genomics ; 20(6): 312-320, 2017.
Article in English | MEDLINE | ID: mdl-29617688

ABSTRACT

Digitization is expected to radically transform healthcare. With seemingly unlimited opportunities to collect data, it will play an important role in the public health policy-making process. In this context, health data cooperatives (HDCs) are a key component and core element for public health policy-making and for exploiting the potential of all the existing and rapidly emerging data sources. Being able to leverage all these data requires overcoming the computational, algorithmic, and technological challenges that characterize today's highly heterogeneous data landscape, as well as a host of diverse regulatory, normative, governance, and policy constraints. The full potential of big data can only be realized if data are made accessible and shared. Treating research data as a public good, creating HDCs to empower citizens through citizen-owned health data, and allowing data access for research and the development of new diagnostics, therapies, and public health policies will yield the transformative impact of digital health. The HDC model for data governance is an arrangement, based on moral codes, that encourages citizens to participate in the improvement of their own health. This, in turn, enables public health institutions and policymakers to monitor policy changes and evaluate their impact and risk at the population level.

10.
Public Health Genomics ; 20(5): 274-285, 2017.
Article in English | MEDLINE | ID: mdl-29353273

ABSTRACT

Sepsis, with its often devastating consequences for patients and their families, remains a major public health concern that poses an increasing financial burden. Early resuscitation, together with the elucidation of biological pathways and pathophysiological mechanisms using "-omics" technologies, has started to change the clinical and research landscape in sepsis. Metabolomics (i.e., the study of the metabolome), an "-omics" technology further down the "-omics" cascade between the genome and the phenome, could be particularly fruitful in sepsis research, with the potential to alter clinical practice. Apart from its benefit for the individual patient, metabolomics has an impact on public health that extends beyond its applications in medicine. In this review, we present recent developments in metabolomics research in sepsis, with a focus on pneumonia, and we discuss the impact of metabolomics on public health, with a focus on free/libre open source software.


Subjects
Metabolomics, Pneumonia, Sepsis, Humans, Inventions, Metabolome, Metabolomics/methods, Metabolomics/trends, Pneumonia/complications, Pneumonia/microbiology, Sepsis/etiology, Sepsis/metabolism
11.
Biomed Tech (Berl) ; 51(4): 205-9, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17061940

ABSTRACT

Cardiac arrhythmia is currently investigated from two different points of view. One considers ECG bio-signal analysis and investigates heart rate variability, baroreflex control, heart rate turbulence, alternans phenomena, etc. The other involves building computer models of the heart based on ion channels, bidomain models and forward calculations to finally reach the ECG and body surface potential maps. Both approaches aim to support the cardiologist in better understanding arrhythmia, improving diagnosis and reliable risk stratification, and optimizing therapy. This article summarizes recent results and aims to trigger new research to bridge the two views.
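On the bio-signal side, the time-domain heart rate variability indices mentioned above reduce to simple statistics of the RR-interval series; a minimal sketch with made-up intervals:

```python
import numpy as np

# Two standard time-domain heart-rate-variability measures computed from a
# short, invented series of RR intervals (ms); real analyses use long Holter
# recordings and many more indices (baroreflex, turbulence, alternans).
rr_ms = np.array([812, 790, 805, 841, 798, 776, 830, 815, 802, 788], dtype=float)

sdnn = rr_ms.std(ddof=1)                       # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # beat-to-beat (vagally mediated) variability

print(f"SDNN  = {sdnn:.1f} ms")
print(f"RMSSD = {rmssd:.1f} ms")
```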


Subjects
Biological Clocks, Heart Conduction System/physiopathology, Heart Rate, Heart/physiopathology, Models, Cardiovascular, Myocytes, Cardiac, Periodicity, Computer Simulation, Feedback, Homeostasis, Humans
12.
Public Health Genomics ; 19(4): 211-9, 2016.
Article in English | MEDLINE | ID: mdl-27241319

ABSTRACT

BACKGROUND: Knowledge in the era of Omics and Big Data has been increasingly conceptualized as a public good. Sharing of de-identified patient data has been advocated as a means to increase confidence and public trust in the results of clinical trials. On the other hand, research has shown that the current research and development model of the biopharmaceutical industry has reached the limits of its innovation capacity. In response, the biopharmaceutical industry has adopted open innovation practices, with sharing of clinical trial data being among the most interesting. However, due to the free-rider problem, clinical trial data sharing among biopharmaceutical companies could undermine their innovativeness. METHOD: Based on the theory of public goods, we have developed a commons arrangement and devised a model that enables secure and fair clinical trial data sharing over a Virtual Knowledge Bank based on a web platform. Our model uses data as a virtual currency and treats knowledge as a club good. RESULTS: Fair sharing of clinical trial data over the Virtual Knowledge Bank has positive effects on the innovation capacity of the biopharmaceutical industry without compromising its intellectual property rights, proprietary interests and competitiveness. CONCLUSION: The Virtual Knowledge Bank is a sustainable and self-expanding model for secure and fair clinical trial data sharing that at the same time increases the innovation capacity of the biopharmaceutical industry.


Subjects
Clinical Trials as Topic, Drug Industry/organization & administration, Information Dissemination/methods, Organizational Innovation, Biomedical Research, Humans, Intellectual Property, Models, Theoretical, Social Responsibility
13.
Cancer Epidemiol Biomarkers Prev ; 25(12): 1619-1624, 2016 12.
Article in English | MEDLINE | ID: mdl-27539266

ABSTRACT

BACKGROUND: We have developed a genome-wide association study analysis method called DEPTH (DEPendency of association on the number of Top Hits) to identify genomic regions potentially associated with disease by considering overlapping groups of contiguous markers (e.g., SNPs) across the genome. DEPTH is a machine learning algorithm for feature ranking of ultra-high-dimensional datasets, built from well-established statistical tools such as bootstrapping, penalized regression, and decision trees. Unlike marginal regression, which considers each SNP individually, the key idea behind DEPTH is to rank groups of SNPs in terms of their joint strength of association with the outcome. Our aim was to compare the performance of DEPTH with that of standard logistic regression analysis. METHODS: We selected 1,854 prostate cancer cases and 1,894 controls from the UK for whom 541,129 SNPs were measured using the Illumina Infinium HumanHap550 array. Confirmation was sought using 4,152 cases and 2,874 controls, ascertained from the UK and Australia, for whom 211,155 SNPs were measured using the iCOGS Illumina Infinium array. RESULTS: From the DEPTH analysis, we identified 14 regions associated with prostate cancer risk that had been reported previously, five of which would not have been identified by conventional logistic regression. We also identified 112 novel putative susceptibility regions. CONCLUSIONS: DEPTH can reveal new risk-associated regions that would not have been identified using a conventional logistic regression analysis of individual SNPs. IMPACT: This study demonstrates that the DEPTH algorithm could identify additional genetic susceptibility regions that merit further investigation.
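As an illustration of the underlying idea (not the published DEPTH implementation), the sketch below scores overlapping windows of contiguous SNPs by fitting an L1-penalized logistic regression on bootstrap resamples and averaging the windowed scores into a per-SNP ranking; the genotype and phenotype data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_snps, window, n_boot = 500, 200, 10, 20

genotypes = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # 0/1/2 allele counts
# Simulated phenotype driven by a small region around SNP 100.
logit = genotypes[:, 100:105].sum(axis=1) - 5.0
y = (rng.random(n_samples) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

scores = np.zeros(n_snps)
counts = np.zeros(n_snps)
for start in range(0, n_snps - window + 1):
    cols = slice(start, start + window)
    window_scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_samples, n_samples)          # bootstrap resample
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        model.fit(genotypes[idx, cols], y[idx])
        window_scores.append(np.abs(model.coef_).mean())     # joint strength of the window
    scores[cols] += np.mean(window_scores)
    counts[cols] += 1

ranking = np.argsort(-(scores / counts))
print("top-ranked SNP indices:", ranking[:10])
```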


Subjects
Genetic Predisposition to Disease, Genome-Wide Association Study/methods, Machine Learning, Polymorphism, Single Nucleotide, Prostatic Neoplasms/genetics, Australia, Humans, Male, Middle Aged, United Kingdom
14.
Health Inf Sci Syst ; 3(Suppl 1 HISA Big Data in Biomedicine and Healthcare 2013 Con): S3, 2015.
Article in English | MEDLINE | ID: mdl-25870758

ABSTRACT

Genome-wide association studies (GWAS) are a common approach for the systematic discovery of single nucleotide polymorphisms (SNPs) associated with a given disease. The univariate analysis approaches commonly employed may miss important SNP associations that only appear through multivariate analysis in complex diseases. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate that a three-way interaction analysis of 1.1 million SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how improvements in the hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer, "Sequoia", at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained, as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive analyses of higher-order interaction studies on large modern GWAS.
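The scale of the problem and the quoted runtimes can be sanity-checked with a few lines of arithmetic; the Blue Gene/Q core counts used below are the commonly quoted figures for Avoca and Sequoia, not values taken from the abstract.

```python
from math import comb

n_snps = 1_100_000
triples = comb(n_snps, 3)                       # exhaustive 3-way combinations
print(f"3-way SNP combinations: {triples:.3e}") # ~2.2e17

# Scale the reported 5.8-year estimate on Avoca to Sequoia, assuming linear
# scaling with core count (core counts are assumed public specs, not abstract data).
avoca_cores, sequoia_cores = 65_536, 1_572_864
avoca_years = 5.8
sequoia_months = avoca_years * 12 * avoca_cores / sequoia_cores
print(f"estimated Sequoia runtime: {sequoia_months:.1f} months")   # just under 3 months
```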

15.
Health Inf Sci Syst ; 3(Suppl 1 HISA Big Data in Biomedicine and Healthcare 2013 Con): S7, 2015.
Article in English | MEDLINE | ID: mdl-25870761

ABSTRACT

Even with the advent of next-generation sequencing (NGS) technologies, which have revolutionised the field of bacterial genomics in recent years, a major barrier still exists to the implementation of NGS for routine microbiological use (in public health and clinical microbiology laboratories). Such routine use would make a big difference to investigations of pathogen transmission and to the prevention and control of (sometimes lethal) infections. The inherent complexity and high frequency of data analyses on very large sets of bacterial DNA sequence data, the ability to ensure data provenance and to automatically track and log all analyses for audit purposes, the need for quick and accurate results, together with an essential user-friendly interface for regular non-technical laboratory staff, are all critical requirements for routine use in a public health setting. No current system meets all of these requirements in an integrated manner. In this paper, we describe a system for sequence analysis and interpretation that is highly automated, tackles the issues raised above, and is designed for use in diagnostic laboratories by healthcare workers with no specialist bioinformatics knowledge.

16.
Stud Health Technol Inform ; 205: 1173-7, 2014.
Article in English | MEDLINE | ID: mdl-25160374

ABSTRACT

The supplementation of medical data with environmental data offers rich new insights that can improve decision-making within health systems and the healthcare profession. In this study, we simulate disease incidence for various scenarios using a mathematical model. We subsequently visualise the infectious disease spread in human populations over time and geographies. We demonstrate this for malaria, which is one of the top three causes of mortality for children under the age of 5 years in sub-Saharan Africa, and its associated interventions within Kenya. We demonstrate how information can be collected, analysed, and presented in new ways to inform key decision makers in understanding the prevalence of disease and the response to interventions.
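As a hedged illustration of what "simulating disease incidence with a mathematical model" can look like, the sketch below steps a classic Ross-Macdonald-style human-mosquito transmission model and applies an intervention mid-year; the parameter values are invented and the model is far simpler than the one used in the study.

```python
# Euler stepping of the two-compartment (proportion-infectious) Ross-Macdonald model.
# An intervention (e.g. bed nets) is modelled as a drop in the biting rate at day 180.
days, dt = 365, 0.1
m, a, b, c, r, g = 5.0, 0.3, 0.05, 0.5, 1/100, 1/10  # ratio, biting, transmission (2x), recovery, mosquito death

Ih, Im = 0.01, 0.01                                  # infectious proportions (humans, mosquitoes)
history = []
for step in range(int(days / dt)):
    t = step * dt
    a_t = a * (0.5 if t >= 180 else 1.0)             # intervention halves the biting rate
    dIh = m * a_t * b * Im * (1 - Ih) - r * Ih
    dIm = a_t * c * Ih * (1 - Im) - g * Im
    Ih, Im = Ih + dt * dIh, Im + dt * dIm
    history.append(Ih)

print(f"infectious humans just before intervention: {history[int(179 / dt)]:.2%}")
print(f"infectious humans at end of year:           {history[-1]:.2%}")
```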


Subjects
Geographic Information Systems, Imaging, Three-Dimensional/methods, Malaria/epidemiology, Malaria/prevention & control, Population Surveillance/methods, Spatio-Temporal Analysis, Africa South of the Sahara/epidemiology, Female, Geography, Medical, Humans, Incidence, Infant, Infant, Newborn, Male
17.
Pathogens ; 3(2): 437-58, 2014 Jun 11.
Article in English | MEDLINE | ID: mdl-25437808

ABSTRACT

Recent advances in DNA sequencing technologies have the potential to transform the field of clinical and public health microbiology, and in the last few years numerous case studies have demonstrated successful applications in this context. Among other considerations, a lack of user-friendly data analysis and interpretation tools has been frequently cited as a major barrier to routine use of these techniques. Here we consider the requirements of microbiology laboratories for the analysis, clinical interpretation and management of bacterial whole-genome sequence (WGS) data. Then we discuss relevant, existing WGS analysis tools. We highlight many essential and useful features that are represented among existing tools, but find that no single tool fulfils all of the necessary requirements. We conclude that to fully realise the potential of WGS analyses for clinical and public health microbiology laboratories of all scales, we will need to develop tools specifically with the needs of these laboratories in mind.

18.
Article in English | MEDLINE | ID: mdl-23734785

ABSTRACT

We have developed the capability to rapidly simulate cardiac electrophysiological phenomena in a human heart discretised at a resolution comparable with the length of a cardiac myocyte. Previous scientific investigations have generally invoked simplified geometries or coarse-resolution hearts, with simulation durations limited to tens of heartbeats. Using state-of-the-art high-performance computing techniques coupled with one of the most powerful computers available (the 20 PFlop/s IBM Blue Gene/Q at Lawrence Livermore National Laboratory), high-resolution simulation of the human heart can now be carried out over 1,200 times faster than published results in the field. We demonstrate the utility of this capability by simulating, for the first time, the formation of transmural re-entrant waves in a 3D human heart. Such wave patterns are thought to underlie Torsades de Pointes, an arrhythmia that indicates a high risk of sudden cardiac death. Our new simulation capability has the potential to impact a multitude of applications in medicine, pharmaceuticals and implantable devices.
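Re-entrant (spiral) waves of the kind described here can be illustrated, at toy scale, with a two-variable FitzHugh-Nagumo reaction-diffusion model and a cross-field initiation protocol; this sketch is orders of magnitude simpler than the myocyte-resolution human-heart simulations referred to in the abstract.

```python
import numpy as np

# 2-D FitzHugh-Nagumo excitable medium: v is the fast (membrane-like) variable,
# w the slow recovery variable. A plane wave is launched from the left edge while
# the lower half is made refractory, so the broken wave front curls into a spiral.
N, dt, dx, D = 200, 0.02, 1.0, 1.0
a, b, eps = 0.7, 0.8, 0.08

v = -1.2 * np.ones((N, N))          # resting state
w = -0.6 * np.ones((N, N))
v[:, :10] = 1.5                      # plane-wave stimulus
w[N // 2:, :] = 0.5                  # refractory lower half (cross-field protocol)

def laplacian(f):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(20000):               # 400 dimensionless time units
    v_new = v + dt * (D * laplacian(v) + v - v**3 / 3 - w)
    w_new = w + dt * eps * (v + a - b * w)
    v, w = v_new, w_new

print("fraction of tissue depolarised at the end:", float((v > 0).mean()))
```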


Subjects
Computer Simulation, Heart/physiology, Models, Cardiovascular, Arrhythmias, Cardiac/etiology, Electrocardiography, Electrophysiological Phenomena, Humans
19.
Article in English | MEDLINE | ID: mdl-23366127

ABSTRACT

Most published GWAS do not examine SNP interactions due to the high computational complexity of computing p-values for the interaction terms. Our aim is to utilize supercomputing resources to apply complex statistical techniques to the world's accumulating GWAS, epidemiology, survival and pathology data to uncover more information about genetic and environmental risk, biology and aetiology. We performed the Bayesian Posterior Probability test on a pseudo data set with 500,000 single nucleotide polymorphisms and 100 samples as a proof of principle. We carried out strong-scaling simulations on 2 to 4,096 processing cores with factor-of-2 increments in partition size. On two processing cores, the run time was 317 h, i.e. almost two weeks, compared with less than 10 minutes on 4,096 processing cores. The speedup factor of 2,020 is very close to the theoretical value of 2,048. This work demonstrates the feasibility of performing exhaustive higher-order analysis of GWAS using independence testing for contingency tables. We are now in a position to employ supercomputers with hundreds of thousands of threads for higher-order analysis of GWAS data using complex statistics.
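The reported strong-scaling figures are easy to verify: a speedup of 2,020 on 4,096 cores relative to 2 cores corresponds to roughly 99% parallel efficiency and brings the 317-hour run under 10 minutes.

```python
# Strong-scaling figures from the abstract: 317 h on 2 cores vs. <10 min on 4,096 cores.
base_cores, base_hours = 2, 317
scaled_cores, speedup = 4096, 2020               # reported speedup relative to 2 cores

ideal_speedup = scaled_cores / base_cores        # 2,048 for perfectly linear scaling
efficiency = speedup / ideal_speedup
scaled_minutes = base_hours * 60 / speedup

print(f"parallel efficiency: {efficiency:.1%}")                        # ~98.6%
print(f"run time on {scaled_cores} cores: {scaled_minutes:.1f} min")   # ~9.4 min
```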


Subjects
Computational Biology/methods, Genome-Wide Association Study/methods, Bayes Theorem, Computer Simulation, Humans, Monte Carlo Method, Neoplasms/genetics, Phenotype, Polymorphism, Single Nucleotide
20.
J Am Coll Cardiol ; 60(21): 2182-91, 2012 Nov 20.
Article in English | MEDLINE | ID: mdl-23153844

ABSTRACT

OBJECTIVES: The study was designed to assess the ability of computer-simulated electrocardiography parameters to predict clinical outcomes and to risk-stratify patients with long QT syndrome type 1 (LQT1). BACKGROUND: Although attempts have been made to correlate mutation-specific ion channel dysfunction with patient phenotype in long QT syndrome, these have been largely unsuccessful. Systems-level computational models can be used to predict consequences of complex changes in channel function to the overall heart rhythm. METHODS: A total of 633 LQT1-genotyped subjects with 34 mutations from multinational long QT syndrome registries were studied. Cellular electrophysiology function was determined for the mutations and introduced in a 1-dimensional transmural electrocardiography computer model. The mutation effect on transmural repolarization was determined for each mutation and related to the risk for cardiac events (syncope, aborted cardiac arrest, and sudden cardiac death) among patients. RESULTS: Multivariate analysis showed that mutation-specific transmural repolarization prolongation (TRP) was associated with an increased risk for cardiac events (35% per 10-ms increment [p < 0.0001]; ≥upper quartile hazard ratio: 2.80 [p < 0.0001]) and life-threatening events (aborted cardiac arrest/sudden cardiac death: 27% per 10-ms increment [p = 0.03]; ≥upper quartile hazard ratio: 2.24 [p = 0.002]) independently of patients' individual QT interval corrected for heart rate (QTc). Subgroup analysis showed that among patients with mild to moderate QTc duration (<500 ms), the risk associated with TRP was maintained (36% per 10 ms [p < 0.0001]), whereas the patient's individual QTc was not associated with a significant risk increase after adjustment for TRP. CONCLUSIONS: These findings suggest that simulated repolarization can be used to predict clinical outcomes and to improve risk stratification in patients with LQT1, with a more pronounced effect among patients with a lower-range QTc, in whom a patient's individual QTc may provide less incremental prognostic information.
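A 35% risk increase per 10-ms increment corresponds to a hazard ratio of 1.35 per 10 ms; under the usual proportional-hazards reading this compounds multiplicatively for larger TRP increments, as the short calculation below shows (the compounding is an interpretation of the reported per-10-ms estimate, not a result stated in the abstract).

```python
import math

# Reported risk increase: 35% per 10-ms increment of transmural repolarization
# prolongation (TRP), i.e. a hazard ratio of 1.35 per 10 ms.
hr_per_10ms = 1.35
beta = math.log(hr_per_10ms) / 10.0       # log-hazard per 1 ms of TRP

for delta_ms in (10, 20, 30):
    hr = math.exp(beta * delta_ms)        # equivalently 1.35 ** (delta_ms / 10)
    print(f"TRP +{delta_ms} ms -> hazard ratio {hr:.2f}")
```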


Subjects
Computer Simulation, Electrophysiologic Techniques, Cardiac, Heart Rate/genetics, Models, Cardiovascular, Risk Assessment, Romano-Ward Syndrome/physiopathology, Adolescent, Adult, DNA/analysis, Female, Follow-Up Studies, Genotype, Humans, KCNQ1 Potassium Channel/genetics, Male, Mutation, Phenotype, Predictive Value of Tests, Prognosis, Registries, Risk Factors, Romano-Ward Syndrome/genetics, Romano-Ward Syndrome/pathology, Young Adult