Results 1 - 18 of 18
2.
Atherosclerosis; 275: 80-87, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29879685

ABSTRACT

BACKGROUND AND AIMS: The cost-effectiveness of cascade testing for familial hypercholesterolaemia (FH) is well recognised. Less clear is the cost-effectiveness of FH screening when it includes case-identification strategies that incorporate routinely available data from primary and secondary care electronic health records. METHODS: Nine strategies were compared, all using cascade testing in combination with different index-case approaches (primary care identification, secondary care identification, and clinical assessment using the Simon Broome (SB) or Dutch Lipid Clinic Network (DLCN) criteria). A decision-analytic model was informed by three systematic literature reviews and expert advice provided by a NICE Guideline Committee. RESULTS: The model found that the addition of primary care case identification by database search for patients with recorded total cholesterol >9.3 mmol/L was more cost effective than cascade testing alone. The incremental cost-effectiveness ratio (ICER) of clinical assessment using the DLCN criteria was £3254 per quality-adjusted life year (QALY) compared with case-finding with no genetic testing. The ICER of clinical assessment using the SB criteria was £13,365 per QALY (compared with primary care identification using the DLCN criteria), indicating that the SB criteria were preferred because they achieved additional health benefits at an acceptable cost. Secondary care identification, with either the SB or DLCN criteria, was not cost effective, either alone (both strategies were dominated) or combined with primary care identification (£63,514 and £82,388 per QALY, respectively). CONCLUSIONS: Searching primary care databases for people at high risk of FH followed by cascade testing is likely to be cost-effective.
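The ICERs quoted in this abstract follow a simple arithmetic rule: extra cost divided by extra QALYs gained relative to the comparator strategy. A minimal sketch of that calculation (the strategy numbers below are purely hypothetical, not the model's actual inputs):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of a new strategy versus a reference strategy."""
    if qaly_new <= qaly_ref:
        raise ValueError("new strategy must gain QALYs over the reference")
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical example: a strategy costing £500 more per patient that
# yields 0.5 extra QALYs.
print(icer(cost_new=2500, qaly_new=1.5, cost_ref=2000, qaly_ref=1.0))  # → 1000.0
```

A strategy is "dominated", as in the secondary care arms above, when it costs more and yields fewer (or equal) QALYs, so no ICER is reported for it.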


Subjects
Apolipoprotein B-100/genetics , Data Mining/economics , Electronic Health Records , Genetic Testing/economics , Health Care Costs , Hyperlipoproteinemia Type II/diagnosis , Mutation , Proprotein Convertase 9/genetics , Receptors, LDL/genetics , Cost-Benefit Analysis , Data Mining/methods , Databases, Factual , England , Genetic Predisposition to Disease , Humans , Hyperlipoproteinemia Type II/economics , Hyperlipoproteinemia Type II/genetics , Hyperlipoproteinemia Type II/therapy , Markov Chains , Models, Economic , Phenotype , Predictive Value of Tests , Primary Health Care , Quality of Life , Quality-Adjusted Life Years , Risk Assessment , Risk Factors , Secondary Care , Time Factors , Wales
3.
J Am Med Inform Assoc; 24(6): 1211-1220, 2017 Nov 1.
Article in English | MEDLINE | ID: mdl-29016974

ABSTRACT

OBJECTIVES: To introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains. TARGET AUDIENCE: Biomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains. SCOPE: The covered topics include: (1) an introduction to the well-known Bitcoin cryptocurrency and the underlying blockchain technology; (2) features of blockchain; (3) a review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared with traditional distributed databases; (6) an overview of the latest biomedical/health care applications of blockchain technologies; and (7) a discussion of the potential challenges and proposed solutions for adopting blockchain technologies in the biomedical/health care domains.


Subjects
Computer Security , Data Mining , Medical Informatics , Algorithms , Commerce , Confidentiality , Data Mining/economics
4.
Anal Chem; 89(2): 1254-1259, 2017 Jan 17.
Article in English | MEDLINE | ID: mdl-27983788

ABSTRACT

The speed and throughput of analytical platforms have been a driving force in recent years in the "omics" technologies, and while great strides have been made in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of increasingly complex data files and studies with greater numbers of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large-scale example on a 1000-sample data set demonstrated a 10,000-fold time saving, reducing data analysis time from days to minutes. Further, XCMS Stream can increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of the data already acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition.
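The core idea described above, compressing and shipping each piece of data while the next is still being acquired rather than batching everything at the end, can be sketched with an incremental compressor. This is a generic illustration of the pattern, not the actual XCMS Stream implementation:

```python
import zlib

def stream_compress(chunks):
    """Compress acquisition chunks incrementally, yielding compressed
    pieces as soon as they are ready so transfer can overlap acquisition."""
    compressor = zlib.compressobj()
    for chunk in chunks:
        piece = compressor.compress(chunk)
        if piece:             # the compressor may buffer small inputs
            yield piece
    yield compressor.flush()  # emit whatever is still buffered

# Simulated acquisition: three "scans" arriving one at a time.
scans = [b"mz=100.1 int=5000\n", b"mz=101.2 int=4800\n", b"mz=102.3 int=90\n"]
compressed = b"".join(stream_compress(scans))
assert zlib.decompress(compressed) == b"".join(scans)
```

In a real pipeline each yielded piece would be written to a socket immediately, so the transfer cost is paid during acquisition rather than after it.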


Subjects
Data Compression/methods , Data Mining/methods , Metabolomics/methods , T-Lymphocytes/metabolism , Data Compression/economics , Data Mining/economics , Humans , Metabolomics/economics , Software , Time Factors , Workflow
5.
Am J Manag Care; 22(12): 816-820, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27982668

ABSTRACT

OBJECTIVES: This study illustrates a systematic methodology to embed medical costs into the exact flow of clinical events associated with chronic care delivery. We summarized and visualized the results using clinical and cost data, with the goal of empowering patients and care providers with actionable information as they navigate through a multitude of clinical events and medical expenses. STUDY DESIGN: We analyzed the electronic health records (EHRs) and medication cost data of 288 patients from 2009 to 2011, whose initial diagnoses included chronic kidney disease stage 3, hypertension, and diabetes. METHODS: We developed chronological pathways of care and costs for each patient from EHR and medication cost data. Using a data-driven method called clinical pathway (CP) learning, which leverages statistical machine-learning algorithms, we categorized patients into clinically similar subgroups based on progressing clinical complexity and associated care needs. The CP-based subgroups were compared against cost-based subgroups stratified by quartiles of total medication costs, and visualized via pathways that are color-coded by costs. RESULTS: Our methods identified 3 CP-based, and 4 cost-based, patient subgroups. Two sets of subgroups from each approach indicated some clinical similarity in terms of average statistics, such as number of diagnoses and medication needs. However, the CP-based subgroups displayed significant variation in costs; conversely, large differences in clinical needs were observed among cost-based subgroups. CONCLUSIONS: This study demonstrates that CPs extracted from EHRs can be enhanced with appropriate cost information to potentially provide detailed visibility into the variability and inconsistencies in current best practices for chronic care delivery.
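The cost-based comparator described above, stratifying patients by quartiles of total medication cost, can be sketched in a few lines. This illustrates only the stratification step, not the paper's clinical pathway (CP) learning method, and the cost figures are hypothetical:

```python
import bisect
import statistics

def quartile_groups(costs):
    """Assign each patient a cost-quartile index (0 = cheapest quartile,
    3 = most expensive) based on cut points from the whole cohort."""
    cuts = statistics.quantiles(costs, n=4, method="inclusive")
    return [bisect.bisect_right(cuts, c) for c in costs]

# Eight hypothetical annual medication costs (in dollars).
costs = [120, 300, 450, 800, 1500, 2100, 4000, 9500]
print(quartile_groups(costs))  # → [0, 0, 1, 1, 2, 2, 3, 3]
```

The study's finding is essentially that groups built this way need not align with groups built from clinical-event sequences, which is why costs vary widely within CP-based subgroups and vice versa.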


Subjects
Chronic Disease/economics , Critical Pathways/economics , Delivery of Health Care/economics , Electronic Health Records/statistics & numerical data , Health Expenditures , Chronic Disease/therapy , Data Mining/economics , Female , Humans , Long-Term Care/economics , Male , Organizational Innovation , United States
6.
PLoS One; 11(11): e0165972, 2016.
Article in English | MEDLINE | ID: mdl-27832163

ABSTRACT

As one of the major data mining techniques, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to the computational complexity of the LOF algorithm, its application to large, high-dimensional data has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a smaller number of regions, called "grids", and calculates the LOF value of each grid. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant reduction in computation time with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, grid-LOF can be considered an acceptable approximation of the original LOF. Moreover, it can also be used effectively for real-time outlier detection.
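The two steps the abstract describes, bucketing points into grid cells and then scoring the far fewer cell representatives with LOF, can be sketched as follows. This is a schematic reimplementation from the abstract's description (distinct 2-D points, centroid per cell, brute-force k-NN), not the authors' code; the cell size and k are illustrative:

```python
import math
from collections import defaultdict

def knn(points, i, k):
    """Indices of the k nearest neighbours of points[i] (brute force)."""
    order = sorted(range(len(points)), key=lambda j: math.dist(points[i], points[j]))
    return [j for j in order if j != i][:k]

def lof_scores(points, k=3):
    """Classic Local Outlier Factor; scores well above 1 flag outliers.
    Assumes distinct points (duplicate points give a zero k-distance)."""
    nn = [knn(points, i, k) for i in range(len(points))]
    kdist = [math.dist(points[i], points[nn[i][-1]]) for i in range(len(points))]
    def reach(i, j):  # reachability distance of i with respect to neighbour j
        return max(kdist[j], math.dist(points[i], points[j]))
    lrd = [k / sum(reach(i, j) for j in nn[i]) for i in range(len(points))]
    return [sum(lrd[j] for j in nn[i]) / (k * lrd[i]) for i in range(len(points))]

def grid_representatives(points, cell=1.0):
    """Bucket 2-D points into square cells and return one centroid per
    occupied cell: the reduced set that the LOF pass then scores."""
    buckets = defaultdict(list)
    for x, y in points:
        buckets[(x // cell, y // cell)].append((x, y))
    return [(sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
            for b in buckets.values()]

# Dense region near the origin plus one isolated point far away.
data = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (1.1, 0.2), (0.2, 1.3),
        (1.2, 1.1), (2.1, 0.3), (0.3, 2.2), (10.0, 10.0)]
reps = grid_representatives(data, cell=1.0)
scores = lof_scores(reps, k=3)
# The representative of the far-away cell gets by far the largest score.
```

The speed-up comes entirely from the reduction in point count: LOF's k-NN search runs over the occupied cells instead of the raw points, at the cost of the within-cell resolution the paper quantifies as trade-off error.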


Subjects
Algorithms , Data Mining/methods , Cluster Analysis , Computer Systems/economics , Data Mining/economics , Time Factors
7.
PLoS One; 11(10): e0163477, 2016.
Article in English | MEDLINE | ID: mdl-27695049

ABSTRACT

Blockchain is a decentralized transaction and data management technology developed first for the Bitcoin cryptocurrency. Interest in Blockchain technology has been increasing since the idea was coined in 2008. The reason for this interest is Blockchain's central attributes, which provide security, anonymity and data integrity without any third-party organization in control of the transactions; this creates interesting research areas, especially from the perspective of technical challenges and limitations. In this research, we have conducted a systematic mapping study with the goal of collecting all relevant research on Blockchain technology. Our objective is to understand the current research topics, challenges and future directions regarding Blockchain technology from the technical perspective. We extracted 41 primary papers from scientific databases. The results show that over 80% of the papers focus on the Bitcoin system and less than 20% deal with other Blockchain applications, such as smart contracts and licensing. The majority of the research focuses on revealing and improving limitations of Blockchain from the privacy and security perspectives, but many of the proposed solutions lack concrete evaluation of their effectiveness. Many other scalability-related challenges of Blockchain, including throughput and latency, have been left unstudied. On the basis of this study, recommendations on future research directions are provided for researchers.


Subjects
Data Mining/trends , Research/trends , Technology/economics , Cost-Benefit Analysis , Data Mining/economics , Humans , Technology/trends
9.
Methods Inf Med; 55(4): 356-64, 2016 Aug 5.
Article in English | MEDLINE | ID: mdl-27405787

ABSTRACT

BACKGROUND: Clinical text contains valuable information but must be de-identified before it can be used for secondary purposes. Accurate annotation of personally identifiable information (PII) is essential to the development of automated de-identification systems and to manual redaction of PII. Yet the accuracy of annotations may vary considerably across individual annotators, and annotation is costly. As such, the marginal benefit of incorporating additional annotators has not been well characterized. OBJECTIVES: This study models the costs and benefits of incorporating increasing numbers of independent human annotators to identify the instances of PII in a corpus. We used a corpus with gold standard annotations to evaluate the performance of teams of annotators of increasing size. METHODS: Four annotators independently identified PII in a 100-document corpus consisting of randomly selected clinical notes from Family Practice clinics in a large integrated health care system. These annotations were pooled and validated to generate a gold standard corpus for evaluation. RESULTS: Recall rates for all PII types ranged from 0.90-0.98 for individual annotators to 0.998-1.0 for teams of three, when measured against the gold standard. Median cost per PII instance discovered during corpus annotation ranged from $0.71 for an individual annotator to $377 for annotations discovered only by a fourth annotator. CONCLUSIONS: Incorporating a second annotator into a PII annotation process reduces unredacted PII and improves the quality of annotations to 0.99 recall, yielding clear benefit at reasonable cost; the cost advantages of annotation teams larger than two diminish rapidly.
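If each annotator is assumed to miss a given PII instance independently, the team recall figures above follow directly from the individual rates: a pooled team misses an instance only if every member misses it. A small sketch of that arithmetic (the independence assumption is ours, a simplification of the study's empirical pooling):

```python
def team_recall(individual_recall, team_size):
    """Recall of a team that pools annotations from independent
    annotators: an instance is missed only if every annotator misses it."""
    return 1.0 - (1.0 - individual_recall) ** team_size

# With individual recall of 0.90, a team of three reaches about 0.999,
# in line with the 0.998-1.0 range the study reports for teams of three.
print(round(team_recall(0.90, 3), 4))  # → 0.999
```

The same geometry explains the diminishing returns in the conclusion: each added annotator shrinks the residual miss rate by a constant factor, so the absolute gain (and the PII found per dollar) falls off quickly after the second annotator.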


Subjects
Cost-Benefit Analysis/economics , Data Mining/economics , Patient Identification Systems/economics , Electronic Health Records , Humans
10.
Heart; 102(11): 855-61, 2016 Jun 1.
Article in English | MEDLINE | ID: mdl-26864669

ABSTRACT

OBJECTIVE: To evaluate the performance of a new electronic screening tool (TARB-Ex) in detecting general practice patients at potential risk of familial hypercholesterolaemia (FH). METHODS: Medical records for all active patients seen between 2012 and 2014 (n=3708) at a large general practice in Perth, Western Australia were retrospectively screened for potential FH risk using TARB-Ex. Electronic extracts of medical records for patients identified with potential FH risk (defined as Dutch Lipid Clinic Network Criteria (DLCNC) score ≥5) through TARB-Ex were reviewed by a general practitioner (GP) and lipid specialist. High-risk patients were recalled for clinical assessment to determine phenotypic FH diagnosis. Performance was evaluated against a manual record review by a GP in the subset of 360 patients with high blood cholesterol (cholesterol ≥7 mmol/L or low-density lipoprotein cholesterol ≥4.0 mmol/L). RESULTS: Thirty-two patients with DLCNC score ≥5 were identified through electronic screening compared with 22 through GP manual review. Sensitivity was 95.5% (95% CI 77.2% to 99.9%), specificity was 96.7% (95% CI 94.3% to 98.3%), negative predictive accuracy was 99.7% (95% CI 98.3% to 100%) and positive predictive accuracy was 65.6% (95% CI 46.9% to 8%). Electronic screening was completed in 10 min, compared with 60 h for GP manual review. Ten of 32 patients (31%) were considered high risk and recalled for clinical assessment. Six of seven patients (86%) who attended clinical assessment were diagnosed with phenotypic FH on examination. CONCLUSIONS: TARB-Ex screening is a time-effective and cost-effective method of systematically identifying potential FH risk patients from general practice records for clinical follow-up.


Subjects
Cholesterol/blood , Data Mining , Electronic Health Records , General Practice , Hyperlipoproteinemia Type II/diagnosis , Mass Screening/methods , Adult , Aged , Biomarkers/blood , Cholesterol, LDL/blood , Cost-Benefit Analysis , Data Mining/economics , Electronic Health Records/economics , Female , General Practice/economics , Health Care Costs , Humans , Hyperlipoproteinemia Type II/blood , Hyperlipoproteinemia Type II/economics , Male , Mass Screening/economics , Middle Aged , Predictive Value of Tests , Reproducibility of Results , Retrospective Studies , Risk Assessment , Risk Factors , Time Factors , Western Australia , Young Adult
12.
Article in English | MEDLINE | ID: mdl-25776020

ABSTRACT

The manual curation of the information in biomedical resources is an expensive task. This article argues for the value of this approach in comparison with apparently less costly options, such as automated annotation or text mining, and then discusses ways in which databases can make cost savings by sharing infrastructure and tool development. Sharing curation effort is a model already being adopted by several data resources. Approaches taken by two of these, the Gene Ontology annotation effort and the IntAct molecular interaction database, are reviewed in more detail. These models help to ensure the long-term persistence of curated data and minimize redundant development of resources by multiple disparate groups.


Subjects
Data Curation/methods , Data Mining/methods , Databases, Genetic , Gene Ontology , Data Curation/economics , Data Mining/economics
13.
J Med Pract Manage; 29(5): 327-30, 2014.
Article in English | MEDLINE | ID: mdl-24873133

ABSTRACT

Data is the new currency. Business intelligence tools will give better-performing practices a competitive intelligence advantage that separates the high performers from the rest of the pack. Given the investments of time and money in our data systems, practice leaders must work to take every advantage and look at their datasets as a potential goldmine of business intelligence decision tools. A fresh look at decision tools created from practice data will create efficiencies and improve effectiveness for end users and managers.


Subjects
Artificial Intelligence , Commerce/organization & administration , Data Mining/methods , Decision Making, Organizational , Economic Competition/organization & administration , Practice Management, Medical/organization & administration , Appointments and Schedules , Artificial Intelligence/economics , Commerce/economics , Data Mining/economics , Decision Support Techniques , Economic Competition/economics , Electronic Health Records/economics , Electronic Health Records/organization & administration , Humans , Practice Management, Medical/economics , Waiting Lists
14.
OMICS; 18(1): 1-9, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24456464

ABSTRACT

Metadata refer to descriptions about data or, as some put it, "data about data." Metadata capture what happens on the backstage of science, on the trajectory from study conception, design, funding, implementation, and analysis to reporting. Definitions of metadata vary, but they can include the context information surrounding the practice of science, or data generated as one uses a technology, including transactional information about the user. As the pursuit of knowledge broadens in the 21st century from the traditional "science of whats" (data) to include the "science of hows" (metadata), we analyze the ways in which metadata serve as a catalyst for responsible and open innovation and, by extension, science diplomacy. In 2015, the United Nations Millennium Development Goals (MDGs) will formally come to an end. We therefore propose that metadata, as an ingredient of responsible innovation, can help achieve the Sustainable Development Goals (SDGs) on the post-2015 agenda. Such responsible innovation, as a collective learning process, has become a key component, for example, of the European Union's 80 billion euro Horizon 2020 R&D Program for 2014-2020. Looking ahead, OMICS: A Journal of Integrative Biology is launching an initiative for a multi-omics metadata checklist that is flexible yet comprehensive, and that will enable more complete utilization of single- and multi-omics data sets through data harmonization and greater visibility and accessibility. The generation of metadata that shed light on how omics research is carried out, by whom, and under what circumstances will create an "intervention space" for the integration of science with its socio-technical context. This will go a long way toward addressing responsible innovation for a fairer and more transparent society.
If we believe in science, then such reflexive qualities and commitments, attained through the availability of omics metadata, are preconditions for a robust and socially attuned science, which can then remain broadly respected, independent, and responsibly innovative.

"In Sierra Leone, we have not too much electricity. The lights will come on once in a week, and the rest of the month, dark[ness]. So I made my own battery to power light in people's houses." (Kelvin Doe, Global Minimum, 2012; MIT Visiting Young Innovator, Cambridge, USA, and Sierra Leone)

"An important function of the (Global) R&D Observatory will be to provide support and training to build capacity in the collection and analysis of R&D flows, and how to link them to the product pipeline." (World Health Organization, 2013, Draft Working Paper on a Global Health R&D Observatory)


Subjects
Data Mining/statistics & numerical data , Information Dissemination/ethics , Metagenomics/statistics & numerical data , Data Mining/economics , Data Mining/trends , European Union , Humans , Metagenomics/economics , Metagenomics/trends , Publishing , Research Design
16.
Nurs Adm Q; 37(2): 105-8, 2013.
Article in English | MEDLINE | ID: mdl-23454988

ABSTRACT

The amount of health care data in our world has been exploding, and the ability to store, aggregate, and combine data, and then use the results to perform deep analyses, has become ever more important. "Big data", large pools of data that can be captured, communicated, aggregated, stored, and analyzed, are now part of every sector and function of the global economy. While most research into big data thus far has focused on the question of volume, there is evidence that the business and economic possibilities of big data, and their wider implications, warrant consideration. Health care data could even become the most valuable asset over the next 5 years as "secondary use" of electronic health record data takes off.


Subjects
Data Mining , Decision Making, Computer-Assisted , Delivery of Health Care/economics , Delivery of Health Care/trends , Electronic Health Records , Data Mining/economics , Electronic Health Records/economics , Humans , United States
17.
J Am Med Inform Assoc; 19(4): 529-32, 2012.
Article in English | MEDLINE | ID: mdl-22249966

ABSTRACT

The performance of a classification system depends on the context in which it will be used, including the prevalence of the classes and the relative costs of different types of errors. Metrics such as accuracy are limited to the context in which the experiment was originally carried out, and metrics such as sensitivity, specificity, and the area under the receiver operating characteristic curve, while independent of prevalence, do not provide a clear picture of the performance characteristics of the system over different contexts. Graphing a prevalence-specific metric, such as F-measure or the relative cost of errors, over a wide range of prevalence allows a visualization of the performance of the system and a comparison of systems in different contexts.
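The graphing approach described above needs only the identity linking precision to prevalence: with sensitivity and specificity held fixed, precision (and hence F-measure) shifts as the class balance shifts. A minimal sketch of that calculation, as an illustration of the idea rather than the authors' tooling:

```python
def f1_at_prevalence(sensitivity, specificity, prevalence):
    """F1 score of a classifier with fixed sensitivity and specificity
    when deployed at a given positive-class prevalence."""
    tp = prevalence * sensitivity                    # true positive rate mass
    fp = (1.0 - prevalence) * (1.0 - specificity)    # false positive rate mass
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = sensitivity
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The same classifier (90% sensitive, 90% specific) looks very different
# at a balanced prevalence versus a rare-positive prevalence.
print(f1_at_prevalence(0.9, 0.9, 0.5))   # close to 0.9
print(f1_at_prevalence(0.9, 0.9, 0.01))  # far lower, despite identical ROC behaviour
```

Sweeping `prevalence` from near 0 to near 1 and plotting the result reproduces the kind of context-dependent performance curve the article advocates.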


Subjects
Classification , Data Mining , Information Systems , Technology Assessment, Biomedical/methods , Audiovisual Aids , Cost-Benefit Analysis , Data Mining/economics , Humans , Information Systems/economics , Models, Theoretical , Prevalence , ROC Curve , Sensitivity and Specificity , Technology Assessment, Biomedical/economics
18.
Mol Ecol Resour; 11(1): 126-33, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21429109

ABSTRACT

Online sequence databases can provide valuable resources for the development of cross-species genetic markers. In particular, mining expressed sequence tags (ESTs) for microsatellites and developing conserved cross-species microsatellite markers can provide a rapid and relatively inexpensive way to develop new markers for a range of species. Here, we adopt this approach to develop cross-species microsatellite markers in Anolis lizards, a model genus in evolutionary biology and ecology. Using EST sequences from Anolis carolinensis, we identified 127 microsatellites that satisfied our criteria, and tested 49 of these in five species of Anolis (carolinensis, distichus, apletophallus, porcatus and sagrei). We identified between 8 and 25 new variable genetic markers for the five Anolis species. These markers will be a valuable resource for studies of population genetics, comparative mapping, mating systems, behavioural ecology and adaptive radiations in this diverse lineage.
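The first step of this kind of marker development, scanning sequences for simple tandem repeats, reduces to a short regular-expression pass. A minimal sketch of a generic SSR scan (not the specific repeat criteria or pipeline the authors used):

```python
import re

def find_ssrs(seq, unit_len=2, min_repeats=6):
    """Report perfect tandem repeats (simple sequence repeats) of a given
    unit length as (start, unit, repeat_count) tuples."""
    pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_repeats - 1))
    return [(m.start(), m.group(1), len(m.group(0)) // unit_len)
            for m in pattern.finditer(seq)]

# A toy EST fragment containing one (CA)8 dinucleotide repeat.
est = "GGG" + "CA" * 8 + "TTT"
print(find_ssrs(est))  # → [(3, 'CA', 8)]
```

In practice such hits are then filtered for flanking sequence long enough to design conserved primers, which is the part that makes the markers transferable across species.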


Subjects
Data Mining/methods , Databases, Nucleic Acid , Genomics/methods , Lizards/genetics , Microsatellite Repeats , Animals , Data Mining/economics , Databases, Nucleic Acid/economics , Expressed Sequence Tags , Genomics/economics , Lizards/classification , Online Systems/economics , Species Specificity