Results 1 - 20 of 135
3.
Rev. Hosp. Ital. B. Aires (2004) ; 36(4): 160-164, Dec. 2016. ilus, graf
Article in Spanish | LILACS | ID: biblio-1145367

ABSTRACT



The wide availability of high-performance computers and large-capacity electronic storage devices, among other advances, has enabled the generation of massive amounts of digital data, an idea often summarized by velocity, volume, and variety. Data mining is a process for discovering, in large databases, relevant patterns or relations not previously visible with traditional methods of analysis, and for generating models. It uses tools from database systems, data warehousing, machine learning, statistics, information visualization, and high-performance computing. In recent decades, molecular biology has moved from individual gene analysis to more complex studies involving the complete genome. The development of high-throughput genomic technologies, such as microarrays and next-generation sequencing, has driven exponential growth in the amount of information available, expanding our knowledge of the genetic basis of various diseases. In genomic medicine, the application of data mining techniques has become an increasingly important process that contributes toward personalized medicine, allowing clinically relevant models to be inferred and individualized therapeutic strategies to be defined from patients' molecular data. (AU)
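As a minimal illustration of the pattern-discovery idea this abstract describes (not from the article itself; the data and the two-group structure are invented), a tiny one-dimensional k-means in plain Python separates toy expression values into clusters:

```python
# Toy illustration only: discover two groups in 1-D "expression" values
# with a minimal k-means. Data and cluster count are invented.
def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        # Assign each value to its nearest current center.
        groups = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            groups[nearest].append(v)
        # Move each center to the mean of its assigned values.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

expression = [0.9, 1.1, 1.0, 5.8, 6.2, 6.0, 1.2, 5.9]  # two obvious groups
print(kmeans_1d(expression, centers=[0.0, 10.0]))
```

Real genomic data mining works on thousands of dimensions and far richer models, but the loop above is the same assign-then-update pattern.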


Subjects
Humans , Genomics/methods , Data Mining , Autistic Disorder/genetics , Computers, Mainframe , Information Management , Machine Learning , Data Warehousing , Data Analysis
5.
Crit Rev Biomed Eng ; 44(1-2): 99-122, 2016.
Article in English | MEDLINE | ID: mdl-27652454

ABSTRACT

Cardiac electrophysiological modeling, in conjunction with experimental and clinical findings, has contributed to a better understanding of electrophysiological phenomena in various species. As our knowledge of the underlying electrical, mechanical, and chemical processes has improved over time, mathematical models of cardiac electrophysiology have become more realistic and detailed. These models have provided a testbed for hypotheses and conditions that may not be easy to implement experimentally. Beyond the difficulty of experimentally validating the scenarios the models implement, one of the major obstacles for these models is computational complexity. However, the ever-increasing computational power of supercomputers facilitates the clinical application of cardiac electrophysiological models. Potential clinical applications include testing and predicting the effects of pharmaceutical agents and performing patient-specific ablation and defibrillation. A review of studies involving these models and their major findings is provided.
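The abstract names no specific model; as a hedged illustration of the kind of mathematics it reviews, the classic FitzHugh-Nagumo equations (a drastically simplified excitable-cell model, far simpler than the detailed ionic models used in this literature) can be integrated with forward Euler:

```python
# FitzHugh-Nagumo: a two-variable caricature of an excitable cardiac cell.
# Parameters are the textbook defaults, not taken from the reviewed models.
def fitzhugh_nagumo(v0=-1.0, w0=1.0, I=0.5, a=0.7, b=0.8, eps=0.08,
                    dt=0.01, steps=20000):
    v, w = v0, w0
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I    # fast membrane-potential variable
        dw = eps * (v + a - b * w)   # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
print(f"v range: [{min(trace):.2f}, {max(trace):.2f}]")
```

With this constant stimulus the cell fires repeatedly; whole-heart simulations couple millions of such (far more detailed) cells through diffusion, which is where the supercomputing cost arises.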


Subjects
Computers, Mainframe , Electrophysiological Phenomena , Heart/physiology , Models, Theoretical , Humans , Mathematical Computing , Models, Cardiovascular
6.
Clin J Oncol Nurs ; 19(1): 31-2, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25689646

ABSTRACT

IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in the clinic, the oncologist can enter all of the clinical information into the computer system. Watson then reviews the data and recommends treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the potential to standardize care and accelerate the approval process, a benefit to both the healthcare provider and the patient.


Subjects
Clinical Decision-Making , Computers, Mainframe , Decision Making, Computer-Assisted , Humans , Interinstitutional Relations
7.
Opt Express ; 20(18): 20407-26, 2012 Aug 27.
Article in English | MEDLINE | ID: mdl-23037091

ABSTRACT

We propose a highly economical design for the Optical Shared MemOry Supercomputer Interconnect System (OSMOSIS) all-optical, wavelength-space crossbar switch fabric. It is shown, by analysis and simulation, that the total number of on-off gates required for the proposed N × N switch fabric can scale asymptotically as N ln N if the number of input/output ports N can be factored into a product of small primes. This is of the same order of magnitude as Shannon's lower bound for switch complexity, according to which the minimum number of two-state switches required to construct an N × N permutation switch is log2(N!).
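A quick numerical sanity check of the complexity claim (not from the paper itself): by Stirling's approximation, log2(N!) ≈ N log2 N, so a gate count of N ln N is indeed within a constant factor of Shannon's bound:

```python
import math

def shannon_bound(n):
    # Shannon's lower bound: an N x N permutation switch needs at least
    # log2(N!) two-state switches. lgamma(n+1) = ln(n!), avoiding overflow.
    return math.lgamma(n + 1) / math.log(2)

def nlogn(n):
    # Asymptotic gate count claimed for the proposed fabric.
    return n * math.log(n)

for n in (64, 256, 1024, 4096):
    r = nlogn(n) / shannon_bound(n)
    print(f"N={n:5d}  N ln N = {nlogn(n):10.0f}  log2(N!) = {shannon_bound(n):10.0f}  ratio = {r:.2f}")
```

The ratio stays close to 1 as N grows, confirming "same order of magnitude".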


Subjects
Computer-Aided Design , Computers, Mainframe , Fiber Optic Technology/instrumentation , Signal Processing, Computer-Assisted/instrumentation , Equipment Design , Equipment Failure Analysis
8.
Article in English | MEDLINE | ID: mdl-22254462

ABSTRACT

The challenge of comparing two or more genomes that have undergone recombination and substantial segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common, and their sizes will only increase. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic growth in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm for massively parallel, distributed-memory supercomputers to enable comparative genomics research on large datasets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures, including sequences and sorted k-mer lists, on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint enough to potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed-memory supercomputer, enabling comparison of hundreds instead of a few genome sequences within reasonable time.
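A toy sketch (not the BG/P implementation) of the sorted k-mer list data structure the abstract mentions: decompose each sequence into overlapping k-mers, sort them, then merge the two sorted lists to find shared seeds between genomes:

```python
def kmer_list(seq, k):
    """Return a sorted list of (k-mer, position) pairs for one sequence."""
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def shared_kmers(seq_a, seq_b, k):
    """Find k-mers common to two sequences by merging their sorted lists,
    the same idea progressiveMauve-style aligners use to seed anchors.
    (Naive about repeated k-mers: each match consumes one entry per side.)"""
    a, b = kmer_list(seq_a, k), kmer_list(seq_b, k)
    i = j = 0
    hits = []
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            hits.append((a[i][0], a[i][1], b[j][1]))
            i += 1
            j += 1
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return hits

print(shared_kmers("ACGTACGGT", "TTACGTA", 4))
```

On a distributed-memory machine, each node would hold only a slice of the globally sorted list, but the merge-based matching is the same.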


Subjects
Algorithms , Computers, Mainframe , DNA, Bacterial/genetics , Genome, Bacterial/genetics , Sequence Alignment/methods , Sequence Analysis, DNA/methods , Software , Base Sequence , Molecular Sequence Data , Software Design
10.
Health Phys ; 97(3): 242-7, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19667807

ABSTRACT

In this study the radionuclide databases for two versions of the Clean Air Act Assessment Package-1988 (CAP88) computer model were assessed in detail. CAP88 estimates radiation dose and the risk of health effects to human populations from radionuclide emissions to air. This program is used by several U.S. Department of Energy (DOE) facilities to comply with National Emission Standards for Hazardous Air Pollutants regulations. CAP88 Mainframe, referred to as version 1.0 on the U.S. Environmental Protection Agency Web site (http://www.epa.gov/radiation/assessment/CAP88/), was the first CAP88 version, released in 1988. Some DOE facilities, including the Savannah River Site, still employ version 1.0, while others use the more user-friendly, Windows-based version 3.0 for personal computers, released in December 2007. Version 1.0 uses the program RADRISK, based on International Commission on Radiological Protection Publication 30, as its radionuclide database. Version 3.0 uses half-life, dose, and risk factor values based on Federal Guidance Report 13. Differences in these values could produce different results for the same input exposure data (the same scenario), depending on which version of CAP88 is used. Consequently, the differences between the two versions are being assessed in detail at Savannah River National Laboratory. The version 1.0 and 3.0 database files contain 496 and 838 radionuclides, respectively, and though one would expect the newer version to include all 496, 35 radionuclides listed in version 1.0 are not included in version 3.0. The majority of these have either extremely short or extremely long half-lives or are no longer produced; however, some of the short-lived radionuclides might produce progeny of great interest at DOE sites. In addition, 122 radionuclides were found to have different half-lives in the two versions, with 21 differing by more than 3 percent and 12 by more than 10 percent.
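The half-life comparison the study describes amounts to a relative-difference screen over two nuclide tables; a sketch with hypothetical values (the half-lives below are illustrative stand-ins, not the actual RADRISK or FGR-13 data):

```python
# Hypothetical half-life tables in seconds -- illustrative values only,
# NOT the real CAP88 version 1.0 / 3.0 databases.
v1 = {"H-3": 3.891e8, "Cs-137": 9.467e8, "I-131": 7.60e5,
      "Ru-106": 3.2e7, "Ar-42": 1.04e9}
v3 = {"H-3": 3.888e8, "Cs-137": 9.504e8, "I-131": 6.93e5,
      "Ru-106": 2.8e7, "Tc-99m": 2.17e4}

missing = sorted(set(v1) - set(v3))   # nuclides dropped in the newer version
over_3, over_10 = [], []
for nuc in sorted(set(v1) & set(v3)):
    rel = abs(v1[nuc] - v3[nuc]) / v3[nuc] * 100.0  # percent difference
    if rel > 10.0:
        over_10.append(nuc)
    elif rel > 3.0:
        over_3.append(nuc)

print("missing from v3:", missing)
print(">3% different:", over_3)
print(">10% different:", over_10)
```

Run against the real 496- and 838-entry database files, this is the screen that surfaces the 35 missing nuclides and the 21/12 half-life discrepancies the abstract reports.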


Subjects
Air Pollutants, Radioactive/analysis , Air Pollutants/analysis , Databases, Factual , Radioisotopes/analysis , Computers, Mainframe , Humans , Microcomputers , Radiation Monitoring/legislation & jurisprudence , Radiation Monitoring/statistics & numerical data , Radiation Protection/statistics & numerical data , Software , South Carolina , United States , United States Environmental Protection Agency
14.
J Am Med Inform Assoc ; 11(3): 207-16, 2004.
Article in English | MEDLINE | ID: mdl-14764612

ABSTRACT

Most studies of the impact of information systems in organizations tend to see the implementation process as a "rollout" of technology, as a technical matter removed from organizational dynamics. There is substantial agreement that the success of implementing information systems is determined by organizational factors. However, it is less clear what these factors are. The authors propose to characterize the introduction of an information system as a process of mutual shaping. As a result, both the technology and the practice supported by the technology are transformed, and specific technical and social outcomes gradually emerge. The authors suggest that insights from social studies of science and technology can help to understand an implementation process. Focusing on three theoretical aspects, the authors argue first that the implementation process should be understood as a thoroughly social process in which both technology and practice are transformed. Second, following Orlikowski's concept of "emergent change," they suggest that implementing a system is, by its very nature, unpredictable. Third, they argue that success and failure are not dichotomous and static categories, but socially negotiated judgments. Using these insights, the authors have analyzed the implementation of a computerized physician order entry (CPOE) system in a large Dutch university medical center. During the course of this study, the full implementation of CPOE was halted, but the aborted implementation exposed issues on which the authors did not initially focus.


Subjects
Academic Medical Centers/organization & administration , Hospital Information Systems , Medical Records Systems, Computerized , Attitude to Computers , Computers, Mainframe , Health Plan Implementation , Longitudinal Studies , Netherlands , Organizational Culture , Organizational Innovation , Physicians , Sociology , Software , User-Computer Interface
17.
Rev. esp. patol ; 36(2): 159-170, Apr. 2003. ilus
Article in Spanish | IBECS | ID: ibc-26199

ABSTRACT

Introduction: Ten years after the World Wide Web came into public use, Internet technologies have been so successful that even hospital networks have adopted this model to build their own networks, or intranets. The aim of this work is to apply web servers for use in Pathology. Material and methods: We explain the steps needed to install a web server, both accessible from the Internet through an ADSL (asymmetric digital subscriber line) connection and on a local network or intranet. We describe the installation of the Microsoft Internet Information Server and Apache web servers, and the use of tools that ease the construction of websites, such as Microsoft SharePoint Portal Server. Results: Complete websites can be created with the selected tools, with management of menus, links, and search systems. Web pages with custom designs are also possible. Discussion and conclusions: Web servers are useful in Pathology because of their low cost and because they allow experience to be shared with other hospitals (Internet) or with the rest of the hospital (intranet), being of special interest for the distribution of digital images. (AU)
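The article walks through IIS and Apache setups; as a hedged, minimal stand-in for the intranet use case it describes (sharing files such as digital images over HTTP), Python's built-in http.server can serve a folder, shown here with an invented placeholder file rather than real slide images:

```python
import functools
import http.server
import os
import pathlib
import socketserver
import threading
import urllib.request

# Invented demo content standing in for digital slide images.
os.makedirs("/tmp/path_images", exist_ok=True)
pathlib.Path("/tmp/path_images/case-001.txt").write_text("slide placeholder")

# Serve that folder on a free local port.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory="/tmp/path_images")
with socketserver.TCPServer(("127.0.0.1", 0), handler) as srv:
    port = srv.server_address[1]
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    # Any intranet client could now fetch the file over HTTP.
    url = f"http://127.0.0.1:{port}/case-001.txt"
    body = urllib.request.urlopen(url, timeout=5).read().decode()
    srv.shutdown()

print(body)
```

A production pathology intranet would still want a full server (Apache, IIS) for authentication, logging, and throughput, which is what the article actually covers.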


Subjects
Humans , Internet/trends , Computers, Mainframe/trends , Pathology/trends , Computer Communication Networks/trends , Image Interpretation, Computer-Assisted
19.
Health Devices ; 30(9-10): 323-59, 2001.
Article in English | MEDLINE | ID: mdl-11696968

ABSTRACT

Computerized provider order entry (CPOE) systems are designed to replace a hospital's paper-based ordering system. They allow users to electronically write the full range of orders, maintain an online medication administration record, and review changes made to an order by successive personnel. They also offer safety alerts that are triggered when an unsafe order (such as for a duplicate drug therapy) is entered, as well as clinical decision support to guide caregivers to less expensive alternatives or to choices that better fit established hospital protocols. CPOE systems can, when correctly configured, markedly increase efficiency and improve patient safety and patient care. However, facilities need to recognize that currently available CPOE systems require a tremendous amount of time and effort to be spent in customization before their safety and clinical support features can be effectively implemented. What's more, even after they've been customized, the systems may still allow certain unsafe orders to be entered. Thus, CPOE systems are not currently a quick or easy remedy for medical errors. ECRI's Evaluation of CPOE systems--conducted in collaboration with the Institute for Safe Medication Practices (ISMP)--discusses these and other related issues. It also examines and compares CPOE systems from three suppliers: Eclipsys Corp., IDX Systems Corp., and Siemens Medical Solutions Health Services Corp. Our testing focuses primarily on the systems' interfacing capabilities, patient safeguards, and ease of use.


Subjects
Evaluation Studies as Topic , Hospital Information Systems/organization & administration , Medical Records Systems, Computerized , Artificial Intelligence , Computers, Mainframe , Cost-Benefit Analysis , Equipment Design , Equipment Failure Analysis , Humans , Internet , Medical Errors/prevention & control , Medical Records Systems, Computerized/economics , Medical Records Systems, Computerized/instrumentation , Medical Records Systems, Computerized/standards , Terminology as Topic , User-Computer Interface
20.
Biomed Instrum Technol ; 35(5): 349-52, 2001.
Article in English | MEDLINE | ID: mdl-11678139

ABSTRACT

We have covered three basic UNIX commands. UNIX offers many other options for each of them, but for the most part, used as shown here, they will let you do everything you need. If you learn these few points well, you will be better off than if I gave you 50 options and left you totally confused about when to do what. On some UNIX systems, an electronic version of the UNIX manual is installed; it gives much more information about each command, though it can be a bit difficult to understand. For more information about any command, type man COMMAND, e.g., man ls, which shows more ways to use the ls command. And remember: pwd tells you what directory you are in; cd directory changes to another directory; ls lists the contents of the directory you are in; ls | more displays the directory contents one page at a time (the space bar gives you the next page); ls -al gives a detailed listing of the contents of the directory you are in; and ls -al | more displays that listing one page at a time (again, the space bar gives you the next page).
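A short self-contained session replaying the commands discussed above (the scratch directory and file names are invented for the demonstration):

```shell
# Set up a scratch directory so the session is self-contained.
mkdir -p /tmp/unix_demo/sub
cd /tmp/unix_demo
touch alpha.txt beta.txt

pwd           # print the directory you are in
cd sub        # change into a subdirectory...
cd ..         # ...and back up one level
ls            # list the contents of the current directory
ls -al        # detailed listing, including hidden files (note the space: "ls -al")
ls | more     # page through a long listing one screen at a time (a pipe: "ls | more")
# man ls      # the online UNIX manual page for ls ("man COMMAND"); press q to quit
```

The pager and manual commands are interactive at a real terminal; in a script, more simply passes the output through.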


Subjects
Computers, Mainframe , Software , User-Computer Interface , Biomedical Engineering , Computer User Training , Computing Methodologies , Humans , Information Storage and Retrieval , Programming Languages