Results 1 - 20 of 21

1.
Sensors (Basel) ; 22(22)2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36433501

ABSTRACT

Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and/or often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are utilized in a downstream task that employs a weighted-average method to estimate the end user's location. Another downstream task utilizes the perspective-n-point (PnP) algorithm to estimate the end user's direction by exploiting the 2D-3D point correspondences between the query image and the 3D environment, as extracted from matched images in the database. Additionally, this system implements Dijkstra's algorithm to calculate a shortest path based on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, to the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or scavenged through public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts a custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. The evaluation results demonstrate that our system can achieve localization with an average error of less than 1 m without knowledge of the camera's intrinsic parameters, such as focal length.
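To make the localization step concrete, here is a minimal sketch of the weighted-average estimate: the geolocations of the top-k reference images returned by VPR are averaged, weighted by match similarity. The coordinates, scores, and function name below are invented for illustration; direction estimation over the 2D-3D correspondences would additionally require a PnP solver such as OpenCV's cv2.solvePnP.

```python
import numpy as np

def estimate_location(geolocations, similarity_scores):
    """Estimate the user's position as the similarity-weighted average
    of the geolocations of the top-k VPR-matched reference images."""
    w = np.asarray(similarity_scores, dtype=float)
    w /= w.sum()  # normalize weights so they sum to 1
    return w @ np.asarray(geolocations, dtype=float)

# Hypothetical top-3 matches: (x, y) floor-plan coordinates and VPR scores.
matched_locations = [(12.0, 4.5), (12.6, 4.1), (11.8, 5.0)]
scores = [0.91, 0.85, 0.62]
print(estimate_location(matched_locations, scores))  # a point between the matches
```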


Subjects
Robotics , Low Vision , Humans , Algorithms , Robotics/methods , Databases, Factual , Blindness
2.
J Stroke Cerebrovasc Dis ; 26(11): 2662-2670, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28760409

ABSTRACT

BACKGROUND: Annotation and Image Markup on ClearCanvas Enriched Stroke-phenotyping Software (ACCESS) is a novel stand-alone computer software application that allows the creation of simple standardized annotations for reporting brain images of all stroke types. We developed the ACCESS application and determined its inter-rater and intra-rater reliability in the Stroke Investigative Research and Educational Network (SIREN) study to assess its suitability for multicenter studies. METHODS: One hundred randomly selected stroke imaging reports from 5 SIREN sites were re-evaluated by 4 trained independent raters to determine the inter-rater reliability of the ACCESS (version 12.0) software for stroke phenotyping. To determine intra-rater reliability, 6 raters reviewed the same cases they had previously reported, after an interval of one month. Ischemic stroke was classified using the Oxfordshire Community Stroke Project (OCSP), Trial of Org 10172 in Acute Stroke Treatment (TOAST), and Atherosclerosis, Small-vessel disease, Cardiac source, Other cause (ASCO) protocols, while hemorrhagic stroke was classified using the Structural lesion, Medication, Amyloid angiopathy, Systemic disease, Hypertensive angiopathy and Undetermined (SMASH-U) protocol in ACCESS. Agreement among raters was measured with Cohen's kappa statistic. RESULTS: For primary stroke type, inter-rater agreement was .98 (95% confidence interval [CI], .94-1.00), while intra-rater agreement was 1.00 (95% CI, 1.00). For OCSP subtypes, inter-rater agreement was .97 (95% CI, .92-1.00) for the partial anterior circulation infarcts, .92 (95% CI, .76-1.00) for the total anterior circulation infarcts, and excellent for both lacunar infarcts and posterior circulation infarcts. Intra-rater agreement was .97 (95% CI, .90-1.00), while inter-rater agreement was .93 (95% CI, .84-1.00) for TOAST subtypes. Inter-rater agreement ranged between .78 (cardioembolic) and .91 (large artery atherosclerotic) for ASCO subtypes and was .80 (95% CI, .56-1.00) for SMASH-U subtypes. CONCLUSION: The ACCESS application facilitates a concordant and reproducible classification of stroke subtypes by multiple investigators, making it suitable for clinical use and multicenter research.
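For reference, inter-rater agreement of the kind reported here can be computed with scikit-learn's implementation of Cohen's kappa; the labels below are invented stand-ins, not SIREN data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical primary-stroke-type labels from two raters over ten cases
# (I = ischemic, H = hemorrhagic); the raters disagree on one case.
rater_a = ["I", "I", "H", "I", "H", "I", "I", "H", "I", "I"]
rater_b = ["I", "I", "H", "I", "H", "I", "H", "H", "I", "I"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.78: agreement beyond chance
```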


Subjects
Brain/diagnostic imaging , Hemorrhage/diagnosis , Phenotype , Stroke/diagnosis , Brain Ischemia/complications , Electrocardiography , Female , Humans , Magnetic Resonance Imaging , Male , Random Allocation , Reproducibility of Results , Stroke/classification , Stroke/etiology , Tomography, X-Ray Computed , Ultrasonography, Doppler
3.
J Digit Imaging ; 27(6): 692-701, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24934452

ABSTRACT

Knowledge contained in in vivo images annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) cancer Biomedical Informatics Grid (caBIG®) AIM model. The model applies to image types created by a variety of techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.
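As a rough illustration of what a serialized annotation instance might look like, the toy XML below is built with Python's standard library. The element names and UID are invented simplifications, not the normative AIM schema; real AIM instances conform to the published schema and can also be serialized as DICOM SR objects.

```python
import xml.etree.ElementTree as ET

# Invented, simplified stand-in for an AIM-like annotation instance.
ann = ET.Element("ImageAnnotation", uid="1.2.840.0000.1")  # hypothetical UID
ET.SubElement(ann, "ImagingObservationEntity", label="mass")
markup = ET.SubElement(ann, "MarkupEntity", shape="ellipse")  # 2D markup
ET.SubElement(markup, "Coordinate", x="104.5", y="212.0")
calc = ET.SubElement(ann, "CalculationEntity", description="long axis")
ET.SubElement(calc, "Value", unit="mm").text = "23.4"

print(ET.tostring(ann, encoding="unicode"))  # XML ready for storage/exchange
```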


Subjects
Data Curation/methods , Diagnostic Imaging/standards , Models, Organizational , National Cancer Institute (U.S.) , Neoplasms/diagnostic imaging , Radiology Information Systems/standards , Data Curation/standards , Foundations , Humans , Radiography , Radiology Information Systems/organization & administration , United States
4.
Radiology ; 267(2): 560-9, 2013 May.
Article in English | MEDLINE | ID: mdl-23392431

ABSTRACT

PURPOSE: To conduct a comprehensive analysis of radiologist-made assessments of glioblastoma (GBM) tumor size and composition by using a community-developed controlled terminology of magnetic resonance (MR) imaging visual features as they relate to genetic alterations, gene expression class, and patient survival. MATERIALS AND METHODS: Because all study patients had been previously deidentified by The Cancer Genome Atlas (TCGA), a publicly available data set that contains no linkage to patient identifiers and that is HIPAA compliant, no institutional review board approval was required. Presurgical MR images of 75 patients with GBM with genetic data in the TCGA portal were rated by three neuroradiologists for size, location, and tumor morphology by using a standardized feature set. Interrater agreements were analyzed by using the Krippendorff α statistic and intraclass correlation coefficient. Associations between survival, tumor size, and morphology were determined by using multivariate Cox regression models; associations between imaging features and genomics were studied by using the Fisher exact test. RESULTS: Interrater analysis showed significant agreement in terms of contrast material enhancement, nonenhancement, necrosis, edema, and size variables. Contrast-enhanced tumor volume and longest axis length of tumor were strongly associated with poor survival (respectively, hazard ratio: 8.84, P = .0253, and hazard ratio: 1.02, P = .00973), even after adjusting for Karnofsky performance score (P = .0208). Proneural class GBM had significantly lower levels of contrast enhancement (P = .02) than other subtypes, while mesenchymal GBM showed lower levels of nonenhanced tumor (P < .01). CONCLUSION: This analysis demonstrates a method for consistent image feature annotation capable of reproducibly characterizing brain tumors; this study shows that radiologists' estimations of macroscopic imaging features can be combined with genetic alterations and gene expression subtypes to provide deeper insight into the underlying biologic properties of GBM subsets.
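The survival modeling described here is a multivariate Cox regression; a minimal sketch using the lifelines library follows. The data frame holds synthetic stand-in values, not TCGA measurements.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in data: survival time (months), death event indicator,
# contrast-enhanced tumor volume (cm^3), and Karnofsky performance score.
df = pd.DataFrame({
    "survival_months":  [14, 9, 22, 5, 17, 11, 7, 25],
    "death_observed":   [1, 1, 1, 1, 0, 1, 1, 0],
    "enhancing_volume": [34.1, 58.7, 12.3, 71.0, 20.5, 44.2, 66.8, 9.9],
    "kps":              [90, 70, 90, 60, 80, 80, 70, 100],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="death_observed")
cph.print_summary()  # hazard ratios (exp(coef)) and p-values per covariate
```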


Subjects
Brain Neoplasms/mortality , Brain Neoplasms/pathology , Glioblastoma/metabolism , Glioblastoma/pathology , Magnetic Resonance Imaging/methods , Adolescent , Adult , Aged , Aged, 80 and over , Brain Neoplasms/genetics , Brain Neoplasms/metabolism , Female , Gene Expression , Glioblastoma/genetics , Humans , Male , Middle Aged , Proportional Hazards Models , Reproducibility of Results , Survival Rate , Terminology as Topic
5.
Trials ; 24(1): 169, 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36879333

ABSTRACT

BACKGROUND: Blindness/low vision (BLV) severely limits information about our three-dimensional world, leading to poor spatial cognition and impaired navigation. BLV engenders mobility losses, debility, illness, and premature mortality. These mobility losses have been associated with unemployment and severe compromises in quality of life. Visual impairment (VI) not only eviscerates mobility and safety but also creates barriers to inclusive higher education. Although true in almost every high-income country, these startling facts are even more severe in low- and middle-income countries, such as Thailand. We aim to use VIS4ION (Visually Impaired Smart Service System for Spatial Intelligence and Onboard Navigation), an advanced wearable technology, to enable real-time access to microservices, providing a potential solution to close this gap and deliver consistent and reliable access to the critical spatial information needed for mobility and orientation during navigation. METHODS: We are leveraging 3D reconstruction and semantic segmentation techniques to create a digital twin of the campus that houses Mahidol University's disability college. Using a crossover randomization design, two groups of randomized VI students will deploy this augmented platform in two phases: a passive phase, during which the wearable will only record location, and an active phase, in which end users receive orientation cueing during location recording. One group will perform the active phase first and then the passive phase; the other group will follow the reverse order. We will assess acceptability, appropriateness, and feasibility, focusing on experiences with VIS4ION. In addition, we will test another cohort of students for navigational, health, and well-being improvements, comparing weeks 1 to 4. We will also conduct a process evaluation according to the Saunders Framework. Finally, we will extend our computer vision and digital twinning technique to a 12-block spatial grid in Bangkok, providing aid in a more complex environment. DISCUSSION: Although electronic navigation aids seem like an attractive solution, there are several barriers to their use; chief among them is their dependence on environmental (sensor-based) infrastructure, Wi-Fi/cell connectivity infrastructure, or both. These barriers limit their widespread adoption, particularly in low- and middle-income countries. Here we propose a navigation solution that operates independently of both environmental and Wi-Fi/cell infrastructure. We predict that the proposed platform will support spatial cognition in BLV populations, augmenting personal freedom and agency and promoting health and well-being. TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT03174314, registered 2017.06.02.


Subjects
Low Vision , Humans , Quality of Life , Thailand , Universities , Intelligence , Randomized Controlled Trials as Topic
6.
Radiographics ; 32(4): 1223-32, 2012.
Article in English | MEDLINE | ID: mdl-22556315

ABSTRACT

In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and Image Markup (AIM), a project supported by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG), can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.


Subjects
Data Mining/methods , Database Management Systems , Health Records, Personal , Internet , Neoplasms/diagnosis , Radiology Information Systems , User-Computer Interface , Documentation/methods , United States
8.
ArXiv ; 2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34815983

ABSTRACT

Artificial intelligence (AI) provides a promising solution for streamlining COVID-19 diagnoses. However, concerns surrounding security and trustworthiness impede the collection of large-scale representative medical data, posing a considerable challenge for training a well-generalised model for clinical practice. To address this, we launch the Unified CT-COVID AI Diagnostic Initiative (UCADI), in which the AI model can be distributedly trained and independently executed at each host institution under a federated learning (FL) framework without data sharing. Here we show that our FL model outperformed all the local models by a large margin (test sensitivity/specificity in China: 0.973/0.951; in the UK: 0.730/0.942), achieving performance comparable to that of a panel of professional radiologists. We further evaluated the model on hold-out data (collected from two additional hospitals not participating in the FL) and heterogeneous data (acquired with contrast materials), provided visual explanations for decisions made by the model, and analysed the trade-offs between model performance and communication costs in the federated training process. Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK. Collectively, our work advanced the prospects of utilising federated learning for privacy-preserving AI in digital health.
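The aggregation at the heart of such a framework can be sketched as FedAvg-style weighted parameter averaging: each site trains locally and only model weights travel, never patient data. This is a generic illustration with invented hospital data, not the UCADI implementation.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg round: average the clients' parameter arrays, weighted
    by each client's local sample count, without exchanging raw data."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two hypothetical hospitals, each contributing one small parameter matrix.
hosp_a = [np.array([[0.2, -0.1], [0.4, 0.3]])]
hosp_b = [np.array([[0.6,  0.1], [0.0, 0.5]])]
global_weights = federated_average([hosp_a, hosp_b], client_sizes=[3000, 1000])
print(global_weights[0])  # hosp_a dominates: it holds 75% of the samples
```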

9.
Nat Mach Intell ; 3(12): 1081-1089, 2021 Dec.
Article in English | MEDLINE | ID: mdl-38264185

ABSTRACT

Artificial intelligence provides a promising solution for streamlining COVID-19 diagnoses; however, concerns surrounding security and trustworthiness impede the collection of large-scale representative medical data, posing a considerable challenge for training a well-generalized model in clinical practices. To address this, we launch the Unified CT-COVID AI Diagnostic Initiative (UCADI), where the artificial intelligence (AI) model can be distributedly trained and independently executed at each host institution under a federated learning framework without data sharing. Here we show that our federated learning model considerably outperformed all of the local models (with a test sensitivity/specificity of 0.973/0.951 in China and 0.730/0.942 in the United Kingdom), achieving comparable performance with a panel of professional radiologists. We further evaluated the model on the hold-out (collected from another two hospitals without the federated learning framework) and heterogeneous (acquired with contrast materials) data, provided visual explanations for decisions made by the model, and analysed the trade-offs between the model performance and the communication costs in the federated training process. Our study is based on 9,573 chest computed tomography scans from 3,336 patients collected from 23 hospitals located in China and the United Kingdom. Collectively, our work advanced the prospects of utilizing federated learning for privacy-preserving AI in digital health.

10.
J Digit Imaging ; 23(2): 217-25, 2010 Apr.
Article in English | MEDLINE | ID: mdl-19294468

ABSTRACT

Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of metadata about who was imaged and where and how the image was acquired, it says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the set of graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotations to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes both difficult to extract and compute on. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.


Subjects
Computational Biology/organization & administration , Computer Communication Networks/organization & administration , Diagnostic Imaging/standards , Radiographic Image Enhancement/trends , Radiology Information Systems/organization & administration , Databases, Factual , Diagnostic Imaging/trends , Humans , Interdisciplinary Communication , Medical Records Systems, Computerized , National Cancer Institute (U.S.) , National Institutes of Health (U.S.) , Neoplasms/diagnostic imaging , Program Evaluation , Quality of Health Care , Radiographic Image Enhancement/standards , Software , Systems Integration , United States , User-Computer Interface
11.
Proc IEEE World Congr Serv ; 2020: 1-3, 2020 Oct.
Article in English | MEDLINE | ID: mdl-35983015

ABSTRACT

The ability to move freely, when wanted, is essential to healthy living. Visually impaired and blind persons encounter many disadvantages in their day-to-day activities, including performing work-related tasks. They are at risk of mobility losses, illness, debility, social isolation, and premature mortality. A novel wearable device and computing platform called VIS4ION is reducing these disadvantage gaps and raising living standards for the visually challenged. It provides personal navigation services, serving as a customizable, human-in-the-loop, sensing-to-feedback platform that delivers functional assistance. The platform is configured as a wearable with on-board microcomputers, human-machine interfaces, and sensory augmentation. Mobile edge computing enhances functionality, as more services are unleashed with the computational gains. The meta-level goal is to support spatial cognition, personal freedom, and activities, and to promote health and wellbeing. VIS4ION can be conceptualized as the dovetailing of two thrusts: an on-person navigational and computing device, and a multimodal functional aid providing microservices through the cloud. The device has on-board wireless capabilities connected through Wi-Fi or 4/5G. The cloud-based microservices reduce hardware and power requirements while allowing existing services to be enhanced and new ones added, such as loading new maps and real-time communication via haptic or audio signals. This technology can be made available and affordable in transition economies.

12.
medRxiv ; 2020 May 19.
Article in English | MEDLINE | ID: mdl-32511484

ABSTRACT

Artificial intelligence can potentially play a substantial role in streamlining chest computed tomography (CT) diagnosis of COVID-19 patients. However, several critical hurdles have impeded the development of robust AI models, including the scarcity, isolation, and heterogeneity of CT data generated by diverse institutions. These problems result in a lack of model generalization and therefore prevent application in clinical practice. To overcome this, we proposed the federated learning-based Unified CT-COVID AI Diagnostic Initiative (UCADI, http://www.ai-ct-covid.team/), a decentralized architecture in which the AI model is distributed to and executed at each host institution with the data sources or client ends for training and inference, without sharing individual patient data. Specifically, we first developed an initial AI CT model based on data collected from three Tongji hospitals in Wuhan. After model evaluation, we found that the initial model could identify COVID-19 in Tongji CT test data at near radiologist-level performance (97.5% sensitivity) but performed worse when tested on COVID-19 cases from Wuhan Union Hospital (72% sensitivity), indicating a lack of model generalization. Next, we used the publicly available UCADI framework to build a federated model that integrated COVID-19 CT cases from the Tongji hospitals and Wuhan Union Hospital (WU) without transferring the WU data. The federated model not only performed similarly on the Tongji test data but also improved detection sensitivity (98%) on the WU test cases. The UCADI framework will allow participants worldwide to use and contribute to the model, to deliver a real-world, globally built and validated clinical CT-COVID AI tool. This effort directly supports United Nations Sustainable Development Goal 3, Good Health and Well-Being, and allows sharing and transferring of knowledge to fight this devastating disease around the world.

13.
J Digit Imaging ; 21(3): 257-68, 2008 Sep.
Article in English | MEDLINE | ID: mdl-17534683

ABSTRACT

This paper describes the web-based visualization interface of RadMonitor, a platform-independent web application designed to help manage the complexity of information flow within a health care enterprise. The system eavesdrops on Health Level 7 (HL7) traffic and parses statistical operational information into a database. The information is then presented to the user as a treemap, a graphical visualization scheme that simplifies the display of hierarchical information. While RadMonitor has been implemented for the purpose of analyzing radiology operations, its XML backend allows it to be reused for virtually any other hierarchical data set.
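To make the treemap idea concrete, a minimal one-level "slice" layout is sketched below: each item receives a strip whose area is proportional to its value, and real treemaps apply this recursively while alternating axes. The modality names and counts are invented, not RadMonitor data.

```python
def slice_layout(items, x, y, w, h):
    """Split a rectangle along one axis so each item's strip has an area
    proportional to its value (the core treemap operation; full treemaps
    recurse into each strip and alternate the split axis)."""
    total = sum(value for _, value in items)
    rects, offset = [], 0.0
    for name, value in items:
        strip_w = w * value / total
        rects.append((name, x + offset, y, strip_w, h))
        offset += strip_w
    return rects

# Hypothetical per-modality study counts for one day of operations.
volumes = [("CT", 420), ("MR", 210), ("XR", 840), ("US", 130)]
for name, rx, ry, rw, rh in slice_layout(volumes, 0, 0, 100, 60):
    print(f"{name}: {rw:.1f} x {rh:.1f} at ({rx:.1f}, {ry:.1f})")
```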


Subjects
Data Display , Information Storage and Retrieval/methods , Internet , Medical Records Systems, Computerized , Radiographic Image Enhancement , User-Computer Interface , Computer Graphics , Database Management Systems , Humans , Sensitivity and Specificity , Software Design , United States
15.
Radiographics ; 25(2): 543-8, 2005.
Article in English | MEDLINE | ID: mdl-15798070

ABSTRACT

A Medical Image Resource Center (MIRC)-compliant teaching file system was created that can be integrated into a picture archiving and communication system (PACS) environment. This system models the three-step process necessary for efficient teaching file creation in a PACS environment: (a) identifying and transferring a case quickly and easily during primary interpretation, (b) editing and authoring the case outside of primary interpretation time, and (c) publishing the case locally and via MIRC standard-based mechanisms. Images from interesting cases are e-mailed to the teaching file system from either the PACS workstation or the radiologist's personal computer. Notes and clinical information may be included in the e-mail text to prompt the recollection of case details. Images are automatically extracted from the e-mail and sent to an image repository, and text fields are captured in a database. The World Wide Web-based authoring component provides tools for final authoring of cases and for the manipulation of existing cases. Authors designate access levels for each case, which is then made available locally and, potentially, to the entire MIRC-compliant community. Although this application has not yet been implemented as a departmental solution, it promises to improve and streamline medical education and promote better patient care.
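The intake step, pulling images and note text out of a submitted e-mail, can be sketched with Python's standard email library. This is illustrative only: the repository directory and function name are invented, and the production system would forward images to the image repository and capture the notes in its database.

```python
import email
from email import policy
from pathlib import Path

def extract_case(raw_message: bytes, repo_dir: str = "repo") -> str:
    """Pull image attachments and note text out of a submitted e-mail;
    a sketch of the teaching-file intake step, not the system's code."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)

    # The e-mail text holds notes and clinical information about the case.
    body = msg.get_body(preferencelist=("plain",))
    case_notes = body.get_content() if body else ""

    # Save each image attachment into the (hypothetical) image repository.
    Path(repo_dir).mkdir(exist_ok=True)
    for part in msg.iter_attachments():
        if part.get_content_maintype() == "image":
            fname = part.get_filename() or "unnamed"
            (Path(repo_dir) / fname).write_bytes(part.get_payload(decode=True))

    return case_notes  # would be captured in the case database
```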


Subjects
Radiology Information Systems , Radiology/education
18.
Summit Transl Bioinform ; 2009: 106-10, 2009 Mar 01.
Article in English | MEDLINE | ID: mdl-21347180

ABSTRACT

Integrating and relating images with clinical and molecular data is a crucial activity in translational research, but challenging because the information in images is not explicit in standard computer-accessible formats. We have developed an ontology-based representation of the semantic contents of radiology images called AIM (Annotation and Image Markup). AIM specifies the quantitative and qualitative content that researchers extract from images. The AIM ontology enables semantic image annotation and markup, specifying the entities and relations necessary to describe images. AIM annotations, represented as instances in the ontology, enable key use cases for images in translational research such as disease status assessment, query, and inter-observer variation analysis. AIM will enable ontology-based query and mining of images, and integration of images with data in other ontology-annotated bioinformatics databases. Our ultimate goal is to enable researchers to link images with related scientific data so they can learn the biological and physiological significance of the image content.

19.
Article in English | MEDLINE | ID: mdl-19964202

ABSTRACT

An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human (or machine) observer. An image markup is the set of graphical symbols placed over the image to depict an annotation. In the majority of current clinical and research imaging practice, markup is captured in proprietary formats and annotations are referenced only in free-text radiology reports. This makes these annotations difficult to query, retrieve, and compute upon, hampering their integration into other data mining and analysis efforts. This paper describes the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG) Annotation and Image Markup (AIM) project, focusing on how to use AIM to query for annotations. The AIM project delivers an information model for image annotation and markup. The model uses controlled terminologies for important concepts. All of the classes and attributes of the model have been harmonized with the other models and common data elements in use at the National Cancer Institute. The project also delivers the XML schemata necessary to instantiate AIM annotations in XML, as well as a software application for translating AIM XML into DICOM SR and HL7 CDA. Large collections of AIM annotations can be built and then queried as Grid or Web services. Using the tools of the AIM project, image annotations and their markup can be captured and stored in human- and machine-readable formats. This enables the inclusion of human image observation and inference as part of larger data mining and analysis activities.
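A minimal sketch of querying a collection of annotations follows, using invented element names and Python's standard-library XML parser; real AIM instances conform to the published schema and would typically be queried through the project's Grid or Web services.

```python
import xml.etree.ElementTree as ET

# Invented, simplified stand-in for a collection of AIM-like annotations.
doc = """<annotations>
  <ImageAnnotation>
    <finding label="mass"/><calculation name="long axis" value="23.4" unit="mm"/>
  </ImageAnnotation>
  <ImageAnnotation>
    <finding label="cyst"/><calculation name="long axis" value="8.1" unit="mm"/>
  </ImageAnnotation>
</annotations>"""

root = ET.fromstring(doc)
# Query: report the measured long axis of every annotation labelled "mass".
for ann in root.findall("ImageAnnotation"):
    finding = ann.find("finding")
    if finding is not None and finding.get("label") == "mass":
        calc = ann.find("calculation")
        print(calc.get("name"), calc.get("value"), calc.get("unit"))
```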


Subjects
Diagnostic Imaging/methods , Biomedical Engineering , Computational Biology , Databases, Factual , Diagnostic Imaging/statistics & numerical data , Humans , User-Computer Interface
20.
J Digit Imaging ; 18(4): 326-32, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16132484

ABSTRACT

Acquiring, implementing, and maintaining a picture archiving and communication system (PACS) is an enduring and complex endeavor. To succeed, a large-scale project such as this requires efficient and effective communication among a large number of stakeholders; sharing of complex documentation; and recording of ideas, experiences, events such as meetings, and project milestones. Often, mass-market technologies designed for other purposes can be used to solve specific complex problems in healthcare. In this case, we wanted to explore whether popular weblogging or "blogging" software could meet our needs. We reviewed a number of well-known blog software packages and evaluated them against a set of criteria. We looked at simplicity of installation, configuration, and management. We also wanted an intuitive, Web-based interface for end users, low cost of ownership, use of open-source software, and a secure forum for all PACS team members. We chose and implemented the Invision Power Board for two purposes: local PACS administration and a national PACS users' group discussion. We conclude that off-the-shelf, state-of-the-art, mass-market software such as that currently popular for weblogging or "blogging" can be very useful in managing the variety of communications necessary for the successful implementation of a PACS.


Subjects
Information Management , Internet , Radiology Information Systems , Radiology Information Systems/organization & administration , Software