Results 1 - 7 of 7
1.
J Digit Imaging ; 34(4): 1005-1013, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34405297

ABSTRACT

Real-time execution of machine learning (ML) pipelines on radiology images is difficult due to limited computing resources in clinical environments, whereas running them in research clusters requires efficient data transfer capabilities. We developed Niffler, an open-source Digital Imaging and Communications in Medicine (DICOM) framework that enables ML and processing pipelines in research clusters by efficiently retrieving images from the hospitals' PACS and extracting the metadata from the images. We deployed Niffler at our institution (Emory Healthcare, the largest healthcare network in the state of Georgia) and retrieved data from 715 scanners spanning 12 sites, up to 350 GB/day, continuously in real time as a DICOM data stream over the past 2 years. We also used Niffler to retrieve images in bulk, on demand, based on user-provided filters to facilitate several research projects. This paper presents the architecture of Niffler and three such use cases. First, we executed an IVC filter detection and segmentation pipeline on abdominal radiographs in real time, which classified 989 test images with an accuracy of 96.0%. Second, we applied the Niffler Metadata Extractor to understand the operational efficiency of individual MRI systems based on calculated metrics. We benchmarked the accuracy of the calculated exam time windows by comparing Niffler against the Clinical Data Warehouse (CDW). Niffler accurately identified the scanners' examination timeframes and idling times, whereas the CDW falsely depicted several exam overlaps due to human errors. Third, with metadata extracted from the images by Niffler, we identified scanners with misconfigured time and reconfigured five of them. Our evaluations highlight how Niffler enables real-time ML and processing pipelines in a research cluster.


Subject(s)
Radiology Information Systems, Radiology, Data Warehousing, Humans, Machine Learning, Radiography
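The abstract above describes retrieving images in bulk based on user-provided metadata filters. A minimal sketch of that idea, where the helper names and metadata dictionaries are illustrative assumptions (the real framework reads DICOM headers from a live PACS stream):

```python
# Hypothetical filter-based selection over extracted DICOM metadata.
# Field names follow DICOM attribute conventions; the helpers are not
# Niffler's actual API.

def extract_metadata(image_headers, fields):
    """Keep only the requested metadata attributes from one image's headers."""
    return {f: image_headers[f] for f in fields if f in image_headers}

def matches_filter(metadata, user_filter):
    """True when every filter key is present with the expected value."""
    return all(metadata.get(k) == v for k, v in user_filter.items())

# Example: select abdominal radiographs for a bulk on-demand retrieval.
headers = [
    {"Modality": "DX", "BodyPartExamined": "ABDOMEN", "StationName": "DX-01"},
    {"Modality": "MR", "BodyPartExamined": "HEAD", "StationName": "MR-07"},
]
wanted = [extract_metadata(h, ["Modality", "BodyPartExamined"])
          for h in headers]
selected = [m for m in wanted
            if matches_filter(m, {"Modality": "DX",
                                  "BodyPartExamined": "ABDOMEN"})]
```

The same match predicate can serve both the real-time stream (applied per arriving image) and the on-demand bulk path (applied over a stored metadata index).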
2.
Comput Netw ; 203, 2022 Feb 11.
Article in English | MEDLINE | ID: mdl-35082552

ABSTRACT

Small-scale data centers at the edge are becoming prominent in offering various services to end-users following the cloud model, while avoiding the high latency inherent to classic cloud environments when accessed from remote Internet regions. However, several challenges must be addressed before end-users can find and consume the relevant services from the edge at Internet scale. First, the scale and diversity of the edge hinder seamless access. Second, no framework exists in which researchers can openly yet securely share their services and data among themselves and with external consumers over the Internet. Third, the lack of a unified interface and of trust across the service providers hinders their interchangeability when composing workflows by chaining services. Thus, creating a workflow from services deployed on various edge nodes is presently impractical. This paper presents the design of Viseu, a latency-aware blockchain framework that provides Virtual Internet Services at the Edge. Viseu aims to solve the puzzle of network service discovery at the edge, considering the peers' reputation and latency when choosing service instances. Viseu enables peers to share their computational resources, services, and data with one another in an untrusted environment, rather than relying on a set of trusted service providers. By composing workflows from the peers' services, rather than confining peers to pre-established service provider and consumer roles, Viseu natively facilitates scientific collaboration across the peers. Furthermore, by offering services from multiple peers close to the end-users, Viseu also minimizes end-to-end latency and data loss in service execution at the Internet scale.
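The abstract mentions choosing service instances by weighing peer reputation against latency. A toy sketch of one such ranking; the score form, weights, and latency normalization are all assumptions, not Viseu's actual selection algorithm:

```python
# Hypothetical reputation/latency trade-off for picking a service instance.
# Reputation is assumed to be in [0, 1]; latency is folded into (0, 1] so
# that a lower round-trip time yields a higher score.

def score(peer, w_rep=0.5, w_lat=0.5):
    latency_term = 1.0 / (1.0 + peer["latency_ms"] / 100.0)
    return w_rep * peer["reputation"] + w_lat * latency_term

def choose_instance(peers):
    """Return the candidate peer with the best combined score."""
    return max(peers, key=score)

peers = [
    {"id": "edge-a", "reputation": 0.90, "latency_ms": 40},
    {"id": "edge-b", "reputation": 0.60, "latency_ms": 5},
    {"id": "edge-c", "reputation": 0.95, "latency_ms": 300},
]
best = choose_instance(peers)
```

With equal weights, the nearby, well-reputed edge-a beats both the fast but poorly reputed edge-b and the reputable but distant edge-c, which matches the paper's goal of balancing trust against end-to-end latency.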

3.
IEEE Access ; 10: 36268-36285, 2022.
Article in English | MEDLINE | ID: mdl-36199437

ABSTRACT

Closed-loop Vagus Nerve Stimulation (VNS) based on physiological feedback signals is a promising approach to regulate organ functions and develop therapeutic devices. Designing closed-loop neurostimulation systems requires simulation environments and computing infrastructures that support i) modeling the physiological responses of organs under neuromodulation, also known as physiological models, and ii) the interaction between the physiological models and the neuromodulation control algorithms. However, existing simulation platforms do not support modeling closed-loop VNS control systems without extensive rewriting of computer code and manual deployment and configuration of programs. The CONTROL-CORE project aims to develop a flexible software platform for designing and implementing closed-loop VNS systems. This paper proposes the software architecture and the elements of the CONTROL-CORE platform that allow the interaction between a controller and a physiological model in feedback. CONTROL-CORE facilitates modular simulation and deployment of closed-loop peripheral neuromodulation control systems, spanning multiple organizations securely and concurrently. CONTROL-CORE allows simulations to run on different operating systems, be developed in various programming languages (such as Matlab, Python, C++, and Verilog), and be run locally, in containers, and in a distributed fashion. The CONTROL-CORE platform allows users to create tools and testbenches to facilitate sophisticated simulation experiments. We tested the CONTROL-CORE platform in the context of closed-loop control of cardiac physiological models, including pulsatile and nonpulsatile rat models. These were tested using various controllers, such as Model Predictive Control (MPC) and Long Short-Term Memory (LSTM)-based controllers. Our wide range of use cases and evaluations shows the performance, flexibility, and usability of the CONTROL-CORE platform.
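The controller/physiological-model feedback interaction described above can be sketched as a simple simulation loop. The first-order "organ" model and the proportional-style controller below are toy stand-ins for illustration only, not the project's pulsatile rat models or its MPC/LSTM controllers:

```python
# Toy closed loop: measure the model state, compute a stimulation command
# from the feedback error, and step the physiological model forward.

def model_step(state, stimulation, tau=0.2):
    # Toy organ response: the state relaxes toward the stimulation level.
    return state + tau * (stimulation - state)

def controller(setpoint, measured, gain=0.8):
    # Command nudges stimulation toward closing the feedback error.
    return measured + gain * (setpoint - measured)

state, setpoint = 0.0, 1.0
for _ in range(200):              # measure -> control -> stimulate -> repeat
    stim = controller(setpoint, state)
    state = model_step(state, stim)
```

In CONTROL-CORE the two halves of this loop can live in different programs, languages, and machines; the platform's job is to carry the measured state and the stimulation command between them each iteration.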

4.
IEEE Access ; 9: 10621-10633, 2021.
Article in English | MEDLINE | ID: mdl-35966128

ABSTRACT

Understanding system performance metrics enables better utilization of radiology resources through more targeted interventions. The images produced by radiology scanners typically follow the DICOM (Digital Imaging and Communications in Medicine) standard format. DICOM images contain textual metadata that can be used to calculate key timing parameters, such as exact study durations and scanner utilization. However, hospital networks lack the resources and capabilities to extract the metadata from the images quickly and to automatically compute scanner utilization properties. Thus, they resort to using data records from the Radiology Information Systems (RIS). However, data acquired from RIS are prone to human errors, rendering many derived key performance metrics inadequate and inaccurate. Hence, there is motivation to establish real-time transfer of DICOM images from the scanners' Picture Archiving and Communication Systems (PACS) to research clusters, where such metadata processing can evaluate scanner utilization metrics efficiently and quickly. This paper analyzes the scanners' utilization by developing a real-time monitoring framework that retrieves radiology images into a research cluster using the DICOM networking protocol and then extracts and processes the metadata from the images. Our proposed approach facilitates a better understanding of scanner utilization across a vast healthcare network by observing properties such as study duration, the interval between encounters, and the series count of studies. Benchmarks against the RIS data indicate that our framework, based on real-time PACS data, estimates scanner utilization more accurately. Furthermore, our framework has been running stably, performing its computation in pseudo real-time for more than two years on our extensive healthcare network.
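The utilization metrics named above (study duration, interval between encounters) reduce to arithmetic over DICOM timestamps. A minimal sketch, assuming DICOM's HHMMSS time convention and same-day studies; the helper names and sample values are illustrative, not the paper's code:

```python
# Derive study duration and inter-study idle gap from HHMMSS timestamps,
# mimicking DICOM's TM (time) value representation for same-day studies.
from datetime import datetime

def parse_dicom_time(hhmmss):
    return datetime.strptime(hhmmss, "%H%M%S")

def study_duration_s(first_series_time, last_series_time):
    """Seconds elapsed between the first and last series of a study."""
    return (parse_dicom_time(last_series_time)
            - parse_dicom_time(first_series_time)).total_seconds()

# Two consecutive studies on the same scanner, ordered by start time.
studies = [("081500", "083045"), ("084000", "091210")]
durations = [study_duration_s(start, end) for start, end in studies]

# Idle gap: time between the end of study 1 and the start of study 2.
idle_gap_s = study_duration_s(studies[0][1], studies[1][0])
```

A real implementation would also handle studies crossing midnight (via the paired DA date attributes) and fractional-second TM values, which this sketch omits.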

5.
IEEE Access ; 9: 131733-131745, 2021.
Article in English | MEDLINE | ID: mdl-34631327

ABSTRACT

Closed-loop neuromodulation control systems facilitate regulating abnormal physiological processes by recording neurophysiological activities and modifying those activities through feedback loops. Designing such systems requires interoperable service composition that includes cycles. Workflow frameworks enable standard modular architectures, offering reproducible automated pipelines. However, those frameworks limit their support to executions represented by directed acyclic graphs (DAGs). DAGs require a pre-defined start and end execution step with no cycles, preventing researchers from using standard workflow languages as-is for closed-loop workflows and pipelines. In this paper, we present NEXUS, a workflow orchestration framework for distributed analytics systems. NEXUS proposes a Software-Defined Workflows approach, inspired by Software-Defined Networking (SDN), which separates the data flows across the service instances from the control flows. NEXUS enables the creation of interoperable workflows with closed loops by defining workflows in a logically centralized manner from microservices that represent each execution step. The centralized NEXUS orchestrator facilitates dynamically composing and managing scientific workflows from the services and existing workflows, with minimal restrictions. NEXUS represents complex workflows as directed hypergraphs (DHGs) rather than DAGs. As a use case, we illustrate the seamless execution of neuromodulation control systems by supporting loops in a workflow. Our evaluations highlight the feasibility, flexibility, performance, and scalability of NEXUS in modeling and executing closed-loop workflows.
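The DAG limitation the abstract describes is concrete: a controller-model feedback workflow contains a cycle, so any engine that validates workflows as acyclic must reject it. A small sketch with made-up node names, using standard depth-first-search cycle detection:

```python
# Detect cycles in a directed workflow graph (adjacency-list form).
# A DAG-only engine would reject any graph for which has_cycle is True.

def has_cycle(graph):
    """DFS three-color cycle detection: a back edge to a GRAY node = cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, ()):
            if color[nxt] == GRAY:        # back edge found
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# A closed-loop control workflow (cyclic) vs. a conventional pipeline (acyclic).
closed_loop = {"controller": ["model"], "model": ["controller"]}
pipeline = {"ingest": ["transform"], "transform": ["train"], "train": []}
```

The closed-loop graph trips the cycle check while the linear pipeline passes, which is precisely why NEXUS moves to directed hypergraphs instead of constraining workflows to DAGs.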

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5610-5614, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33019249

ABSTRACT

Sepsis, a dysregulated immune response to infection, has been the leading cause of morbidity and mortality in critically ill patients. Multiple studies have demonstrated improved survival outcomes when early treatment is initiated for septic patients. In our previous work, we developed a real-time machine learning algorithm capable of predicting the onset of sepsis four to six hours prior to clinical recognition. In this work, we develop AIDEx, an open-source, EHR-vendor-agnostic platform that consumes live patient data as FHIR resources, securely transports it into a cloud environment, and monitors patients in real time. AIDEx can be easily deployed in clinical environments. Finally, the computation of the sepsis risk scores uses a common design pattern seen in streaming clinical informatics and predictive analytics applications. AIDEx provides a comprehensive case study in the design and development of a production-ready ML platform that integrates with healthcare IT systems.


Subject(s)
Medical Informatics, Sepsis, Algorithms, Critical Illness, Humans, Machine Learning, Sepsis/diagnosis
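The streaming design pattern the abstract alludes to, consuming FHIR Observation resources and scoring patients over a sliding window, can be sketched as follows. The window size and the placeholder scoring function are assumptions; the real sepsis risk comes from the trained ML model, not this toy formula:

```python
# Per-patient sliding window over incoming FHIR Observation resources,
# with a score recomputed on each arrival (streaming-scoring pattern).
from collections import defaultdict, deque

WINDOW = 4  # most recent observations kept per patient (assumed)
windows = defaultdict(lambda: deque(maxlen=WINDOW))

def risk_score(values):
    # Placeholder score: mean heart rate scaled into [0, 1].
    # Stands in for the actual sepsis-prediction model.
    return min(sum(values) / len(values) / 200.0, 1.0)

def ingest(observation):
    """Consume one FHIR Observation and return the patient's updated score."""
    patient = observation["subject"]["reference"]
    value = observation["valueQuantity"]["value"]
    windows[patient].append(value)
    return risk_score(windows[patient])

obs = {"resourceType": "Observation",
       "subject": {"reference": "Patient/42"},
       "valueQuantity": {"value": 110, "unit": "beats/minute"}}
score = ingest(obs)
```

In a production deployment the ingest step would sit behind a secure transport into the cloud environment, and the window state would live in a durable store rather than in-process memory.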
7.
JCO Clin Cancer Inform ; 4: 491-499, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32479186

ABSTRACT

PURPOSE: Precision medicine requires an understanding of individual variability, which can only be acquired from large data collections such as those supported by The Cancer Imaging Archive (TCIA). We have undertaken a program to extend the types of data TCIA can support. This, in turn, will enable TCIA to play a key role in precision medicine research by collecting and disseminating high-quality, state-of-the-art, quantitative imaging data that meet the evolving needs of the cancer research community. METHODS: A modular technology platform is presented that would allow existing data resources, such as TCIA, to evolve into a comprehensive data resource that meets the needs of users engaged in translational research for imaging-based precision medicine. This Platform for Imaging in Precision Medicine (PRISM) helps streamline deployment and improves TCIA's efficiency and sustainability. More importantly, its inherent modular architecture facilitates piecemeal adoption by other data repositories. RESULTS: PRISM includes services for managing radiology and pathology images, image-derived features, and associated clinical data. A semantic layer is being built to help users explore diverse collections and pool data sets to create specialized cohorts. PRISM includes tools for image curation and de-identification, as well as image visualization and feature exploration tools. The entire platform is distributed as a series of containerized microservices with representational state transfer (REST) interfaces. CONCLUSION: PRISM is helping modernize, scale, and sustain the technology stack that powers TCIA. Repositories can take advantage of individual PRISM services such as de-identification and quality control. PRISM is helping scale image informatics for cancer research at a time when the size, complexity, and demands to integrate image data with other precision medicine data-intensive commons are mounting.


Subject(s)
Precision Medicine, Radiology, Diagnostic Imaging, Humans, Quality Control
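The de-identification service mentioned in the abstract can be illustrated with a small metadata-scrubbing sketch. The PHI field list, the pseudonym format, and the replacement policy here are all assumptions for illustration, not PRISM's actual rules:

```python
# Hypothetical de-identification: drop protected-health-information (PHI)
# attributes and optionally substitute a stable research pseudonym so the
# same patient maps to the same anonymous ID across studies.

PHI_FIELDS = {"PatientName", "PatientBirthDate", "PatientID"}

def deidentify(metadata, pseudonym_map=None):
    clean = {k: v for k, v in metadata.items() if k not in PHI_FIELDS}
    if pseudonym_map is not None:
        # Reuse the pseudonym if this patient was seen before.
        clean["PatientID"] = pseudonym_map.setdefault(
            metadata.get("PatientID"), f"ANON-{len(pseudonym_map) + 1:04d}")
    return clean

record = {"PatientName": "DOE^JANE", "PatientID": "MRN123",
          "Modality": "CT", "StudyDate": "20200115"}
pseudonyms = {}
clean = deidentify(record, pseudonyms)
```

Real pipelines follow a published profile (such as the DICOM de-identification profiles) covering far more attributes, including dates and burned-in pixel data, than this sketch touches.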