ABSTRACT
Higher brain functions arise from coordinated neural activity between distinct brain regions, but the spatial, temporal, and spectral complexity of these functional connectivity networks (FCNs) has challenged the identification of correlates with neurobehavioral phenotypes. Characterizing behavioral correlates of early-life FCNs is important for understanding the activity-dependent emergence of neurodevelopmental performance and for improving health outcomes. Here, we develop an analysis pipeline for identifying multiplex dynamic FCNs that combine spectral and spatiotemporal characteristics of newborn cortical activity. This data-driven approach automatically uncovers latent networks that show robust neurobehavioral correlations and consistent effects of in utero drug exposure. Altogether, the proposed pipeline provides a robust end-to-end solution for the objective assessment and quantification of neurobehaviorally meaningful network constellations within highly dynamic cortical functions.
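For illustration only, the minimal sketch below shows how a band-wise, time-resolved (multiplex dynamic) connectivity tensor could be assembled from multichannel cortical recordings; the sampling rate, frequency bands, window length, channel count, and the use of correlation as the coupling measure are assumptions for this example, not the paper's actual parameters.

```python
# Sketch: multiplex dynamic connectivity tensor from multichannel recordings.
# All numeric settings here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0                                              # sampling rate in Hz (assumed)
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0)}
WIN = int(5 * FS)                                       # 5-second sliding window
STEP = int(2.5 * FS)                                    # 50% overlap

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter applied along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def multiplex_dynamic_fcn(eeg):
    """eeg: (n_channels, n_samples) array.
    Returns a tensor of shape (n_bands, n_windows, n_channels, n_channels)
    holding one correlation-based connectivity matrix per band and window."""
    n_ch, n_samp = eeg.shape
    starts = range(0, n_samp - WIN + 1, STEP)
    layers = []
    for lo, hi in BANDS.values():
        filtered = bandpass(eeg, lo, hi, FS)
        layers.append([np.corrcoef(filtered[:, s:s + WIN]) for s in starts])
    return np.asarray(layers)

# Example with synthetic data: 19 channels, 2 minutes of recording.
tensor = multiplex_dynamic_fcn(np.random.randn(19, int(120 * FS)))
print(tensor.shape)   # (3, n_windows, 19, 19)
```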
Subject(s)
Brain, Magnetic Resonance Imaging, Newborn Infant, Humans, Brain/diagnostic imaging, Brain Mapping

ABSTRACT
Social distancing is crucial to restrain the spread of diseases such as COVID-19, but complete adherence to safety guidelines is not guaranteed. Monitoring social distancing through mass surveillance is paramount for developing appropriate mitigation plans and exit strategies. Nevertheless, it is a labor-intensive task that is prone to human error and carries plausible breaches of privacy. This paper presents a privacy-preserving adaptive social distance estimation and crowd monitoring solution for camera surveillance systems. We develop a novel person localization strategy through pose estimation, build a privacy-preserving adaptive smoothing and tracking model to mitigate occlusions and noisy/missing measurements, compute inter-personal distances in real-world coordinates, detect social distance infractions, and identify overcrowded regions in a scene. Performance evaluation is carried out by testing the system's ability in person detection, localization, density estimation, anomaly recognition, and high-risk area identification. We compare the proposed system to the latest techniques and examine the performance gain delivered by the localization and smoothing/tracking algorithms. Experimental results indicate a considerable improvement, across different metrics, when utilizing the developed system. In addition, they show its potential and functionality for applications other than social distancing.
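As a rough illustration of the distance-infraction step, the sketch below assumes that per-person ground positions have already been derived from pose estimation and that a camera-to-ground homography is available from calibration; the helper names, the 2 m threshold, and the placeholder homography and coordinates are hypothetical, not the paper's implementation.

```python
# Sketch: flag pairs of people closer than a distance threshold, given
# pixel positions and an assumed camera-to-ground homography H.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def to_ground_plane(pixel_points, H):
    """Map (N, 2) pixel coordinates to ground-plane coordinates (metres)
    using a 3x3 homography estimated during camera calibration."""
    pts = np.hstack([pixel_points, np.ones((len(pixel_points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def find_infractions(pixel_points, H, min_distance=2.0):
    """Return (i, j, distance) for every pair closer than min_distance metres."""
    world = to_ground_plane(np.asarray(pixel_points, dtype=float), H)
    dists = squareform(pdist(world))
    i, j = np.triu_indices(len(world), k=1)
    return [(a, b, dists[a, b]) for a, b in zip(i, j) if dists[a, b] < min_distance]

# Usage with a placeholder calibration and three detected people.
H = np.eye(3)                                # identity homography, for illustration
people = [(320, 700), (350, 705), (900, 650)]
print(find_infractions(people, H))
```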
Subject(s)
COVID-19, Physical Distancing, Algorithms, Crowding, Humans, SARS-CoV-2

ABSTRACT
Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden on health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the settings where their real benefit would be achieved. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline, built entirely on free, open-source software, that bridges this gap: it simplifies the integration of tools and models developed within the AI community into the clinical research setting and provides an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the outcome of these AI tools.
ABSTRACT
With the aim of producing a 3D representation of tumors, imaging and molecular annotation of xenografts and tumors (IMAXT) uses a large variety of modalities to acquire tumor samples and produce a map of every cell in the tumor and its host environment. To handle the large volume and variety of data produced in the project, we developed automatic data workflows and analysis pipelines. We introduce a research methodology in which scientists connect to a cloud environment to perform analysis close to where the data are located, instead of bringing the data to their local computers. Here, we present the data and analysis infrastructure, discuss the unique computational challenges, and describe the analysis chains developed and deployed to generate molecularly annotated tumor models. Registration is achieved using a novel technique involving spherical fiducial marks that are visible in all imaging modalities used within IMAXT. The automatic pipelines are highly optimized and allow processed datasets to be obtained several times faster than current solutions, narrowing the gap between data acquisition and scientific exploitation.
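The abstract does not detail the registration algorithm, but the general principle of fiducial-based rigid alignment can be sketched as below: matched fiducial centroids from two modalities are used to estimate a rotation and translation with a standard Kabsch/Procrustes fit. The function name and synthetic data are illustrative and not the IMAXT implementation.

```python
# Sketch: rigid registration (rotation + translation) from matched fiducials.
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t such that dst ≈ src @ R.T + t, given matched (N, 3)
    fiducial coordinates from the moving (src) and fixed (dst) modality."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - src_c @ R.T
    return R, t

# Synthetic check: recover a known rotation/translation from noisy fiducials.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(6, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([5.0, -3.0, 2.0]) + rng.normal(0, 0.01, (6, 3))
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true, atol=1e-2))   # True
```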
ABSTRACT
Purpose: XNAT is an informatics software platform to support imaging research, particularly in the context of large, multicentre studies of the type that are essential to validate quantitative imaging biomarkers. XNAT provides import, archiving, processing and secure distribution facilities for image and related study data. Until recently, however, modern data visualisation and annotation tools were lacking on the XNAT platform. We describe the background to, and implementation of, an integration of the Open Health Imaging Foundation (OHIF) Viewer into the XNAT environment. We explain the challenges overcome and discuss future prospects for quantitative imaging studies.

Materials and methods: The OHIF Viewer adopts an approach based on the DICOMweb protocol. To allow operation in an XNAT environment, a data-routing methodology was developed to overcome the mismatch between the DICOM and XNAT information models, and a custom viewer panel was created to allow navigation within the viewer between different XNAT projects, subjects and imaging sessions. Modifications to the development environment were made to allow developers to test new code more easily against a live XNAT instance. Major new developments focused on the creation and storage of regions of interest (ROIs) and included: ROI creation and editing tools for both contour- and mask-based regions; a "smart CT" paintbrush tool; the integration of NVIDIA's Artificial Intelligence Assisted Annotation (AIAA); the ability to view surface meshes, fractional segmentation maps and image overlays; and a rapid image reader tool aimed at radiologists. We have incorporated the OHIF microscopy extension and, in parallel, introduced support for microscopy session types within XNAT for the first time.

Results: Integration of the OHIF Viewer within XNAT has been highly successful, and numerous additional and enhanced tools have been created in a programme started in 2017 that is still ongoing. The software has been downloaded more than 3700 times during the course of the development work reported here, demonstrating the impact of the work.

Conclusions: The OHIF open-source, zero-footprint web viewer has been incorporated into the XNAT platform and is now used at many institutions worldwide. Further innovations are envisaged in the near future.
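As a rough illustration of how external tools interact with an XNAT instance, the sketch below lists the imaging sessions of a project through XNAT's REST interface; the host, project ID, and credentials are placeholders, and the endpoint path follows the publicly documented /data/projects/.../experiments convention rather than anything specific to the OHIF integration described here.

```python
# Sketch: list imaging sessions (experiments) in an XNAT project via REST.
import requests

XNAT_HOST = "https://xnat.example.org"        # placeholder instance
PROJECT = "DEMO_PROJECT"                      # placeholder project ID

def list_sessions(host, project, user, password):
    """Return the imaging sessions registered in a project as JSON rows."""
    url = f"{host}/data/projects/{project}/experiments"
    resp = requests.get(url, params={"format": "json"},
                        auth=(user, password), timeout=30)
    resp.raise_for_status()
    return resp.json()["ResultSet"]["Result"]

if __name__ == "__main__":
    for session in list_sessions(XNAT_HOST, PROJECT, "user", "password"):
        print(session.get("ID"), session.get("label"), session.get("xsiType"))
```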
Subject(s)
Artificial Intelligence, Diagnostic Imaging, Archives, Humans, Software

ABSTRACT
Modern technology has pushed us into the information age, making it easier to generate and record vast quantities of new data. Datasets can help in analyzing a situation, giving a better understanding and, more importantly, supporting decision making. Consequently, datasets, and the uses to which they can be put, have become increasingly valuable commodities. This article describes the DroneRF dataset: a radio frequency (RF) based dataset of drones functioning in different modes, including off, on and connected, hovering, flying, and video recording. The dataset contains recordings of RF activities, composed of 227 recorded segments collected from 3 different drones, as well as recordings of background RF activities with no drones. The data were collected by RF receivers that intercept the drone's communications with the flight control module. The receivers are connected via PCIe cables to two laptops that run a program responsible for fetching, processing and storing the sensed RF data in a database. An example of how this dataset can be interpreted and used can be found in the related research article "RF-based drone detection and identification using deep learning approaches: an initiative towards a large open source drone database" (Al-Sa'd et al., 2019).
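A minimal sketch of how one recorded segment might be turned into spectrogram features for drone detection is given below; the file name, CSV layout, and receiver sample rate are assumptions made for illustration, not a documented loader for the DroneRF release.

```python
# Sketch: load one raw RF segment and compute log-power spectrogram features.
import numpy as np
from scipy.signal import spectrogram

FS = 40e6                                   # receiver sample rate (assumed)

def segment_spectrogram(path, nperseg=2048):
    """Load a segment stored as comma-separated samples and return
    log-power spectrogram features (frequency bins x time frames)."""
    samples = np.loadtxt(path, delimiter=",")
    f, t, sxx = spectrogram(samples, fs=FS, nperseg=nperseg)
    return f, t, 10 * np.log10(sxx + 1e-12)  # dB scale; offset avoids log(0)

# Example usage with a hypothetical segment file from the dataset.
# f, t, features = segment_spectrogram("drone_segment_001.csv")
# print(features.shape)
```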