Results 1 - 11 of 11
1.
Pac Symp Biocomput ; 29: 65-80, 2024.
Article in English | MEDLINE | ID: mdl-38160270

ABSTRACT

Topological data analysis (TDA) combined with machine learning (ML) algorithms is a powerful approach for investigating complex brain interaction patterns in neurological disorders such as epilepsy. However, the use of ML algorithms and TDA for analysis of aberrant brain interactions requires substantial domain knowledge in computing as well as pure mathematics. To lower the threshold for clinical and computational neuroscience researchers to effectively use ML algorithms together with TDA to study neurological disorders, we introduce an integrated web platform called MaTiLDA. MaTiLDA is the first tool that enables users to intuitively use TDA methods together with ML models to characterize interaction patterns derived from neurophysiological signal data such as electroencephalogram (EEG) recorded during routine clinical practice. MaTiLDA features support for TDA methods, such as persistent homology, that enable classification of signal data using ML models to provide insights into complex brain interaction patterns in neurological disorders. We demonstrate the practical use of MaTiLDA by analyzing high-resolution intracranial EEG from refractory epilepsy patients to characterize the distinct phases of seizure propagation to different brain regions. The MaTiLDA platform is available at: https://bmhinformatics.case.edu/nicworkflow/MaTiLDA.
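A minimal sketch of the general persistent-homology-plus-classifier pattern described above, assuming the ripser and scikit-learn packages and synthetic epochs in place of real iEEG; it is not the MaTiLDA pipeline itself:

```python
# Minimal sketch (not the MaTiLDA pipeline): persistent-homology features from
# EEG channel correlations, fed to a scikit-learn classifier. Synthetic data only.
import numpy as np
from ripser import ripser                      # pip install ripser
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def topological_features(epoch, maxdim=1):
    """epoch: (n_channels, n_samples) array -> small persistence summary vector."""
    corr = np.corrcoef(epoch)                  # channel-by-channel correlation
    dist = 1.0 - np.abs(corr)                  # turn correlation into a distance
    np.fill_diagonal(dist, 0.0)
    dgms = ripser(dist, distance_matrix=True, maxdim=maxdim)["dgms"]
    feats = []
    for dgm in dgms:                           # one diagram per homology dimension
        finite = dgm[np.isfinite(dgm[:, 1])]
        pers = finite[:, 1] - finite[:, 0] if len(finite) else np.array([0.0])
        feats += [pers.sum(), pers.max(), len(finite)]   # total/max persistence, bar count
    return np.array(feats)

# Toy "seizure" vs "non-seizure" epochs (random data; replace with real iEEG epochs).
X = np.array([topological_features(rng.standard_normal((16, 512))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5))
```

Correlation distances are only one of several ways to turn multichannel signals into a distance matrix for TDA; the platform's own feature construction may differ.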


Subject(s)
Epilepsy; Signal Processing, Computer-Assisted; Humans; Computational Biology; Brain; Machine Learning; Data Analysis
2.
medRxiv ; 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37425941

ABSTRACT

The rapid adoption of machine learning (ML) algorithms in a wide range of biomedical applications has highlighted issues of trust and a lack of understanding of the results generated by ML algorithms. Recent studies have focused on developing interpretable ML models and establishing guidelines for transparency and ethical use, ensuring the responsible integration of machine learning in healthcare. In this study, we demonstrate the effectiveness of ML interpretability methods in providing important insights into the dynamics of brain network interactions in epilepsy, a serious neurological disorder affecting more than 60 million persons worldwide. Using high-resolution intracranial electroencephalogram (EEG) recordings from a cohort of 16 patients, we developed high-accuracy ML models to categorize these brain activity recordings into seizure and non-seizure classes, followed by the more complex multi-class task of delineating the different stages of seizure progression to different parts of the brain. We applied three distinct types of interpretability methods to the high-accuracy ML models to understand the relative contributions of different categories of brain interaction patterns, including multi-foci interactions, which play an important role in distinguishing between different brain states. The results of this study demonstrate for the first time that post-hoc interpretability methods enable us to understand why ML algorithms generate a given set of results and how variations in input values affect their accuracy. In particular, we show that interpretability methods can be used to identify brain regions and interaction patterns that have a significant impact on seizure events. These results highlight the importance of integrating ML algorithms with interpretability methods in studies of aberrant brain networks and in the wider domain of biomedical research.
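As one concrete, generic example of a post-hoc interpretability method (not necessarily one of the three used in the study), a permutation-importance sketch with scikit-learn on toy features might look like this:

```python
# Generic illustration of one post-hoc interpretability method (permutation
# feature importance); the study's own three methods are not reproduced here.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical feature matrix: rows = EEG epochs, columns = brain-interaction features.
feature_names = [f"interaction_{i}" for i in range(10)]
X = rng.standard_normal((300, 10))
y = (X[:, 2] + 0.5 * X[:, 7] + 0.3 * rng.standard_normal(300) > 0).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```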

3.
Database (Oxford) ; 2022, 2022 11 11.
Article in English | MEDLINE | ID: mdl-36367313

ABSTRACT

To preserve scientific data created by publicly and/or philanthropically funded research projects, and to make those data ready for exploitation using recent and ongoing advances in large-scale computational modeling methods, shared data must use common, evolving standards for formatting, identification, and annotation. The OpenNeuro.org archive, built first as a repository for magnetic resonance imaging data based on the Brain Imaging Data Structure (BIDS) formatting standard, aims to house and share all types of human neuroimaging data. Here, we present NEMAR.org, a web gateway to OpenNeuro for human neuroelectromagnetic data. NEMAR allows users to search through, visually explore and assess the quality of shared electroencephalography (EEG), magnetoencephalography and intracranial EEG data, and then to directly process selected data on the high-performance computing resources of the San Diego Supercomputer Center via the Neuroscience Gateway (nsgportal.org, NSG), a freely available web portal to high-performance computing serving a variety of neuroscientific analysis environments and tools. Combined, OpenNeuro, NEMAR and NSG form an efficient, integrated data, tools and compute resource for human neuroimaging data analysis and meta-analysis. Database URL: https://nemar.org.
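For readers who download a BIDS-formatted EEG dataset from OpenNeuro/NEMAR for local analysis, a minimal MNE-BIDS loading sketch could look like the following; the dataset path, subject and task labels are placeholders, not a specific archived dataset:

```python
# Sketch only: reading a locally downloaded, BIDS-formatted EEG dataset with
# MNE-BIDS before local analysis. Path, subject and task names are placeholders.
from mne_bids import BIDSPath, read_raw_bids

bids_root = "/data/ds_example"            # local copy of a BIDS dataset (placeholder)
bids_path = BIDSPath(subject="01", task="rest", datatype="eeg", root=bids_root)

raw = read_raw_bids(bids_path)            # channels, events and metadata come from BIDS sidecars
raw.load_data()
raw.filter(l_freq=1.0, h_freq=40.0)       # basic band-pass before further processing
print(raw.info)
```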


Subject(s)
Access to Information; Neurosciences; Humans; Databases, Factual; Magnetic Resonance Imaging; Neurosciences/methods
4.
Neuroimage ; 224: 116778, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32289453

ABSTRACT

The EEGLAB signal processing environment is currently the leading open-source software for processing electroencephalographic (EEG) data. The Neuroscience Gateway (NSG, nsgportal.org) is a web- and API-based portal allowing users to easily run a variety of neuroscience-related software on high-performance computing (HPC) resources in the U.S. XSEDE network. We have reported recently (Delorme et al., 2019) on the Open EEGLAB Portal expansion of the free NSG services, which allows the neuroscience community to build and run MATLAB pipelines using the EEGLAB tool environment. We are now releasing an EEGLAB plug-in, nsgportal, that interfaces EEGLAB with NSG directly from within EEGLAB running in MATLAB on any personal lab computer. The plug-in features a flexible MATLAB graphical user interface (GUI) that allows users to easily submit, interact with, and manage NSG jobs, and to retrieve and examine their results. Command-line nsgportal tools supporting these GUI functionalities allow EEGLAB users and plug-in tool developers to build largely automated functions and workflows that include optional NSG job submission and processing. Here we present details on the nsgportal implementation and documentation, provide user tutorials on example applications, and show sample test results comparing computation times on HPC versus laptop processing.
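The plug-in itself is MATLAB code; purely for illustration, the submit/poll/retrieve pattern it automates can be sketched in Python against a hypothetical gateway REST endpoint (none of the URLs, field names or credentials below are the real NSG API):

```python
# Illustrative only: the general submit / poll / download pattern that the
# nsgportal plug-in automates from MATLAB. Endpoint paths, field names and
# credentials are hypothetical placeholders, not the actual NSG REST interface.
import time
import requests

BASE = "https://gateway.example.org/api"          # hypothetical gateway URL
AUTH = ("username", "password")                   # placeholder credentials

# 1. Submit: upload a zipped EEGLAB job (data + script) to the gateway.
with open("eeglab_job.zip", "rb") as fh:
    job = requests.post(f"{BASE}/jobs", auth=AUTH,
                        files={"input": fh},
                        data={"tool": "EEGLAB", "runtime_hours": 1}).json()

# 2. Poll until the remote HPC run finishes.
while True:
    status = requests.get(f"{BASE}/jobs/{job['id']}", auth=AUTH).json()["status"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(60)

# 3. Retrieve results for local inspection in EEGLAB/MATLAB.
if status == "COMPLETED":
    out = requests.get(f"{BASE}/jobs/{job['id']}/results", auth=AUTH)
    open("results.zip", "wb").write(out.content)
```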


Subject(s)
Electroencephalography; Neurosciences; Software; User-Computer Interface; Algorithms; Electroencephalography/methods; Automatic Data Processing; Humans
5.
Neuron ; 103(3): 395-411.e5, 2019 08 07.
Article in English | MEDLINE | ID: mdl-31201122

ABSTRACT

Computational models are powerful tools for exploring the properties of complex biological systems. In neuroscience, data-driven models of neural circuits that span multiple scales are increasingly being used to understand brain function in health and disease. But their adoption and reuse have been limited by the specialist knowledge required to evaluate and use them. To address this, we have developed Open Source Brain, a platform for sharing, viewing, analyzing, and simulating standardized models from different brain regions and species. Model structure and parameters can be automatically visualized and their dynamical properties explored through browser-based simulations. Infrastructure and tools for collaborative interaction, development, and testing are also provided. We demonstrate how existing components can be reused by constructing new models of inhibition-stabilized cortical networks that match recent experimental results. These features of Open Source Brain improve the accessibility, transparency, and reproducibility of models and facilitate their reuse by the wider community.
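As a toy illustration of the inhibition-stabilized regime targeted by the new cortical network models, the following two-population rate model (with illustrative parameters, not those of the shared models) shows the paradoxical drop in inhibitory firing when inhibitory cells receive extra drive:

```python
# Minimal two-population rate model of an inhibition-stabilized network (ISN).
# Parameters are illustrative only and are not taken from the paper's models.
import numpy as np

W_EE, W_EI, W_IE, W_II = 2.0, 2.5, 3.0, 2.0   # recurrent weights; W_EE > 1 => E alone is unstable
tau_E, tau_I, dt = 20e-3, 10e-3, 1e-4         # time constants (s) and Euler step

def steady_state(I_E, I_I, steps=50_000):
    r_E = r_I = 0.0
    relu = lambda x: max(x, 0.0)
    for _ in range(steps):
        r_E += dt / tau_E * (-r_E + relu(W_EE * r_E - W_EI * r_I + I_E))
        r_I += dt / tau_I * (-r_I + relu(W_IE * r_E - W_II * r_I + I_I))
    return r_E, r_I

# Paradoxical ISN signature: extra drive to inhibitory cells *lowers* their rate.
for I_I in (2.0, 3.0):
    r_E, r_I = steady_state(I_E=5.0, I_I=I_I)
    print(f"I_I = {I_I:.1f}  ->  r_E = {r_E:.2f}, r_I = {r_I:.2f}")
```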


Subject(s)
Brain/physiology; Computational Biology/standards; Computer Simulation; Models, Neurological; Neurons/physiology; Brain/cytology; Computational Biology/methods; Humans; Internet; Neural Networks, Computer; Online Systems
6.
Phys Med Biol ; 55(11): 3077-86, 2010 Jun 07.
Article in English | MEDLINE | ID: mdl-20463376

ABSTRACT

Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the dose planning method (DPM) Monte Carlo dose calculation package (Sempau et al 2000 Phys. Med. Biol. 45 2263-91) on the GPU architecture under the CUDA platform. The implementation has been tested against the original sequential DPM code running on the CPU in phantoms with water-lung-water or water-bone-water slab geometry. A 20 MeV mono-energetic electron point source or a 6 MV photon point source was used in our validation. The results demonstrate adequate accuracy of our GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of about 5.0-6.6 were observed using an NVIDIA Tesla C1060 GPU card compared with a 2.27 GHz Intel Xeon CPU.
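A highly simplified, CPU-side NumPy sketch of the core Monte Carlo sampling step in a layered slab follows; it ignores scattering and energy deposition and uses placeholder attenuation coefficients, so nothing here reproduces DPM or its CUDA port:

```python
# Toy illustration of exponential free-path sampling through a water/bone/water
# slab. Attenuation coefficients are placeholders, not commissioned beam data.
import numpy as np

rng = np.random.default_rng(0)
n_photons = 1_000_000

# Slab geometry along z (cm): water | bone | water.
boundaries = np.array([0.0, 5.0, 7.0, 12.0])
mu = np.array([0.07, 0.16, 0.07])                   # placeholder linear attenuation (1/cm)

# Sample the optical depth at which each photon first interacts.
optical_depth = -np.log(rng.random(n_photons))

# Convert optical depth to geometric depth by walking through the slabs.
depth = np.zeros(n_photons)
remaining = optical_depth.copy()
alive = np.ones(n_photons, dtype=bool)
for i in range(3):
    thickness = boundaries[i + 1] - boundaries[i]
    step = remaining / mu[i]
    interacts_here = alive & (step <= thickness)
    depth[interacts_here] = boundaries[i] + step[interacts_here]
    alive &= ~interacts_here
    remaining -= mu[i] * thickness                   # photons that cross this slab
depth[alive] = np.nan                                # transmitted without interacting

hist, edges = np.histogram(depth[~np.isnan(depth)], bins=60, range=(0.0, 12.0))
print("interactions per 2 mm bin (first five):", hist[:5])
print("fraction transmitted:", np.mean(np.isnan(depth)))
```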


Subject(s)
Radiotherapy/methods; Algorithms; Bone and Bones/radiation effects; Computer Simulation; Electrons; Humans; Lung/radiation effects; Models, Statistical; Monte Carlo Method; Phantoms, Imaging; Photons; Software; Time Factors; Water/chemistry
7.
Phys Med Biol ; 55(1): 207-19, 2010 Jan 07.
Article in English | MEDLINE | ID: mdl-20009197

ABSTRACT

Online adaptive radiation therapy (ART) promises the ability to deliver an optimal treatment in response to daily patient anatomic variation. A major technical barrier to the clinical implementation of online ART is the requirement of rapid image segmentation. Deformable image registration (DIR) has been used as an automated segmentation method to transfer tumor/organ contours from the planning image to daily images. However, the current computational time of DIR is insufficient for online ART. In this work, this issue is addressed by using computer graphics processing units (GPUs). A gray-scale-based DIR algorithm called demons and five of its variants were implemented on GPUs using the compute unified device architecture (CUDA) programming environment. The spatial accuracy of these algorithms was evaluated over five sets of pulmonary 4D CT images with an average size of 256 x 256 x 100 voxels and more than 1100 expert-determined landmark point pairs each. For all the testing scenarios presented in this paper, the GPU-based DIR computation required around 7 to 11 s to yield an average 3D error ranging from 1.5 to 1.8 mm. Interestingly, the original passive-force demons algorithm outperforms the subsequently proposed variants when accuracy, efficiency, and ease of implementation are considered together.
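A compact 2-D NumPy/SciPy sketch of the basic passive-force demons update with Gaussian regularization on a toy image pair follows; the paper's CUDA implementation and its variants are not reproduced:

```python
# Basic passive-force demons in 2-D: iteratively update a displacement field
# using the fixed-image gradient, then smooth the field. Toy data only.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed, moving, iters=100, sigma=1.5, eps=1e-6):
    gy, gx = np.gradient(fixed)                     # gradients of the fixed image
    grad_sq = gx**2 + gy**2
    yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                         np.arange(fixed.shape[1]), indexing="ij")
    uy = np.zeros_like(fixed)
    ux = np.zeros_like(fixed)
    for _ in range(iters):
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
        diff = warped - fixed
        denom = grad_sq + diff**2 + eps
        ux -= diff * gx / denom                     # demons force along x
        uy -= diff * gy / denom                     # demons force along y
        ux = gaussian_filter(ux, sigma)             # regularize the displacement field
        uy = gaussian_filter(uy, sigma)
    return ux, uy

# Toy example: recover a 3-pixel shift of a smooth blob.
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-((x - 32)**2 + (y - 32)**2) / 60.0)
moving = np.exp(-((x - 35)**2 + (y - 32)**2) / 60.0)
ux, uy = demons_2d(fixed, moving)
print("mean x-displacement near the blob:", ux[fixed > 0.5].mean())
```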


Subject(s)
Algorithms; Computer Graphics; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Databases as Topic; Humans; Image Processing, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/instrumentation; Lung/diagnostic imaging; Time Factors; Tomography, X-Ray Computed/instrumentation; Tomography, X-Ray Computed/methods
8.
Phys Med Biol ; 54(20): 6287-97, 2009 Oct 21.
Article in English | MEDLINE | ID: mdl-19794244

ABSTRACT

Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposition coefficient calculation is a critical component of the online planning process required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation on a water phantom and on a prostate cancer patient case with varying beamlet and voxel sizes. All testing scenarios achieved speedups ranging from 200 to 400 times when using an NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
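A schematic NumPy sketch of the beamlet-superposition structure of an FSPB dose engine follows, using a made-up analytic kernel rather than a commissioned model:

```python
# Schematic finite-size pencil beam superposition on a water phantom:
# dose = sum over beamlets of (weight x depth dose x lateral spread).
# The kernel is an invented analytic stand-in, not a commissioned FSPB model.
import numpy as np

nx, ny, nz = 64, 64, 80                      # voxel grid
dx = 0.25                                    # voxel size in cm (2.5 mm)
x = (np.arange(nx) - nx / 2) * dx
y = (np.arange(ny) - ny / 2) * dx
z = np.arange(nz) * dx                       # depth

# Toy kernel: exponential depth falloff after a short buildup, Gaussian lateral spread.
def beamlet_dose(x0, y0, weight, sigma=0.4, mu=0.05):
    depth_dose = (1.0 - np.exp(-z / 1.5)) * np.exp(-mu * z)             # shape (nz,)
    lateral = np.exp(-((x[:, None] - x0)**2 + (y[None, :] - y0)**2) / (2 * sigma**2))
    return weight * lateral[:, :, None] * depth_dose[None, None, :]     # (nx, ny, nz)

# A small 5 x 5 beamlet grid with uniform weights (stand-in for an optimized fluence map).
beamlet_positions = [(i * 0.5, j * 0.5) for i in range(-2, 3) for j in range(-2, 3)]
dose = np.zeros((nx, ny, nz))
for (bx, by) in beamlet_positions:
    dose += beamlet_dose(bx, by, weight=1.0)

print("max dose voxel:", np.unravel_index(dose.argmax(), dose.shape))
```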


Subject(s)
Prostatic Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Algorithms; Computer Graphics; Computers; Computing Methodologies; Humans; Male; Models, Statistical; Radiotherapy/methods; Radiotherapy Dosage; Software; User-Computer Interface
9.
Phys Med Biol ; 54(21): 6565-73, 2009 Nov 07.
Article in English | MEDLINE | ID: mdl-19826201

ABSTRACT

The widespread adoption of on-board volumetric imaging in cancer radiotherapy has stimulated research efforts to develop online adaptive radiotherapy techniques to handle the inter-fraction variation of the patient's geometry. Such efforts face major technical challenges in performing treatment planning in real time. To overcome this challenge, we are developing a supercomputing online re-planning environment (SCORE) at the University of California, San Diego (UCSD). As part of the SCORE project, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) optimization algorithm on graphics processing units (GPUs). We adopt a penalty-based quadratic optimization model, which is solved using a gradient projection method with Armijo's line search rule. Our optimization algorithm has been implemented in CUDA for parallel GPU computing as well as in C for serial CPU computing for comparison purposes. A prostate IMRT case with various beamlet and voxel sizes was used to evaluate our implementation. On an NVIDIA Tesla C1060 GPU card, we have achieved speedup factors of 20-40 without losing accuracy, compared with the results from an Intel Xeon 2.27 GHz CPU. For a specific nine-field prostate IMRT case with 5 x 5 mm² beamlet size and 2.5 x 2.5 x 2.5 mm³ voxel size, our GPU implementation takes only 2.8 s to generate an optimal IMRT plan. Our work has therefore solved a major problem in developing online re-planning technologies for adaptive radiotherapy.
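A small NumPy sketch of the optimization scheme named in the abstract, gradient projection with an Armijo backtracking line search on a nonnegative quadratic-penalty fluence problem, follows; it uses random toy data in place of a real dose-deposition matrix:

```python
# Gradient projection with Armijo backtracking for a nonnegative quadratic-penalty
# fluence-map problem. The dose matrix and prescriptions are random toys.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 400, 120
D = rng.random((n_voxels, n_beamlets)) * 0.1        # toy dose-deposition matrix
d_presc = rng.random(n_voxels)                      # toy prescribed/limit doses
w = np.ones(n_voxels)                               # per-voxel penalty weights

def f(x):                                           # quadratic penalty objective
    r = D @ x - d_presc
    return 0.5 * np.sum(w * r * r)

def grad(x):
    return D.T @ (w * (D @ x - d_presc))

def project(x):                                     # feasible set: beamlet weights >= 0
    return np.maximum(x, 0.0)

x = np.zeros(n_beamlets)
alpha0, beta, c = 1.0, 0.5, 1e-4
for it in range(200):
    g = grad(x)
    alpha = alpha0
    while True:                                     # Armijo rule along the projection arc
        x_new = project(x - alpha * g)
        if f(x_new) <= f(x) + c * g @ (x_new - x) or alpha < 1e-12:
            break
        alpha *= beta
    if np.linalg.norm(x_new - x) < 1e-8:
        break
    x = x_new

print("final objective:", f(x))
```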


Subject(s)
Neoplasms/radiotherapy; Prostatic Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/instrumentation; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Algorithms; Computers; Computing Methodologies; Humans; Male; Models, Statistical; Radiation Oncology/methods; Radiotherapy Dosage; Reproducibility of Results; Software; Tomography, X-Ray Computed/methods
10.
Am J Obstet Gynecol ; 199(2): 198.e1-5, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18513684

ABSTRACT

OBJECTIVE: The objective of the study was to develop a model of the female pelvic floor to study levator stretch during simulated childbirth. STUDY DESIGN: Magnetic resonance data from an asymptomatic nulligravida were segmented into pelvic muscles and bones to create a simulation model. Stiffness estimates of the lateral and anteroposterior levator attachments were varied to estimate their impact on levator stretch. A 9 cm sphere was passed through the pelvis, along the path of the vagina, simulating childbirth. Levator response was interpreted at 4 positions of the sphere, simulating fetal head descent. The levator was color mapped to display the stretch experienced. RESULTS: A maximum stretch ratio of 3.5 to 1 was seen in the posteromedial puborectalis. Maximum stretch increased with increasing stiffness of the lateral levator attachments. CONCLUSION: Although preliminary, this work may help explain epidemiologic data regarding the pelvic floor impact of a first delivery. The models and simulation technique need refinement, but they may help study the effect of labor parameters on the pelvic floor.
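A tiny post-processing sketch of the reported quantity, the stretch ratio (deformed length over rest length) along a fiber, follows; the node coordinates are made-up placeholders, not the study's finite-element model:

```python
# Compute per-segment stretch ratios along one muscle fiber from rest and
# deformed node positions. Coordinates below are invented placeholders.
import numpy as np

# Rest and deformed positions of nodes along one levator fiber (x, y, z in cm).
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.0], [2.0, 0.5, 0.1], [3.0, 1.0, 0.3]])
deformed = np.array([[0.0, 0.0, 0.0], [1.2, 0.9, 0.1], [2.6, 2.2, 0.4], [4.1, 3.6, 0.9]])

def segment_lengths(pts):
    return np.linalg.norm(np.diff(pts, axis=0), axis=1)

stretch = segment_lengths(deformed) / segment_lengths(rest)
print("per-segment stretch ratios:", np.round(stretch, 2))
print("maximum stretch ratio:", stretch.max().round(2))
```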


Subject(s)
Models, Anatomic; Muscle, Skeletal/physiology; Parturition/physiology; Pelvic Floor/physiology; Adult; Elasticity; Female; Humans; Magnetic Resonance Imaging; Pelvis/physiology; Pregnancy
11.
Comput Methods Programs Biomed ; 67(2): 115-24, 2002 Feb.
Article in English | MEDLINE | ID: mdl-11809318

ABSTRACT

This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For ¹³¹I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
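The partitioning strategy described above can be sketched with mpi4py and NumPy; the physics is a trivial stand-in for SIMIND, and NumPy's SeedSequence.spawn replaces SPRNG for independent random streams:

```python
# Minimal mpi4py sketch of the parallelization strategy: photons are split evenly
# across ranks, each rank uses an independent random stream, and per-rank tallies
# are summed on rank 0. Run with e.g.:  mpiexec -n 8 python mc_parallel.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 8_000_000
n_local = n_total // size                           # equal partition (remainder ignored)

# Uncorrelated per-rank streams (stand-in for SPRNG).
seed = np.random.SeedSequence(12345).spawn(size)[rank]
rng = np.random.default_rng(seed)

# Toy "transport": sample interaction depths in 10 cm of water-like material
# and tally them into a coarse depth histogram.
mu = 0.15                                           # placeholder attenuation (1/cm)
depths = -np.log(rng.random(n_local)) / mu
local_tally, edges = np.histogram(depths, bins=20, range=(0.0, 10.0))

global_tally = np.zeros_like(local_tally)
comm.Reduce(local_tally, global_tally, op=MPI.SUM, root=0)

if rank == 0:
    print("photons tallied:", int(global_tally.sum()), "of", size * n_local)
```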


Subject(s)
Computer Simulation; Echo-Planar Imaging/methods; Models, Anatomic; Monte Carlo Method; Tomography, Emission-Computed, Single-Photon/methods; Humans; Image Processing, Computer-Assisted; Iodine Radioisotopes; Time Factors