1.
MAGMA ; 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37989921

ABSTRACT

OBJECTIVE: This study aims to assess the statistical significance of training parameters in 240 dense UNets (DUNets) used for enhancing low Signal-to-Noise Ratio (SNR) and undersampled MRI in various acquisition protocols. The objective is to determine the validity of differences between DUNet configurations and their impact on image quality metrics. MATERIALS AND METHODS: To achieve this, we trained all DUNets using the same learning rate and number of epochs, with variations in 5 acquisition protocols, 24 loss function weightings, and 2 ground truths. We calculated evaluation metrics for two metric regions of interest (ROIs). We employed both Analysis of Variance (ANOVA) and a Mixed Effects Model (MEM) to assess the statistical significance of the independent parameters, aiming to compare their efficacy in revealing differences and interactions among fixed parameters. RESULTS: ANOVA analysis showed that, except for the acquisition protocol, the fixed variables were statistically insignificant. In contrast, MEM analysis revealed that all fixed parameters and their interactions held statistical significance. This emphasizes the need for advanced statistical analysis in comparative studies, where MEM can uncover finer distinctions often overlooked by ANOVA. DISCUSSION: These findings highlight the importance of utilizing appropriate statistical analysis when comparing different deep learning models. Additionally, the surprising effectiveness of the UNet architecture in enhancing various acquisition protocols underscores the potential for developing improved methods for characterizing and training deep learning models. This study serves as a stepping stone toward enhancing the transparency and comparability of deep learning techniques for medical imaging applications.

2.
J Digit Imaging ; 34(4): 1014-1025, 2021 08.
Article in English | MEDLINE | ID: mdl-34027587

ABSTRACT

The recent introduction of wireless head-mounted displays (HMDs) promises to enhance 3D image visualization by immersing the user into the 3D morphology. This work introduces a prototype holographic augmented reality (HAR) interface for the 3D visualization of magnetic resonance imaging (MRI) data for the purpose of planning neurosurgical procedures. The computational platform generates a HAR scene that fuses preoperative MRI sets, segmented anatomical structures, and a tubular tool for planning an access path to the targeted pathology. The operator can manipulate the presented images and segmented structures and perform path planning using voice and gestures. On the fly, the software uses defined forbidden regions to prevent the operator from harming vital structures. In silico studies using the platform with a HoloLens HMD assessed its functionality, as well as the computational load and memory usage for different tasks. A preliminary qualitative evaluation revealed that holographic visualization of high-resolution 3D MRI data offers an intuitive and interactive perspective on the complex brain vasculature and anatomical structures. This initial work suggests that immersive experiences may be an unparalleled tool for planning neurosurgical procedures.
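The forbidden-region safeguard described above can be sketched as a clearance test between sampled points on the planned tool path and protected structures. Spherical regions, the safety radii, and all coordinates here are illustrative assumptions; the actual software presumably tests against segmented anatomy rather than spheres:

```python
import numpy as np

def path_is_safe(entry, target, forbidden_centers, radii, n_samples=100):
    """Sample a straight tool path and test clearance from each forbidden region."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    pts = (1 - t) * np.asarray(entry, float) + t * np.asarray(target, float)
    # Pairwise distances: (n_samples, n_regions)
    d = np.linalg.norm(pts[:, None, :] - np.asarray(forbidden_centers, float)[None],
                       axis=2)
    return bool(np.all(d > np.asarray(radii, float)[None, :]))

# One forbidden sphere of radius 5 (units arbitrary) at the origin:
print(path_is_safe([10, 10, 0], [10, -10, 0], [[0, 0, 0]], [5]))   # True: path skirts it
print(path_is_safe([10, 10, 0], [-10, -10, 0], [[0, 0, 0]], [5]))  # False: path crosses it
```

A real planner would run such a check continuously as the operator drags the tubular tool, rejecting any pose whose path intersects a forbidden region.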


Subject(s)
Augmented Reality; Holography; Surgery, Computer-Assisted; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Neurosurgical Procedures; Software; User-Computer Interface
3.
Comput Methods Programs Biomed ; 198: 105779, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33045556

ABSTRACT

BACKGROUND AND OBJECTIVE: Modern imaging scanners produce an ever-growing body of 3D/4D multimodal data, requiring image analytics and visualization of fused images, segmentations, and information. For the latter, augmented reality (AR) with head-mounted displays (HMDs) has shown potential. This work describes a framework (FI3D) for interactive immersion with data, integration of image processing and analytics, and rendering and fusion with an AR interface. METHODS: The FI3D was designed and endowed with modules to communicate with peripherals, including imaging scanners and HMDs, and to provide computational power for data acquisition and processing. The core of FI3D is deployed to a dedicated computational unit that performs the computationally demanding processes in real time, while the HMD is used as a display output peripheral and as an input peripheral through gestures and voice commands. FI3D offers user-made modules dedicated to processing and analysis. Users can customize and optimize these for a particular workflow while incorporating current or future libraries. RESULTS: The FI3D framework was used to develop a workflow for processing, rendering, and visualization of CINE MRI cardiac sets. In this version, the data were loaded from a remote database, and the endocardium and epicardium of the left ventricle (LV) were segmented using a machine learning model and transmitted to a HoloLens HMD to be visualized in 4D. Performance results show that the system is capable of maintaining an image stream of one image per second at a resolution of 512 × 512. It can also modify visual properties of the holograms at 1 update per 16 milliseconds (62.5 Hz) while providing enough resources for the segmentation and surface reconstruction tasks without hindering the HMD.
CONCLUSIONS: We provide a system design and framework to be used as a foundation for medical applications that benefit from AR visualization, removing several technical challenges from the developmental pipeline.
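The performance figures reported above are internally consistent and easy to sanity-check. A quick back-of-the-envelope, where the 16-bit pixel depth is an assumption not stated in the abstract:

```python
# Hologram property updates: one update per 16 ms
update_period_ms = 16
update_rate_hz = 1000 / update_period_ms
print(update_rate_hz)  # 62.5, matching the reported 62.5 Hz

# Streamed frame size at 512 x 512, assuming 16-bit grayscale pixels
width = height = 512
bytes_per_pixel = 2  # assumption; MRI magnitude images are often 16-bit
frame_kib = width * height * bytes_per_pixel / 1024
print(frame_kib)  # 512.0 KiB per frame at the reported 1 frame/s
```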


Subject(s)
Augmented Reality; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Immersion; User-Computer Interface
4.
Int J Med Robot ; 17(1): 1-12, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33047863

ABSTRACT

BACKGROUND: This study presents user evaluation studies to assess the effect of information rendered by an interventional planning software on the operator's ability to plan transrectal magnetic resonance (MR)-guided prostate biopsies using actuated robotic manipulators. METHODS: An intervention planning software was developed based on the clinical workflow followed for MR-guided transrectal prostate biopsies. The software was designed to interface with a generic virtual manipulator and simulate an intervention environment using 2D and 3D scenes. User studies were conducted with urologists using the developed software to plan virtual biopsies. RESULTS: User studies demonstrated that urologists with prior experience in using 3D software completed the planning in less time. 3D scenes were required to control all degrees of freedom of the manipulator, while 2D scenes were sufficient for planar motion of the manipulator. CONCLUSIONS: The study provides insights into using 2D versus 3D environments, from a urologist's perspective, for different operational modes of MR-guided prostate biopsy systems.


Subject(s)
Prostatic Neoplasms; Biopsy; Humans; Image-Guided Biopsy; Magnetic Resonance Imaging; Magnetic Resonance Spectroscopy; Male; Prostatic Neoplasms/diagnostic imaging; Software
5.
Int J Med Robot ; 17(5): e2290, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34060214

ABSTRACT

BACKGROUND: User interfaces play a vital role in the planning and execution of an interventional procedure. The objective of this study is to investigate the effect of using different user interfaces for planning transrectal robot-assisted MR-guided prostate biopsy (MRgPBx) in an augmented reality (AR) environment. METHOD: End-user studies were conducted by simulating an MRgPBx system with end- and side-firing modes. The information from the system to the operator was rendered on a HoloLens as the output interface. A joystick, mouse/keyboard, and holographic menus were used as input interfaces to the system. RESULTS: The studies indicated that using a joystick improved the interactive capacity and enabled the operator to plan MRgPBx in less time. It efficiently captures the operator's commands to manipulate the augmented environment representing the state of the MRgPBx system. CONCLUSIONS: The study demonstrates an alternative to conventional input interfaces for interacting with and manipulating an AR environment in the context of MRgPBx planning.


Subject(s)
Augmented Reality; Surgery, Computer-Assisted; Biopsy; Humans; Magnetic Resonance Imaging; Male; Prostate/surgery