Results 1 - 14 of 14
1.
Front Public Health ; 11: 1143947, 2023.
Article in English | MEDLINE | ID: mdl-37033028

ABSTRACT

Virtual Reality (VR) has emerged as a new safe and efficient tool for the rehabilitation of many childhood and adulthood illnesses. VR-based therapies have the potential to improve both motor and functional skills in a wide range of age groups through cortical reorganization and the activation of various neuronal connections. Recently, the potential of serious VR-based games that combine perceptual learning and dichoptic stimulation has been explored for the rehabilitation of ophthalmological and neurological disorders. In ophthalmology, several clinical studies have demonstrated that VR training can enhance stereopsis, contrast sensitivity, and visual acuity. VR technology provides a significant advantage in training each eye individually without requiring occlusion or penalization. In neurological disorders, the majority of patients undergo recurrent episodes (relapses) of neurological impairment; however, in a substantial proportion of cases (60-80%), the illness progresses over time and becomes chronic, resulting in accumulated motor disability and cognitive deficits. Current research on memory restoration has been spurred by theories about brain plasticity and findings concerning the nervous system's capacity to rebuild cellular synapses through interaction with enriched environments. Therefore, VR training can play an important role in the improvement of cognitive function and motor disability. Although there are several reviews on the use of Artificial Intelligence in healthcare, VR has not yet been thoroughly examined in this regard. In this systematic review, we examine the key ideas of VR-based training for prevention and control measures in ocular diseases such as Myopia, Amblyopia, Presbyopia, and Age-related Macular Degeneration (AMD), and in neurological disorders such as Alzheimer's disease, Multiple Sclerosis (MS), Epilepsy, and Autism Spectrum Disorder.
This review highlights the fundamentals of VR technologies with regard to their clinical research in healthcare. Moreover, these findings will raise community awareness of VR training and help researchers learn new techniques to prevent and treat different diseases. We further discuss the current challenges of using VR devices, as well as the future prospects of VR-based training.


Subject(s)
Autism Spectrum Disorder , Disabled Persons , Motor Disorders , Nervous System Diseases , Virtual Reality , Humans , Child , Artificial Intelligence
3.
Sensors (Basel) ; 22(10)2022 May 10.
Article in English | MEDLINE | ID: mdl-35632050

ABSTRACT

The detection and segmentation of thrombi are essential for monitoring the disease progression of abdominal aortic aneurysms (AAAs) and for patient care and management. Because of their inherent capability to learn complex features, deep convolutional neural networks (CNNs) have recently been introduced to improve thrombus detection and segmentation. However, investigations into the use of CNN methods are in the early stages, and most of the existing methods are heavily concerned with the segmentation of thrombi, which only works after they have been detected. In this work, we propose a fully automated method for the whole process of thrombus detection and segmentation, based on a well-established mask region-based convolutional neural network (Mask R-CNN) framework that we improve with optimized loss functions. The combined use of complete intersection over union (CIoU) and smooth L1 loss was designed for accurate thrombus detection, and thrombus segmentation was then improved with a modified focal loss. We evaluated our method against 60 clinically approved patient studies (i.e., computed tomography angiography (CTA) image volume data) by conducting 4-fold cross-validation. Comparisons to multiple other state-of-the-art methods suggested the superior performance of our method, which achieved the highest F1 score for thrombus detection (0.9197) and outperformed most metrics for thrombus segmentation.
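The abstract names a detection loss that combines CIoU with smooth L1 regression. As an illustration only — the paper's exact box parameterisation and term weights are not given in the abstract, and `detection_loss` and its weights are hypothetical names — a minimal pure-Python sketch of such a combined loss for axis-aligned boxes could look like:

```python
import math

def _area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = _area(a) + _area(b) - inter
    return inter / union if union > 0 else 0.0

def ciou_loss(pred, gt):
    """Complete-IoU loss: IoU penalised by centre distance and aspect ratio."""
    i = iou(pred, gt)
    px, py = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (px - gx) ** 2 + (py - gy) ** 2          # squared centre distance
    ex1, ey1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    ex2, ey2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2        # enclosing-box diagonal^2
    v = (4 / math.pi ** 2) * (
        math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
        - math.atan((pred[2] - pred[0]) / (pred[3] - pred[1]))) ** 2
    alpha = v / ((1 - i) + v + 1e-9)                # aspect-ratio trade-off
    return 1 - (i - rho2 / c2 - alpha * v)

def smooth_l1(pred, gt, beta=1.0):
    """Mean smooth-L1 (Huber-like) distance between coordinate tuples."""
    total = 0.0
    for p, g in zip(pred, gt):
        d = abs(p - g)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total / len(pred)

def detection_loss(pred, gt, w_l1=1.0, w_ciou=1.0):
    """Hypothetical combined detection loss: weighted smooth L1 + CIoU."""
    return w_l1 * smooth_l1(pred, gt) + w_ciou * ciou_loss(pred, gt)
```

A perfectly matched box yields zero loss, while the CIoU term keeps penalising misplaced centres and mismatched aspect ratios even when boxes overlap heavily.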


Subject(s)
Aortic Aneurysm, Abdominal , Thrombosis , Aortic Aneurysm, Abdominal/diagnostic imaging , Humans , Neural Networks, Computer , Thrombosis/diagnostic imaging , Tomography, X-Ray Computed/methods
4.
Article in English | MEDLINE | ID: mdl-35613068

ABSTRACT

Automatic recognition of 3-D objects in a 3-D model by convolutional neural network (CNN) methods has been successfully applied to various tasks, e.g., robotics and augmented reality. Three-dimensional object recognition is mainly performed by analyzing the object using multi-view images, depth images, graphs, or volumetric data. In some cases, using volumetric data provides the most promising results. However, existing recognition techniques on volumetric data have many drawbacks, such as losing object details when converting points to voxels and the large size of the input volume data, which leads to substantial 3-D CNNs. Using point clouds could also provide very promising results; however, point-cloud-based methods typically need sparse data entry and time-consuming training stages. Thus, using volumetric data could be a more efficient and flexible option for our special case in the School of Medicine, Shanghai Jiao Tong University. In this article, we propose a novel solution to 3-D object recognition from volumetric data using a combination of three compact CNN models, a low-cost SparseNet, and a feature representation technique. We achieve an optimized network by estimating extra geometrical information, comprising the surface normal and curvature, via two separate neural networks. These two models provide supplementary information for each voxel that consequently improves the results. The primary network model takes advantage of all the predicted features and uses them in a Random Forest (RF) for recognition purposes. In our experiments, our method outperforms other methods in training speed and provides results as accurate as the state of the art.
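The supplementary per-voxel geometry the abstract describes (surface normal and curvature) is predicted by dedicated networks in the paper. As a stand-in sketch under that assumption — `voxel_geometry_features` is a hypothetical name, and finite differences replace the learned estimators — the same quantities can be approximated directly from a scalar/occupancy volume:

```python
import numpy as np

def voxel_geometry_features(vol):
    """Per-voxel surface normals (normalised gradient) and a curvature
    proxy (Laplacian), estimated with finite differences. A stand-in
    for the paper's learned geometry estimators."""
    g0, g1, g2 = np.gradient(vol.astype(float))
    normals = np.stack([g0, g1, g2], axis=-1)
    mag = np.linalg.norm(normals, axis=-1, keepdims=True)
    normals = np.where(mag > 1e-8, normals / np.maximum(mag, 1e-8), 0.0)
    # divergence of the gradient field = Laplacian (curvature proxy)
    lap = (np.gradient(g0, axis=0) + np.gradient(g1, axis=1)
           + np.gradient(g2, axis=2))
    return normals, lap
```

In the paper's pipeline, features like these would be concatenated with the CNN activations per voxel before the Random Forest stage; that wiring is not shown here.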

5.
Sci Rep ; 12(1): 2173, 2022 02 09.
Article in English | MEDLINE | ID: mdl-35140267

ABSTRACT

Radiogenomics aims to identify statistically significant radiogenomic relationships (RRs) between medical image features and molecular characteristics derived from analysing tissue samples. Previous radiogenomics studies mainly relied on a single category of image feature extraction techniques (ETs): (i) handcrafted ETs that encompass visual imaging characteristics, curated from the knowledge of human experts, and (ii) deep ETs that quantify abstract-level imaging characteristics from large data. Prior studies therefore failed to leverage the complementary information that is accessible from fusing the ETs. In this study, we propose a fused feature signature (FFSig): a selection of image features from handcrafted and deep ETs (e.g., transfer learning and fine-tuning of deep learning models). We evaluated the FFSig's ability to better represent RRs compared to individual ET approaches with two public datasets: the first, comprising gene expression data and CT images of the thorax and upper abdomen from 89 patients with non-small cell lung cancer (NSCLC), was used to build the FFSig; the second NSCLC dataset, comprising 117 patients with CT images and RNA-Seq data, was used as the validation set. Our results show that the FFSig encoded complementary imaging characteristics of tumours and identified more RRs with a broader range of genes that are related to important biological functions such as tumourigenesis. We suggest that the FFSig has the potential to identify important RRs that may assist cancer diagnosis and treatment in the future.
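The core move — fusing handcrafted and deep feature matrices, then selecting the features most associated with a molecular readout — can be sketched minimally. The paper's actual selection procedure is not specified in the abstract; absolute Pearson correlation is used here purely as a placeholder, and `fused_feature_signature` is a hypothetical name:

```python
import numpy as np

def fused_feature_signature(handcrafted, deep, gene, k=5):
    """Fuse handcrafted and deep image features (patients x features) and
    keep the k features most correlated (|Pearson r|) with one
    gene-expression vector. Illustrative selection rule only."""
    fused = np.hstack([handcrafted, deep])
    fc = fused - fused.mean(axis=0)          # centre each feature column
    gc = gene - gene.mean()                  # centre the gene vector
    denom = np.linalg.norm(fc, axis=0) * np.linalg.norm(gc)
    corr = np.abs(fc.T @ gc) / np.maximum(denom, 1e-12)
    idx = np.argsort(corr)[::-1][:k]         # indices of top-k features
    return idx, fused[:, idx]
```

A real radiogenomics pipeline would repeat this across many genes with multiple-testing correction; this sketch shows only the fusion-then-selection idea.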


Subject(s)
Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/genetics , Imaging Genomics , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/genetics , Deep Learning , Gene Ontology , Humans , RNA-Seq , Tomography, X-Ray Computed , Transcriptome
6.
J Med Internet Res ; 24(1): e30600, 2022 01 28.
Article in English | MEDLINE | ID: mdl-35089144

ABSTRACT

BACKGROUND: A critical component of disaster preparedness in hospitals is the experiential education and training of health care professionals. A live drill is a well-established, effective training approach, but cost and logistic constraints make clinical implementation challenging, and training opportunities with live drills may be severely limited. Virtual reality simulation (VRS) technology may offer a viable training alternative with its inherent features of reproducibility, just-in-time training, and repeatability. OBJECTIVE: This integrated review examines the scientific evidence pertaining to the effectiveness of VRS and its practical usefulness in training health care professionals for in-hospital disaster preparedness. METHODS: A well-known 4-stage methodology was used for the integrated review process. It consisted of problem identification, a literature search and inclusion criteria determination, 2-stage validation and analysis of the searched studies, and presentation of findings. A search of diverse publication repositories was performed, including Web of Science (WOS), PubMed (PMD), and Embase (EMB). RESULTS: The integrated review process resulted in 12 studies being included. Principal findings identified 3 major capabilities of VRS: (1) to realistically simulate the clinical environment and medical practices related to different disaster scenarios, (2) to develop learning effects on increased confidence and enhanced knowledge acquisition, and (3) to enable cost-effective implementation of training programs. CONCLUSIONS: The findings from the integrated review suggested that VRS could be a competitive, cost-effective adjunct to existing training approaches. Although the findings demonstrated the applicability of VRS to different training scenarios, these do not entirely cover all disaster scenarios that could happen in hospitals.
This integrated review expects that recent advances in VR technologies can be one of the catalysts enabling the wider adoption of VRS training for challenging clinical scenarios that require sophisticated modeling and environment depiction.


Subject(s)
Disasters , Virtual Reality , Computer Simulation , Hospitals , Humans , Reproducibility of Results
7.
Sensors (Basel) ; 23(1)2022 Dec 24.
Article in English | MEDLINE | ID: mdl-36616773

ABSTRACT

Abdominal aortic aneurysm (AAA) is a fatal clinical condition with high mortality. Computed tomography angiography (CTA) imaging is the preferred minimally invasive modality for the long-term postoperative observation of AAA. Accurate segmentation of the thrombus region of interest (ROI) in a postoperative CTA image volume is essential for quantitative assessment and rapid clinical decision making by clinicians. Few investigators have proposed the adoption of convolutional neural networks (CNNs). Although these methods demonstrated the potential of CNN architectures by automating thrombus ROI segmentation, the segmentation performance can be further improved. The existing methods performed the segmentation process independently for each 2D image and were incapable of using adjacent images, which could be useful for the robust segmentation of thrombus ROIs. In this work, we propose a thrombus ROI segmentation method that utilizes not only the spatial features of a target image but also the volumetric coherence available from adjacent images. We newly adopted a recurrent neural network, the bi-directional convolutional long short-term memory (Bi-CLSTM) architecture, which can learn coherence within a sequence of data. This coherence-learning capability can be useful in challenging situations; for example, when the target image exhibits inherent postoperative artifacts and noise, the inclusion of adjacent images facilitates learning more robust features for thrombus ROI segmentation. We demonstrate the segmentation capability of our Bi-CLSTM-based method by comparing it with the existing 2D-based thrombus ROI segmentation counterpart as well as other established 2D- and 3D-based alternatives. Our comparison is based on a large-scale clinical dataset of 60 patient studies (i.e., 60 CTA image volumes).
The results suggest the superior segmentation performance of our Bi-CLSTM-based method, which achieved the highest scores on the evaluation metrics; e.g., our Bi-CLSTM results were 0.0331 higher on total overlap and 0.0331 lower on false negatives when compared to 2D U-net++, the second-best method.
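The evaluation metrics named here, total overlap and false negatives, can be computed from binary masks. A minimal numpy sketch under a common definition (total overlap as the fraction of ground-truth voxels recovered) — the paper's exact definitions may differ, and `overlap_metrics` is a hypothetical name:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Total overlap (fraction of ground-truth voxels recovered) and
    false-negative fraction for binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    n_gt = gt.sum()
    if n_gt == 0:
        return 0.0, 0.0
    tp = np.logical_and(pred, gt).sum()    # correctly labelled voxels
    fn = np.logical_and(~pred, gt).sum()   # missed ground-truth voxels
    return tp / n_gt, fn / n_gt
```

Under this definition the two values sum to 1 per case, which matches the abstract's symmetric 0.0331 gains on both metrics.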


Subject(s)
Computed Tomography Angiography , Thrombosis , Humans , Computed Tomography Angiography/methods , Memory, Short-Term , Tomography, X-Ray Computed , Neural Networks, Computer , Thrombosis/diagnostic imaging , Image Processing, Computer-Assisted/methods
8.
J Biomed Inform ; 106: 103430, 2020 06.
Article in English | MEDLINE | ID: mdl-32371232

ABSTRACT

Laparoscopic liver surgery is challenging to perform because the surgeon's ability to localize subsurface anatomy is compromised by the limited visibility of minimally invasive access. While image guidance has the potential to address this barrier, intraoperative factors, such as insufflation and variable degrees of organ mobilization from supporting ligaments, may generate substantial deformation. The navigation ability in terms of searching and tagging within liver views has not been characterized, and current object detection methods do not account for the mechanics of how these features could be applied to liver images. In this research, we propose spatial pyramid based searching and tagging of intraoperative liver views using a convolutional neural network (SPST-CNN). By exploiting a hybrid combination of an image pyramid at the input and a spatial pyramid pooling layer at deeper stages of SPST-CNN, we reveal the gains of full-image representations for searching and tagging variably scaled live liver views. SPST-CNN provides pinpoint searching and tagging of intraoperative liver views to obtain up-to-date information about the location and shape of the area of interest. Downsampling the input using an image pyramid enables the SPST-CNN framework to accept input images at a diversity of resolutions to achieve scale invariance. We compared the proposed approach with four recent state-of-the-art approaches, and our method achieved a better mAP of up to 85.9%.
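The input-side image pyramid described above can be sketched as repeated downsampling, so the same view is presented to a detector at several scales. A minimal numpy illustration using 2x2 average pooling (the paper's exact downsampling filter is not stated in the abstract; `image_pyramid` is a hypothetical name):

```python
import numpy as np

def image_pyramid(img, levels=3):
    """Build an image pyramid by repeated 2x2 average pooling, so a
    detector can process the same view at several scales."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        cur = pyramid[-1]
        h, w = (cur.shape[0] // 2) * 2, (cur.shape[1] // 2) * 2
        cur = cur[:h, :w]                    # crop to even dimensions
        pooled = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(pooled)
    return pyramid
```

Each level halves the resolution; feeding all levels to the network is what gives the scale-invariance benefit the abstract refers to.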


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Liver/diagnostic imaging , Liver/surgery
9.
Int J Comput Assist Radiol Surg ; 14(12): 2221-2231, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31115755

ABSTRACT

PURPOSE: Multidisciplinary team meetings (MDTs) are the standard of care for safe, effective patient management in modern hospital-based clinical practice. Medical imaging data are often the central discussion points in many MDTs, and these data are typically visualised, by all participants, on a common large display. We propose a Web-based MDT visualisation system (WMDT-VS) to allow individual participants to view the data on their own personal computing devices, with the potential to customise the imaging data, i.e. to show a different view of the data to that of the common display, for their particular clinical perspective. METHODS: We developed the WMDT-VS by leveraging state-of-the-art Web technologies to support four MDT visualisation features: (1) 2D and 3D visualisations for multiple imaging modality data; (2) a variety of personal computing devices, e.g. smartphones, tablets, laptops and PCs, to access and navigate medical images individually and share the visualisations; (3) customised participant visualisations; and (4) the addition of extra local image data for visualisation and discussion. RESULTS: We outlined these MDT visualisation features in two simulated MDT settings using different imaging data and usage scenarios. We measured the compatibility and performance of various personal, consumer-level computing devices. CONCLUSIONS: Our WMDT-VS provides a more comprehensive visualisation experience for MDT participants.


Subject(s)
Diagnostic Imaging , Patient Care Team , Humans , Internet
10.
Int J Comput Assist Radiol Surg ; 14(5): 733-744, 2019 May.
Article in English | MEDLINE | ID: mdl-30661169

ABSTRACT

PURPOSE: Our aim was to develop an interactive 3D direct volume rendering (DVR) visualization solution to interpret and analyze complex, serial multi-modality imaging datasets from positron emission tomography-computed tomography (PET-CT). METHODS: Our approach uses: (i) a serial transfer function (TF) optimization to automatically depict particular regions of interest (ROIs) over serial datasets with consistent anatomical structures; (ii) integration of a serial segmentation algorithm to interactively identify and track ROIs on PET; and (iii) a parallel graphics processing unit (GPU) implementation for interactive visualization. RESULTS: Our DVR visualization identifies changes in ROIs across serial scans more easily and in an automated fashion, and the parallel GPU computation enables interactive visualization. CONCLUSIONS: Our approach provides a rapid 3D visualization of relevant ROIs over multiple scans, and we suggest that it can be used as an adjunct to conventional 2D viewing software from scanner vendors.
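The serial-TF idea — applying one transfer function consistently across serial scans so an ROI's depiction stays comparable over time — can be sketched with a piecewise-linear opacity TF. This is a simplification (the paper optimizes the TF; here it is fixed, and `opacity_tf` / `roi_opacity_over_series` are hypothetical names):

```python
import numpy as np

def opacity_tf(vol, pts):
    """Piecewise-linear opacity transfer function defined by sorted
    (intensity, opacity) control points."""
    xs, ys = zip(*pts)
    return np.interp(vol, xs, ys)

def roi_opacity_over_series(series, roi_masks, pts):
    """Mean rendered opacity of an ROI at each serial time point, using
    one shared transfer function so the scans stay comparable."""
    return [float(opacity_tf(v, pts)[m].mean())
            for v, m in zip(series, roi_masks)]
```

Because the same control points are reused at every time point, any change in the ROI's mean opacity reflects a change in the data rather than in the rendering settings.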


Subject(s)
Algorithms , Imaging, Three-Dimensional/methods , Lymphoma/diagnosis , Positron Emission Tomography Computed Tomography/methods , Adult , Female , Humans , Male , Middle Aged , Reproducibility of Results , Software , Young Adult
11.
JMIR Hum Factors ; 4(3): e21, 2017 Aug 29.
Article in English | MEDLINE | ID: mdl-28851680

ABSTRACT

BACKGROUND: Patients undertaking long-term and chronic home hemodialysis (HHD) are subject to feelings of isolation and anxiety due to the absence of physical contact with their health care professionals and the lack of feedback with regard to their dialysis treatments. Therefore, it is important for these patients to feel the "presence" of the health care professionals remotely while on hemodialysis at home, both for better compliance with the dialysis regimen and to feel connected with health care professionals. OBJECTIVE: This study presents an HHD system design for hemodialysis patients with features to enhance patients' perceived "copresence" with their health care professionals. Various mechanisms to enhance this perception were designed and implemented, including digital logbooks, emotion sharing, and feedback tools. These mechanisms in our HHD system aim to address the limitations associated with existing self-monitoring tools for HHD patients. METHODS: A field trial involving 3 nurses and 74 patients was conducted to test the pilot implementation of the copresence design in our HHD system. Mixed-methods research was conducted to evaluate the system, including surveys, interviews, and analysis of system data. RESULTS: Patients created 2757 dialysis entries during the period of study. Altogether, there were 492 entries submitted with "Very Happy" as the emotional status, 2167 entries with a "Happy" status, 56 entries with a "Neutral" status, 18 entries with an "Unhappy" status, and 24 entries with a "Very unhappy" status. Patients felt reassured sharing their emotions with health care professionals. Health care professionals were able to prioritize the review of the entries based on the emotional status and also felt reassured seeing patients' changes in mood. There were 989 entries sent with short notes. Entries with negative emotions had a higher percentage of supplementary notes than entries with positive and neutral emotions.
The qualitative data further showed that the HHD system was able to improve patients' feelings of being connected with their health care professionals and thus enhance their self-care on HHD. The health care professionals felt better assured of patients' status with the use of the system and reported improved productivity and satisfaction with the copresence enhancement mechanism. The survey on system usability indicated a high level of satisfaction among patients and nurses. CONCLUSIONS: The copresence enhancement design complements the conventional use of a digitized HHD logbook and will further benefit the design of future telehealth systems.

12.
Comput Med Imaging Graph ; 51: 40-9, 2016 07.
Article in English | MEDLINE | ID: mdl-27139998

ABSTRACT

'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this 'visibility' improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the visibility distribution of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VHs, given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images, which have large intensity ranges and volume dimensions and so require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), and this enables efficient computation of the histogram.
We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at the cost of a lower computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation.
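The adaptive binning step — clustering voxel intensities so that a few cluster centres serve as histogram bins — can be sketched with a plain 1-D k-means (Lloyd's algorithm). This is an illustrative CPU version only; the paper's GPU/MRT implementation and exact clustering variant are not reproduced, and `adaptive_bins` is a hypothetical name:

```python
import numpy as np

def adaptive_bins(intensities, k, iters=20, seed=0):
    """1-D k-means over voxel intensities; the sorted cluster centres
    act as adaptive histogram-bin centres (fewer bins, similar
    intensity distribution)."""
    x = np.asarray(intensities, float).ravel()
    rng = np.random.default_rng(seed)
    centres = rng.choice(x, size=k, replace=False)   # init from data
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean()   # update each centre
    centres = np.sort(centres)
    labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
    return centres, labels
```

Compared with equal-width binning, bins placed at cluster centres follow the actual intensity distribution, which is what lets a small k preserve the full histogram's shape.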


Subject(s)
Diagnostic Imaging/methods , Algorithms , Cluster Analysis , Humans , Positron Emission Tomography Computed Tomography
13.
Article in English | MEDLINE | ID: mdl-25571537

ABSTRACT

Multi-modality positron emission tomography and computed tomography (PET-CT) imaging depicts biological and physiological functions (from PET) within a higher resolution anatomical reference frame (from CT). The need to efficiently assimilate the information from these co-aligned volumes simultaneously has resulted in 3D visualisation methods that depict, e.g., a slice of interest (SOI) from PET combined with direct volume rendering (DVR) of CT. However, because DVR renders the whole volume, regions of interest (ROIs) such as tumours that are embedded within the volume may be occluded from view. Volume clipping is typically used to remove occluding structures by 'cutting away' parts of the volume; this involves tedious trial-and-error tweaking of the clipping attempts until a satisfactory visualisation is achieved, thus restricting its application. Hence, we propose a new automated opacity-driven volume clipping method for PET-CT using DVR-SOI visualisation. Our method dynamically calculates the volume clipping depth by considering the opacity information of the CT voxels in front of the PET SOI, thereby ensuring that only the relevant anatomical information from the CT is visualised while not impairing the visibility of the PET SOI. We outline the improvements of our method when compared to conventional 2D and traditional DVR-SOI visualisations.
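The opacity-driven clipping-depth calculation can be illustrated per viewing ray: accumulate opacity front to back and clip everything in front of the depth where accumulation crosses a threshold. A minimal numpy sketch under that simplified reading of the method (`clip_depth` and the threshold value are hypothetical, not taken from the paper):

```python
import numpy as np

def clip_depth(opacities, threshold=0.9):
    """Sample index along a viewing ray at which front-to-back
    accumulated opacity first reaches `threshold`; samples in front of
    this depth would be clipped away so the slice of interest behind
    them remains visible."""
    # accumulated opacity after each sample: 1 - product of transmittances
    acc = 1.0 - np.cumprod(1.0 - np.asarray(opacities, float))
    hits = np.nonzero(acc >= threshold)[0]
    return int(hits[0]) if hits.size else len(opacities)
```

A fully transparent ray returns the ray length (nothing needs clipping), while an opaque front sample clips from depth 0.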


Subject(s)
Positron-Emission Tomography , Tomography, X-Ray Computed , Bone and Bones/anatomy & histology , Bone and Bones/diagnostic imaging , Humans , Ultrasonography
14.
Article in English | MEDLINE | ID: mdl-23366481

ABSTRACT

Dual-modal positron emission tomography and computed tomography (PET-CT) imaging enables the visualization of functional structures (PET) within human bodies in the spatial context of their anatomical (CT) counterparts, and is providing unprecedented capabilities in understanding diseases. However, the need to access and assimilate the two volumes simultaneously has raised new visualization challenges. In typical dual-modal visualization, the transfer functions for the two volumes are designed in isolation and the resulting volumes are then fused. Unfortunately, such transfer function design fails to exploit the correlation that exists between the two volumes. In this study, we propose a dual-modal visualization method in which we employ 'visibility' metrics to provide interactive visual feedback regarding the occlusion caused by the first volume on the second volume and vice versa. We further introduce a region of interest (ROI) function that allows visibility analyses to be restricted to a subsection of the volume. We demonstrate the new visualization enabled by our proposed dual-modal visibility metrics using clinical whole-body PET-CT studies of various diseases.
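The notion of 'visibility' underlying such metrics — each sample on a ray is seen through the transmittance of everything in front of it — can be sketched along a single front-to-back ray. A minimal numpy illustration; `visibility` and `roi_visibility` are hypothetical names, and the paper's full metric operates over all rays and both modalities rather than this one-ray simplification:

```python
import numpy as np

def visibility(alpha):
    """Visibility of each sample on a front-to-back ray: the
    transmittance of everything in front of it (first sample sees 1)."""
    alpha = np.asarray(alpha, float)
    trans = np.cumprod(1.0 - alpha)
    return np.concatenate([[1.0], trans[:-1]])

def roi_visibility(alpha, roi_mask):
    """Share of the ray's rendered contribution (visibility x opacity)
    that comes from ROI samples; low values mean the ROI is occluded."""
    v = visibility(alpha)
    contrib = v * np.asarray(alpha, float)
    m = np.asarray(roi_mask, bool)
    return float(contrib[m].sum() / max(contrib.sum(), 1e-12))
```

Interactively recomputing a quantity like this as the user edits either transfer function is what provides the occlusion feedback the abstract describes.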


Subject(s)
Multimodal Imaging/methods , Positron-Emission Tomography , Tomography, X-Ray Computed , Algorithms , Humans