Results 1 - 5 of 5
1.
Sensors (Basel) ; 20(23)2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33297531

ABSTRACT

In recent years, Image-Guided Navigation Systems (IGNS) have become an important tool for various surgical operations. They are essential for planning a surgical path, verifying the location of a lesion, and similar preparatory tasks, and even more so in procedures such as bronchoscopy, in which the airways are inspected and diagnostic samples are retrieved for lung-related surgeries. An IGNS for bronchoscopy uses 2D images from a flexible bronchoscope to navigate through the bronchial airways toward the targeted location. In this procedure, accurate localization of the scope is critical, because incorrect information could cause a surgeon to mistakenly direct the scope down the wrong passage, so it is a great aid for the surgeon to visualize the bronchoscope images alongside the scope's current location. For this purpose, in this paper we propose a novel registration method that matches real bronchoscopy images with virtual bronchoscope images rendered from a 3D bronchial tree model built from computed tomography (CT) image stacks, in order to obtain the current 3D position of the bronchoscope in the airways. The method combines a novel position-tracking method using the current frames from the bronchoscope with verification of the real bronchoscope image against an image extracted from the 3D model, using an adaptive-network-based fuzzy inference system (ANFIS)-based image-matching method. Experimental results show that the proposed method outperforms the other methods used in the comparison.


Subjects
Augmented Reality, Bronchoscopes, Bronchoscopy, Algorithms, Imaging, Three-Dimensional, Lung, Reproducibility of Results
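The abstract does not specify the ANFIS-based matcher in enough detail to reproduce it here. As an illustrative stand-in for the verification step, the sketch below scores a real frame against candidate virtual frames with plain normalized cross-correlation; both function names are hypothetical.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two grayscale frames."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_virtual_match(real_frame, virtual_frames):
    """Index of the rendered virtual frame most similar to the real frame."""
    scores = [ncc(real_frame, v) for v in virtual_frames]
    return int(np.argmax(scores)), scores
```

The highest-scoring virtual frame identifies the candidate pose in the 3D model; the paper replaces this simple similarity score with an ANFIS-based decision.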
2.
Sensors (Basel) ; 19(16)2019 Aug 15.
Article in English | MEDLINE | ID: mdl-31443237

ABSTRACT

By the standard of today's image-guided surgery (IGS) technology, surgeons still have to divert their attention from the patient occasionally to check the progress of the surgery on a display. In this paper, a mixed-reality system for medical use is proposed that combines an Intel RealSense sensor with Microsoft's HoloLens head-mounted display to superimpose medical data onto the physical surface of a patient, so that surgeons do not need to divert their attention from their patients. The main idea of our proposed system is to display the patient's 3D medical images on the actual patient by placing the medical images and the patient in the same coordinate space. However, the virtual medical data may contain noise and outliers, so the transformation mapping function must be able to handle these problems. The transformation in our system is performed by our proposed Denoised-Resampled-Weighted-and-Perturbed Iterative Closest Points (DRWP-ICP) algorithm, which denoises the data and removes outliers before aligning the pre-operative medical image data points to the patient's physical surface position; the result is then displayed on the Microsoft HoloLens. The experimental results show that our proposed mixed-reality system using DRWP-ICP performs accurate and robust mapping despite the presence of noise and outliers.
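DRWP-ICP itself is not detailed in the abstract. The sketch below shows the generic building block such variants extend: one trimmed ICP iteration that discards the worst-matching fraction of point pairs as outliers before solving the rigid transform with the Kabsch/SVD method. The function name and the `keep` parameter are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def icp_step(src, dst, keep=0.8):
    """One trimmed ICP iteration: pair each source point with its nearest
    destination point, drop the worst (1 - keep) fraction of pairs as
    outliers, then solve for the best rigid transform (R, t) via SVD."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    nn = d.argmin(axis=1)                             # nearest-neighbour index
    dist = d[np.arange(len(src)), nn]
    kept = np.argsort(dist)[: int(keep * len(src))]   # trim the worst matches
    p, q = src[kept], dst[nn[kept]]
    pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)               # Kabsch method
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q.mean(axis=0) - R @ p.mean(axis=0)
    return R, t
```

Iterating this step until the trimmed error stops decreasing gives the classic trimmed-ICP loop; DRWP-ICP additionally denoises, resamples, and weights the data before alignment.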

3.
Sensors (Basel) ; 18(8)2018 Aug 01.
Article in English | MEDLINE | ID: mdl-30071645

ABSTRACT

In many surgery assistance systems, cumbersome equipment or complicated algorithms are often introduced to build the whole system. To build a system without either, and to let physicians observe the location of the lesion during surgery, an augmented reality approach to image-guided surgery (IGS) using an improved alignment method is proposed. The system uses an RGB-Depth sensor in conjunction with the Point Cloud Library (PCL) to build the patient's head surface information, and, through the improved alignment algorithm proposed in this study, the pre-operative medical imaging information is placed in the same world-coordinate system as the patient's head surface information. The traditional alignment method, Iterative Closest Point (ICP), has the disadvantage that an ill-chosen starting position yields only a locally optimal solution. The proposed improved alignment algorithm, named improved-ICP (I-ICP), uses a stochastic perturbation technique to escape from locally optimal solutions and reach the globally optimal solution. After alignment, the results are merged and displayed using Microsoft's HoloLens head-mounted display (HMD), allowing the surgeon to view the patient's head and the patient's medical images at the same time. In this study, experiments were performed using spatial reference points with known positions. The experimental results show that the proposed improved alignment algorithm has errors bounded within 3 mm, which is highly accurate.


Subjects
Algorithms, Diagnostic Imaging/methods, Image Processing, Computer-Assisted/methods, Surgery, Computer-Assisted/methods, Head, Humans
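The abstract describes I-ICP only as ICP plus stochastic perturbation to escape local optima. The toy sketch below illustrates that restart idea on a 1-D objective; it is not the paper's registration method, and every name in it is hypothetical.

```python
import numpy as np

def perturbed_descent(f, x0, step=0.1, n_perturb=50, sigma=4.0, iters=200, seed=0):
    """Greedy 1-D descent with random restarts: after converging locally,
    jump to randomly perturbed starting points and keep the best local
    solution found. Illustrates how stochastic perturbation can escape
    a locally optimal solution."""
    rng = np.random.default_rng(seed)

    def local(x):
        # descend on a step-size lattice until no neighbour improves
        for _ in range(iters):
            x = min((x - step, x, x + step), key=f)
        return x

    best = local(x0)
    for _ in range(n_perturb):
        cand = local(best + rng.normal(scale=sigma))
        if f(cand) < f(best):
            best = cand
    return best
```

In I-ICP the analogous perturbation is applied to the alignment's starting pose rather than to a scalar variable.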
4.
Sensors (Basel) ; 17(10)2017 Sep 24.
Article in English | MEDLINE | ID: mdl-28946643

ABSTRACT

The risks involved in nighttime driving include drowsy drivers and dangerous vehicles. Prominent among the more dangerous vehicles at night are larger vehicles, which usually move faster on a highway at night. In addition, the risk of driving around larger vehicles rises significantly when the driver's attention becomes distracted, even for a short period of time. To alert the driver and improve his or her safety, in this paper we propose two components for any modern vision-based Advanced Driver Assistance System (ADAS). The two components work separately for the single purpose of alerting the driver in dangerous situations. The first component, driver drowsiness detection, ascertains that the driver is in a sufficiently wakeful state to receive and process warnings: it analyzes the driver's eye movements in infrared images using Multi-Scale Retinex (MSR) enhancement plus a simple heuristic, and issues alerts when the driver's eyes show distraction or remain closed for longer than usual. Experimental results show that this component detects closed eyes with an average accuracy of 94.26%, which is comparable to previous results using more sophisticated methods. The second component alerts the driver when the driver's vehicle is moving around larger vehicles at dusk or at night. This large-vehicle detection component takes images from a regular video driving recorder as input; a bi-level system of classifiers, which includes a novel MSR-enhanced KAZE-based Bag-of-Features classifier, is proposed to avoid false negatives. In both components, we propose an improved version of the MSR algorithm to augment the contrast of the input. Several experiments were performed to test the effects of the MSR and of each classifier, and the results are presented in the experimental-results section of this paper.
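Multi-Scale Retinex, used in both components above, has a standard textbook form: the log of the image minus the log of Gaussian-blurred copies of it, averaged over several scales. Below is a minimal NumPy sketch of that baseline, not the paper's improved MSR; the default scales are conventional choices, not values from the paper.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding (NumPy only)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()                                # normalize the kernel
    pad = np.pad(img, r, mode="reflect")
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, rows)

def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """MSR: average over scales of log(image) - log(blurred image)."""
    img = img.astype(float) + eps               # keep the logarithm well-defined
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s))
    return out / len(sigmas)
```

Subtracting the blurred log-image removes slowly varying illumination, which is why MSR boosts contrast in dark infrared and dusk footage before classification.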

5.
Bioengineering (Basel) ; 11(5)2024 May 19.
Article in English | MEDLINE | ID: mdl-38790377

ABSTRACT

A deep convolutional network that expands on the architecture of the Faster R-CNN network is proposed. The expansion adapts unsupervised classification with multiple backbone networks to improve the Region Proposal Network, in order to improve accuracy and sensitivity in detecting minute changes in images. The efficiency of the proposed architecture is investigated by applying it to the detection of cancerous lung tumors in CT (computed tomography) images. The investigation used a total of 888 images from the LUNA16 dataset, which contains CT images of both cancerous and non-cancerous tumors of various sizes; the images were split 80%/20% for training and testing, respectively. In the experiments, the proposed deep-learning architecture achieved an accuracy of 95.32%, a precision of 94.63%, a specificity of 94.84%, and a high sensitivity of 96.23% on the LUNA16 images, an improvement over the reported accuracy of 93.6% from a previous study using the same dataset.
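The four figures reported above are standard confusion-matrix metrics. The helper below computes them from raw detection counts; the counts used in the usage example are illustrative, not the study's data.

```python
def detection_metrics(tp, fp, tn, fn):
    """Accuracy, precision, specificity and sensitivity from confusion counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),        # positive predictive value
        "specificity": tn / (tn + fp),        # true-negative rate
        "sensitivity": tp / (tp + fn),        # recall / true-positive rate
    }
```

For example, `detection_metrics(tp=96, fp=5, tn=95, fn=4)` yields an accuracy of 0.955, a sensitivity of 0.96 and a specificity of 0.95, mirroring how the study's percentages would be derived from its test-set counts.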
