1.
Med Image Anal; 48: 187-202, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29936399

ABSTRACT

This work aims to create 3D freehand ultrasound reconstructions from 2D probes with image-based tracking, thereby avoiding the need for expensive or cumbersome external tracking hardware. Existing model-based approaches such as speckle decorrelation only partially capture the underlying complexity of ultrasound image formation, producing reconstruction accuracies incompatible with current clinical requirements. Here, we introduce an alternative approach that relies on statistical analysis rather than physical models and uses a convolutional neural network (CNN) to directly estimate the motion between successive ultrasound frames in an end-to-end fashion. We demonstrate how this technique relates to prior approaches and show how its predictive capabilities can be further improved by incorporating additional information such as data from inertial measurement units (IMU). The method is thoroughly evaluated and analyzed on a dataset of 800 in vivo ultrasound sweeps, yielding unprecedentedly accurate reconstructions with a median normalized drift of 5.2%. Even on long sweeps exceeding 20 cm with complex trajectories, this allows length measurements with median errors of 3.4%, paving the way toward translation into clinical routine.
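A minimal sketch of this kind of inter-frame motion regressor, written in PyTorch (not the authors' architecture; the layer sizes, the 6-DOF output parameterization, and the point where IMU data is fused are illustrative assumptions): two consecutive B-mode frames are stacked as a two-channel input and the network regresses the relative transform, optionally concatenating IMU readings before the final layers.

```python
# Sketch only: CNN regression of the relative motion between two
# consecutive ultrasound frames, with optional IMU fusion.
import torch
import torch.nn as nn

class InterFrameMotionNet(nn.Module):
    def __init__(self, imu_dim: int = 0):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Linear(128 + imu_dim, 64), nn.ReLU(),
            nn.Linear(64, 6),  # tx, ty, tz, rx, ry, rz (assumed parameterization)
        )

    def forward(self, frame_pair, imu=None):
        x = self.features(frame_pair).flatten(1)
        if imu is not None:
            x = torch.cat([x, imu], dim=1)  # fuse IMU readings with image features
        return self.regressor(x)

# Usage: predict the relative motion for one pair of 256x256 frames.
net = InterFrameMotionNet(imu_dim=6)
pair = torch.randn(1, 2, 256, 256)   # two stacked consecutive frames
imu = torch.randn(1, 6)              # hypothetical accelerometer + gyroscope sample
motion = net(pair, imu)              # shape (1, 6)
```

Chaining such per-pair estimates along a sweep yields the probe trajectory, which is where the reported drift is measured.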


Subjects
Deep Learning; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Ultrasonography/methods; Algorithms; Humans
2.
Article in English | MEDLINE | ID: mdl-24505646

ABSTRACT

Automatic and robust registration of pre-operative magnetic resonance imaging (MRI) and intra-operative ultrasound (US) is essential to neurosurgery. We reformulate and extend an approach based on a Linear Correlation of Linear Combination (LC2) similarity metric, yielding a novel algorithm that allows fully automatic US-MRI registration in a matter of seconds. The metric is invariant to the unknown and locally varying relationship between US image intensities and both the MRI intensity and its gradient. The overall method recovers both a global rigid alignment and the parameters of a free-form deformation (FFD) model. The algorithm is evaluated on 14 clinical neurosurgical cases with tumors, achieving an average landmark-based error of 2.52 mm for the rigid transformation. In addition, we systematically study the accuracy, precision, and capture range of the algorithm, as well as its sensitivity to different parameter choices.
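A minimal NumPy sketch of a patchwise LC2-style similarity, assuming the commonly described formulation: within each patch, the ultrasound intensities are fitted by a least-squares linear combination of MRI intensity, MRI gradient magnitude, and a constant; the local similarity is the fraction of ultrasound variance explained, and patches are averaged weighted by that variance. The patch size and array names are illustrative, and the rigid/FFD optimization loop around the metric is omitted.

```python
# Sketch only: patchwise LC2-style similarity between an ultrasound slice
# and a resampled MRI slice plus its gradient-magnitude image.
import numpy as np

def lc2_similarity(us, mri, mri_grad, patch=9):
    half = patch // 2
    sims, weights = [], []
    for i in range(half, us.shape[0] - half):
        for j in range(half, us.shape[1] - half):
            u = us[i - half:i + half + 1, j - half:j + half + 1].ravel()
            var_u = u.var()
            if var_u < 1e-6:
                continue  # uninformative patch, skip
            # Design matrix: MRI intensity, MRI gradient magnitude, constant offset
            A = np.stack([
                mri[i - half:i + half + 1, j - half:j + half + 1].ravel(),
                mri_grad[i - half:i + half + 1, j - half:j + half + 1].ravel(),
                np.ones(patch * patch),
            ], axis=1)
            coef, *_ = np.linalg.lstsq(A, u, rcond=None)
            residual = u - A @ coef
            sims.append(1.0 - residual.var() / var_u)   # explained variance fraction
            weights.append(var_u)                       # weight by local US variance
    return float(np.average(sims, weights=weights)) if sims else 0.0
```

In a registration pipeline this value would be maximized over the transformation parameters; the per-patch linear fit is what makes the measure invariant to the locally varying US/MRI intensity relationship.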


Subjects
Brain Neoplasms/diagnosis; Brain Neoplasms/surgery; Echoencephalography/methods; Magnetic Resonance Imaging/methods; Neurosurgical Procedures/methods; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Algorithms; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique
3.
Article in English | MEDLINE | ID: mdl-23286098

ABSTRACT

We present image-based methods for tracking teeth in a video image with respect to a CT scan of the jaw, in order to enable a novel lightweight augmented reality (AR) system for orthodontics. Its purpose is guided bracket placement in orthodontic correction. In this context, our goal is to determine the position of the patient's maxilla and mandible in a video image based solely on a CT scan. This is suitable for image guidance through an overlay of the video image with the planned bracket positions in a monocular AR system. Our tracking algorithm addresses the conflicting requirements of robustness, accuracy, and performance with two problem-specific formulations. First, we exploit a distance-based modulation of two iso-surfaces from the CT image to approximate the appearance of the gum line. Second, back-projection of previous video frames onto an iso-surface is used to account for recently placed brackets. In combination, this novel algorithm allowed us to track several sequences from three patient videos of real procedures despite difficult lighting conditions. Paired with a systematic evaluation, we were able to show the practical feasibility of such a system.


Assuntos
Orthodontic Brackets; Pattern Recognition, Automated/methods; Prosthesis Implantation/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Dental/methods; Tomography, X-Ray Computed/methods; User-Computer Interface; Humans; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
4.
Med Image Comput Comput Assist Interv; 14(Pt 1): 516-23, 2011.
Article in English | MEDLINE | ID: mdl-22003657

ABSTRACT

Cone-beam X-ray systems strictly depend on the imaged object remaining stationary over the entire acquisition process. Even slight patient motion can degrade the quality of the final 3D reconstruction. It would therefore be desirable to detect and model patient motion directly from the projection images, so that it can be taken into account during reconstruction. However, because the source-detector assembly rotates around the patient, it is difficult to separate this expected motion from additional patient motion. We present a novel similarity metric for successive X-ray projections that is able to distinguish the expected rotational motion from additional sources of motion. We quantitatively evaluate it on simulated dental cone-beam X-rays and qualitatively demonstrate its performance on real patient data.
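For illustration only, a naive baseline that compares successive projections with normalized cross-correlation; the paper's metric additionally separates the expected gantry rotation from true patient motion, which this sketch does not attempt, so it merely shows the successive-pair comparison setup.

```python
# Sketch only: per-pair similarity over a cone-beam acquisition; this is a
# plain NCC baseline, not the rotation-aware metric proposed in the paper.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two projection images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def pairwise_similarity(projections):
    """Similarity of each successive projection pair over the acquisition.

    Dips in this curve indicate frames where consecutive projections differ
    strongly; a rotation-aware metric would attribute such dips to patient
    motion rather than the expected source-detector rotation.
    """
    return [ncc(projections[k], projections[k + 1])
            for k in range(len(projections) - 1)]
```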


Subjects
Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Algorithms; Computer Simulation; Humans; Least-Squares Analysis; Models, Statistical; Models, Theoretical; Motion; Movement; Phantoms, Imaging; Software; X-Rays
5.
Med Image Comput Comput Assist Interv; 13(Pt 3): 237-44, 2010.
Article in English | MEDLINE | ID: mdl-20879405

ABSTRACT

In the last decade, the use of interventional X-ray imaging, especially for fluoroscopy-guided procedures, has increased dramatically. As a result, the radiation exposure of the medical staff has also increased. Although radiation protection measures such as lead vests are used, some regions remain unprotected, most notably the hands and the head. Over time, these regions can receive significant amounts of radiation. In this paper, we propose a system for approximating the radiation exposure of a physician during surgery. The goal is to raise physicians' awareness of their radiation exposure and to give them a tool to check it quickly. To this end, we use a real-time 3D reconstruction system that builds a 3D representation of all objects in the room. The reconstructed 3D representation of the physician is then tracked over time, and at each time step in which the X-ray source is active, the radiation received by each body part is accumulated. The radiation itself is simulated with a physics-based simulation package. The physician can review the accumulated exposure after the intervention and use the information collected over a longer period to minimize exposure by adjusting their positioning relative to the X-ray source. The system can also serve as an awareness tool for less experienced physicians.
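A minimal sketch of the accumulation step described above, assuming the tracking system provides per-frame positions of the physician's body parts and that a dose_at(position) function wraps the physics-based simulation (both names are hypothetical).

```python
# Sketch only: accumulate per-body-part dose over the frames in which the
# X-ray source is active.
from collections import defaultdict

def accumulate_exposure(frames, dose_at):
    """frames: iterable of (xray_on, {body_part: position}) tuples."""
    exposure = defaultdict(float)
    for xray_on, body_parts in frames:
        if not xray_on:
            continue  # dose is only accumulated while the source is active
        for part, position in body_parts.items():
            exposure[part] += dose_at(position)
    return dict(exposure)

# Usage with a toy inverse-square falloff standing in for the physics simulation:
frames = [
    (True,  {"head": (0.5, 1.6, 1.0), "left_hand": (0.3, 1.0, 0.4)}),
    (False, {"head": (0.5, 1.6, 1.1), "left_hand": (0.3, 1.0, 0.5)}),
]
toy_dose = lambda p: 1.0 / (sum(c * c for c in p) + 1e-6)
print(accumulate_exposure(frames, toy_dose))
```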


Subjects
Body Burden; Environment; Occupational Exposure/analysis; Radiation Dosage; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Interventional; Whole-Body Counting/methods; Humans; Reproducibility of Results; Sensitivity and Specificity
6.
Med Image Comput Comput Assist Interv; 11(Pt 2): 526-34, 2008.
Article in English | MEDLINE | ID: mdl-18982645

ABSTRACT

With the increased presence of automated devices such as C-arms and medical robots, and the introduction of a multitude of surgical tools, navigation systems, and patient monitoring devices, collision avoidance has become an issue of practical importance in interventional environments. In this paper, we present a real-time 3D reconstruction system for interventional environments that aims to predict collisions by building a 3D representation of all objects in the room. The 3D reconstruction is used to determine whether other objects are inside the working volume of a device and to alert the medical staff before a collision occurs. In the case of C-arms, this allows faster rotational and angular movement, which could, for instance, be used in 3D angiography to obtain a better reconstruction of contrasted vessels. The system also prevents staff from unknowingly entering the working volume of a device, which is relevant in complex environments with many devices. The recovered 3D representation further opens the path to new applications that exploit this data, such as workflow analysis, 3D video generation, or interventional room planning. We performed several experiments with a real C-arm that demonstrate the validity of the approach. The system is currently being transferred to an interventional room in our university hospital.
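A minimal sketch of the collision check described above, assuming the reconstruction is available as a voxel occupancy grid and the device's working volume as a voxel mask (these representations are assumptions; the abstract does not specify them).

```python
# Sketch only: warn if any reconstructed object other than the device itself
# occupies the volume the device is about to sweep.
import numpy as np

def collision_warning(occupancy, working_volume_mask, device_mask):
    """Return True if any non-device object occupies the working volume.

    occupancy:           boolean voxel grid from the 3D reconstruction
    working_volume_mask: boolean grid marking the volume the device will sweep
    device_mask:         boolean grid marking voxels of the device itself
    """
    others = occupancy & ~device_mask            # everything except the device
    return bool(np.any(others & working_volume_mask))

# Usage on a toy 3x3x3 voxel room:
room = np.zeros((3, 3, 3), dtype=bool)
room[1, 1, 1] = True                             # e.g. a table edge
sweep = np.zeros_like(room); sweep[1, :, :] = True
device = np.zeros_like(room)
print(collision_warning(room, sweep, device))    # True -> alert the staff
```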


Subjects
Environment; Imaging, Three-Dimensional/methods; Models, Theoretical; Operating Rooms; Pattern Recognition, Automated/methods; Radiography, Interventional/methods; Computer Simulation; Computer Systems; Reproducibility of Results; Sensitivity and Specificity