Results 1 - 16 of 16
1.
J Robot Surg ; 17(6): 2735-2742, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37670151

ABSTRACT

The purpose of this study was to compare robot-assisted and manual subretinal injections in terms of successful subretinal blistering, reflux incidence and damage to the retinal pigment epithelium (RPE). Subretinal injection was simulated on 84 ex-vivo porcine eyes, with half of the interventions carried out manually and the other half by controlling a custom-built robot in a master-slave fashion. After pars plana vitrectomy (PPV), the retinal target spot was determined under a LUMERA 700 microscope with microscope-integrated intraoperative optical coherence tomography (iOCT) RESCAN 700 (Carl Zeiss Meditec, Germany). For injection, a 1 ml syringe filled with perfluorocarbon liquid (PFCL) was tipped with a 40-gauge metal cannula (Incyto Co., Ltd., South Korea). In one set of trials, the needle was attached to the robot's end joint and maneuvered robotically to the retinal target site; in another set, the retina was approached manually. Intraretinal cannula-tip depth was monitored continuously via iOCT. At sufficient depth, PFCL was injected into the subretinal space. iOCT images and fundus video recordings were used to evaluate the surgical outcome. Robotic injections more often produced successful subretinal blistering (73.7% vs. 61.8%, p > 0.05) and showed a significantly lower incidence of reflux (23.7% vs. 58.8%, p < 0.01). Although larger tip depths were achieved in successful manual trials, RPE penetration occurred in 10.5% of robotic cases compared with 26.5% of manual cases (p > 0.05). In conclusion, significantly fewer reflux incidences occurred with the use of the robot; furthermore, RPE penetration occurred less frequently and successful blistering more frequently in robotic surgery.
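As a quick plausibility check of the reported reflux difference, the sketch below re-runs a chi-squared test on counts back-calculated from the stated percentages, assuming 38 robotic and 34 manual evaluable trials; these reconstructed counts are an assumption, not the study's raw data.

```python
# Hypothetical re-check of the reported reflux comparison (23.7% vs. 58.8%).
# Counts are back-calculated from the percentages and assumed group sizes
# (38 robotic, 34 manual evaluable trials); they are NOT the paper's raw data.
from scipy.stats import chi2_contingency

reflux = [[9, 38 - 9],      # robotic: reflux, no reflux
          [20, 34 - 20]]    # manual:  reflux, no reflux
chi2, p, dof, _ = chi2_contingency(reflux)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # with these assumed counts, p < 0.01
```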


Subjects
Robotic Surgical Procedures, Robotics, Humans, Animals, Swine, Optical Coherence Tomography/methods, Robotic Surgical Procedures/methods, Retina, Vitrectomy/methods
2.
Micromachines (Basel) ; 14(6)2023 Jun 16.
Article in English | MEDLINE | ID: mdl-37374846

ABSTRACT

This study aimed to compare the efficacy of robot-assisted and manual cannula insertion in simulated big-bubble deep anterior lamellar keratoplasty (DALK). Novice surgeons with no prior experience in performing DALK were trained to perform the procedure using manual or robot-assisted techniques. The results showed that both methods could generate an airtight tunnel in the porcine cornea and, in most cases, a deep stromal demarcation plane indicating that sufficient depth for big-bubble generation had been reached. However, the combination of intraoperative OCT and robotic assistance yielded a significant increase in the depth of detachment achieved in non-perforated cases, reaching a mean of 89% of corneal depth as opposed to 85% in manual trials. This research suggests that robot-assisted DALK may offer certain advantages over manual techniques, particularly when used in conjunction with intraoperative OCT.

3.
IEEE J Biomed Health Inform ; 24(12): 3338-3350, 2020 12.
Article in English | MEDLINE | ID: mdl-32750971

ABSTRACT

Machine learning and especially deep learning techniques are dominating medical image and data analysis. This article reviews machine learning approaches proposed for diagnosing ophthalmic diseases during the last four years. Three diseases are addressed in this survey, namely diabetic retinopathy, age-related macular degeneration, and glaucoma. The review covers over 60 publications and 25 public datasets and challenges related to the detection, grading, and lesion segmentation of the three considered diseases. Each section provides a summary of the public datasets and challenges related to each pathology and the current methods that have been applied to the problem. Furthermore, recent machine learning approaches for retinal vessel segmentation and methods for retinal layer and fluid segmentation are reviewed. Two main imaging modalities are considered in this survey, namely color fundus imaging and optical coherence tomography. Machine learning approaches that use eye measurements and visual field data for glaucoma detection are also included. Finally, the authors provide their views on the prospects, expectations, and limitations of these techniques in clinical practice.


Subjects
Ophthalmological Diagnostic Techniques, Computer-Assisted Image Interpretation, Machine Learning, Deep Learning, Glaucoma/diagnostic imaging, Humans, Retinal Diseases/diagnostic imaging, Optical Coherence Tomography
4.
Int J Comput Assist Radiol Surg ; 15(5): 781-789, 2020 May.
Article in English | MEDLINE | ID: mdl-32242299

ABSTRACT

PURPOSE: Intraoperative optical coherence tomography (iOCT) was recently introduced as a new modality for ophthalmic surgeries. It provides real-time cross-sectional information at very high resolution. However, properly positioning the scan location during surgery is cumbersome and time-consuming, as the surgeon needs both hands for the procedure. The goal of the present study is to present a method to automatically position an iOCT scan on an anatomy of interest in the context of anterior segment surgeries. METHODS: First, a voice recognition algorithm using a context-free grammar obtains the desired pose from the surgeon. Then, the limbus circle is detected in the microscope image and the iOCT scan is placed accordingly in the X-Y plane. Next, an iOCT sweep in the Z direction is conducted and the scan is placed to centre the topmost structure. Finally, the position is fine-tuned using semantic segmentation and a rule-based system. RESULTS: The logic to position the scan location on various anatomies was evaluated on ex vivo porcine eyes (10 eyes for the corneal apex and 7 eyes for cornea, sclera and iris). The mean Euclidean distance (± standard deviation) was 76.7 (± 59.2) pixels, corresponding to 0.298 (± 0.229) mm. The mean execution time (± standard deviation) across the four anatomies was 15 (± 1.2) seconds. The scans have a size of 1024 by 1024 pixels. The method was implemented on a Carl Zeiss OPMI LUMERA 700 with RESCAN 700. CONCLUSION: The present study introduces a method to fully automatically position an iOCT scanner. Providing the possibility of changing the OCT scan location via voice commands removes the burden of manual device manipulation from surgeons. This in turn allows them to keep their focus on the surgical task at hand and should therefore increase the acceptance of iOCT in the operating room.
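To make the X-Y placement step concrete, here is a hedged sketch of one way limbus-based positioning could look: a Hough-transform circle detector on the microscope view whose result is converted into a scan centre. The anatomy-specific offsets and all parameter values are invented placeholders, not the paper's implementation.

```python
# Illustrative sketch (not the authors' implementation): detect the limbus circle in
# the microscope image with a Hough transform and derive an X-Y scan placement from it.
import cv2
import numpy as np

def propose_scan_center(microscope_bgr, anatomy="cornea"):
    gray = cv2.cvtColor(microscope_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=500,
                               param1=120, param2=60, minRadius=150, maxRadius=400)
    if circles is None:
        return None                       # fall back to manual placement
    cx, cy, r = circles[0, 0]             # strongest limbus candidate (x, y, radius)
    offsets = {"cornea": (0, 0),          # scan through the apex region
               "sclera": (1.2 * r, 0),    # placeholder offset beyond the limbus
               "iris":   (0.5 * r, 0)}    # placeholder offset toward the iris
    dx, dy = offsets.get(anatomy, (0, 0))
    return (cx + dx, cy + dy)             # X-Y centre for the iOCT B-scan
```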


Subjects
Intraoperative Monitoring/methods, Ophthalmologic Surgical Procedures/methods, Optical Coherence Tomography/methods, Algorithms, Animals, Cross-Sectional Studies, Eye/diagnostic imaging, Microscopy/instrumentation, Ophthalmologic Surgical Procedures/instrumentation, Swine
5.
Med Image Comput Comput Assist Interv ; 12265: 267-276, 2020 Oct.
Article in English | MEDLINE | ID: mdl-34085059

ABSTRACT

Intraoperative Optical Coherence Tomography (iOCT) has advanced in recent years to provide real-time, high-resolution volumetric imaging for ophthalmic surgery, enabling 3D feedback during precise surgical maneuvers. Intraoperative 4D OCT generally exhibits a lower signal-to-noise ratio than diagnostic OCT, and visualization is complicated by instrument shadows occluding retinal tissue. The additional constraint of processing data rates upwards of 6 GB/s creates unique challenges for advanced visualization of 4D OCT. Prior approaches to real-time 4D iOCT rendering have been limited to applying simple denoising filters and colorization to improve visualization. We present a novel real-time rendering pipeline that provides enhanced intraoperative visualization and is specifically designed for the high data rates of 4D iOCT. We decompose the volume into a static part consisting of the retinal tissue and a dynamic part including the instrument. Aligning the static parts over time allows temporal compounding of these structures for improved image quality. We employ a translational motion model and use axial projection images to reduce the dimensionality of the alignment. A model-based instrument segmentation on the projections discriminates static from dynamic parts and is used to exclude instruments from the compounding. Our real-time rendering method combines the compounded static information with the latest iOCT data to provide a visualization that compensates for instrument shadows and improves instrument visibility. We evaluate the individual parts of our pipeline on pre-recorded OCT volumes and demonstrate the effectiveness of our method on a recorded volume sequence with a moving retinal forceps.
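The compounding step can be pictured with the minimal sketch below, which aligns each incoming volume to the running static average using phase cross-correlation of axial projections under a pure-translation assumption, then blends tissue voxels while keeping instrument voxels from the newest volume. It is a CPU-side illustration of the idea, not the authors' GPU pipeline, and the instrument mask is assumed to be given.

```python
# Minimal sketch of the compounding idea (assumptions: pure translation between volumes,
# a precomputed instrument mask; not the authors' real-time GPU pipeline).
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def compound(static_avg, new_volume, instrument_mask, alpha=0.8):
    # Axial (en-face) projections reduce the 3D alignment to a 2D problem.
    ref_proj = static_avg.mean(axis=0)
    new_proj = new_volume.mean(axis=0)
    (dy, dx), _, _ = phase_cross_correlation(ref_proj, new_proj)
    aligned = nd_shift(new_volume, shift=(0, dy, dx), order=1)
    mask = nd_shift(instrument_mask.astype(float), shift=(0, dy, dx), order=0) > 0.5
    # Temporally compound only static (tissue) voxels; keep instrument voxels from the latest volume.
    compounded = alpha * static_avg + (1 - alpha) * aligned
    return np.where(mask, aligned, compounded)
```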

6.
Int J Comput Assist Radiol Surg ; 13(6): 787-796, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29603065

ABSTRACT

PURPOSE: Intraoperative optical coherence tomography (iOCT) is an increasingly available imaging technique for ophthalmic microsurgery that provides high-resolution cross-sectional information of the surgical scene. We propose to build on its desirable qualities and present a method for tracking the orientation and location of a surgical needle. Thereby, we enable the analysis of instrument-tissue interaction directly in OCT space, without the complex multimodal calibration that would be required with traditional instrument-tracking methods. METHOD: The intersection of the needle with the iOCT scan is detected by a dedicated multistep ellipse fitting that takes advantage of the directionality of the modality. The geometric modeling allows us to feed the ellipse parameters into a latency-aware estimator to infer the 5DOF pose during needle movement. RESULTS: Experiments on phantom data and ex vivo porcine eyes indicate that the algorithm retains angular precision, especially during lateral needle movement, and provides a more robust and consistent estimation than baseline methods. CONCLUSION: Using solely cross-sectional iOCT information, we are able to successfully and robustly estimate the 5DOF pose of the instrument in less than 5.4 ms on a CPU.
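The underlying geometric intuition can be sketched simply: a cylindrical needle intersected by the B-scan plane appears as an ellipse whose minor axis equals the needle diameter, so the axis ratio encodes the angle between the needle axis and the scan plane. The snippet below is a simplified stand-in for the paper's multistep fitting and latency-aware estimator, using a plain OpenCV ellipse fit on a pre-segmented needle mask.

```python
# Illustrative geometry only (an assumed simplification of the paper's multistep fitting):
# the arcsine of the ellipse axis ratio gives the angle between needle axis and scan plane.
import cv2
import numpy as np

def needle_pose_from_bscan(binary_needle_mask):
    contours, _ = cv2.findContours(binary_needle_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    cnt = max(contours, key=cv2.contourArea)
    if len(cnt) < 5:                         # fitEllipse needs at least 5 points
        return None
    (cx, cy), (d1, d2), in_plane_angle = cv2.fitEllipse(cnt)
    minor, major = sorted((d1, d2))
    angle_to_scan_plane = np.degrees(np.arcsin(np.clip(minor / major, 0.0, 1.0)))
    return {"center": (cx, cy),
            "in_plane_angle_deg": in_plane_angle,
            "angle_to_scan_plane_deg": angle_to_scan_plane}
```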


Subjects
Algorithms, Eye Diseases/surgery, Microsurgery/instrumentation, Needles, Ophthalmologic Surgical Procedures/instrumentation, Computer-Assisted Surgery/methods, Optical Coherence Tomography/methods, Animals, Cross-Sectional Studies, Disease Models, Animal, Equipment Design, Eye Diseases/diagnostic imaging, Swine
7.
IEEE Trans Vis Comput Graph ; 23(11): 2366-2371, 2017 11.
Article in English | MEDLINE | ID: mdl-28809687

ABSTRACT

Sonic interaction as a technique for conveying information has advantages over conventional visual augmented reality methods, especially when augmenting the visual field with extra information causes distraction. Sonification of knowledge extracted by applying computational methods to sensory data is a well-established concept. However, some aspects of sonic interaction design, such as aesthetics, the cognitive effort required to perceive information, and the avoidance of alarm fatigue, are not well studied in the literature. In this work, we present a sonification scheme based on physical-modeling sound synthesis that targets focus-demanding tasks requiring extreme precision. The proposed mapping techniques are designed to require minimal training for users to adapt to and minimal mental effort to interpret the conveyed information. Two experiments were conducted to assess the feasibility of the proposed method and compare it against visual augmented reality in high-precision tasks. The observed quantitative results suggest that utilizing sound patches generated by physical modeling achieves the desired goal of improving the user experience and general task performance with minimal training.
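For intuition, the toy sketch below maps a single scalar (distance to a target) onto the pitch and damping of one damped mode, so smaller errors ring higher and longer. It is not the study's sound engine, and all parameter ranges are invented.

```python
# Toy sonification sketch (not the study's physical-modeling engine): one damped mode
# whose pitch and damping are driven by the distance to the target.
import numpy as np

def distance_to_tone(distance_mm, fs=44100, dur=0.25):
    d = np.clip(distance_mm, 0.0, 5.0) / 5.0          # normalise to [0, 1]
    freq = 880.0 - 440.0 * d                           # closer -> higher pitch
    decay = 4.0 + 36.0 * d                             # closer -> slower decay
    t = np.arange(int(fs * dur)) / fs
    tone = np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)
    return (tone / np.max(np.abs(tone))).astype(np.float32)

# e.g. sounddevice.play(distance_to_tone(1.2), 44100) to audition one event
```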


Subjects
Sensory Feedback/physiology, Neurological Models, Psychomotor Performance/physiology, Virtual Reality, Computer Graphics, Humans, Software
8.
Comput Math Methods Med ; 2016: 1067509, 2016.
Article in English | MEDLINE | ID: mdl-27867418

ABSTRACT

Detection of the instrument tip in retinal microsurgery videos is extremely challenging due to rapid motion, illumination changes, the cluttered background, and the deformable shape of the instrument. For the same reasons, frequent tracking failures add the overhead of reinitializing the tracker. In this work, a new method is proposed to localize not only the instrument center point but also its tips and orientation, without the need for manual reinitialization. Our approach models the instrument as a Conditional Random Field (CRF) in which each part of the instrument is detected separately. The relations between these parts are modeled to capture the translation, rotation, and scale changes of the instrument. Tracking is done via separate detection of the instrument parts and evaluation of confidence via the modeled dependence functions. In case of low-confidence feedback, an automatic recovery process is performed. The algorithm is evaluated on in vivo ophthalmic surgery datasets and its performance is comparable to state-of-the-art methods, with the advantage that no manual reinitialization is needed.
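The part-based scoring can be illustrated schematically: candidate detections for the center and the two tips are combined, and each configuration is scored by its detection confidences plus Gaussian pairwise terms that favour part distances close to a template. This is a simplified stand-in for the paper's CRF, with illustrative values only.

```python
# Schematic part-based scoring (a simplified stand-in for the paper's CRF):
# unary terms are detection confidences, pairwise terms penalise deviation from
# an expected part-distance template.
import itertools
import numpy as np

def best_configuration(candidates, expected_dist, sigma=15.0):
    """candidates: {'center': [(x, y, conf), ...], 'left_tip': [...], 'right_tip': [...]}
    expected_dist: [expected center-left distance, expected center-right distance]"""
    best, best_score = None, -np.inf
    for c, l, r in itertools.product(candidates["center"],
                                     candidates["left_tip"],
                                     candidates["right_tip"]):
        unary = np.log(c[2]) + np.log(l[2]) + np.log(r[2])
        pair = 0.0
        for (a, b), d0 in zip([(c, l), (c, r)], expected_dist):
            d = np.hypot(a[0] - b[0], a[1] - b[1])
            pair -= (d - d0) ** 2 / (2 * sigma ** 2)   # Gaussian pairwise potential
        if unary + pair > best_score:
            best, best_score = (c, l, r), unary + pair
    return best, best_score
```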


Subjects
Microsurgery/methods, Retina/surgery, Computer-Assisted Surgery/methods, Algorithms, Artificial Intelligence, Factual Databases, Equipment Design, Humans, Laparoscopy/methods, Statistical Models, Ophthalmologic Surgical Procedures/instrumentation, Ophthalmologic Surgical Procedures/methods, Automated Pattern Recognition, Reproducibility of Results, Computer-Assisted Signal Processing, Software, Surgical Instruments
9.
Med Image Anal ; 34: 82-100, 2016 12.
Article in English | MEDLINE | ID: mdl-27237604

ABSTRACT

Real-time visual tracking of a surgical instrument holds great potential for improving the outcome of retinal microsurgery by enabling new possibilities for computer-aided techniques such as augmented reality and automatic assessment of instrument manipulation. Due to high magnification and illumination variations, retinal microsurgery images usually entail a high level of noise and appearance changes. As a result, real-time tracking of the surgical instrument remains challenging in in-vivo sequences. To overcome these problems, we present a method that builds on random forests and addresses the task by modelling the instrument as an articulated object. A multi-template tracker reduces the region of interest to a rectangular area around the instrument tip by relating the movement of the instrument to the induced changes in image intensities. Within this bounding box, a gradient-based pose estimation infers the location of the instrument parts from image features. In this way, the algorithm provides not only the location of the instrument but also the positions of the tool tips in real time. Various experiments on a novel dataset comprising 18 in-vivo retinal microsurgery sequences demonstrate the robustness and generalizability of our method. The comparison on two publicly available datasets indicates that the algorithm can outperform the current state of the art.
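The "intensity changes → motion" idea behind the multi-template tracker can be sketched as a regression problem: synthetic shifts of an initial template generate training pairs of intensity-difference vectors and the displacements that produced them, and a random forest regressor learns the mapping. This is a minimal, assumption-laden illustration, not the authors' exact feature set or tracker.

```python
# Minimal sketch of regressing displacement from intensity differences
# (an assumed simplification of the multi-template tracker's learning step).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_displacement_regressor(template, n_samples=500, max_shift=10, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_samples):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(np.roll(template, dy, axis=0), dx, axis=1)
        X.append((shifted - template).ravel())   # intensity-difference feature
        y.append([dy, dx])                       # displacement that produced it
    reg = RandomForestRegressor(n_estimators=50).fit(np.array(X), np.array(y))
    return reg   # at runtime: reg.predict((current_patch - template).ravel()[None])
```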


Subjects
Algorithms, Microsurgery/methods, Retina/surgery, Computer-Assisted Surgery/methods, Surgical Instruments, Humans
10.
Med Image Anal ; 18(1): 103-17, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24184434

ABSTRACT

Intravascular Ultrasound (IVUS) is a predominant imaging modality in interventional cardiology. It provides real-time cross-sectional images of arteries and helps clinicians infer the composition of atherosclerotic plaques. These plaques are heterogeneous in nature, comprising fibrous tissue, lipid deposits and calcifications. Each of these tissues backscatters ultrasonic pulses and is associated with a characteristic intensity in the B-mode IVUS image. However, clinicians are challenged when co-located heterogeneous tissues backscatter mixed signals that appear as non-unique intensity patterns in the B-mode IVUS image. Tissue characterization algorithms have been developed to assist clinicians in identifying such heterogeneous tissues and assessing plaque vulnerability. In this paper, we propose a novel technique, coined Stochastic Driven Histology (SDH), that is able to provide information about co-located heterogeneous tissues. It employs learning of tissue-specific ultrasonic backscattering statistical physics and a signal confidence primal from labeled data for predicting the heterogeneous tissue composition of plaques. We employ a random forest for the purpose of learning such a primal using sparsely labeled and noisy samples. In clinical deployment, the posterior prediction of the different lesions constituting the plaque is estimated. Folded cross-validation experiments have been performed with 53 plaques, indicating high concurrence with traditional tissue histology. On the wider horizon, this framework enables learning of tissue-energy interaction statistical physics and can be leveraged for promising clinical applications requiring tissue characterization beyond the application demonstrated in this paper.
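The general recipe, learning per-patch backscattering statistics from labeled data with a random forest, can be sketched as below. The Nakagami envelope parameterisation is used purely for illustration and is an assumption; the paper's exact statistical-physics features and confidence primal are not reproduced here.

```python
# Hedged sketch: per-patch envelope statistics fed to a random forest classifier.
# The Nakagami method-of-moments features are an illustrative choice, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def nakagami_features(envelope_patch):
    e2 = envelope_patch.astype(float) ** 2
    omega = e2.mean()                          # scale parameter
    m = omega ** 2 / max(e2.var(), 1e-12)      # shape parameter (method of moments)
    return [m, omega]

def train_tissue_classifier(patches, labels):
    X = np.array([nakagami_features(p) for p in patches])
    return RandomForestClassifier(n_estimators=200, class_weight="balanced").fit(X, labels)
```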


Subjects
Artificial Intelligence, Coronary Artery Disease/diagnostic imaging, Echocardiography/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Interventional Ultrasonography/methods, Algorithms, Statistical Data Interpretation, Humans, Reproducibility of Results, Radiation Scattering, Sensitivity and Specificity
11.
Comput Med Imaging Graph ; 38(2): 104-12, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24035737

ABSTRACT

Coronary artery disease leads to failure of coronary circulation secondary to the accumulation of atherosclerotic plaques. As an adjunct to primary imaging of such vascular plaques using coronary angiography or, alternatively, magnetic resonance imaging, intravascular ultrasound (IVUS) is used predominantly for diagnosis and reporting of their vulnerability. In addition to plaque burden estimation, necrosis detection is an important aspect of IVUS reporting. Since necrotic regions generally appear hypoechoic, and their speckle appearance resembles true shadows or severe signal-dropout regions, they contribute to variability in diagnosis. This dilemma in the clinical assessment of necrosis imaged with IVUS is addressed in this work. In our approach, the fidelity of the backscattered ultrasonic signal received by the imaging transducer is estimated first. This is followed by identification of true necrosis using the statistical physics of ultrasonic backscattering. A random forest machine learning framework is used to learn the parameter space defining the ultrasonic backscattering distributions of necrotic regions and to discriminate them from non-necrotic shadows. Evidence of hunting down true necrosis in the shadows of intravascular ultrasound is presented with ex vivo experiments, along with cross-validation against ground truth obtained from histology. Nevertheless, in some rare cases necrosis is marginally over-estimated, primarily on account of unreliable statistics estimation. This limitation is due to sparse spatial sampling between neighboring scan-lines at locations far from the transducer. In view of this limitation, we suggest considering the geometrical location of detected necrosis together with the estimated signal confidence during clinical decision making.
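The confidence-gating idea, trusting a necrosis posterior only where the backscattered signal is deemed reliable, is illustrated very roughly below. The depth/attenuation-based confidence proxy is an invented placeholder; the paper estimates signal fidelity from the backscattering statistics themselves.

```python
# Very rough illustration of gating a necrosis prediction by signal confidence.
# The confidence model (simple depth/attenuation decay) is an invented placeholder.
import numpy as np

def gated_necrosis_map(posterior_necrosis, depth_mm, atten_db_per_mm=1.0, conf_thresh=0.4):
    confidence = 10 ** (-atten_db_per_mm * np.asarray(depth_mm) / 10.0)  # crude fidelity proxy
    decided = (posterior_necrosis > 0.5) & (confidence > conf_thresh)
    undecided = confidence <= conf_thresh      # low-confidence regions flagged for review
    return decided, undecided
```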


Subjects
Algorithms, Coronary Artery Disease/pathology, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Interventional Ultrasonography/methods, Computer Simulation, Humans, Cardiovascular Models, Necrosis/diagnostic imaging, Reproducibility of Results, Sensitivity and Specificity
12.
Med Image Anal ; 17(2): 236-53, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23313331

ABSTRACT

In this paper, a new segmentation framework with prior knowledge is proposed and applied to the left ventricle in cardiac cine MRI sequences. We introduce a new formulation of the random walks method, coined guided random walks, in which prior knowledge is integrated seamlessly. In comparison with existing approaches that incorporate statistical shape models, our method does not extract any principal model of the shape or appearance of the left ventricle. Instead, segmentation is accompanied by retrieving the closest subject in the database that best guides the segmentation. Using this technique, rare cases can also effectively exploit prior knowledge from the few samples available in the training set. Such cases are usually disregarded in statistical shape models because they are outnumbered by frequent cases (an effect of class population). In the worst-case scenario, if there is no matching case in the database to guide the segmentation, the performance of the proposed method reduces to that of conventional random walks, which is shown to be accurate if a sufficient number of seeds is provided. The proposed guided random walks admit a fast solution using sparse linear matrix operations, and the whole framework can be seamlessly implemented on a parallel architecture. The method has been validated on a comprehensive clinical dataset of 3D+t short-axis MR images of 104 subjects from 5 categories (normal, dilated left ventricle, ventricular hypertrophy, recent myocardial infarction, and heart failure). The average segmentation errors were found to be 1.54 mm for the endocardium and 1.48 mm for the epicardium. The method was validated by measuring different algorithmic and physiologic indices, quantified against manual segmentation ground truths provided by a cardiologist.
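The conventional random-walks baseline that the method falls back to can be sketched with scikit-image's solver, as below; the paper's guided variant additionally biases the underlying sparse linear system toward the best-matching subject retrieved from the database, which is not shown here.

```python
# Conventional random-walks segmentation of one short-axis slice from seed points,
# as a baseline sketch (the guided prior term from the paper is not included).
import numpy as np
from skimage.segmentation import random_walker

def segment_lv_slice(image, endo_seed_yx, background_seeds_yx, beta=130):
    labels = np.zeros(image.shape, dtype=np.int32)
    labels[endo_seed_yx] = 1                      # seed inside the left ventricle
    for yx in background_seeds_yx:
        labels[yx] = 2                            # background seeds
    return random_walker(image, labels, beta=beta) == 1   # binary LV mask
```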


Subjects
Algorithms, Statistical Data Interpretation, Heart Ventricles/anatomy & histology, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Cine Magnetic Resonance Imaging/methods, Automated Pattern Recognition/methods, Humans, Reproducibility of Results, Sensitivity and Specificity
13.
Knee ; 20(6): 505-10, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23044469

ABSTRACT

BACKGROUND: Studying the kinematics of ACL-deficient (ACLD) knees during different physiological activities and muscle contraction patterns can improve our understanding of the joint's altered biomechanics due to ACL deficiency, as well as the efficacy and safety of rehabilitation exercises. METHODS: Twenty-five male volunteers, including 11 normal and 14 unilateral ACLD subjects, participated in this study. The kinematics of the injured knees of the ACLD subjects were compared with those of their intact knees and of the healthy group during passive flexion and isometric leg press, with the knees flexed from full extension to 45° flexion at 15° intervals. An accurate registration algorithm was used to obtain the three-dimensional kinematic parameters from magnetic resonance images. RESULTS: ACL deficiency mainly altered the tibial anterior translation, and to some extent its internal rotation, while changes in the other parameters were not significant. During leg press, the anterior translation of the ACLD knees was significantly larger than that of the normal knees at 30° flexion, but not at 45°. Comparison of the anterior translations of the ACLD knees during leg press with those during passive flexion revealed improved consistency (CVs changed from 1.2 and 4.0 to 0.6 and 0.6, at 30° and 45° flexion, respectively), but considerably larger translations (means increased by 6.2 and 4.9 mm, at 30° and 45° flexion, respectively). CONCLUSION: The simultaneous contraction of the quadriceps and hamstrings during leg press, although it reduces knee laxity, cannot compensate for the loss of the ACL to restore the normal kinematics of the joint, at least during early flexion.


Subjects
Anterior Cruciate Ligament Injuries, Exercise Test/methods, Isometric Contraction/physiology, Joint Instability/physiopathology, Range of Motion, Articular/physiology, Algorithms, Biomechanical Phenomena, Case-Control Studies, Humans, Knee Injuries/physiopathology, Male, Reference Values
14.
IEEE Trans Biomed Eng ; 59(11): 3039-49, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22907962

ABSTRACT

Intravascular ultrasound (IVUS) is the predominant imaging modality in the field of interventional cardiology that provides real-time cross-sectional images of coronary arteries and the extent of atherosclerosis. Due to the heterogeneity of lesions and the stringent spatial/spectral behavior of tissues, atherosclerotic plaque characterization has always been a challenge and remains an open problem. In this paper, we present a systematic framework spanning in vitro data collection, histology preparation, IVUS-histology registration and matching, and finally a robust texture-derived unsupervised atherosclerotic plaque labeling. We applied our algorithm to in vitro and in vivo images acquired with a single-element 40 MHz transducer and a 64-element phased-array 20 MHz transducer, respectively. In the former case, we quantified the results by local contrasting of the constructed tissue colormaps with the corresponding histology images, employing an independent expert; in the latter case, virtual histology images were utilized for comparison. We tackle one of the main challenges in the field, namely the reliability of tissue labels behind arcs of calcified plaque, and validate the results through a novel random walks framework that incorporates the underlying physics of ultrasound imaging. We conclude that the proposed framework is a compelling approach for retrieving valuable information regarding tissues and for building a reliable training dataset for supervised classification and its extension to in vivo applications.
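One way to picture texture-derived unsupervised labeling is the sketch below, which computes gray-level co-occurrence (GLCM) statistics per patch and clusters them with k-means. The paper's specific descriptors and its random-walks confidence handling behind calcified arcs are not reproduced; this is an assumption-level illustration.

```python
# Illustration of texture-derived unsupervised plaque labeling: GLCM statistics per
# patch + k-means clustering (the paper's exact descriptors may differ).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def texture_features(patch_u8):
    glcm = graycomatrix(patch_u8, distances=[1, 3], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean() for p in
            ("contrast", "homogeneity", "energy", "correlation")]

def label_plaque_patches(patches_u8, n_tissue_classes=3):
    X = np.array([texture_features(p) for p in patches_u8])
    return KMeans(n_clusters=n_tissue_classes, n_init=10).fit_predict(X)
```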


Subjects
Histological Techniques/methods, Computer-Assisted Image Processing/methods, Atherosclerotic Plaque/diagnostic imaging, Atherosclerotic Plaque/pathology, Interventional Ultrasonography/methods, Algorithms, Echocardiography, Humans, Myocardium/pathology
15.
Article in English | MEDLINE | ID: mdl-22254888

ABSTRACT

In this paper, we propose a new method for shape-guided segmentation of cardiac boundaries based on manifold learning of shapes represented by the phase-field approximation of the Mumford-Shah functional. A novel distance is defined to measure the similarity of shapes without requiring deformable registration. Cardiac motion is compensated and all phases are mapped to one reference phase, the end of diastole, to avoid time warping and synchronization across cardiac phases. Non-linear embedding of these 3D shapes extracts the manifold of inter-subject variation of heart shape, which is then used to guide the segmentation for a new subject. For validation, the method is applied to a comprehensive dataset of 3D+t cardiac cine MRI from normal subjects and patients.
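Conceptually, the pipeline is: compute pairwise shape dissimilarities, embed them nonlinearly, and pick the nearest training shape as the guiding prior. The sketch below uses a simple Dice-based dissimilarity and metric MDS purely as stand-ins; the paper defines its own phase-field distance and embedding, so every choice here should be read as an assumption.

```python
# Stand-in sketch of manifold learning over shapes: pairwise dissimilarities embedded
# with MDS; the nearest training shape in this space would guide the new segmentation.
import numpy as np
from sklearn.manifold import MDS

def dice_dissimilarity(a, b):
    inter = np.logical_and(a, b).sum()
    return 1.0 - 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def embed_shapes(shapes, n_components=2):
    n = len(shapes)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dice_dissimilarity(shapes[i], shapes[j])
    coords = MDS(n_components=n_components,
                 dissimilarity="precomputed").fit_transform(D)
    return coords, D   # nearest neighbour in D selects the guiding prior shape
```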


Subjects
Heart/anatomy & histology, Magnetic Resonance Imaging/methods, Humans
16.
Comput Biol Med ; 40(1): 21-8, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19913783

ABSTRACT

In this paper, a variational framework for joint segmentation and motion estimation is employed for inspecting the heart in cine MRI sequences. A functional combining Mumford-Shah segmentation and optical-flow-based dense motion estimation is approximated using the phase-field technique. The minimizer of the functional provides an optimal motion field and edge set by considering both spatial and temporal discontinuities. First, exploiting calculus-of-variations principles, the partial differential equations associated with the Euler-Lagrange equations of the functional are derived. Next, the finite element method is used to discretize the resulting PDEs for numerical solution. Several simulation runs are used to test the convergence and the parameter sensitivity of the method. The method is further applied to a comprehensive set of clinical data for comparison with conventional cascade methods. Memory usage and computational complexity are identified as the main developmental constraints, which may be addressed using sparse matrix manipulations and similar techniques. Based on the results of this study, joint segmentation and motion estimation outperforms previously reported cascade approaches, especially in segmentation. Experimental results substantiate that the proposed method extracts the motion field and the edge set more precisely than conventional cascade approaches. This superior result is a consequence of simultaneously considering the discontinuities in both the motion field and the image space and of including consecutive frames (usually five) in the joint functional.
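To make the construction concrete, a generic phase-field (Ambrosio-Tortorelli) approximation of a joint Mumford-Shah/optical-flow functional can be written as below. The weights and the exact temporal coupling over the (usually five) consecutive frames are the authors' design choices and are not reproduced here, so this is only an indicative form.

```latex
% Indicative form of a joint phase-field functional (not the paper's exact energy):
% u: smoothed image, w: dense motion field, v: phase field approximating the edge set,
% I: observed frame, \varepsilon: phase-field width, \mu, \alpha, \beta: weights.
E(u, v, w) \;=\; \int_\Omega (u - I)^2 \, dx
\;+\; \mu \int_\Omega v^2 \, |\nabla u|^2 \, dx
\;+\; \alpha \int_\Omega v^2 \, \bigl(I_t + \nabla I \cdot w\bigr)^2 \, dx
\;+\; \beta \int_\Omega |\nabla w|^2 \, dx
\;+\; \int_\Omega \Bigl(\varepsilon \, |\nabla v|^2 + \tfrac{(1-v)^2}{4\varepsilon}\Bigr) dx
```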


Subjects
Heart/physiology, Cine Magnetic Resonance Imaging, Cardiovascular Models, Algorithms, Diastole/physiology, Female, Heart/anatomy & histology, Humans, Computer-Assisted Image Interpretation, Male, Systole/physiology