Results 1 - 20 of 33
1.
Med Image Anal ; 94: 103146, 2024 May.
Article in English | MEDLINE | ID: mdl-38537416

ABSTRACT

Focused cardiac ultrasound (FoCUS) is a valuable point-of-care method for evaluating cardiovascular structures and function, but its scope is limited by the equipment and the operator's experience, resulting in primarily qualitative 2D exams. This study presents a novel framework to automatically estimate the 3D spatial relationship between standard FoCUS views. The proposed framework uses a multi-view U-Net-like fully convolutional neural network to regress line-based heatmaps representing the most likely areas of intersection between the input images. The lines that best fit the regressed heatmaps are then extracted, and a system of nonlinear equations based on the intersections between view triplets is created and solved to determine the relative 3D pose between all input images. The feasibility and accuracy of the proposed pipeline were validated on a novel, realistic in silico FoCUS dataset, demonstrating promising results. Interestingly, as shown in preliminary experiments, estimating the relative poses of the 2D images enables the application of 3D image analysis methods and paves the way for 3D quantitative assessments in FoCUS examinations.
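The line-extraction step can be illustrated with a small sketch: given a regressed heatmap, a best-fit line can be recovered by intensity-weighted PCA. This is a generic approach under assumed conventions, not the paper's exact fitting procedure, and the `fit_line_from_heatmap` helper is hypothetical:

```python
import numpy as np

def fit_line_from_heatmap(heatmap):
    """Fit the dominant line in a 2D heatmap by intensity-weighted PCA.

    Returns a point on the line (the weighted centroid) and a unit
    direction vector (dominant eigenvector of the weighted covariance).
    """
    ys, xs = np.nonzero(heatmap > 0)
    w = heatmap[ys, xs].astype(float)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = (w[:, None] * pts).sum(0) / w.sum()
    centered = pts - centroid
    # Weighted 2x2 covariance of the active pixels
    cov = (w[:, None, None] * (centered[:, :, None] * centered[:, None, :])).sum(0) / w.sum()
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    return centroid, direction

# Synthetic heatmap with a horizontal line of high intensity at row 5
hm = np.zeros((11, 11))
hm[5, :] = 1.0
c, d = fit_line_from_heatmap(hm)
```

In a full pipeline, the recovered line parameters would then feed the system of intersection equations described in the abstract.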


Subject(s)
Imaging, Three-Dimensional; Neural Networks, Computer; Humans; Imaging, Three-Dimensional/methods; Echocardiography; Heart/diagnostic imaging
2.
Article in English | MEDLINE | ID: mdl-38082637

ABSTRACT

Medical image segmentation is a paramount task for several clinical applications, namely the diagnosis of pathologies, treatment planning, and aiding image-guided surgeries. With the development of deep learning, Convolutional Neural Networks (CNNs) have become the state of the art for medical image segmentation. However, issues remain concerning precise object boundary delineation, since traditional CNNs can produce non-smooth segmentations with boundary discontinuities. In this work, a U-shaped CNN architecture is proposed to generate both a pixel-wise segmentation and a probabilistic contour map of the object to segment, in order to produce reliable segmentations at the object's boundaries. Moreover, since the segmentation and contour maps must be inherently related to each other, a dual consistency loss that relates the two outputs of the network is proposed. The network is thus forced to learn the segmentation and contour delineation tasks consistently during training. The proposed method was applied and validated on a public dataset of cardiac 3D ultrasound images of the left ventricle. The results showed the good performance of the method and its applicability to the cardiac dataset, demonstrating its potential for use in clinical practice for medical image segmentation. Clinical Relevance: The proposed network with the dual consistency loss scheme can improve the performance of state-of-the-art CNNs for medical image segmentation, proving its value for computer-aided diagnosis.
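A minimal sketch of what a consistency term between the two outputs could look like, under the hypothetical assumption that the contour map should agree with the gradient magnitude of the segmentation map; this is an illustration of the idea, not the paper's exact loss:

```python
import numpy as np

def dual_consistency_loss(seg, contour):
    """Penalize disagreement between a segmentation map and a contour map.

    The boundary implied by the segmentation -- its spatial gradient
    magnitude, normalized to [0, 1] -- is compared to the predicted
    contour probability map with a mean squared error.
    """
    gy, gx = np.gradient(seg)
    implied_contour = np.sqrt(gx**2 + gy**2)
    if implied_contour.max() > 0:
        implied_contour = implied_contour / implied_contour.max()
    return float(np.mean((implied_contour - contour) ** 2))

# A square object whose contour map is perfectly consistent with it
seg = np.zeros((8, 8))
seg[2:6, 2:6] = 1.0
gy, gx = np.gradient(seg)
contour = np.sqrt(gx**2 + gy**2)
contour /= contour.max()
loss = dual_consistency_loss(seg, contour)  # consistent pair -> 0.0
```

In training, such a term would be added to the usual segmentation and contour losses so the two heads cannot drift apart.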


Subject(s)
Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Heart; Heart Ventricles
3.
Article in English | MEDLINE | ID: mdl-38083333

ABSTRACT

Breast cancer is a global public health concern. For women with suspicious breast lesions, the current diagnosis requires a biopsy, which is usually guided by ultrasound (US). However, this process is challenging due to the low quality of the US image and the complexity of handling the US probe and the surgical needle simultaneously, making it largely reliant on the surgeon's expertise. Previous works employing collaborative robots have emerged to improve the precision of biopsy interventions, providing an easier, safer, and more ergonomic procedure. However, for this equipment to navigate around the breast autonomously, a 3D breast reconstruction must be available. The accuracy of these systems still needs to improve, with the 3D reconstruction of the breast being one of the largest sources of error. The main objective of this work is to develop a method to obtain a robust 3D reconstruction of the patient's breast from monocular RGB images, which can later be used to compute the robot's trajectories for the biopsy. To this end, depth estimation techniques are developed, based on a deep learning architecture composed of a CNN, an LSTM, and an MLP, to generate depth maps that can be converted into point clouds. After merging several point clouds from multiple points of view, a real-time reconstruction of the breast can be generated as a mesh. The development and validation of our method were performed using a previously described synthetic dataset. The procedure takes RGB images and the cameras' positions and outputs the breasts' meshes, with a mean error of 3.9 mm and a standard deviation of 1.2 mm. The final results attest to the ability of this methodology to predict the breast's shape and size using monocular images. Clinical Relevance: This work proposes a method based on artificial intelligence and monocular RGB images to obtain the breast's volume during robot-guided breast biopsies, improving their execution and safety.
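The depth-map-to-point-cloud conversion mentioned above is a standard pinhole back-projection. A self-contained sketch follows; the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative values, not the study's camera parameters:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map to a 3D point cloud with a pinhole model.

    For each pixel (u, v) with depth z, the 3D point is
    ((u - cx) * z / fx, (v - cy) * z / fy, z).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)  # a flat surface 2 m from the camera
pc = depth_to_pointcloud(depth, fx=100, fy=100, cx=2.0, cy=2.0)
```

Point clouds produced this way from several camera poses can then be merged (after transforming each into a common frame) before meshing.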


Subject(s)
Mammaplasty; Robotic Surgical Procedures; Robotics; Humans; Female; Artificial Intelligence; Breast/pathology
4.
Sensors (Basel) ; 23(12)2023 Jun 15.
Article in English | MEDLINE | ID: mdl-37420776

ABSTRACT

In the context of shared autonomous vehicles, monitoring the environment inside the car will be crucial. This article focuses on the application of deep learning algorithms to present a fused monitoring solution that combines three different algorithms: a violent action detection system, which recognizes violent behavior between passengers; a violent object detection system; and a lost-items detection system. Public object detection datasets (COCO and TAO) were used to train state-of-the-art algorithms such as YOLOv5. For violent action detection, the MoLa InCar dataset was used to train state-of-the-art algorithms such as I3D, R(2+1)D, SlowFast, TSN, and TSM. Finally, an embedded automotive solution was used to demonstrate that both methods run in real time.


Subject(s)
Algorithms; Running; Autonomous Vehicles; Recognition, Psychology
5.
Sensors (Basel) ; 23(8)2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37112337

ABSTRACT

Multi-human detection and tracking in indoor surveillance is a challenging task due to various factors such as occlusions, illumination changes, and complex human-human and human-object interactions. In this study, we address these challenges by exploring the benefits of a low-level sensor fusion approach that combines grayscale and neuromorphic vision sensor (NVS) data. We first generate a custom dataset using an NVS camera in an indoor environment. We then conduct a comprehensive study by experimenting with different image features and deep learning networks, followed by a multi-input fusion strategy to optimize our experiments with respect to overfitting. Our primary goal is to determine the best input feature types for multi-human motion detection using statistical analysis. We find that there is a significant difference between the input features of optimized backbones, with the best strategy depending on the amount of available data. Specifically, under a low-data regime, event-based frames seem to be the preferred input feature type, while higher data availability benefits the combined use of grayscale and optical flow features. Our results demonstrate the potential of sensor fusion and deep learning techniques for multi-human tracking in indoor surveillance, although it is acknowledged that further studies are needed to confirm our findings.


Subject(s)
Culture; Optic Flow; Humans; Lighting; Motion; Research Design
6.
Sci Rep ; 13(1): 761, 2023 01 14.
Article in English | MEDLINE | ID: mdl-36641527

ABSTRACT

Chronic Venous Disorders (CVD) of the lower limbs are among the most prevalent medical conditions, affecting 35% of adults in Europe and North America. Due to the exponential growth of the aging population and the worsening of CVD with age, the healthcare costs and resources needed for the treatment of CVD are expected to increase in the coming years. Early diagnosis of CVD is fundamental in treatment planning, while monitoring of its treatment is fundamental to assess a patient's condition and quantify the evolution of CVD. However, correct diagnosis relies on a qualitative approach through visual recognition of the various venous disorders, which is time-consuming and highly dependent on the physician's expertise. In this paper, we propose a novel automatic strategy for the joint segmentation and classification of CVDs. The strategy relies on a multi-task deep learning network, named VENet, that simultaneously solves the segmentation and classification tasks, exploiting the information of both tasks to increase learning efficiency and ultimately improve their performance. The proposed method was compared against state-of-the-art strategies on a dataset of 1376 CVD images. Experiments showed that VENet achieved a classification performance of 96.4%, 96.4%, and 97.2% for accuracy, precision, and recall, respectively, and a segmentation performance of 75.4%, 76.7%, and 76.7% for the Dice coefficient, precision, and recall, respectively. The joint formulation increased the robustness of both tasks when compared to conventional classification or segmentation strategies, proving its added value, mainly for the segmentation of small lesions.
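The segmentation metrics reported above follow standard definitions, which can be computed directly from binary masks (a generic sketch with illustrative masks, not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice overlap: 2*|A and B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def precision_recall(pred, gt):
    """Pixel-wise precision and recall for binary masks."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fp), tp / (tp + fn)

# Two overlapping 5x5 squares: 16 shared pixels out of 25 each
pred = np.zeros((10, 10), bool); pred[2:7, 2:7] = True
gt = np.zeros((10, 10), bool); gt[3:8, 3:8] = True
d = dice_coefficient(pred, gt)      # 2*16/50 = 0.64
p, r = precision_recall(pred, gt)   # 16/25 = 0.64 each
```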


Subject(s)
Cardiovascular Diseases; Neural Networks, Computer; Veins; Aged; Humans; Europe; Image Processing, Computer-Assisted/methods; North America; Chronic Disease
7.
Sensors (Basel) ; 22(19)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236577

ABSTRACT

The growth of the aging population brings numerous challenges to the health and aesthetics segments. The use of laser therapy in dermatology is expected to increase, since it allows non-invasive and infection-free treatments. However, existing laser devices require doctors to manually handle the device and visually inspect the skin. As such, the treatment outcome depends on the user's expertise, which frequently results in ineffective treatments and side effects. This study aims to determine the workspace and limits of operation of laser treatments for vascular lesions of the lower limbs. The results of this study can be used to develop robotic-guided technology to help address the aforementioned problems. Specifically, the workspace and limits of operation were studied in eight vascular laser treatments. For this, an electromagnetic tracking system was used to collect the real-time position of the laser during the treatments. The computed average workspace length, height, and width were 0.84 ± 0.15, 0.41 ± 0.06, and 0.78 ± 0.16 m, respectively. This corresponds to an average treatment volume of 0.277 ± 0.093 m3. The average treatment time was 23.2 ± 10.2 min, with an average laser orientation of 40.6 ± 5.6 degrees. Additionally, average velocities of 0.124 ± 0.103 m/s and 31.5 ± 25.4 deg/s were measured. This knowledge characterizes the vascular laser treatment workspace and limits of operation, which may inform future robotic system development.
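One plausible way to summarize a workspace from tracked positions is an axis-aligned bounding box, as sketched below; the study's exact computation is not given here, so this is only an assumed convention:

```python
import numpy as np

def workspace_extent(positions):
    """Axis-aligned workspace extent and volume from tracked 3D positions.

    `positions` is an (N, 3) array of tracker samples in metres. Returns
    the extent along each axis and the bounding-box volume.
    """
    span = positions.max(axis=0) - positions.min(axis=0)  # extent along x, y, z
    return span, float(np.prod(span))

# Illustrative tracked samples (metres), not real treatment data
pos = np.array([[0.0, 0.0, 0.0],
                [0.8, 0.4, 0.2],
                [0.4, 0.8, 0.4]])
span, volume = workspace_extent(pos)
```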


Subject(s)
Robotics; Lower Extremity/surgery; Robotics/methods; Treatment Outcome
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1016-1019, 2022 07.
Article in English | MEDLINE | ID: mdl-36083940

ABSTRACT

Cephalometric analysis is an important and routine task in the medical field to assess craniofacial development and to diagnose cranial deformities and midline facial abnormalities. The advance of 3D digital techniques has potentiated the development of 3D cephalometry, which includes the localization of cephalometric landmarks in 3D models. However, manual labeling is still applied; it is a tedious and time-consuming task, highly prone to intra-/inter-observer variability. In this paper, a framework to automatically locate cephalometric landmarks in 3D facial models is presented. The landmark detector is divided into two stages: (i) creation of 2D maps representative of the 3D model; and (ii) landmark detection through a regression convolutional neural network (CNN). In the first stage, the 3D facial model is transformed into 2D maps retrieved from 3D shape descriptors. In the second stage, a CNN is used to estimate a probability map for each landmark using the 2D representations as input. The detection method was evaluated on three different datasets of 3D facial models, namely the Texas 3DFR, BU3DFE, and Bosphorus databases. Average distance errors of 2.3, 3.0, and 3.2 mm were obtained for the landmarks evaluated on each dataset. The obtained results demonstrate the accuracy of the method on different 3D facial datasets, with performance competitive with state-of-the-art methods, proving its versatility across different 3D models. Clinical Relevance: Overall, the performance of the landmark detector demonstrates its potential for use in 3D cephalometric analysis.
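Reading coordinates out of a per-landmark probability map is the last step of such a detector. A common readout is the probability-weighted centre of mass, sketched below; the paper's decoding scheme may differ:

```python
import numpy as np

def heatmap_to_landmark(prob_map):
    """Convert a per-landmark probability map to (x, y) coordinates.

    Uses the probability-weighted centre of mass, which is sub-pixel
    accurate and smoother than a plain argmax.
    """
    total = prob_map.sum()
    ys, xs = np.indices(prob_map.shape)
    return (xs * prob_map).sum() / total, (ys * prob_map).sum() / total

# A peaked map at row 4, column 6
hm = np.zeros((9, 9))
hm[4, 6] = 1.0
x, y = heatmap_to_landmark(hm)
```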


Subject(s)
Anatomic Landmarks; Imaging, Three-Dimensional; Anatomic Landmarks/diagnostic imaging; Cephalometry/methods; Face/anatomy & histology; Face/diagnostic imaging; Humans; Imaging, Three-Dimensional/methods; Reproducibility of Results
9.
J Biomed Inform ; 132: 104121, 2022 08.
Article in English | MEDLINE | ID: mdl-35750261

ABSTRACT

Evaluation of the head shape of newborns is needed to detect cranial deformities and disturbances in head growth and, consequently, to predict short- and long-term neurodevelopment. Currently, there is a lack of automatic tools to provide a detailed evaluation of the head shape. Artificial intelligence (AI) methods, namely deep learning (DL), can be explored to develop fast and automatic approaches for shape evaluation. However, due to the clinical variability of patients' head anatomy, generalizing AI networks to clinical needs is paramount and extremely challenging. In this work, a new framework is proposed to augment the 3D data used for training DL networks for shape evaluation. The proposed augmentation strategy deforms head surfaces toward different deformities. For that, a point-based 3D morphable model (p3DMM) is developed to generate a statistical model representative of head shapes with different cranial deformities. Afterward, a constrained transformation approach (3DHT) is applied to warp a head surface toward a target deformity by estimating a dense motion field from the sparse one resulting from the p3DMM. Qualitative evaluation showed that the proposed method generates realistic head shapes indistinguishable from real ones. Moreover, quantitative experiments demonstrated that training DL networks with the proposed augmented surfaces improves their performance in head shape analysis. Overall, the introduced augmentation effectively transforms a given head surface toward different deformity shapes, potentiating the development of DL approaches for head shape analysis.
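Turning a sparse displacement field into a dense one can be done with many interpolation schemes. Below, inverse-distance weighting is used as a simple stand-in; the actual constrained 3DHT interpolation in the paper differs, and the `densify_motion` helper is hypothetical:

```python
import numpy as np

def densify_motion(points, disps, query, power=2.0, eps=1e-12):
    """Estimate a displacement at `query` from sparse control displacements.

    `points` (N, 3) are control-point locations, `disps` (N, 3) their
    displacement vectors. Inverse-distance weighting blends them.
    """
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):  # query coincides with a control point
        return disps[np.argmin(d)]
    w = 1.0 / d**power
    return (w[:, None] * disps).sum(0) / w.sum()

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dis = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 3.0]])
# Midpoint is equidistant from both controls -> their average, [0, 0, 2]
mid = densify_motion(pts, dis, np.array([0.5, 0.0, 0.0]))
```

Applying such a field to every vertex of a head mesh yields the warped (augmented) surface.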


Subject(s)
Artificial Intelligence; Models, Statistical; Humans; Infant; Infant, Newborn
10.
Ann Biomed Eng ; 50(9): 1022-1037, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35622207

ABSTRACT

Shape analysis of infants' heads is crucial to diagnose cranial deformities and evaluate head growth. Currently available 3D imaging systems can be used to create 3D head models, promoting their use in clinical practice for head evaluation. However, manual analysis of 3D shapes is difficult and operator-dependent, causing inaccuracies in the analysis. This study aims to validate an automatic landmark detection method for head shape analysis. The detection results were compared with manual analysis at three levels: (1) distance error of the landmarks; (2) accuracy of standard cranial measurements, namely the cephalic ratio (CR), cranial vault asymmetry index (CVAI), and overall symmetry ratio (OSR); and (3) accuracy of the final diagnosis of cranial deformities. For each level, the intra- and interobserver variability was also studied by comparing manual landmark settings. High landmark detection accuracy was achieved by the method on 166 head models. A very strong agreement with manual analysis for the cranial measurements was also obtained, with intraclass correlation coefficients of 0.997, 0.961, and 0.771 for the CR, CVAI, and OSR, respectively. An agreement of 91% with manual analysis was achieved in the diagnosis of cranial deformities. Considering its high accuracy and reliability at the different evaluation levels, the method proved feasible for use in clinical practice for head shape analysis.
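The CR and CVAI measurements named above have widely used textbook definitions, sketched below. Note these are the common conventions, not necessarily the paper's exact formulation (in particular, CVAI denominators and diagonal angles vary between protocols):

```python
def cephalic_ratio(width_mm, length_mm):
    """Cephalic ratio (CR): head width over head length, in percent."""
    return 100.0 * width_mm / length_mm

def cvai(diag_a_mm, diag_b_mm):
    """Cranial vault asymmetry index (CVAI), in percent.

    Relative difference between the two oblique cranial diagonals,
    here normalized by the shorter diagonal (conventions vary).
    """
    lo, hi = sorted((diag_a_mm, diag_b_mm))
    return 100.0 * (hi - lo) / lo

cr = cephalic_ratio(140.0, 160.0)  # 87.5
asym = cvai(130.0, 125.0)          # 4.0
```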


Subject(s)
Imaging, Three-Dimensional; Skull; Cephalometry/methods; Humans; Imaging, Three-Dimensional/methods; Infant; Observer Variation; Reproducibility of Results; Skull/diagnostic imaging
11.
Comput Methods Programs Biomed ; 215: 106629, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35065326

ABSTRACT

BACKGROUND AND OBJECTIVE: Examination of the head shape and brain during the fetal period is paramount to evaluate head growth, predict neurodevelopment, and diagnose fetal abnormalities. Prenatal ultrasound is the most widely used imaging modality for this evaluation. However, manual interpretation of these images is challenging, and thus image processing methods to aid this task have been proposed in the literature. This article presents a review of these state-of-the-art methods. METHODS: This work analyzes and categorizes the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. To that end, a total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of the head shape and inner structures of the brain, identification of standard clinical planes, fetal development analysis, and methods for image processing enhancement. RESULTS: For each application, the reviewed techniques are categorized according to their theoretical approach, and the most suitable image processing methods to accurately analyze the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. CONCLUSIONS: A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories have shown their potential to improve clinical practice. Nevertheless, further research must be conducted to strengthen the current methods, especially for 3D imaging acquisition and analysis and for abnormality detection.


Subject(s)
Head; Ultrasonography, Prenatal; Brain/diagnostic imaging; Female; Head/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Pregnancy; Ultrasonography
12.
IEEE J Biomed Health Inform ; 26(1): 324-333, 2022 01.
Article in English | MEDLINE | ID: mdl-34152992

ABSTRACT

Pectus excavatum (PE) is the most common abnormality of the thoracic cage, whose severity is evaluated by extracting three indices (Haller, correction, and asymmetry) from computed tomography (CT) images. To date, this analysis has been performed manually, which is tedious and prone to variability. In this paper, a fully automatic framework for PE severity quantification from CT images is proposed, comprising three steps: (1) identification of the sternum's greatest depression point; (2) detection of 8 anatomical keypoints relevant for severity assessment; and (3) geometric regularization and extraction of the measurements. The first two steps rely on heatmap regression networks based on the Unet++ architecture, including a novel variant adapted to predict 1D confidence maps. The framework was evaluated on a database of 269 CTs. For comparative purposes, the intra-observer, inter-observer, and intra-patient variability of the estimated indices was analyzed in a subset of patients. The developed system showed good agreement with the manual approach (mean relative absolute errors of 4.41%, 5.22%, and 1.86% for the Haller, correction, and asymmetry indices, respectively), with limits of agreement comparable to the inter-observer variability. In the intra-patient analysis, the proposed framework outperformed the expert, showing higher reproducibility between indices extracted from distinct CTs of the same patient. Overall, these results support the feasibility of the developed framework for the automatic, accurate, and reproducible quantification of PE severity in a clinical context.
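Of the three indices, the Haller index has a simple, well-established definition, sketched below (the correction and asymmetry indices involve additional keypoints and are omitted here):

```python
def haller_index(transverse_mm, anteroposterior_mm):
    """Haller index: widest internal transverse chest diameter divided by
    the shortest sternum-to-spine (anteroposterior) distance.

    Values above roughly 3.25 are commonly taken to indicate severe
    pectus excavatum (thresholds vary between clinical guidelines).
    """
    return transverse_mm / anteroposterior_mm

hi_val = haller_index(280.0, 70.0)  # 4.0 -> severe by the common threshold
```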


Subject(s)
Deep Learning; Funnel Chest; Funnel Chest/diagnostic imaging; Humans; Observer Variation; Reproducibility of Results; Tomography, X-Ray Computed/methods
13.
Sensors (Basel) ; 22(1)2021 Dec 31.
Article in English | MEDLINE | ID: mdl-35009846

ABSTRACT

COVID-19 was responsible for devastating social, economic, and political effects all over the world. Although the restrictions imposed by health authorities provided relief and helped society attempt to return to normal life, it is imperative to monitor people's behavior and risk factors to keep virus transmission levels as low as possible. This article focuses on the application of deep learning algorithms to detect the presence of masks on people in public spaces (using RGB cameras), as well as the detection of the caruncle in the human eye area to accurately measure body temperature (using thermal cameras). For this task, synthetic data generation techniques were used to create hybrid datasets from public ones to train state-of-the-art algorithms, such as the YOLOv5 object detector and a keypoint detector based on ResNet-50. For RGB mask detection, YOLOv5 achieved an average precision of 82.4%. For thermal mask, glasses, and caruncle detection, YOLOv5 and the keypoint detector achieved average precisions of 96.65% and 78.7%, respectively. Moreover, the RGB and thermal datasets were made publicly available.


Subject(s)
COVID-19; Deep Learning; Algorithms; Humans; SARS-CoV-2
14.
Article in English | MEDLINE | ID: mdl-33211657

ABSTRACT

Renal ultrasound (US) imaging is the primary imaging modality for the assessment of the kidney's condition and is essential for diagnosis, treatment and surgical intervention planning, and follow-up. In this regard, kidney delineation in 3-D US images represents a relevant and challenging task in clinical practice. In this article, a novel framework is proposed to accurately segment the kidney in 3-D US images. The proposed framework can be divided into two stages: 1) initialization of the segmentation method and 2) kidney segmentation. Within the initialization stage, a phase-based feature detection method is used to detect edge points at kidney boundaries, from which the segmentation is automatically initialized. In the segmentation stage, the B-spline explicit active surface framework is adapted to obtain the final kidney contour. Here, a novel hybrid energy functional that combines localized region- and edge-based terms is used during segmentation. For the edge term, a fast-signed phase-based detection approach is applied. The proposed framework was validated in two distinct data sets: 1) 15 3-D challenging poor-quality US images used for experimental development, parameters assessment, and evaluation and 2) 42 3-D US images (both healthy and pathologic kidneys) used to unbiasedly assess its accuracy. Overall, the proposed method achieved a Dice overlap around 81% and an average point-to-surface error of ~2.8 mm. These results demonstrate the potential of the proposed method for clinical usage.


Subject(s)
Imaging, Three-Dimensional; Kidney; Algorithms; Kidney/diagnostic imaging; Ultrasonography
15.
IEEE J Biomed Health Inform ; 25(7): 2643-2654, 2021 07.
Article in English | MEDLINE | ID: mdl-33147152

ABSTRACT

Landmark labeling on 3D head surfaces is an important and routine task in clinical practice to evaluate head shape, namely to analyze cranial deformities or growth evolution. However, manual labeling is still applied; it is a tedious and time-consuming task, highly prone to intra-/inter-observer variability, and can mislead the diagnosis. Automatic methods for anthropometric landmark detection in 3D models are therefore of high interest in clinical practice. In this paper, a novel framework is proposed to accurately detect landmarks on 3D infant head surfaces. The proposed method is divided into two stages: (i) 2D representation of the 3D head surface; and (ii) landmark detection through a deep learning strategy. Moreover, a 3D data augmentation method to create shape models based on the expected head variability is proposed. The proposed framework was evaluated on synthetic and real datasets, achieving accurate detection results. Furthermore, the data augmentation strategy proved its added value, increasing the method's performance. Overall, the obtained results demonstrate the robustness of the proposed method and its potential for use in clinical practice for head shape analysis.


Subject(s)
Deep Learning; Anthropometry; Head/diagnostic imaging; Humans; Imaging, Three-Dimensional
16.
Med Phys ; 47(1): 19-26, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31661566

ABSTRACT

PURPOSE: Electromagnetic tracking systems (EMTSs) have been proposed to assist percutaneous renal access (PRA) during minimally invasive interventions on the renal system. However, the influence of other surgical instruments widely used during PRA (such as ureteroscopy and ultrasound equipment) on EMTS performance is not completely known. This work performs this assessment for two EMTSs [Aurora® Planar Field Generator (PFG); Aurora® Tabletop Field Generator (TTFG)]. METHODS: An assessment platform was developed, composed of a scaffold with specific supports to attach the surgical instruments and a plate phantom with multiple levels to precisely translate or rotate them. The median accuracy and precision in terms of position and orientation were estimated for the PFG and TTFG in a surgical environment using this platform. Then, the influence of different surgical instruments (alone or together), namely an analogic flexible ureterorenoscope (AUR), a digital flexible ureterorenoscope (DUR), a two-dimensional (2D) ultrasound (US) probe, and a four-dimensional (4D) mechanical US probe, was assessed for both EMTSs by coupling the instruments to 5-DOF and 6-DOF sensors. RESULTS: Overall, the median positional and orientation accuracies in the surgical environment were 0.85 mm and 0.42° for the PFG, and 0.72 mm and 0.39° for the TTFG, while the precisions were 0.10 mm and 0.03° for the PFG, and 0.20 mm and 0.12° for the TTFG, respectively. No significant differences in accuracy were found between the EMTSs. However, the PFG showed a tendency toward higher precision than the TTFG. The AUR, DUR, and 2D US probe did not influence the accuracy or precision of either EMTS. In contrast, the 4D probe distorted the signal near the attached sensor, making readings unreliable. CONCLUSIONS: Ureteroscopy- and ultrasonography-assisted PRA based on EMTS guidance is feasible with the tested AUR or DUR together with the 2D probe. More studies must be performed to evaluate the probes' and ureterorenoscopes' influence before their use in PRA based on EMTS guidance.
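Accuracy and precision of a tracking sensor are distinct quantities: the first measures closeness to a known ground truth, the second the spread of repeated readings. One plausible convention is sketched below; the study's protocol defines its own computation:

```python
import numpy as np

def tracking_accuracy_precision(readings, ground_truth):
    """Median positional accuracy and precision of repeated sensor readings.

    Accuracy: median distance from each reading to the known position.
    Precision: median distance from each reading to the readings' centroid.
    """
    errors = np.linalg.norm(readings - ground_truth, axis=1)
    spread = np.linalg.norm(readings - readings.mean(axis=0), axis=1)
    return float(np.median(errors)), float(np.median(spread))

# A perfectly repeatable sensor with a 1 mm systematic offset:
# precise (spread 0.0) but inaccurate (error 1.0)
gt = np.array([0.0, 0.0, 0.0])
reads = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
acc, prec = tracking_accuracy_precision(reads, gt)
```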


Subject(s)
Electromagnetic Phenomena; Kidney; Ultrasonography/instrumentation; Ureteroscopy/instrumentation
17.
Int J Cardiovasc Imaging ; 35(5): 881-895, 2019 May.
Article in English | MEDLINE | ID: mdl-30701439

ABSTRACT

The assessment of aortic valve (AV) morphology is paramount for planning transcatheter AV implantation (TAVI). Nowadays, pre-TAVI sizing is routinely performed at one cardiac phase only, usually at mid-systole. Nonetheless, the AV is a dynamic structure that undergoes changes in size and shape throughout the cardiac cycle, which may be relevant for prosthesis selection. Thus, the aim of this study was to present and evaluate a novel software tool enabling automatic, dynamic sizing of the AV in three-dimensional (3D) transesophageal echocardiography (TEE) images. Preoperative 3D-TEE images from forty-two patients were retrospectively analyzed using the software. Dynamic measurements were automatically extracted at four levels, including the aortic annulus. These measures were used to assess the software's ability to accurately and reproducibly quantify the conformational changes of the aortic root, and were validated against automated sizing measurements independently extracted at distinct time points. The software extracted physiological dynamic measurements in less than 2 min, which were shown to be accurate (errors of 2.2 ± 26.3 mm2 and 0.0 ± 2.53 mm for annular area and perimeter, respectively) and highly reproducible (intra- and interobserver variability of 0.85 ± 6.18 and 0.65 ± 7.90 mm2, respectively, for annular area). Using the maximum or minimum measured values rather than the mid-systolic ones for device sizing resulted in a potential change of the recommended size in 7% and 60% of the cases, respectively. The presented software tool allows a fast, automatic, and reproducible dynamic assessment of AV morphology from 3D-TEE images, with the extracted measures influencing device selection depending on the cardiac moment used for sizing. This novel tool may thus ease, and potentially increase, the observer's confidence during prosthesis size selection in preoperative TAVI planning.
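Annular area and perimeter are, at their core, contour measurements. A simplified 2D sketch with the shoelace formula follows; the software's actual measurement on 3D-TEE contours is more involved:

```python
import numpy as np

def polygon_area_perimeter(pts):
    """Area (shoelace formula) and perimeter of a closed planar contour.

    `pts` is an (N, 2) array of vertices ordered around the contour;
    the closing edge back to the first vertex is handled internally.
    """
    x, y = pts[:, 0], pts[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    closed = np.vstack([pts, pts[:1]])
    perim = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    return float(area), float(perim)

# A 10 x 10 mm square contour: area 100 mm^2, perimeter 40 mm
square = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
area, perim = polygon_area_perimeter(square)
```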


Subject(s)
Aortic Valve Stenosis/diagnostic imaging; Aortic Valve/diagnostic imaging; Echocardiography, Three-Dimensional/methods; Echocardiography, Transesophageal/methods; Hemodynamics; Image Interpretation, Computer-Assisted/methods; Aged; Aged, 80 and over; Algorithms; Aortic Valve/physiopathology; Aortic Valve/surgery; Aortic Valve Stenosis/physiopathology; Aortic Valve Stenosis/surgery; Automation; Female; Heart Valve Prosthesis; Humans; Male; Observer Variation; Predictive Value of Tests; Prosthesis Design; Reproducibility of Results; Retrospective Studies; Software Design; Time Factors; Transcatheter Aortic Valve Replacement/instrumentation
18.
Med Phys ; 46(3): 1115-1126, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30592311

ABSTRACT

PURPOSE: As a crucial step in accessing the kidney in several minimally invasive interventions, the practicality and safety of percutaneous renal access (PRA) may be improved through the fusion of computed tomography (CT) and ultrasound (US) data. This work aims to assess the potential of a surface-based registration technique and establish an optimal US acquisition protocol to fuse two-dimensional (2D) US and CT data for image-guided PRA. METHODS: Ten porcine kidney phantoms with fiducial markers were imaged using CT and three-dimensional (3D) US. Both images were manually segmented and aligned. In a virtual environment, 2D contours were extracted by slicing the 3D US kidney surfaces along the usual PRA US-guided views, while the 3D CT kidney surfaces were transformed to simulate positional variability. Surface-based registration was performed using two variants of the iterative closest point algorithm (point-to-point, ICP1; and point-to-plane, ICP2), and four acquisition variants were studied: (a) use of single-plane (transverse, SPT; or longitudinal, SPL) vs. bi-plane (BP) views; (b) use of different kidney coverage ranges acquired by a probe sweep; (c) influence of the sweep movements; and (d) influence of the spacing between consecutive slices acquired for a specific coverage range. RESULTS: The BP view showed the best performance (TRE = 2.26 mm) when the ICP2 method, a wide kidney coverage range (20°, with slices spaced by 5°), and a large sweep along the central longitudinal view were used, with performance statistically similar (P = 0.097) to full 3D US surface registration (TRE = 2.28 mm). CONCLUSIONS: An optimal 2D US acquisition protocol was established. Surface-based registration, using multiple slices and specific sweep movements and views, is suggested here as a valid strategy for intraoperative image fusion of CT and US data, with the potential to be applied to other image modalities and/or interventions.
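Inside each iteration of point-to-point ICP, once correspondences are fixed, the optimal rigid transform has a closed-form Kabsch/SVD solution. A self-contained sketch of that inner step (correspondence search and iteration are omitted):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Closed-form Kabsch solution: center both sets, take the SVD of the
    cross-covariance, and compose the rotation from its factors.
    """
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 90-degree in-plane rotation plus a 5 mm z offset
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
dst = src @ Rz.T + np.array([0.0, 0.0, 5.0])
R, t = best_rigid_transform(src, dst)
```

Point-to-plane ICP replaces this closed form with a linearized least-squares problem using the destination surface normals, which typically converges in fewer iterations.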


Subject(s)
Image Processing, Computer-Assisted/methods , Kidney/diagnostic imaging , Phantoms, Imaging , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Ultrasonography/methods , Algorithms , Animals , Feasibility Studies , Fiducial Markers , Kidney/surgery , Surface Properties , Swine
19.
IEEE Trans Med Imaging ; 37(11): 2547-2557, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29993570

ABSTRACT

Over the years, medical image tracking has gained considerable attention from both medical and research communities due to its widespread utility in a multitude of clinical applications, from functional assessment during diagnosis and therapy planning to structure tracking or image fusion during image-guided interventions. Despite the ever-increasing number of image tracking methods available, most still consist of independent implementations with specific target applications, lacking the versatility to deal with distinct end-goals without the need for methodological tailoring and/or exhaustive tuning of numerous parameters. With this in mind, we have developed the medical image tracking toolbox (MITT), a software package designed to ease customization of image tracking solutions in the medical field. While its workflow principles make it suitable to work with 2-D or 3-D image sequences, its modules offer the versatility to set up computationally efficient tracking solutions, even for users with limited programming skills. MITT is implemented in both C/C++ and MATLAB, including several variants of an object-based image tracking algorithm and allowing multiple types of objects (i.e., contours, multi-contours, surfaces, and multi-surfaces) to be tracked with several customization features. In this paper, the toolbox is presented, its features are discussed, and illustrative examples of its usage in the cardiology field are provided, demonstrating its versatility, simplicity, and time efficiency.
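Frame-to-frame tracking of the kind this toolbox packages is often built on block matching: a patch around each tracked point is searched for in the next frame. The sketch below is a generic, simplified illustration in plain NumPy (SSD matching on a synthetic two-frame sequence); it does not reflect MITT's actual API or algorithm.

```python
import numpy as np

def track_points(frame0, frame1, points, patch=5, search=6):
    """Track (row, col) points from frame0 into frame1 by exhaustive block
    matching: minimise the sum of squared differences (SSD) between the
    reference patch and candidate patches inside a search window."""
    half = patch // 2
    tracked = []
    for r, c in points:
        ref = frame0[r - half:r + half + 1, c - half:c + half + 1]
        best_ssd, best_rc = np.inf, (r, c)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                rr, cc = r + dr, c + dc
                cand = frame1[rr - half:rr + half + 1, cc - half:cc + half + 1]
                if cand.shape != ref.shape:
                    continue  # candidate window fell outside the image
                ssd = ((cand - ref) ** 2).sum()
                if ssd < best_ssd:
                    best_ssd, best_rc = ssd, (rr, cc)
        tracked.append(best_rc)
    return tracked

# Toy sequence: a textured 5x5 blob displaced by (2 rows, 3 columns)
rng = np.random.default_rng(1)
frame0 = np.zeros((40, 40))
frame0[10:15, 10:15] = rng.random((5, 5)) + 1.0
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))
result = track_points(frame0, frame1, [(12, 12)])
```

Practical trackers refine this idea with multi-resolution search, subpixel interpolation, and regularization across the contour or surface being tracked, which is where a configurable toolbox earns its keep.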


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Software , Heart/diagnostic imaging , Heart Diseases/diagnostic imaging , Humans , Magnetic Resonance Imaging
20.
J Am Soc Echocardiogr ; 31(4): 515-525.e5, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29625649

ABSTRACT

BACKGROUND: Accurate aortic annulus (AoA) sizing is crucial for transcatheter aortic valve implantation planning. Three-dimensional (3D) transesophageal echocardiography (TEE) is a viable alternative to the standard multidetector row computed tomography (MDCT) for such assessment, with few automatic software solutions available. The aim of this study was to present and evaluate a novel software tool for automatic AoA sizing by 3D TEE. METHODS: One hundred one patients who underwent both preoperative MDCT and 3D TEE were retrospectively analyzed using the software. The accuracy of the automatic software measurements was compared against values obtained using standard manual MDCT, as well as against those obtained using manual 3D TEE, and intraobserver, interobserver, and test-retest reproducibility was assessed. Because the software can be used as a fully automatic or as an interactive tool, both options were addressed and contrasted. The impact of these measures on the recommended prosthesis size was then evaluated to assess whether the software's automated sizes were concordant with those obtained using an MDCT- or a TEE-based manual sizing strategy. RESULTS: The software showed very good agreement with manual values obtained using MDCT and 3D TEE, with the interactive approach having slightly narrower limits of agreement. The latter also showed excellent intra- and interobserver reproducibility. Both fully automatic and interactive analyses showed excellent test-retest reproducibility, with the former having a faster analysis time. Finally, either approach led to good sizing agreement with the true implanted sizes (>77%) and with MDCT-based sizes (>88%). CONCLUSIONS: Given the automated, reproducible, and fast nature of its analyses, the novel software tool presented here may facilitate and thus increase the use of 3D TEE for preoperative transcatheter aortic valve implantation sizing.
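The bias and "limits of agreement" reported in method-comparison studies like this one are typically Bland-Altman quantities. A minimal sketch of their computation, using fabricated annulus-diameter values rather than the study's data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement between paired measurements from two methods:
    returns the bias (mean difference) and the 95% limits of agreement."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired annulus diameters in mm (made-up numbers)
auto_mm = [24.1, 22.8, 26.0, 23.5]
manual_mm = [23.1, 21.8, 27.0, 22.5]
bias, (loa_low, loa_high) = bland_altman(auto_mm, manual_mm)
```

Narrower limits of agreement, as reported for the interactive approach above, mean the differences between the two methods are more tightly clustered around the bias.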


Subject(s)
Aortic Valve Stenosis/surgery , Aortic Valve/diagnostic imaging , Echocardiography, Three-Dimensional/methods , Echocardiography, Transesophageal/methods , Multidetector Computed Tomography/methods , Software , Transcatheter Aortic Valve Replacement/methods , Aged, 80 and over , Aortic Valve Stenosis/diagnosis , Female , Humans , Male , Reproducibility of Results , Retrospective Studies