Results 1 - 20 of 78
1.
Article in English | MEDLINE | ID: mdl-38954568

ABSTRACT

Deep learning methods have recently achieved remarkable performance in vessel segmentation applications, yet they require large amounts of labor-intensive labeled data. To alleviate the need for manual annotation, transfer learning can potentially be used to acquire knowledge of tubular structures from large-scale public labeled vessel datasets for target vessel segmentation in other anatomic sites of the human body. However, the cross-anatomy domain shift is challenging due to the formidable discrepancy among vessel structures in different anatomies, which limits the performance of transfer learning. Therefore, we propose a cross-anatomy transfer learning framework for 3D vessel segmentation, which first generates a pre-trained model on a public hepatic vessel dataset and then adaptively fine-tunes a target segmentation network initialized from that model for segmentation of other anatomic vessels. In the framework, an adaptive fine-tuning strategy uses a proxy network to dynamically decide, for each input sample, which filters of the target network to freeze or fine-tune. Moreover, we develop a Gaussian-based signed distance map that explicitly encodes vessel-specific shape context. Prediction of this map is added as an auxiliary task in the segmentation network to capture geometry-aware knowledge during fine-tuning. We demonstrate the effectiveness of our method through extensive experiments on two small-scale datasets of coronary arteries and brain vessels. The results indicate that the proposed method effectively overcomes the cross-anatomy domain shift and achieves accurate vessel segmentation on both datasets.
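The Gaussian-based signed distance map described above can be sketched for a toy 2D mask. The exact formulation (the sign convention and the Gaussian weighting of the signed distance) is my assumption, not taken from the paper:

```python
import math

def signed_distance(mask):
    """Brute-force signed distance to the vessel boundary of a small 2D
    binary mask: positive inside the vessel, negative outside."""
    h, w = len(mask), len(mask[0])
    # boundary = foreground cells with at least one background 4-neighbour
    boundary = [(i, j) for i in range(h) for j in range(w)
                if mask[i][j] and any(
                    not (0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj])
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
    def dist(i, j):
        return min(math.hypot(i - bi, j - bj) for bi, bj in boundary)
    return [[(1 if mask[i][j] else -1) * dist(i, j) for j in range(w)]
            for i in range(h)]

def gaussian_sdm(mask, sigma=2.0):
    """Gaussian-weighted signed distance map: keeps the sign of the
    signed distance but decays smoothly away from the vessel surface."""
    return [[math.copysign(math.exp(-(d * d) / (2 * sigma * sigma)), d)
             for d in row] for row in signed_distance(mask)]

# toy one-pixel-wide "vessel" running through a 5x5 grid
mask = [[0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
gsdm = gaussian_sdm(mask)
```

A map of this form is largest in magnitude at the vessel surface and near zero far away, which is one way an auxiliary regression task can inject geometry-aware shape context into fine-tuning.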

2.
IEEE Trans Image Process; 33: 3676-3691, 2024.
Article in English | MEDLINE | ID: mdl-38837936

ABSTRACT

Medical image segmentation and registration are two fundamental and closely related tasks. However, current works pursue mutual promotion between the two only at the loss function level, ignoring the feature information generated by the encoder-decoder network during task-specific feature mapping and the potential inter-task feature relationships. This paper proposes a unified multi-task joint learning framework based on multi-scale bi-fusion of structure and deformation, called BFM-Net, which produces the segmentation results and the deformation field simultaneously in a single-step estimation. BFM-Net consists of a segmentation subnetwork (SegNet), a registration subnetwork (RegNet), and a multi-task connection module (MTC). The MTC module transfers latent feature representations between segmentation and registration at multiple scales and links the tasks at the network architecture level; it comprises a spatial attention fusion module (SAF), a multi-scale spatial attention fusion module (MSAF), and a velocity field fusion module (VFF). Extensive experiments on MR, CT, and ultrasound images demonstrate the effectiveness of our approach. The MTC module increases the Dice scores of segmentation by 3.2%, 1.6%, and 2.2%, and those of registration by 6.2%, 4.5%, and 3.0%, respectively. Compared with six state-of-the-art algorithms for segmentation and registration, BFM-Net achieves superior performance on images of various modalities, fully demonstrating its effectiveness and generalization.

3.
Med Phys; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38865713

ABSTRACT

BACKGROUND: Inferring the shape and position of the coronary arteries is challenging when using fluoroscopic image guidance during percutaneous coronary intervention (PCI). Although angiography enables coronary artery visualization, the injected contrast agent raises concerns about radiation exposure and the risk of contrast-induced nephropathy. To address these issues, a dynamic coronary roadmap overlaid on fluoroscopic images can provide coronary visual feedback without contrast injection. PURPOSE: This paper proposes a novel cardio-respiratory motion compensation method that uses cardiac state synchronization and catheter motion estimation to achieve coronary roadmapping in fluoroscopic images. METHODS: For more accurate cardiac state synchronization, video frame interpolation is applied to increase the frame rate of the originally limited angiographic images, yielding a higher frame rate and more adequate roadmaps. The proposed method also incorporates multi-length cross-correlation-based adaptive electrocardiogram (ECG) matching to handle irregular cardiac motion. Furthermore, a shape-constrained path searching method is proposed to extract the catheter structure from both fluoroscopic and angiographic images. The catheter motion is then estimated using a cascaded matching approach with an outlier removal strategy, leading to a final corrected roadmap. RESULTS: Evaluation on clinical x-ray images demonstrates the method's effectiveness, achieving a 92.8% F1 score for catheter extraction on 589 fluoroscopic and angiographic images. Additionally, the method achieves a 5.6-pixel distance error of the coronary roadmap on 164 intraoperative fluoroscopic images. CONCLUSIONS: Overall, the proposed method achieves accurate coronary roadmapping in fluoroscopic images and shows potential for overlaying accurate coronary roadmaps on fluoroscopic images to assist PCI.
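The multi-length cross-correlation ECG matching step can be illustrated with a small sketch: score every candidate alignment of the most recent live ECG samples against a reference ECG at several template lengths and keep the best averaged score. The function names and the plain averaging scheme are hypothetical, not the paper's exact algorithm:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def match_cardiac_phase(reference, live, lengths=(8, 16, 32)):
    """Score each candidate alignment of the live ECG's most recent
    samples against the reference ECG at several template lengths and
    return the reference index with the best averaged score."""
    best_idx, best_score = None, float("-inf")
    for idx in range(max(lengths), len(reference) + 1):
        score = sum(ncc(reference[idx - L:idx], live[-L:])
                    for L in lengths) / len(lengths)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

# synthetic aperiodic "ECG": the live stream is a delayed copy of the reference,
# so the current live sample should align with reference index 52
reference = [math.sin(0.7 * i) + 0.3 * math.sin(1.9 * i) for i in range(80)]
live = reference[20:52]
best = match_cardiac_phase(reference, live)
```

Using several template lengths at once is what makes the match robust: a short template reacts quickly to the current beat while longer ones disambiguate similar-looking phases.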

4.
IEEE Trans Med Imaging; PP, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38805326

ABSTRACT

Accurately reconstructing 4D critical organs contributes to visual guidance in X-ray image-guided interventional operations. Current methods estimate intraoperative dynamic meshes by refining a static initial organ mesh using the semantic information in single-frame X-ray images. However, these methods fall short of reconstructing an accurate and smooth organ sequence due to the distinct respiratory patterns between the initial mesh and the X-ray images. To overcome this limitation, we propose a novel dual-stage complementary 4D organ reconstruction (DSC-Recon) model for recovering dynamic organ meshes by utilizing preoperative and intraoperative data with different respiratory patterns. DSC-Recon is structured as a dual-stage framework: 1) the first stage designs a flexible interpolation network applicable to multiple respiratory patterns, which can generate dynamic shape sequences between any pair of preoperative 3D meshes segmented from CT scans; 2) in the second stage, we present a deformation network that takes the generated dynamic shape sequence as an initial prior and explores discriminative features (i.e., target organ areas and meaningful motion information) in the intraoperative X-ray images, predicting the deformed mesh through a designed feature mapping pipeline integrated into the initialized shape refinement process. Experiments on simulated and clinical datasets demonstrate the superiority of our method over state-of-the-art methods in both quantitative and qualitative aspects.

5.
Comput Methods Programs Biomed; 248: 108108, 2024 May.
Article in English | MEDLINE | ID: mdl-38461712

ABSTRACT

BACKGROUND: Existing face matching methods require a point cloud to be drawn on the real face for registration, which results in low registration accuracy: irregular deformation of the patient's skin introduces many outlier points into the point cloud. METHODS: This work proposes a non-contact pose estimation method based on hierarchical optimization of a similarity aspect graph. The proposed method constructs a distance-weighted, triangular-constrained similarity measure to describe the similarity between views by automatically identifying the 2D and 3D feature points of the face. A mutual similarity clustering method is proposed to construct a hierarchical aspect graph with 3D poses as nodes. A Monte Carlo tree search strategy is used to search the hierarchical aspect graph for the optimal pose of the facial 3D model, thereby achieving accurate registration of the facial 3D model to the real face. RESULTS: The proposed method was evaluated in accuracy verification experiments on phantoms and volunteers and compared with four advanced pose calibration methods. It obtained average fusion errors of 1.13 ± 0.20 mm and 0.92 ± 0.08 mm in the head phantom and volunteer experiments, respectively, the best fusion performance among all compared methods. CONCLUSIONS: Our experiments prove the effectiveness of the proposed pose estimation method for facial augmented reality.


Subjects
Algorithms, Augmented Reality, Humans, Three-Dimensional Imaging/methods
6.
Comput Biol Med; 171: 108176, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38401453

ABSTRACT

The segmentation of the orbit in computed tomography (CT) images plays a crucial role in the quantitative analysis of orbital decompression surgery for patients with thyroid-associated ophthalmopathy (TAO). However, orbit segmentation, particularly in postoperative images, remains challenging due to significant shape variation and a limited amount of labeled data. In this paper, we present a two-stage semi-supervised framework for automatic orbit segmentation in both preoperative and postoperative images, consisting of a pseudo-label generation stage and a semi-supervised segmentation stage. A Paired Copy-Paste strategy is also introduced to blend features extracted from preoperative and postoperative images, augmenting the network's ability to discern changes in orbital boundaries. More specifically, we employ a random cropping technique to transfer regions from labeled preoperative images (foreground) onto unlabeled postoperative images (background), as well as from unlabeled preoperative images (foreground) onto labeled postoperative images (background). Note that each pair of preoperative and postoperative images belongs to the same patient. The semi-supervised segmentation network (stage 2) processes the two mixed images using a combination of supervisory signals from pseudo labels (stage 1) and ground truth. The proposed method was trained and tested on a CT dataset from the Eye Hospital of Wenzhou Medical University. The experimental results demonstrate that it achieves a mean Dice similarity coefficient (DSC) of 91.92% with only 5% labeled data, surpassing the current state-of-the-art method by 2.4%.
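The Paired Copy-Paste cropping idea (transplanting a region from a preoperative image onto the paired postoperative image of the same patient) can be sketched in a toy 2D form; the single square patch and nested-list "images" are simplifications of mine, not the paper's implementation:

```python
import random

def paired_copy_paste(labeled, unlabeled, crop=2, seed=None):
    """Toy 2D copy-paste mix: copy one random square patch from the
    labeled image onto the same location in a copy of the unlabeled
    image (images are plain nested lists of equal size)."""
    rng = random.Random(seed)
    h, w = len(labeled), len(labeled[0])
    i = rng.randrange(h - crop + 1)
    j = rng.randrange(w - crop + 1)
    mixed = [row[:] for row in unlabeled]  # do not modify the input
    for di in range(crop):
        for dj in range(crop):
            mixed[i + di][j + dj] = labeled[i + di][j + dj]
    return mixed

preop = [[1] * 4 for _ in range(4)]   # stands in for a labeled preoperative slice
postop = [[0] * 4 for _ in range(4)]  # stands in for an unlabeled postoperative slice
mixed = paired_copy_paste(preop, postop, crop=2, seed=0)
```

Because the patch and its target location come from the same patient, the mixed image exposes the network to realistic pre/post boundary changes rather than arbitrary cross-patient content.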


Subjects
Hospitals, Orbit, Humans, Orbit/diagnostic imaging, Orbit/surgery, X-Ray Computed Tomography, Universities, Computer-Assisted Image Processing
7.
Comput Biol Med; 169: 107890, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38168646

ABSTRACT

Feature matching of monocular laparoscopic videos is crucial for visualization enhancement in computer-assisted surgery; the keys to high-quality matching are accurate homography estimation, accurate relative pose estimation, sufficient matches, and fast calculation. However, limited by monocular laparoscopic imaging characteristics such as highlight noise, motion blur, texture interference, and illumination variation, most existing feature matching methods struggle to produce high-quality matches efficiently and in sufficient numbers. To overcome these limitations, this paper presents a novel sequential coupling feature descriptor to extract and express multilevel feature maps efficiently, and a dual-correlate optimized coarse-fine strategy to establish dense matches at the coarse level and adjust pixel-wise matches at the fine level. First, a sequential coupling swin transformer layer is designed in the feature descriptor to learn and extract rich multilevel feature representations without increasing complexity. Then, the dual-correlate optimized coarse-fine strategy matches coarse feature sequences at low resolution, and the correlated fine feature sequences are optimized to refine pixel-wise matches based on coarse matching priors. Finally, the sequential coupling feature descriptor and dual-correlate optimization are merged into the Sequential Coupling Dual-Correlate Network (SeCo DC-Net) to produce high-quality matches. Evaluation on two public laparoscopic datasets, SCARED and EndoSLAM, shows that the proposed network outperforms state-of-the-art methods in homography estimation, relative pose estimation, reprojection error, number of matching pairs, and inference runtime. The source code is publicly available at https://github.com/Iheckzza/FeatureMatching.


Subjects
Laparoscopy, Computer-Assisted Surgery, Learning, Software
8.
IEEE Trans Biomed Eng; 71(2): 700-711, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38241137

ABSTRACT

OBJECTIVE: Biliary interventional procedures require physicians to track the interventional instrument tip (Tip) precisely in X-ray images. However, Tip positioning relies heavily on the physician's experience due to the limitations of X-ray imaging and respiratory interference, which leads to biliary damage, prolonged operation time, and increased X-ray radiation. METHODS: We construct an augmented reality (AR) navigation system for biliary interventional procedures. It includes system calibration, respiratory motion correction, and fusion navigation. First, the magnetic and 3D computed tomography (CT) coordinates are aligned through system calibration. Second, a respiratory motion correction method based on manifold regularization is proposed to correct the misalignment of the two coordinate systems caused by respiratory motion. Third, the virtual biliary tract, liver, and Tip from CT are overlaid at the corresponding positions on the patient for dynamic virtual-real fusion. RESULTS: Our system was evaluated on phantoms and patients, achieving average alignment errors of 0.75 ± 0.17 mm and 2.79 ± 0.46 mm, respectively. Navigation experiments conducted on phantoms achieve an average Tip positioning error of 0.98 ± 0.15 mm and an average fusion error of 1.67 ± 0.34 mm after correction. CONCLUSION: Our system can automatically register the Tip to the corresponding location in CT and dynamically overlay the 3D virtual model onto the patient to provide accurate and intuitive AR navigation. SIGNIFICANCE: This study demonstrates the clinical potential of our system for assisting physicians during biliary interventional procedures. The system enables dynamic visualization of the virtual model on the patient, reducing the reliance on contrast agents and X-ray usage.


Subjects
Augmented Reality, Computer-Assisted Surgery, Humans, Three-Dimensional Imaging, Liver, Imaging Phantoms, X-Ray Computed Tomography/methods, Computer-Assisted Surgery/methods
9.
Comput Biol Med; 168: 107718, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37988787

ABSTRACT

Fractional flow reserve (FFR) is considered the gold standard for diagnosing coronary myocardial ischemia. Existing 3D computational fluid dynamics (CFD) methods attempt to predict FFR noninvasively from coronary computed tomography angiography (CTA). However, the accuracy and efficiency of 3D CFD in coronary arteries are considerably limited. In this work, we introduce a multi-dimensional CFD framework that improves the accuracy of FFR prediction by estimating 0D patient-specific boundary conditions and increases efficiency by generating 3D initial conditions. The multi-dimensional CFD models comprise a 3D vascular model for coronary simulation, a 1D vascular model for iterative optimization, and a 0D vascular model for expressing boundary conditions. To improve accuracy, we utilize clinical parameters to derive 0D patient-specific boundary conditions with an optimization algorithm. To improve efficiency, we evaluate the convergence state using the 1D vascular model and obtain the convergence parameters to generate appropriate 3D initial conditions. The 0D patient-specific boundary conditions and the 3D initial conditions are used to predict FFR (FFRC). We conducted a retrospective study involving 40 patients (61 diseased vessels) with invasive FFR and corresponding CTA images. The results demonstrate that FFRC and invasive FFR have a strong linear correlation (r = 0.80, p < 0.001) and high consistency (mean difference: 0.014 ± 0.071). Applying the FFR cut-off value of 0.8, the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of FFRC were 88.5%, 93.3%, 83.9%, 84.8%, and 92.9%, respectively. Compared with the conventional zero-initial-conditions method, our method improves prediction efficiency by 71.3% per case. Therefore, our multi-dimensional CFD framework significantly improves both the accuracy and the efficiency of FFR prediction.
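The diagnostic metrics reported after applying the 0.8 cut-off follow directly from a confusion matrix over predicted and invasive FFR values. A minimal sketch, treating FFR <= 0.8 as ischemia-positive (the toy input values are illustrative, not the study's data):

```python
def diagnostic_metrics(ffr_pred, ffr_true, cutoff=0.8):
    """Confusion-matrix metrics with FFR <= cutoff treated as ischemic
    (positive). Returns accuracy, sensitivity, specificity, PPV, NPV."""
    tp = fp = tn = fn = 0
    for p, t in zip(ffr_pred, ffr_true):
        pred_pos, true_pos = p <= cutoff, t <= cutoff
        if pred_pos and true_pos:
            tp += 1
        elif pred_pos:
            fp += 1
        elif true_pos:
            fn += 1
        else:
            tn += 1
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # recall of ischemic vessels
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# hypothetical predicted vs. invasive FFR values for five vessels
metrics = diagnostic_metrics(
    ffr_pred=[0.70, 0.75, 0.85, 0.90, 0.78],
    ffr_true=[0.72, 0.82, 0.88, 0.79, 0.76])
```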


Subjects
Coronary Artery Disease, Coronary Stenosis, Myocardial Fractional Flow Reserve, Myocardial Ischemia, Humans, Retrospective Studies, Hydrodynamics, Coronary Angiography/methods, Coronary Artery Disease/diagnostic imaging, Predictive Value of Tests, Coronary Vessels/diagnostic imaging
10.
Comput Biol Med; 169: 107850, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38145602

ABSTRACT

BACKGROUND: Monocular depth estimation plays a fundamental role in clinical endoscopic surgery. However, the coherent illumination, smooth surfaces, and texture-less nature of endoscopy images present significant challenges for traditional depth estimation methods, which struggle to perceive depth accurately in such settings. METHOD: To overcome these challenges, this paper proposes a novel multi-scale residual fusion method for estimating depth from monocular endoscopy images. Specifically, we address coherent illumination by leveraging an image frequency-domain component space transformation, thereby enhancing the stability of the scene's light source. Moreover, we employ an image radiation intensity attenuation model to estimate the initial depth map. Finally, to refine the depth estimate, we apply a multi-scale residual fusion optimization technique. RESULTS: Extensive experiments were conducted on public datasets. The structural similarity measures for continuous frames in three distinct clinical data scenes reached 0.94, 0.82, and 0.84, respectively, demonstrating the method's ability to capture the intricate details of endoscopy images. Furthermore, the depth estimation accuracy reached 89.3% and 91.2% on the two models' data, respectively, underscoring the robustness of our method. CONCLUSIONS: Overall, the promising results on public datasets highlight the significant potential of our method for clinical applications, facilitating reliable depth estimation and enhancing the quality of endoscopic surgical procedures.
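One common reading of a radiation intensity attenuation model for the initial depth map is an inverse-square shading cue: with the light source co-located with the endoscope camera, observed intensity falls off roughly as 1/d². A minimal sketch under that assumption (the constant k, the epsilon guard, and the simplification to a single frontal light are mine, not the paper's model):

```python
def depth_from_intensity(intensity, k=1.0, eps=1e-9):
    """Inverse-square shading cue: if reflected intensity I falls off as
    1/d**2 from a camera-co-located light, an initial depth estimate is
    d ~ k / sqrt(I). `intensity` is a nested list of normalized values."""
    return [[k / ((v + eps) ** 0.5) for v in row] for row in intensity]

# brighter pixels are interpreted as nearer surface points
initial_depth = depth_from_intensity([[1.0, 0.25, 0.04]])
```

Such an estimate is only a rough prior (it ignores albedo and surface orientation), which is why a refinement stage like the paper's multi-scale residual fusion is needed afterwards.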


Subjects
Gastrointestinal Endoscopy, Endoscopy
11.
Comput Biol Med; 169: 107766, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38150885

ABSTRACT

Automatic vessel segmentation is a critical area of research in medical image analysis, as it can greatly assist doctors in accurately and efficiently diagnosing vascular diseases. However, accurately extracting the complete vessel structure from images remains a challenge due to issues such as uneven contrast and background noise. Existing methods primarily focus on segmenting individual pixels and often fail to consider vessel features and morphology. As a result, they often produce fragmented results and misidentify vessel-like background noise, leading to missing and outlier points in the overall segmentation. To address these issues, this paper proposes a progressive edge information aggregation network for vessel segmentation (PEA-Net). The proposed method consists of several key components. First, a dual-stream receptive field encoder (DRE) is introduced to preserve fine structural features and mitigate false positive predictions caused by background noise, by combining vessel morphological features obtained from different receptive field sizes. Second, a progressive complementary fusion (PCF) module is designed to enhance fine vessel detection and improve connectivity. This module complements the decoding path by combining features from previous iterations and the DRE, incorporating nonsalient information. Additionally, segmentation-edge decoupling enhancement (SDE) modules are employed as decoders to integrate upsampling features with the nonsalient information provided by the PCF, enhancing both edge and segmentation information. The features in the skip connections and decoding path are iteratively updated to progressively aggregate fine structure information, thereby optimizing segmentation results and reducing topological disconnections. Experimental results on multiple datasets demonstrate that the proposed PEA-Net achieves optimal performance in both pixel-level and topology-level metrics.


Subjects
Benchmarking, Pisum sativum, Computer-Assisted Image Processing
12.
BMC Med Inform Decis Mak; 23(1): 247, 2023 Nov 3.
Article in English | MEDLINE | ID: mdl-37924054

ABSTRACT

BACKGROUND: Clinical practice guidelines (CPGs) are designed to assist doctors in clinical decision making. High-quality research articles are important for the development of good CPGs, but the commonly used manual screening processes are time-consuming and labor-intensive. Artificial intelligence (AI)-based techniques have been widely used to analyze unstructured data, including texts and images, yet there are currently no effective and efficient AI-based systems for literature screening. Developing an effective method for automatic literature screening would therefore provide significant advantages. METHODS: Using advanced AI techniques, we propose the Paper title, Abstract, and Journal (PAJO) model, which treats article screening as a classification problem. For training, articles appearing in the current CPGs are treated as positive samples; the others are treated as negative samples. The features of the texts (e.g., titles and abstracts) and journal characteristics are then fully utilized by the PAJO model via the pretrained bidirectional-encoder-representations-from-transformers (BERT) model. The resulting text and journal encoders, along with an attention mechanism, are integrated in the PAJO model to complete the task. RESULTS: We collected 89,940 articles from PubMed to construct a dataset related to neck pain. Extensive experiments show that the PAJO model surpasses the state-of-the-art baseline by 1.91% in F1 score and 2.25% in area under the receiver operating characteristic curve. Its predictions were also evaluated against subject-matter experts, showing that PAJO can successfully screen high-quality articles. CONCLUSIONS: The PAJO model provides an effective solution for automatic literature screening. It can screen high-quality articles on neck pain and significantly improve the efficiency of CPG development. The PAJO methodology can also be easily extended to literature screening for other diseases.


Subjects
Deep Learning, Practice Guidelines as Topic, Humans, Artificial Intelligence, Clinical Decision-Making, Neck Pain, Review Literature as Topic
13.
Article in English | MEDLINE | ID: mdl-37747865

ABSTRACT

Microwave ablation (MWA) is a minimally invasive procedure for the treatment of liver tumors. Accumulating clinical evidence considers the minimal ablative margin (MAM) a significant predictor of local tumor progression (LTP). In clinical practice, MAM assessment is typically carried out through image registration of pre- and post-MWA images. However, this process faces two main challenges: the non-homologous match between tumor and coagulation, whose image appearances are inconsistent, and tissue shrinkage caused by thermal dehydration. These challenges result in low precision when traditional registration methods are used for MAM assessment. In this paper, we present a local contractive nonrigid registration method using a biomechanical model (LC-BM) to address these challenges and precisely assess the MAM. LC-BM comprises two consecutive parts: (1) local contractive decomposition (LC-part), which reduces incorrect matches between the tumor and coagulation and quantifies the shrinkage in the external coagulation region, and (2) a biomechanical model constraint (BM-part), which compensates for the shrinkage in the internal coagulation region. After quantifying and compensating for tissue shrinkage, the warped tumor is overlaid on the coagulation and the MAM is assessed. We evaluated the method on prospectively collected data from 36 patients with 47 liver tumors, comparing LC-BM with 11 state-of-the-art methods. LTP was diagnosed through contrast-enhanced MR follow-up images, serving as the ground truth for tumor recurrence. LC-BM achieved the highest accuracy (97.9%) in predicting LTP, outperforming all other methods. Our proposed method therefore holds significant potential to improve MAM assessment in MWA surgeries.

14.
Phys Med Biol; 68(17), 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37549676

ABSTRACT

Objective. In computer-assisted minimally invasive surgery, the intraoperative x-ray image is enhanced by overlapping it with a preoperative CT volume to improve visualization of vital anatomical structures. Accurate and robust 3D/2D registration of the CT volume and x-ray image is therefore highly desired in clinical practice. However, previous registration methods were prone to initial misalignments and struggled with local minima, leading to low accuracy and vulnerability. Approach. To improve registration performance, we propose a novel CT/x-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework, which contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and a flexible action step size, establishing fast and globally optimal convergence of the registration task; (2) a domain adaptation module reduces the domain gap between the x-ray image and the digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement; (3) a weighted reward function helps CT2X-IRA search for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignments. Main results. We evaluate CT2X-IRA on both public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with computation times of 1.5 s and 1.1 s, respectively, an accurate and fast workflow for rigid CT/x-ray image registration. Significance. CT2X-IRA achieves accurate and robust 3D/2D registration of CT and x-ray images, suggesting its potential significance in clinical applications.


Subjects
Algorithms, Three-Dimensional Imaging, X-Rays, Three-Dimensional Imaging/methods, X-Ray Computed Tomography/methods, Radiography, Computer-Assisted Image Processing
15.
IEEE J Biomed Health Inform; 27(8): 3924-3935, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37027679

ABSTRACT

Automatic segmentation of port-wine stains (PWS) from clinical images is critical for accurate diagnosis and objective assessment of PWS. However, this is a challenging task due to the color heterogeneity, low contrast, and indistinguishable appearance of PWS lesions. To address these challenges, we propose a novel multi-color space adaptive fusion network (M-CSAFN) for PWS segmentation. First, a multi-branch detection model is constructed from six typical color spaces, utilizing rich color texture information to highlight the difference between lesions and surrounding tissue. Second, an adaptive fusion strategy fuses the complementary predictions, addressing the significant within-lesion differences caused by color heterogeneity. Third, a structural similarity loss with color information is proposed to measure the detail error between predicted and ground-truth lesions. Additionally, a PWS clinical dataset of 1413 image pairs was established for the development and evaluation of PWS segmentation algorithms. To verify the effectiveness and superiority of the proposed method, we compared it with state-of-the-art methods on our collected dataset and four publicly available skin lesion datasets (ISIC 2016, ISIC 2017, ISIC 2018, and PH2). The experimental results show that our method achieves remarkable performance on our collected dataset, reaching 92.29% and 86.14% on the Dice and Jaccard metrics, respectively. Comparative experiments on the other datasets also confirm the reliability and capability of M-CSAFN for skin lesion segmentation.


Subjects
Port-Wine Stain, Skin Diseases, Humans, Port-Wine Stain/pathology, Reproducibility of Results, Algorithms, Dermoscopy/methods, Computer-Assisted Image Processing
16.
Comput Biol Med; 153: 106546, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36641935

ABSTRACT

Accurate detection of coronary artery stenosis in X-ray angiography (XRA) images is crucial for the diagnosis and treatment of coronary artery disease. However, stenosis detection remains a challenging task due to complicated vascular structures, poor imaging quality, and variable lesions. While devoted to accurate stenosis detection, most existing methods underexploit the spatio-temporal information of XRA sequences, which limits their performance on the task. To overcome this problem, we propose a new stenosis detection framework built on a Transformer-based module that aggregates proposal-level spatio-temporal features. In the module, a proposal-shifted spatio-temporal tokenization (PSSTT) scheme gathers spatio-temporal region-of-interest (RoI) features to obtain visual tokens within a local window. The Transformer-based feature aggregation (TFA) network then takes the tokens as input and enhances the RoI features by learning long-range spatio-temporal context for the final stenosis prediction. The effectiveness of our method was validated through qualitative and quantitative experiments on 233 XRA sequences of coronary arteries. Our method achieves a high F1 score of 90.88%, outperforming 15 other state-of-the-art detection methods and demonstrating a strong ability to aggregate spatio-temporal features for accurate stenosis detection from XRA images.


Subjects
Coronary Artery Disease, Coronary Stenosis, Humans, Coronary Angiography/methods, Pathologic Constriction, X-Rays, Coronary Stenosis/diagnostic imaging, Coronary Artery Disease/diagnosis
17.
Ultrasonics; 128: 106862, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36240539

ABSTRACT

The classic N-wire phantom has been widely used in the calibration of freehand ultrasound probes. One of its main challenges is accurately identifying the N-fiducials in ultrasound images, especially with multiple N-wire structures. In this study, a method using a multilayer N-wire phantom for automatic spatial calibration of ultrasound images is proposed. All dots in the ultrasound image are segmented, scored, and classified according to the unique geometric features of the multilayer N-wire phantom, and a recognition method for identifying the N-fiducials among the dots is proposed for calibrating the spatial transformation of the ultrasound probe. At depths of 9, 11, 13, and 15 cm, the reconstruction errors of 50 points are 1.24 ± 0.16, 1.09 ± 0.06, 0.95 ± 0.08, and 1.02 ± 0.05 mm, respectively. A reconstruction mockup test shows a distance accuracy of 1.11 ± 0.82 mm at a depth of 15 cm.
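The geometric core of N-wire calibration is that the image-space ratio along the three collinear dots of one "N" transfers to the known diagonal wire in phantom space, giving the 3D position of each N-fiducial. A minimal sketch of that interpolation (the endpoint convention varies between phantoms, so take the argument order as an assumption):

```python
import math

def n_fiducial_3d(a, b, c, diag_start, diag_end):
    """Classic N-wire interpolation: a and c are the image points of the
    two parallel side wires of one 'N', b is the image point of the
    diagonal wire, and diag_start/diag_end are the known 3D phantom-space
    endpoints of the diagonal. The collinear image ratio |ab|/|ac|
    transfers directly onto the diagonal wire."""
    r = math.dist(a, b) / math.dist(a, c)
    return tuple(s + r * (e - s) for s, e in zip(diag_start, diag_end))

# toy example: the ultrasound plane crosses the diagonal 40% of the way along it
fid = n_fiducial_3d((0.0, 0.0), (4.0, 0.0), (10.0, 0.0),
                    (0.0, 0.0, 0.0), (10.0, 40.0, 5.0))
```

Once each N-fiducial is known in both image and phantom space, the probe's spatial transformation can be solved as a standard point-based registration.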


Subjects
Algorithms , Imaging, Three-Dimensional , Calibration , Imaging, Three-Dimensional/methods , Phantoms, Imaging , Ultrasonography/methods
18.
Med Phys ; 50(1): 226-239, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35997999

ABSTRACT

PURPOSE: Surface-based image-to-patient registration in current surgical navigation is mainly achieved with a 3D scanner, which has several limitations in clinical practice, such as an uncontrollable scanning range, complicated operation, and even a high failure rate. An accurate, robust, and easy-to-perform image-to-patient registration method is therefore urgently required. METHODS: An incremental point cloud registration method was proposed for surface-based image-to-patient registration. The point cloud in image space was extracted from the computed tomography (CT) image, and a template matching method was applied to remove redundant points. The corresponding point cloud in patient space was incrementally collected with an optically tracked pointer, while a nearest point distance (NPD) constraint was applied to ensure the uniformity of the collected points. A coarse-to-fine registration method under the constraints of coverage ratio (CR) and outlier ratio (OR) was then proposed to obtain the optimal rigid transformation from image space to patient space. The proposed method was integrated into the recently developed endoscopic navigation system, and a phantom study and clinical trials were conducted to evaluate its performance. RESULTS: The phantom study revealed that the proposed constraints greatly improved the accuracy and robustness of registration. The comparative experiments revealed that the proposed registration method significantly outperforms the scanner-based method and achieves accuracy comparable to the fiducial-based method. In the clinical trials, the average registration duration was 1.24 ± 0.43 min, the target registration error (TRE) of 294 marker points (59 patients) was 1.25 ± 0.40 mm, and the lower 97.5% confidence limit of the success rate of positioning marker points exceeded the expected value (97.56% vs. 95.00%), indicating that the accuracy of the proposed method met the clinical requirement (TRE ⩽ 2 mm, p < 0.05). CONCLUSIONS: The proposed method combines high accuracy with convenience, advantages that the scanner-based and fiducial-based methods lack. Our findings will help improve the quality of endoscopic sinus and skull base surgery.


Subjects
Fiducial Markers , Surgery, Computer-Assisted , Humans , Phantoms, Imaging , Skull Base/diagnostic imaging , Skull Base/surgery , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Clinical Trials as Topic
19.
Article in English | MEDLINE | ID: mdl-35984790

ABSTRACT

Automatic liver tumor segmentation plays a key role in radiation therapy of hepatocellular carcinoma. In this paper, we propose a novel densely connected U-Net model with criss-cross attention (CC-DenseUNet) to segment liver tumors in computed tomography (CT) images. The dense interconnections in CC-DenseUNet ensure maximum information flow between encoder layers when extracting intra-slice features of liver tumors. Moreover, criss-cross attention is used in CC-DenseUNet to efficiently capture only the necessary and meaningful non-local contextual information of CT images containing liver tumors. We evaluated the proposed CC-DenseUNet on the LiTS dataset and the 3DIRCADb dataset. Experimental results show that the proposed method achieves state-of-the-art performance for liver tumor segmentation. We further demonstrate the robustness of the proposed method on a clinical dataset comprising 20 CT volumes.
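Criss-cross attention reduces cost by letting each position attend only to the H + W positions in its own row and column rather than all H x W positions. A minimal numpy sketch of one such pass, assuming identity Q/K/V projections (a learned implementation would add 1x1 convolutions and a second stacked pass to propagate full-image context):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def criss_cross_attention(feat):
    """One criss-cross attention pass over an (H, W, C) feature map.

    Each position attends only to its own row and column (the pixel
    itself appears in both sets in this simplified sketch), giving
    O(H*W*(H+W)) cost instead of O((H*W)**2) for full non-local
    attention.
    """
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            keys = np.concatenate([feat[i, :, :], feat[:, j, :]], axis=0)  # (W+H, C)
            energy = keys @ feat[i, j] / np.sqrt(C)   # affinity with row+column
            w = softmax(energy)
            out[i, j] = w @ keys                      # weighted sum of row+column
    return out
```

Stacking two such passes lets information from any pixel reach any other via a shared row or column, which is how the criss-cross scheme recovers full-image context cheaply.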

20.
Comput Biol Med ; 148: 105826, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35810696

ABSTRACT

BACKGROUND: Marker-based augmented reality (AR) calibration methods for surgical navigation often require a second computed tomography scan of the patient, and their clinical application is limited by high manufacturing costs and low accuracy. METHODS: This work introduces a novel AR calibration framework that combines a Microsoft HoloLens device with a single-camera registration module for surgical navigation. In this framework, a camera is used to gather multi-view images of a patient for reconstruction. A shape feature matching-based search method is proposed to adjust the size of the reconstructed model. A double clustering-based 3D point cloud segmentation method and a 3D line segment detection method are also proposed to extract the corner points of the image marker, which serve as its registration data. A feature triangulation iteration-based registration method is proposed to quickly and accurately calibrate the pose relationship between the image marker and the patient across virtual and real space. The registered patient model is wirelessly transmitted to the HoloLens device to display the AR scene. RESULTS: The proposed approach was used to conduct accuracy verification experiments on phantoms and volunteers, and the results were compared with six state-of-the-art AR calibration methods. The proposed method obtained average fusion errors of 0.70 ± 0.16 and 0.91 ± 0.13 mm in the phantom and volunteer experiments, respectively, the highest fusion accuracy among all compared methods. A volunteer liver puncture clinical simulation experiment was also conducted to show clinical feasibility. CONCLUSIONS: Our experiments proved the effectiveness of the proposed AR calibration method and revealed considerable potential for improving surgical performance.
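At the heart of any marker-to-patient calibration is a least-squares rigid transform between corresponding 3D points, such as the extracted marker corner points. A sketch of the classic SVD (Kabsch) solution, offered as a generic building block rather than the paper's feature triangulation iteration itself:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) corresponding points, e.g. marker corner points
    in one space and their counterparts in the other. Returns the
    rotation R and translation t minimizing ||R @ src + t - dst||.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # force a proper rotation (det = +1)
    t = mu_d - R @ mu_s
    return R, t
```

The sign correction via `D` guards against reflections when the point sets are noisy or nearly planar, which matters for flat image markers.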


Subjects
Augmented Reality , Surgery, Computer-Assisted , Calibration , Humans , Imaging, Three-Dimensional , Phantoms, Imaging