Results 1 - 20 of 20
1.
IEEE Trans Biomed Eng ; 2024 Jul 05. [Online ahead of print]
Article in English | MEDLINE | ID: mdl-38968023

ABSTRACT

Oral diseases impose a heavy social and financial burden on many countries and regions. If left untreated, severe cases can lead to malignant tumours. Common devices can no longer meet the high-resolution and non-invasive requirements, while Optical Coherence Tomography Angiography (OCTA) provides an ideal perspective for detecting vascular microcirculation. However, acquiring high-quality OCTA images takes time and can introduce unpredictable motion artefacts. Therefore, we propose a systematic workflow for rapid OCTA data acquisition. Initially, we implement a fourfold reduction in sampling points to increase the scanning speed. Then, we apply a deep neural network for rapid image reconstruction, elevating the resolution to the level achieved by full scanning. Specifically, it is a hybrid attention model with a structure-aware loss that extracts local and global angiographic information; it improves visualisation performance and outperforms numerous classical and recently proposed models by 3.536%-9.943% in SSIM and 0.930%-2.946% in MS-SSIM. With this approach, the time required to construct one OCTA volume can be reduced from nearly 30 s to about 3 s. The rapid-scanning protocol with high-quality imaging also demonstrates the feasibility of future real-time detection applications.
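The SSIM gains quoted above can be made concrete with the standard single-window (global) SSIM formula. The sketch below is illustrative only — the images are random stand-ins, not OCTA data, and the constants follow the usual convention for unit-range images:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM between two images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
full_scan = rng.random((64, 64))       # stand-in for a fully sampled scan
reconstruction = full_scan + 0.05 * rng.standard_normal((64, 64))

print(ssim_global(full_scan, full_scan))       # identical images give 1.0
print(ssim_global(full_scan, reconstruction))  # noisy reconstruction gives < 1.0
```

In practice SSIM is computed over sliding local windows and averaged (and MS-SSIM repeats this across scales), but the per-window arithmetic is exactly this formula.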

2.
Med Image Anal ; 95: 103182, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38688039

ABSTRACT

Recently, deep learning-based brain segmentation methods have achieved great success. However, most approaches focus on supervised segmentation, which requires many high-quality labeled images. In this paper, we pay attention to one-shot segmentation, aiming to learn from one labeled image and a few unlabeled images. We propose an end-to-end unified network that jointly performs the deformation modeling and segmentation tasks. Our network consists of a shared encoder, a deformation modeling head, and a segmentation head. In the training phase, the atlas and unlabeled images are input to the encoder to obtain multi-scale features. The features are then fed to the multi-scale deformation modeling module to estimate the atlas-to-image deformation field. The deformation modeling module implements the estimation at the feature level in a coarse-to-fine manner. Then, we employ the field to generate the augmented image pair through online data augmentation. We do not apply any appearance transformations because the shared encoder can capture appearance variations. Finally, we adopt a supervised segmentation loss for the augmented image. Considering that the unlabeled images still contain rich information, we introduce confidence-aware pseudo-labels for them to further boost the segmentation performance. We validate our network on three benchmark datasets. Experimental results demonstrate that our network significantly outperforms other deep single-atlas-based and traditional multi-atlas-based segmentation methods. Notably, the second dataset was collected from multiple centers, and our network still achieves promising segmentation performance on both the seen and unseen test sets, revealing its robustness. The source code will be available at https://github.com/zhangliutong/brainseg.
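The core operation behind atlas-to-image warping is resampling an image at grid positions offset by a dense displacement field. A minimal 2-D sketch (not the authors' network, just the warping primitive it estimates a field for), using linear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, field):
    """Warp a 2-D image with a dense displacement field.

    `field` has shape (2, H, W): per-pixel (row, col) displacements added
    to the sampling grid, as in atlas-to-image warping.
    """
    h, w = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + field[0], cols + field[1]])
    # Linear interpolation at the displaced grid positions.
    return map_coordinates(image, coords, order=1, mode="nearest")

atlas = np.arange(16.0).reshape(4, 4)
identity = np.zeros((2, 4, 4))   # zero displacement leaves the image unchanged
print(np.allclose(warp_image(atlas, identity), atlas))  # True
```

The same resampling, applied to a labeled atlas with an estimated field, is what turns one annotated image into many augmented training pairs.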


MeSH terms
Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Deep Learning , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Algorithms , Neuroanatomy
3.
Lancet Digit Health ; 5(12): e917-e924, 2023 12.
Article in English | MEDLINE | ID: mdl-38000875

ABSTRACT

The advent of generative artificial intelligence and large language models has ushered in transformative applications within medicine. Specifically in ophthalmology, large language models offer unique opportunities to revolutionise digital eye care, address clinical workflow inefficiencies, and enhance patient experiences across diverse global eye care landscapes. Yet alongside these prospects lie tangible practical and ethical challenges, encompassing data privacy, security, and the intricacies of embedding large language models into clinical routines. This Viewpoint highlights the promising applications of large language models in ophthalmology, while weighing up the practical and ethical barriers to their real-world implementation. It seeks to stimulate broader discourse on the potential of large language models in ophthalmology and to galvanise both clinicians and researchers into tackling the prevailing challenges and optimising the benefits of large language models while curtailing the associated risks.


MeSH terms
Medicine , Ophthalmology , Humans , Artificial Intelligence , Language , Privacy
4.
IEEE J Biomed Health Inform ; 27(11): 5381-5392, 2023 11.
Article in English | MEDLINE | ID: mdl-37651479

ABSTRACT

Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment. Irradiating the whole ventricular system together with the local tumor can reduce late-stage complications of radiotherapy while ensuring the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians. The diverse ventricle shapes and hydrocephalus-induced ventricle dilation increase the difficulty for automatic segmentation algorithms. Therefore, this study proposes a fully automatic segmentation framework. First, we designed a novel unsupervised learning-based label mapper, which handles ventricle shape variations and produces a preliminary segmentation result. Then, to boost the segmentation performance of the framework, we improved the region growth algorithm and combined it with a fully connected conditional random field to optimize the preliminary results at both the regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s, and the average target segmentation accuracy reaches 84.69%. Furthermore, we verified the algorithm in practical clinical applications. The results demonstrate that our proposed method helps physicians delineate radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
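Region growing, which the framework above refines, expands a mask outward from a seed voxel while neighbouring intensities stay within a tolerance. A minimal 2-D sketch (the paper's improved variant and CRF refinement are not reproduced here; the tolerance value is illustrative):

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    """Grow a region from `seed`, adding axis-neighbouring voxels whose
    intensity is within `tol` of the seed intensity."""
    grown = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    grown[seed] = True
    while queue:
        p = queue.popleft()
        for axis in range(volume.ndim):
            for step in (-1, 1):
                q = list(p)
                q[axis] += step
                q = tuple(q)
                if all(0 <= q[i] < volume.shape[i] for i in range(volume.ndim)) \
                        and not grown[q] and abs(volume[q] - seed_val) <= tol:
                    grown[q] = True
                    queue.append(q)
    return grown

img = np.array([[0, 0, 9],
                [0, 0, 9],
                [9, 9, 9]], dtype=float)
mask = region_grow(img, (0, 0), tol=1.0)
print(int(mask.sum()))  # 4: the connected low-intensity block
```

The same loop works unchanged on 3-D volumes, since neighbours are generated per axis.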


MeSH terms
Neoplasms, Germ Cell and Embryonal , Neoplasms , Child , Humans , Adolescent , Unsupervised Machine Learning , Algorithms , Image Processing, Computer-Assisted/methods
5.
Biosci Trends ; 17(3): 230-233, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37344394

ABSTRACT

Ultrasound image guidance is widely used to support care delivery, and it relies on accurate perception of information, particularly tissue recognition, to guide medical procedures in scenarios that are often complex. Recent breakthroughs in large models, such as ChatGPT for natural language processing and the Segment Anything Model (SAM) for image segmentation, have revolutionized how information is processed and interacted with. These large models exhibit a transformed understanding of basic information, holding promise for medicine, including the potential for universal autonomous ultrasound image guidance. This study evaluated the performance of SAM on commonly used ultrasound images and discusses SAM's potential contribution to an intelligent image-guided framework, with a specific focus on autonomous and universal ultrasound image guidance. Results indicate that SAM performs well in ultrasound image segmentation and has the potential to enable universal intelligent ultrasound image guidance.


MeSH terms
Algorithms , Ultrasonography
6.
IEEE Trans Biomed Eng ; 70(11): 3166-3177, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37227912

ABSTRACT

OBJECTIVE: Ultrasound (US) probes are scanned over the surface of the human body to acquire US images in clinical vascular US diagnosis. However, due to the deformation and specificity of different human surfaces, the scan trajectory over the skin is not fully correlated with the internal tissues, which poses a challenge for autonomous robotic US imaging in a dynamic, external-vision-free environment. Here, we propose a decoupled control strategy for autonomous robotic vascular US imaging in an environment without external vision. METHODS: The proposed system is divided into outer-loop posture control and inner-loop orientation control, determined by a reinforcement learning (RL) agent and a deep learning (DL) agent, respectively. First, we use a weakly supervised US vessel segmentation network to estimate the probe orientation. In the outer-loop control, we use a force-guided reinforcement learning agent to maintain a specific angle between the US probe and the skin during dynamic imaging. Finally, the orientation and the posture are integrated to complete the imaging process. RESULTS: Evaluation experiments on several volunteers showed that our robotic US system could autonomously perform vascular imaging on arms with different stiffness, curvature, and size without additional system adjustments. Furthermore, our system achieved reproducible imaging and reconstruction of dynamic targets without relying on vision-based surface information. CONCLUSION AND SIGNIFICANCE: Our system and control strategy provide a novel framework for the application of US robots in complex and external-vision-free environments.
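Force-guided probe control of this kind is commonly built on an admittance law, which converts a force error into a commanded motion. A 1-DOF toy sketch (the gains, surface stiffness, and force setpoint below are hypothetical, not values from the paper):

```python
def admittance_step(f_des, f_ext, v, m=1.0, d=20.0, dt=0.01):
    """One Euler step of the admittance law  M*dv/dt + D*v = f_des - f_ext.

    Returns the updated commanded velocity along the probe axis: the probe
    advances while the measured contact force is below the setpoint.
    """
    return v + dt * (f_des - f_ext - d * v) / m

# Toy contact model: the skin pushes back proportionally to penetration depth.
stiffness, f_des, dt = 500.0, 5.0, 0.01
x = v = 0.0
for _ in range(2000):
    f_ext = stiffness * max(x, 0.0)            # simulated contact force (N)
    v = admittance_step(f_des, f_ext, v, dt=dt)
    x += v * dt
print(round(stiffness * x, 2))  # settles at the desired 5.0 N
```

At steady state the commanded velocity vanishes and the contact force equals the setpoint, which is why admittance control maintains stable probe-skin contact on surfaces of unknown stiffness.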

7.
Med Image Anal ; 88: 102811, 2023 08.
Article in English | MEDLINE | ID: mdl-37245436

ABSTRACT

The main objective of deformable image registration is to improve a model's registration accuracy by minimizing the difference between a pair of fixed and moving images while keeping the results anatomically plausible. Since many anatomical features are closely related to each other, leveraging supervision from auxiliary tasks (such as supervised anatomical segmentation) has the potential to enhance the realism of the warped images after registration. In this work, we employ a Multi-Task Learning framework to formulate registration and segmentation as a joint problem, in which we utilize anatomical constraints from auxiliary supervised segmentation to enhance the realism of the predicted images. First, we propose a Cross-Task Attention Block to fuse the high-level features from both the registration and segmentation networks. With the help of the initial anatomical segmentation, the registration network can benefit from learning the task-shared feature correlations and rapidly focus on the parts that need deformation. On the other hand, the anatomical segmentation discrepancy between the ground-truth fixed annotations and the predicted segmentation maps of the initially warped images is integrated into the loss function to guide the convergence of the registration network. Ideally, a good deformation field should minimize the loss functions of both registration and segmentation. The voxel-wise anatomical constraint inferred from segmentation helps the registration network reach a global optimum for both deformation and segmentation learning. Both networks can be employed independently during the testing phase, enabling only the registration output to be predicted when segmentation labels are unavailable.
Qualitative and quantitative results indicate that our proposed methodology significantly outperforms previous state-of-the-art approaches on inter-patient brain MRI registration and pre- and intra-operative uterus MRI registration tasks within our experimental setup, leading to state-of-the-art DSC scores of 0.755 and 0.731 (increases of 0.8% and 0.5%, respectively) for the two tasks.


MeSH terms
Magnetic Resonance Imaging , Neuroimaging , Humans , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
8.
Comput Med Imaging Graph ; 104: 102184, 2023 03.
Article in English | MEDLINE | ID: mdl-36657212

ABSTRACT

Over the past few years, deep learning-based image registration methods have achieved remarkable performance in medical image analysis. However, many existing methods struggle to ensure accurate registration while preserving the desired diffeomorphic properties and inverse consistency of the final deformation field. To address this problem, this paper presents a novel symmetric pyramid network for inverse-consistent diffeomorphic medical image registration. Specifically, we first encode the multi-scale images into feature pyramids via a shared-weight encoder network and then progressively conduct feature-level diffeomorphic registration. The feature-level registration is implemented symmetrically to ensure inverse consistency. We carry out the forward and backward feature-level registration independently and average the estimated bidirectional velocity fields for more robust estimation. Finally, we employ a symmetric multi-scale similarity loss to train the network. Experimental results on three public datasets, including Mindboggle101, CANDI, and OAI, show that our method significantly outperforms others, demonstrating that the proposed network can achieve accurate alignment and generate deformation fields with the expected properties. Our code will be available at https://github.com/zhangliutong/SPnet.
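Diffeomorphic registration methods typically turn an estimated stationary velocity field into a displacement field by "scaling and squaring": scale the velocity down by 2^k, then compose the small field with itself k times. A minimal 2-D sketch of that integration step (not the paper's network, just the standard exponentiation it relies on):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(a, b):
    """Compose displacement fields: (a o b)(x) = a(x + b(x)) + b(x)."""
    h, w = a.shape[1:]
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    coords = grid + b
    a_warped = np.stack([map_coordinates(a[i], coords, order=1, mode="nearest")
                         for i in range(2)])
    return a_warped + b

def scaling_and_squaring(velocity, steps=6):
    """Exponentiate a stationary velocity field via 2**steps squarings."""
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = compose(disp, disp)
    return disp

const = np.ones((2, 8, 8)) * 0.5   # spatially constant velocity field
# A constant velocity integrates to an equal constant translation:
print(np.allclose(scaling_and_squaring(const), const))  # True
```

Because each squaring doubles the flow time, the result approximates the exponential map of the velocity field, which is what gives the deformation its diffeomorphic (invertible) character.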


MeSH terms
Algorithms , Image Interpretation, Computer-Assisted , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
9.
IEEE Trans Biomed Eng ; 70(4): 1401-1412, 2023 04.
Article in English | MEDLINE | ID: mdl-36288237

ABSTRACT

The immunohistochemical index is significant in guiding the selection of treatment strategy for breast cancer patients. Existing studies that focus on conventional ultrasound features and certain types of immunohistochemistry expression are limited to correlation exploration, and only a few studies have built predictive models. In this study, a Tri-Branch deep learning network is built to predict immunohistochemical HER2 status from hybrid ultrasound data, instead of relying on invasive, biopsy-based histopathological examination. Specifically, the deep learning model uses cross-modal attention and interactive learning to exploit the strong complementarity of hybrid data comprising B-mode US, contrast-enhanced ultrasound, and optical flow motion information, enhancing the accuracy of immunohistochemical HER2 prediction. The proposed prediction model was evaluated on a hybrid ultrasound dataset from 335 breast cancer patients. The experimental results indicated that the Tri-Branch model achieved a high accuracy of 86.23% for HER2 status prediction, and the HER2 status prediction for patients with different pathology grades yielded meaningful clinical observations.


MeSH terms
Breast Neoplasms , Humans , Female , Breast Neoplasms/pathology , Ultrasonography , Biopsy , Immunohistochemistry
10.
Int J Comput Assist Radiol Surg ; 17(9): 1731-1743, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35704237

ABSTRACT

PURPOSE: 4D reconstruction based on radiation-free ultrasound can provide valuable information about the anatomy. Current 4D US technologies suffer from limited field-of-view (FoV), technical complications, or cumbersome setups. This paper proposes a spatiotemporal US reconstruction framework to enhance the modality's ability to provide dynamic structure information. METHODS: We propose a spatiotemporal US reconstruction framework based on freehand sonography. First, a collection strategy is presented to acquire 2D US images at multiple spatial and temporal positions. A morphology-based phase extraction method with pose correction is presented to decouple the compounded image variations. For temporal alignment and reconstruction, a robust kernel regression model is established to reconstruct images at arbitrary phases. Finally, the spatiotemporal reconstruction is demonstrated in the form of 4D movies by integrating the US images according to the tracked poses and estimated phases. RESULTS: Quantitative and qualitative experiments were conducted on carotid US to validate the feasibility of the proposed pipeline. The mean phase localization and heart rate estimation errors were 0.07 ± 0.04 s and 0.83 ± 3.35 bpm, respectively, compared with cardiac gating signals. The assessment of reconstruction quality showed a low RMSE (<0.06) between consecutive images. Quantitative comparisons of anatomy reconstruction from the generated US volumes and MRI showed an average surface distance of 0.39 ± 0.09 mm on the common carotid artery and 0.53 ± 0.05 mm, with a landmark localization error of 0.60 ± 0.18 mm, on the carotid bifurcation. CONCLUSION: A novel spatiotemporal US reconstruction framework based on freehand sonography is proposed that preserves the practical convenience of conventional freehand US.
Evaluations on in vivo datasets indicated that our framework achieves acceptable reconstruction performance and shows potential application value in US examination of dynamic anatomy.
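Reconstructing an image value at an arbitrary cardiac phase from irregularly timed samples is, at its core, kernel regression over a periodic phase variable. A scalar Nadaraya-Watson sketch (the bandwidth and the sinusoidal stand-in signal are illustrative, not the paper's robust formulation):

```python
import numpy as np

def kernel_regress(phase_q, phases, values, bandwidth=0.03):
    """Nadaraya-Watson estimate at a query phase, with phases in [0, 1)
    treated as periodic (distance wraps around the cycle)."""
    d = np.abs(phases - phase_q)
    d = np.minimum(d, 1.0 - d)                # periodic phase distance
    w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
    return np.sum(w * values) / np.sum(w)

rng = np.random.default_rng(0)
phases = rng.random(200)                      # estimated phases of 2D frames
signal = np.sin(2 * np.pi * phases)           # stand-in for a pixel's intensity
estimate = kernel_regress(0.25, phases, signal)
print(estimate)  # close to sin(pi/2) = 1 at quarter-phase
```

In the full pipeline this regression runs per pixel (or per frame), so any phase of the cycle can be synthesized even though no frame was captured exactly there.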


MeSH terms
Imaging, Three-Dimensional , Magnetic Resonance Imaging , Algorithms , Carotid Arteries/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Ultrasonography/methods , Ultrasonography, Doppler
11.
IEEE J Biomed Health Inform ; 26(7): 3080-3091, 2022 07.
Article in English | MEDLINE | ID: mdl-35077370

ABSTRACT

The visual quality of ultrasound (US) images is crucial for clinical diagnosis and treatment. The main source of image quality degradation is the inherent speckle noise generated during US image acquisition. Current deep learning-based methods cannot preserve maximum boundary contrast when removing noise and speckle. In this paper, we address this issue by proposing a novel wavelet-based generative adversarial network (GAN) for real-time high-quality US image reconstruction, viz. WGAN-DUS. First, we propose a batch normalization module (BNM) to balance the importance of each sub-band image and fuse sub-band features simultaneously. Then, a wavelet reconstruction module (WRM) integrated with a cascade of wavelet residual channel attention blocks (WRCAB) is proposed to extract distinctive sub-band features used to reconstruct denoised images. A gradual tuning strategy is proposed to fine-tune our generator for better despeckling performance. We further propose a wavelet-based discriminator and a comprehensive loss function to effectively suppress speckle noise and preserve image features. Besides, we have designed an algorithm to estimate the noise levels during despeckling of real US images. The performance of our network was evaluated on natural, synthetic, simulated, and clinical US images and compared against various despeckling methods. To verify the feasibility of WGAN-DUS, we further extend our work to uterine fibroid segmentation using the denoised US images of the proposed approach. Experimental results demonstrate that our proposed method is feasible and can be generalized to clinical applications for real-time despeckling of US images without losing fine details.
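The sub-band images such a network operates on come from a wavelet decomposition. A single-level 2-D Haar transform, the simplest wavelet, splits an image into LL/LH/HL/HH sub-bands and reconstructs it losslessly (the paper's specific wavelet and network are not reproduced here):

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: perfect reconstruction from the four sub-bands."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

x = np.random.default_rng(1).random((8, 8))
print(np.allclose(ihaar2d(*haar2d(x)), x))  # True: lossless round trip
```

Speckle energy concentrates in the high-frequency sub-bands (LH, HL, HH), which is why attenuating them while keeping LL intact despeckles without blurring large-scale boundaries.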


MeSH terms
Algorithms , Image Enhancement , Humans , Image Enhancement/methods , Image Processing, Computer-Assisted , Ultrasonography/methods
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3455-3458, 2021 11.
Article in English | MEDLINE | ID: mdl-34891983

ABSTRACT

Image registration is a fundamental and crucial step in medical image analysis. However, due to the differences between mono-mode and multi-mode registration tasks and the complexity of the correspondence between multi-mode image intensities, existing unsupervised deep learning-based methods can hardly handle the two registration tasks simultaneously. In this paper, we propose a novel approach to register both mono- and multi-mode images in a differentiable framework. By approximately computing the mutual information in a differentiable form and combining it with a CNN, the deformation field can be predicted quickly and accurately without any prior information about the image intensity relationship. The registration process is implemented in an unsupervised manner, avoiding the need for ground-truth deformation fields. We utilize two public datasets to evaluate the performance of the algorithm on mono-mode and multi-mode image registration, which confirms the effectiveness and feasibility of our method. In addition, experiments on patient data also demonstrate the practicability and robustness of the proposed method.
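Mutual information works as a multi-modal similarity measure because it depends only on the statistical relationship between intensities, not on their absolute values. The classical joint-histogram estimate below is the non-differentiable baseline; the paper's contribution is a differentiable approximation of this quantity, which is not reproduced here:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shuffled = rng.permutation(img.ravel()).reshape(64, 64)
# Aligned (identical) images share far more information than misaligned ones:
print(mutual_information(img, img) > mutual_information(img, shuffled))  # True
```

Replacing the hard histogram bins with soft (e.g. Parzen-window) assignments is the standard route to making this loss differentiable for CNN training.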


MeSH terms
Image Processing, Computer-Assisted , Text Messaging , Algorithms , Humans , Neural Networks, Computer
13.
Int J Comput Assist Radiol Surg ; 16(12): 2189-2199, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34373973

ABSTRACT

PURPOSE: Autonomous ultrasound imaging by robotic ultrasound scanning systems in complex, soft, and uncertain clinical environments is important and challenging for therapy assistance. To cope with the complex environment faced by the ultrasound probe during scanning, we propose an autonomous robotic ultrasound (US) control method based on a reinforcement learning (RL) model that builds the relationship between the environment and the system. The proposed method requires only contact force as input to control the posture and contact force of the probe, without any a priori information about the target or the environment. METHODS: First, an RL agent is trained by a policy gradient theorem-based RL model with the 6-degree-of-freedom (DOF) contact force of the US probe to learn the relationship between contact force and output force directly. Then, a force control strategy based on an admittance controller is proposed for synchronous force, orientation, and position control by defining the desired contact force as the action space. RESULTS: The proposed method was evaluated via the US images, contact forces, and scan trajectories collected by scanning an unknown soft phantom. The experimental results indicated that the US images obtained by the proposed method differ from those of the free-hand scanning approach by only 3 ± 0.4%. The analysis of contact forces and trajectories indicated that our method enables stable scanning on a soft, uncertain skin surface while obtaining US images. CONCLUSION: We propose a concise and efficient force-guided US robot scanning control method for soft, uncertain environments based on reinforcement learning. Experimental results validated our method's feasibility for complex skin surface scanning, and the volunteer experiments indicated its potential application value in complex clinical environments for robotic US imaging systems, especially when visual information is limited.


MeSH terms
Robotic Surgical Procedures , Robotics , Humans , Phantoms, Imaging , Ultrasonography
14.
IEEE Trans Biomed Eng ; 68(9): 2787-2797, 2021 09.
Article in English | MEDLINE | ID: mdl-33497322

ABSTRACT

OBJECTIVE: In this paper, we introduce an autonomous robotic ultrasound (US) imaging system based on reinforcement learning (RL). The proposed system and framework are designed to control the US probe to perform fully autonomous imaging of a soft, moving, and marker-less target based only on single RGB images of the scene. METHODS: We propose several approaches to achieve the following objectives: real-time US probe control, constant-force tracking on soft surfaces, and automatic imaging. First, to express the state of the robotic US imaging task, we propose a state representation model that reduces the dimensionality of the imaging state and encodes the force and US information into the scene image space. Then, an RL agent is trained by a policy gradient theorem-based RL model with the single RGB image as the only observation. To achieve adaptable constant-force tracking between the US probe and the soft moving target, we propose a force-to-displacement control method based on an admittance controller. RESULTS: In simulation experiments, we verified the feasibility of the integrated method. Furthermore, we evaluated the proposed force-to-displacement method to demonstrate the safety and effectiveness of adaptable constant-force tracking. Finally, we conducted phantom and volunteer experiments to verify the feasibility of the method on a real system. CONCLUSION: The experiments indicated that our approaches were stable and feasible for autonomous and accurate control of the US probe. SIGNIFICANCE: The proposed system has potential application value in image-guided surgery and robotic surgery.


MeSH terms
Robotic Surgical Procedures , Robotics , Surgery, Computer-Assisted , Humans , Phantoms, Imaging , Ultrasonography
15.
Sci Bull (Beijing) ; 66(9): 884-888, 2021 May 15.
Article in English | MEDLINE | ID: mdl-33457042

ABSTRACT

Coronavirus disease 2019 (COVID-19) has become a major global epidemic. Facilitated by HTS2 technology, we evaluated the effects of 578 herbs and all 338 reported anti-COVID-19 traditional Chinese medicine (TCM) formulae on cytokine storm-related signaling pathways, and identified the key targets of the relevant pathways and potential active ingredients in these herbs. This large-scale transcriptional study innovatively combines HTS2 technology with bioinformatics methods and computer-aided drug design. For the first time, it systematically explores the molecular mechanism of TCM in regulating the COVID-19-related cytokine storm, providing an important scientific basis for elucidating the mechanism of action of TCM in treating COVID-19.

16.
Theranostics ; 10(10): 4676-4693, 2020.
Article in English | MEDLINE | ID: mdl-32292522

ABSTRACT

Rationale: High-intensity focused ultrasound (HIFU) therapy represents a noninvasive surgical approach to treat uterine fibroids. The operation of HIFU therapy relies on the information provided by medical images. In current HIFU therapy, all operations, such as positioning of the lesion in magnetic resonance (MR) and ultrasound (US) images, are performed manually by specifically trained doctors. This manual processing is an important limitation on the efficiency of HIFU therapy. In this paper, we aim to provide an automatic and accurate image guidance system, intelligent diagnosis, and treatment strategy for HIFU therapy by combining multimodality information. Methods: In intelligent HIFU therapy, medical information and the treatment strategy are automatically processed and generated by a real-time image guidance system. The system comprises a novel multistage deep convolutional neural network for preoperative diagnosis and a nonrigid US lesion tracking procedure for HIFU intraoperative image-assisted treatment. In the intelligent therapy process, the treatment area is determined from the autogenerated lesion area. Based on the autodetected treatment area, the HIFU foci are distributed automatically according to the treatment strategy. Moreover, image-based unexpected-movement warning and other physiological monitoring are used during the intelligent treatment procedure for safety assurance. Results: In the experiment, we integrated the intelligent treatment system on a commercial HIFU treatment device, and eight clinical experiments were performed. In the clinical validation, eight randomly selected clinical cases were used to verify the feasibility of the system. The results of the quantitative experiment indicated that our intelligent system meets HIFU clinical tracking accuracy and speed requirements. Moreover, the results of simulated repeated experiments confirmed that the autodistributed HIFU foci reached the level of intermediate-level clinical doctors.
Operations performed by junior- or middle-level operators with the assistance of the proposed system can reach the level of operations performed by senior doctors. Various experiments prove that our proposed intelligent HIFU therapy process is feasible for treating common uterine fibroid cases. Conclusion: We propose an intelligent HIFU therapy for uterine fibroids that integrates multiple medical information processing procedures. The experimental results demonstrated that the proposed procedures and methods can achieve monitored and automatic HIFU diagnosis and treatment. This research opens a path toward intelligent and automatic noninvasive therapy for uterine fibroids.


MeSH terms
High-Intensity Focused Ultrasound Ablation/instrumentation , Leiomyoma/therapy , Multimodal Imaging/methods , Algorithms , Female , High-Intensity Focused Ultrasound Ablation/methods , Humans , Leiomyoma/diagnostic imaging , Leiomyoma/pathology , Magnetic Resonance Spectroscopy/methods , Neural Networks, Computer , Treatment Outcome , Ultrasonography/methods
17.
Med Biol Eng Comput ; 57(1): 47-57, 2019 Jan.
Article in English | MEDLINE | ID: mdl-29967935

ABSTRACT

It is challenging to achieve high implant accuracy in dental implant placement because high-risk tissues need to be avoided. In this study, we present augmented reality (AR) surgical navigation with an accurate cone beam computed tomography (CBCT)-to-patient registration method to provide clinically desired dental implant accuracy. A registration device is used for registration between the preoperative data and the patient, performed outside the patient's mouth. After registration, the device is worn on the patient's teeth to track the patient. Naked-eye 3D images of the planned path and the mandibular nerve are superimposed onto the patient in situ to form an AR scene. Simultaneously, a 3D image of the drill is overlaid accurately on the real one to guide the implant procedure. Finally, implant accuracy is evaluated postoperatively. A model experiment was performed by an experienced dentist. In total, ten parallel pins were inserted into five 3D-printed mandible models, guided by our AR navigation method and by the dentist's experience, respectively. AR-guided dental implant placement showed better results than the dentist's experience (mean target error = 1.25 mm vs. 1.63 mm; mean angle error = 4.03° vs. 6.10°). Experimental results indicate that the proposed method is expected to be applicable in the clinic.
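The two accuracy measures reported above have a straightforward geometric form: target error is the Euclidean distance between planned and actual entry points, and angle error is the angle between the planned and actual implant axes. A small sketch with made-up coordinates (the paper's measurement setup is not specified here):

```python
import numpy as np

def implant_errors(planned_entry, planned_axis, actual_entry, actual_axis):
    """Target error (same units as the points) and angular deviation (deg)
    between planned and actual implant placement."""
    target_err = np.linalg.norm(np.asarray(actual_entry, float)
                                - np.asarray(planned_entry, float))
    u = np.asarray(planned_axis, float); u /= np.linalg.norm(u)
    v = np.asarray(actual_axis, float); v /= np.linalg.norm(v)
    angle_err = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return target_err, angle_err

# Hypothetical example: entry off by 1 mm, axis tilted by 45 degrees.
t, a = implant_errors([0, 0, 0], [0, 0, 1],
                      [1, 0, 0], [0, 1, 1])
print(round(t, 2), round(a, 2))  # 1.0 45.0
```

Clipping the dot product guards against floating-point values marginally outside [-1, 1] before `arccos`.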


MeSH terms
Cone-Beam Computed Tomography , Dental Implants , Imaging, Three-Dimensional , Calibration , Humans , Mandible/innervation , Mandible/surgery , Surgery, Computer-Assisted
18.
Healthc Technol Lett ; 6(6): 172-175, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038852

ABSTRACT

High-intensity focused ultrasound (HIFU) therapy represents an image-guided and non-invasive surgical approach to treat uterine fibroids. During HIFU operation, it is challenging to automatically obtain a real-time and accurate lesion contour in ultrasound (US) video; current intraoperative image processing is completed manually or semi-automatically. In this Letter, the authors propose a morphological active-contour-without-edges-based model to obtain an accurate, real-time, non-rigid US lesion contour. First, a targeted image pre-processing procedure is applied to reduce the influence of inadequate image quality. Then, an improved morphological contour detection method with a customised morphological kernel is harnessed to cope with the low signal-to-noise ratio of HIFU US images and obtain an accurate non-rigid lesion contour. A more reasonable lesion tracking procedure is proposed to improve tracking accuracy, especially in cases of large displacement and incomplete lesion area. The entire framework is accelerated on the GPU to achieve a high frame rate. Finally, non-rigid, real-time, and accurate lesion contouring of the intraoperative US video is provided to the doctor. The proposed procedure reaches a speed of more than 30 frames per second on a general-purpose computer, with a Dice similarity coefficient of 90.67% and an Intersection over Union of 90.14%.
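The Dice similarity coefficient and Intersection over Union quoted above are standard overlap metrics on binary masks; both follow directly from the intersection and the mask sizes. A minimal sketch with toy masks:

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity coefficient and Intersection-over-Union for
    binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

pred = np.array([[1, 1, 0],
                 [1, 0, 0]])
gt = np.array([[1, 1, 0],
               [0, 0, 0]])
d, i = dice_iou(pred, gt)
print(round(d, 2), round(i, 2))  # 0.8 0.67
```

Note that Dice = 2·IoU/(1 + IoU), so the two scores rank methods identically even though their absolute values differ.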

19.
IEEE J Biomed Health Inform ; 23(6): 2483-2493, 2019 11.
Article in English | MEDLINE | ID: mdl-30530379

ABSTRACT

Augmented reality (AR) surgical navigation systems based on image overlay have been used in minimally invasive surgery. However, conventional systems still suffer from a limited viewing zone and a lack of intuitive three-dimensional (3D) image guidance, and they cannot be moved freely. To fuse the 3D overlay image with the patient in situ, it is essential to track the overlay device while it is moving, which conventionally requires a direct line of sight between the optical markers and the tracker camera. In this study, we propose a moving-tolerant AR surgical navigation system using autostereoscopic image overlay, which avoids the use of an optical tracking system during the intraoperative period. The system captures binocular image sequences of environmental change in the operating room to locate the overlay device, rather than tracking the device directly. Consequently, a direct line of sight between the tracker and the tracked devices no longer needs to be maintained, and the movable range of the system is not limited by the field of view of the tracker camera. Computer simulation experiments demonstrate the reliability of the proposed moving-tolerant AR surgical navigation system. We also fabricated a computer-generated integral-photography-based 3D overlay AR system to validate the feasibility of the proposed moving-tolerant approach. Qualitative and quantitative experiments demonstrate that the proposed system can consistently fuse the 3D image with the patient, increasing the feasibility and reliability of traditional 3D-overlay AR surgical navigation systems.
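The core idea of locating the device through the environment rather than by direct line of sight can be illustrated by chaining rigid transforms: if both the device pose and the patient pose are known relative to a common environmental landmark, the device pose in the patient frame follows by composition. The sketch below uses hypothetical 4×4 homogeneous transforms (pure translations, for simplicity); the actual system's pose estimation from binocular image sequences is not reproduced here.

```python
def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(t):
    """Invert a rigid transform: rotation R -> R^T, translation p -> -R^T p."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]
    p = [-sum(r[i][j] * t[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [p[0]], r[1] + [p[1]], r[2] + [p[2]], [0, 0, 0, 1]]

# Hypothetical poses relative to one environmental landmark (units: mm):
T_env_device = [[1, 0, 0, 100], [0, 1, 0, 0], [0, 0, 1, 50], [0, 0, 0, 1]]
T_env_patient = [[1, 0, 0, 40], [0, 1, 0, 20], [0, 0, 1, 50], [0, 0, 0, 1]]

# Device pose in the patient frame, without any direct tracker line of sight:
T_patient_device = mat_mul(invert_rigid(T_env_patient), T_env_device)
print([row[3] for row in T_patient_device])  # → [60, -20, 0, 1]
```

In practice the environment-relative poses would be re-estimated continuously from the binocular image stream, so the composed pose stays valid as the overlay device moves.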


Subjects
Augmented Reality , Imaging, Three-Dimensional/methods , Minimally Invasive Surgical Procedures/methods , Algorithms , Equipment Design , Humans , Imaging, Three-Dimensional/instrumentation , Minimally Invasive Surgical Procedures/instrumentation , Phantoms, Imaging , Skull Base/diagnostic imaging , Skull Base/surgery , Tomography, X-Ray Computed
20.
Adv Exp Med Biol ; 1093: 193-205, 2018.
Article in English | MEDLINE | ID: mdl-30306483

ABSTRACT

Augmented reality (AR) techniques play an important role in minimally invasive surgery for orthopedics. AR improves hand-eye coordination by presenting surgeons with a merged surgical scene, enabling them to perform surgical operations more easily. Displaying navigation information in the AR scene requires medical image processing and three-dimensional (3D) visualization of the important anatomical structures. As a promising 3D display technique, integral videography (IV) can produce an autostereoscopic image with full parallax and continuous viewing points. An IV-based 3D AR navigation technique has therefore been proposed to present an intuitive scene and has been applied in orthopedics, including oral surgery and spine surgery; it achieves accurate patient-image registration as well as real-time tracking of the surgical tools and the patient. This paper reviews IV-based AR navigation and its applications in orthopedics, discusses the infrastructure required for successful implementation of IV-based approaches, and outlines the challenges that must be overcome for IV-based AR navigation to advance further.


Subjects
Imaging, Three-Dimensional , Oral Surgical Procedures , Orthopedics , Surgery, Computer-Assisted , Humans , Image Processing, Computer-Assisted , User-Computer Interface