Results 1 - 20 of 167
1.
Eur Radiol ; 34(3): 1434-1443, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37672052

ABSTRACT

OBJECTIVES: The histologic subtype of intracranial germ cell tumours (IGCTs) is an important factor in deciding the treatment strategy, especially for teratomas. In this study, we aimed to non-invasively diagnose teratomas based on fractal and radiomic features. MATERIALS AND METHODS: This retrospective study included 330 IGCT patients, including a discovery set (n = 296) and an independent validation set (n = 34). Fractal and radiomic features were extracted from T1-weighted, T2-weighted, and post-contrast T1-weighted images. Five classifiers (logistic regression, random forests, support vector machines, K-nearest neighbours, and XGBoost) were compared for our task. Based on the optimal classifier, we compared the performance of clinical, fractal, and radiomic models and the model combining these features in predicting teratomas. RESULTS: Among the diagnostic models, the fractal and radiomic models performed better than the clinical model. The final model that combined all the features showed the best performance, with an area under the curve, precision, sensitivity, and specificity of 0.946 [95% confidence interval (CI): 0.882-0.994], 95.65% (95% CI: 88.64-100%), 88.00% (95% CI: 77.78-96.36%), and 91.67% (95% CI: 78.26-100%), respectively, in the test set of the discovery set, and 0.944 (95% CI: 0.855-1.000), 85.71% (95% CI: 68.18-100%), 94.74% (95% CI: 83.33-100%), and 80.00% (95% CI: 58.33-100%), respectively, in the independent validation set. SHapley Additive exPlanations indicated that two fractal features, two radiomic features, and age were the top five features most highly associated with the presence of teratomas. CONCLUSION: The predictive model including image and clinical features could help guide treatment strategies for IGCTs. CLINICAL RELEVANCE STATEMENT: Our machine learning model including image and clinical features can non-invasively predict teratoma components, which could help guide treatment strategies for intracranial germ cell tumours (IGCT). KEY POINTS: • Fractals and radiomics can quantitatively evaluate imaging characteristics of intracranial germ cell tumours. • The model combining imaging and clinical features had the best predictive performance. • The diagnostic model could guide treatment strategies for intracranial germ cell tumours.
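A minimal Python sketch of the kind of classifier comparison described above, assuming a hypothetical feature matrix X (fractal/radiomic/clinical features) and teratoma labels y; this is an illustration, not the authors' pipeline:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(296, 30))        # placeholder fractal/radiomic/clinical features
    y = rng.integers(0, 2, size=296)      # placeholder labels (1 = teratoma present)

    # XGBoost (xgboost.XGBClassifier) could be added analogously if installed.
    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "SVM": SVC(probability=True),
        "k-NN": KNeighborsClassifier(),
    }
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)     # scale, then classify
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean cross-validated AUC = {auc:.3f}")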


Subjects
Neoplasms, Germ Cell and Embryonal ; Teratoma ; Humans ; Retrospective Studies ; Fractals ; Diagnosis, Differential ; Radiomics ; Neoplasms, Germ Cell and Embryonal/diagnostic imaging ; Teratoma/diagnostic imaging ; Magnetic Resonance Imaging/methods
2.
Eur Radiol ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37926739

ABSTRACT

OBJECTIVES: To investigate the value of diffusion MRI (dMRI) in H3K27M genotyping of brainstem glioma (BSG). METHODS: A primary cohort of BSG patients with dMRI data (b = 0, 1000 and 2000 s/mm2) and H3K27M mutation information was included. A total of 13 diffusion tensor and kurtosis imaging (DTI; DKI) metrics were calculated; then 17 whole-tumor histogram features and 29 along-tract white matter (WM) microstructural measurements were extracted from each metric and assessed within genotypes. After feature selection through univariate analysis and the least absolute shrinkage and selection operator method, multivariate logistic regression was used to build dMRI-derived genotyping models based on retained tumor and WM features separately and jointly. Model performances were tested using ROC curves and compared by the DeLong approach. A nomogram incorporating the best-performing dMRI model and clinical variables was generated by multivariate logistic regression and validated in an independent cohort of 27 BSG patients. RESULTS: A total of 117 patients (80 H3K27M-mutant) were included in the primary cohort. In total, 29 tumor histogram features and 41 WM tract measurements were selected for subsequent genotyping model construction. Incorporating WM tract measurements significantly improved diagnostic performances (p < 0.05). The model incorporating tumor and WM features from both DKI and DTI metrics showed the best performance (AUC = 0.9311). The nomogram combining this dMRI model and clinical variables achieved AUCs of 0.9321 and 0.8951 in the primary and validation cohorts, respectively. CONCLUSIONS: dMRI is valuable in BSG genotyping. Tumor diffusion histogram features are useful in genotyping, and WM tract measurements are more valuable in improving genotyping performance. CLINICAL RELEVANCE STATEMENT: This study found that diffusion MRI is valuable in predicting H3K27M mutation in brainstem gliomas, which could enable noninvasive detection of brainstem glioma genotypes and improve the diagnosis of brainstem glioma. KEY POINTS: • Diffusion MRI has significant value in brainstem glioma H3K27M genotyping, and models with satisfactory performances were built. • Whole-tumor diffusion histogram features are useful in H3K27M genotyping, and quantitative measurements of white matter tracts are valuable as they have the potential to improve model performance. • The model combining the most discriminative diffusion MRI model and clinical variables can help make clinical decisions.
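A hedged sketch of the feature-selection-plus-logistic-regression step described above (LASSO followed by multivariate logistic regression); X, y and the train/test split are placeholders, not the study data:

    import numpy as np
    from sklearn.linear_model import LassoCV, LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(117, 46))    # placeholder histogram + white-matter-tract features
    y = rng.integers(0, 2, size=117)  # placeholder labels: 1 = H3K27M mutant

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
    scaler = StandardScaler().fit(X_tr)

    # LASSO keeps only features with non-zero coefficients.
    lasso = LassoCV(cv=5).fit(scaler.transform(X_tr), y_tr)
    keep = np.flatnonzero(lasso.coef_ != 0)
    if keep.size == 0:                # fall back to all features on toy data
        keep = np.arange(X.shape[1])

    clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr)[:, keep], y_tr)
    proba = clf.predict_proba(scaler.transform(X_te)[:, keep])[:, 1]
    print("test AUC:", roc_auc_score(y_te, proba))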

3.
J Opt Soc Am A Opt Image Sci Vis ; 40(12): 2156-2163, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38086024

ABSTRACT

The rendering of specular highlights is a critical aspect of 3D rendering on autostereoscopic displays. However, the conventional highlight rendering techniques on autostereoscopic displays result in depth conflicts between highlights and diffuse surfaces. To address this issue, we propose a viewpoint-dependent highlight depiction method with head tracking, which incorporates microdisparity of highlights in binocular parallax and preserves the motion parallax of highlights. Our method was found to outperform physical highlight depiction and highlight depiction with microdisparity in terms of depth perception and realism, as demonstrated by experimental results. The proposed approach offers a promising alternative to traditional physical highlights on autostereoscopic displays, particularly in applications that require accurate depth perception.
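An illustrative sketch of the core idea, assuming the specular pass is rendered with a reduced camera baseline (microdisparity) while the diffuse pass uses the full, head-tracked interpupillary distance; the scaling factor and all names are assumptions, not the authors' implementation:

    import numpy as np

    IPD = 0.063          # full interpupillary distance in metres
    MICRO_SCALE = 0.1    # assumed disparity reduction for the specular pass

    def eye_positions(head_pos, baseline):
        """Left/right virtual camera positions for a given camera baseline."""
        head_pos = np.asarray(head_pos, dtype=float)
        offset = np.array([baseline / 2.0, 0.0, 0.0])
        return head_pos - offset, head_pos + offset

    head = np.array([0.02, 0.0, 0.60])                       # tracked head position
    diffuse_L, diffuse_R = eye_positions(head, IPD)          # full binocular parallax
    spec_L, spec_R = eye_positions(head, IPD * MICRO_SCALE)  # microdisparity pass

    # Both camera pairs follow the tracked head, so motion parallax is preserved,
    # while the highlight pass sees a much smaller binocular disparity.
    print(diffuse_L, diffuse_R)
    print(spec_L, spec_R)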

4.
J Opt Soc Am A Opt Image Sci Vis ; 39(5): 782-792, 2022 May 01.
Article in English | MEDLINE | ID: mdl-36215437

ABSTRACT

We present an omnidirectional 3D autostereoscopic aerial display with continuous parallax. Integral photography (IP) combined with polyhedron-shaped aerial imaging plates (AIPs) is utilized to achieve an extended view angle of 3D aerial images. With optical theoretical analysis and an aerial in situ rotation design, a 3D aerial display with an enlarged viewing angle is realized. In particular, the proposed 3D aerial display can realize any assigned angle within 360 deg. We also optimize the aerial display with artifact image removal and floating image brightness analysis. Experiments are performed to demonstrate the 3D aerial display with full-motion parallax, continuous viewpoints, and multiplayer interaction. The proposed system offers an attractive prospect for non-contact interaction and multi-person collaboration.

5.
Appl Opt ; 61(5): B339-B344, 2022 Feb 10.
Article in English | MEDLINE | ID: mdl-35201157

ABSTRACT

We propose a structure of a far-field nanofocusing metalens with focal shifting that is actively tuned at visible wavelengths. Surface plasmon polaritons (SPPs) can be excited by the metal-insulator-metal (MIM) subwavelength structure at visible wavelengths. The coherent interference of SPPs emitted by subwavelength nanostructures can form a nanoscale focus. When the SPPs are excited and pass through several concentric ring gratings with specific aspect ratios, the extraordinary optical transmission phenomenon occurs. Two metal concentric ring gratings achieve double diffraction, scattering light to the far field. An anisotropic or isotropic electrically adjustable refractive index material, such as liquid-crystal or optical phase change material, is filled in a dielectric layer between two metal layers, and the effective refractive index is modulated by electronically controlled active tuning. The focal shift is achieved by changing the effective refractive index of the intermediate dielectric. In addition, different incident wavelengths correspond to different effective refractive indices to achieve time-division-multiplexing multi-wavelength achromatic focusing. The finite-difference time-domain method was used to simulate the effect of substrate effective refractive index variation on achromatic superfocusing. The results show that the super-resolution focal spot (FWHM=0.158λ0) with long focal length (FL=5.177λ0) and large depth of field (DOF=3.412λ0) can be achieved by optimizing the design parameters. The visible plasma metalens has potential applications in high-density optical storage and optical microscopic imaging, especially in three-dimensional display for light field and integral imaging.
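A toy zone-plate-style estimate of why tuning the effective refractive index shifts the focus; it assumes the ring condition sqrt(r_m**2 + f**2) - f = m*lambda0/n_eff and is only an illustration of the mechanism, not the FDTD model used in the paper:

    lambda0 = 633e-9      # assumed visible design wavelength (m)

    def focal_length(r_m, m, n_eff):
        """Focus satisfying sqrt(r_m**2 + f**2) - f = m * lambda0 / n_eff."""
        lam = lambda0 / n_eff
        return (r_m**2 - (m * lam)**2) / (2 * m * lam)

    r1 = 2.0e-6           # assumed radius of the first ring (m)
    for n_eff in (1.5, 1.6, 1.7):   # e.g. a liquid-crystal tuning range
        print(f"n_eff = {n_eff}: f = {focal_length(r1, 1, n_eff) * 1e6:.2f} um")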

6.
Clin Oral Investig ; 26(2): 2005-2014, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34564760

ABSTRACT

OBJECTIVES: The aim of this study was to propose and validate an automatic approach based on the iterative closest point algorithm for virtual complement and reconstruction of maxillofacial bone defects. MATERIALS AND METHODS: A 3D craniomaxillofacial database of normal Chinese people including 500 skull models was established. A modified iterative closest point (ICP) algorithm was developed to complete bone defects automatically. The performance was evaluated by two approaches: (1) model experiment: virtual bony defects were created on 30 intact normal skull models not included in the database. For each defect model, the algorithm was applied to select the reference skull model from the database. Three-dimensional and 2-dimensional comparisons were conducted to evaluate the error between the reference skull model and the original intact model. Root mean square error (RMSE) and processing time were calculated. (2) Clinical application: the algorithm was utilized to assist reconstruction in 5 patients with maxillofacial bone defects. The symmetry of the post-operative skull model was evaluated by comparison with its mirrored model. RESULTS: The algorithm was tested on a 1.80 GHz CPU, and the average processing time was 493.5 s. (1) Model experiment: the average root-mean-square deviation of the defect area was less than 2 mm. (2) Clinical application: the RMSE between the post-operative skull and its mirrored model was 1.72 mm. CONCLUSION: It is feasible to use an iterative closest point algorithm based on a normal-population database to automatically predict the reference data of missing maxillofacial bone. CLINICAL RELEVANCE: An automated approach based on an ICP algorithm and a normal-population database for maxillofacial bone defect reconstruction has been proposed and validated.
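A minimal point-to-point ICP sketch in Python (numpy/scipy) showing the alignment step such a pipeline builds on; the paper's defect-aware modifications and the skull-database lookup are not reproduced here:

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        c_src, c_dst = src.mean(0), dst.mean(0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    def icp(src, dst, iters=50):
        tree = cKDTree(dst)
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)              # closest-point correspondences
            R, t = best_rigid_transform(cur, dst[idx])
            cur = cur @ R.T + t
        rmse = np.sqrt(np.mean(tree.query(cur)[0] ** 2))
        return cur, rmse

    # Tiny synthetic demo: align a rotated, shifted copy of a random point cloud.
    rng = np.random.default_rng(2)
    dst = rng.normal(size=(500, 3))
    ang = 0.2
    R0 = np.array([[np.cos(ang), -np.sin(ang), 0], [np.sin(ang), np.cos(ang), 0], [0, 0, 1]])
    src = dst @ R0.T + np.array([0.1, -0.05, 0.02])
    aligned, rmse = icp(src, dst)
    print("RMSE after ICP:", rmse)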


Subjects
Surgery, Computer-Assisted ; Algorithms ; Humans ; Imaging, Three-Dimensional ; Skull/diagnostic imaging ; Skull/surgery
7.
Opt Express ; 29(22): 35456-35473, 2021 Oct 25.
Article in English | MEDLINE | ID: mdl-34808979

ABSTRACT

An autostereoscopic 3D display has two important indicators: the number of viewpoints and the display resolution. However, it is a challenge to improve both simultaneously. Here, we develop a fixed-position multiview, lossless-resolution autostereoscopic 3D display system that includes a dynamic liquid crystal (LC) grating screen. This display system consists of an LC display panel and an LC grating screen. Synchronizing the frame switching of the LC display panel with the LC grating screen shutter preserves the resolution. The "eye space" design makes the viewpoints dense enough and determines the LC grating screen's parameters. Building on this, we use binocular viewpoint tracking to realize adaptive control of the LC grating screen. Different binocular views are rendered in real time according to the position of a single pair of stereoscopic viewpoints in the eye space, making motion parallax possible. We present the working principle and a mathematical analysis, and we implement a prototype to verify the principle. The experimental results show that the prototype achieves viewpoint tracking and motion parallax with lossless resolution and sufficiently dense viewpoints.

8.
Opt Express ; 28(22): 32266-32293, 2020 Oct 26.
Article in English | MEDLINE | ID: mdl-33114917

ABSTRACT

This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging. The article discusses various aspects of the field including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from the experts presenting various aspects of the field on sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents the vision of its author to describe the progress, potential, vision, and challenging issues in this field.

9.
Opt Express ; 27(15): 20421-20434, 2019 Jul 22.
Article in English | MEDLINE | ID: mdl-31510136

ABSTRACT

We propose a novel full-parallax autostereoscopic display based on a lenticular tracking method to achieve separation between the viewing angle and image resolution and to improve these two parameters simultaneously. The proposed method enables the viewing angle to be independent of the image resolution and has the potential to solve the long-term trade-off problem in integral photography. By employing a lenticular lens array instead of the micro-lens array in integral photography with viewpoint tracking, the proposed method achieves a high-resolution, wide-viewing-angle 3D display with full parallax. A real-time tracking and rendering algorithm for the display method is also proposed in this study. The experimental results, compared with those of the conventional integral photography display and the tracking-based integral photography display, demonstrate the feasibility of this lenticular tracking display technology and its advantages in display resolution and viewing angle, suggesting its potential in practical three-dimensional applications.
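A simplified sketch of tracked lenticular interleaving: for the current tracked eye positions, each panel column is assigned to the eye whose ray through the nearest lenticule centre lands closest to it. The geometry (refraction ignored) and all parameters are illustrative assumptions, not the paper's rendering algorithm:

    import numpy as np

    panel_width = 0.30    # m
    n_cols = 1920         # panel columns
    lens_pitch = 0.0006   # m, lenticule pitch
    gap = 0.002           # m, lens-to-panel gap
    z_view = 0.60         # m, tracked viewing distance

    col_x = (np.arange(n_cols) + 0.5) * panel_width / n_cols       # column centres
    lens_cx = (np.floor(col_x / lens_pitch) + 0.5) * lens_pitch    # owning lenticule centre

    def column_to_eye(eye_lx, eye_rx):
        """0 where a panel column should carry the left image, 1 for the right."""
        # Panel point each eye sees through the centre of the column's lenticule
        # (similar triangles; refraction ignored for brevity).
        hit_l = lens_cx + (lens_cx - eye_lx) * gap / z_view
        hit_r = lens_cx + (lens_cx - eye_rx) * gap / z_view
        return (np.abs(col_x - hit_r) < np.abs(col_x - hit_l)).astype(int)

    mask = column_to_eye(eye_lx=0.118, eye_rx=0.181)   # tracked eye x-positions (m)
    print("left-eye columns:", int(np.sum(mask == 0)),
          "right-eye columns:", int(np.sum(mask == 1)))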

10.
J Biomed Inform ; 100: 103319, 2019 12.
Article in English | MEDLINE | ID: mdl-31655272

ABSTRACT

To provide natural simulated objects and intuitive user interaction in medical education and training, we propose a naked eye 3D display and interaction system. The current 3D rendering algorithms for naked eye 3D displays are not suitable for medical use, due to the requirements of displaying and interacting with high-quality medical images and simulating soft tissues. Because the traditional 3D rendering procedure and vertex indexing in collision detection require substantial computing power when using a naked eye 3D display, the current method cannot achieve fluent displays and interactions. Thus, we develop a novel octree-based 3D rendering and interaction algorithm for high-quality medical models to improve the rendering rate and obtain smooth human-machine interactions when using the naked eye 3D display device. We also evaluate the soft-body phantom simulation of the naked eye 3D display device by combining the traditional 3D rendering algorithm with elastic 3D simulation to simulate deformable tissues. We integrate an incremental interaction method and a Kalman filter-based hand tracking method to achieve a larger user interaction range and robust hand tracking. We used the proposed system to perform human-computer interactions with rigid phantoms and soft-body phantoms. The experimental results showed that the proposed rendering algorithm for rigid phantoms could achieve higher rendering performance (50 FPS) than the traditional rendering algorithm (9.8 FPS). The user experiments showed that the 3D simulation system equipped with the enhanced rendering algorithm could achieve fluent interactions when using the naked eye 3D display, thus improving the educational experience and reducing task completion times.
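A minimal constant-velocity Kalman filter of the kind used for hand-tracking smoothing, reduced to one dimension for brevity; the noise parameters are illustrative assumptions, not the system's values:

    import numpy as np

    dt = 1.0 / 60.0                              # tracker frame interval
    F = np.array([[1, dt], [0, 1]])              # state transition: [position, velocity]
    H = np.array([[1, 0]])                       # only position is measured
    Q = np.diag([1e-4, 1e-3])                    # process noise (assumed)
    R = np.array([[4e-3]])                       # measurement noise (assumed)

    x = np.zeros((2, 1))                         # initial state
    P = np.eye(2)                                # initial covariance

    def kalman_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the noisy hand-position measurement z
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    rng = np.random.default_rng(3)
    truth = np.linspace(0.0, 0.5, 120)           # hand moving steadily
    for z in truth + rng.normal(scale=0.06, size=truth.size):
        x, P = kalman_step(x, P, z)
    print("final filtered position:", float(x[0, 0]))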


Subjects
Education, Medical/organization & administration ; Imaging, Three-Dimensional/methods ; User-Computer Interface ; Vision, Ocular ; Algorithms ; Computer Simulation ; Humans ; Man-Machine Systems
11.
J Opt Soc Am A Opt Image Sci Vis ; 35(9): 1567-1574, 2018 Sep 01.
Article in English | MEDLINE | ID: mdl-30183012

ABSTRACT

Integral photography (IP) is one of the most promising 3D display technologies, achieving a full-parallax 3D display without glasses. There is a great need to render correct, high-precision 3D images on an IP display. To achieve a correct 3D display, calibration is needed to correct optical misalignment and optical aberrations, yet it is challenging to achieve correct mapping between the microlens array and the matrix display. We propose an IP calibration method for a 3D autostereoscopic integral photography display based on a sparse camera array. Our method distinguishes itself from previous methods by estimating parameters for a dense correspondence map of an IP display with a relatively flexible setup and high precision at a reasonable time cost. We also propose a workflow that enables our method to handle both visible and invisible microlens arrays and obtain good results. One prototype was fabricated to evaluate the feasibility of the proposed method. Moreover, we evaluate the proposed method in terms of geometric accuracy and image quality.

12.
Adv Exp Med Biol ; 1093: 193-205, 2018.
Article in English | MEDLINE | ID: mdl-30306483

ABSTRACT

Augmented reality (AR) techniques play an important role in the field of minimally invasive surgery for orthopedics. AR can improve the hand-eye coordination by providing surgeons with the merged surgical scene, which enables surgeons to perform surgical operations more easily. To display the navigation information in the AR scene, medical image processing and three-dimensional (3D) visualization of the important anatomical structures are required. As a promising 3D display technique, integral videography (IV) can produce an autostereoscopic image with full parallax and continuous viewing points. Moreover, IV-based 3D AR navigation technique is proposed to present intuitive scene and has been applied in orthopedics, including oral surgery and spine surgery. The accurate patient-image registration, as well as the real-time target tracking for surgical tools and the patient, can be achieved. This paper overviews IV-based AR navigation and the applications in orthopedics, discusses the infrastructure required for successful implementation of IV-based approaches, and outlines the challenges that must be overcome for IV-based AR navigation to advance further development.


Subjects
Imaging, Three-Dimensional ; Oral Surgical Procedures ; Orthopedics ; Surgery, Computer-Assisted ; Humans ; Image Processing, Computer-Assisted ; User-Computer Interface
13.
Hepatobiliary Pancreat Dis Int ; 17(2): 101-112, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29567047

ABSTRACT

BACKGROUND: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and therefore to operate precisely and to improve clinical outcomes. DATA SOURCES: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used for searching publications in the PubMed database. The primary sources were peer-reviewed journals up to December 2016. Additional articles were identified by a manual search of references found in the key articles. RESULTS: In general, AR technology mainly includes 3D reconstruction, display, registration and tracking techniques and has recently been adopted gradually for liver surgeries including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors that limited the application of AR technology. CONCLUSIONS: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for the improvement of long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.


Subjects
Biliary Tract Diseases/surgery ; Biliary Tract Surgical Procedures/methods ; Hepatectomy/methods ; Laparoscopy/methods ; Liver Diseases/surgery ; Patient-Specific Modeling ; Robotic Surgical Procedures/methods ; Biliary Tract Diseases/diagnostic imaging ; Humans ; Imaging, Three-Dimensional ; Liver Diseases/diagnostic imaging ; Magnetic Resonance Imaging ; Radiographic Image Interpretation, Computer-Assisted ; Tomography, X-Ray Computed
14.
Surg Innov ; 25(5): 492-498, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29909727

ABSTRACT

BACKGROUND: We applied augmented reality (AR) techniques to flexible choledochoscopy examinations. METHODS: Enhanced computed tomography data of a patient with intrahepatic and extrahepatic biliary duct dilatation were collected to generate a hollow, 3-dimensional (3D) model of the biliary tree by 3D printing. The 3D printed model was placed in an opaque box. An electromagnetic (EM) sensor was internally installed in the choledochoscope instrument channel for tracking its movements through the passages of the 3D printed model, and an AR navigation platform was built using image overlay display. The porta hepatis was used as the reference marker with rigid image registration. The trajectories of the choledochoscope and the EM sensor were observed and recorded using the operator interface of the choledochoscope. RESULTS: Training choledochoscopy was performed on the 3D printed model. The choledochoscope was guided into the left and right hepatic ducts, the right anterior hepatic duct, the bile ducts of segment 8, the hepatic duct in subsegment 8, the right posterior hepatic duct, and the left and the right bile ducts of the caudate lobe. Although stability in tracking was less than ideal, the virtual choledochoscope images and EM sensor tracking were effective for navigation. CONCLUSIONS: AR techniques can be used to assist navigation in choledochoscopy examinations in bile duct models. Further research is needed to determine its benefits in clinical settings.


Subjects
Common Bile Duct ; Endoscopy, Digestive System/methods ; Patient-Specific Modeling ; Printing, Three-Dimensional ; Virtual Reality ; Adult ; Cholelithiasis ; Common Bile Duct/diagnostic imaging ; Common Bile Duct/surgery ; Humans ; Male ; Proof of Concept Study
15.
J Biomed Inform ; 71: 154-164, 2017 07.
Article in English | MEDLINE | ID: mdl-28533140

ABSTRACT

Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming increasingly important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, surgical education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively.
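A small sketch of the mutual-information similarity such a registration step can maximise, computed from a joint intensity histogram with numpy only; this is an illustration, not the system's implementation:

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Mutual information (in nats) between two images of the same size."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                  # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(4)
    fixed = rng.random((128, 128))
    moving = fixed + 0.05 * rng.standard_normal((128, 128))   # roughly aligned image
    print("MI(fixed, moving) =", mutual_information(fixed, moving))
    print("MI(fixed, noise)  =", mutual_information(fixed, rng.random((128, 128))))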


Subjects
Image Processing, Computer-Assisted ; Imaging, Three-Dimensional ; Surgical Procedures, Operative ; Telemedicine ; Algorithms ; Humans
16.
J Opt Soc Am A Opt Image Sci Vis ; 34(5): 804-812, 2017 May 01.
Article in English | MEDLINE | ID: mdl-28463324

ABSTRACT

Quality of three-dimensional (3D) autostereoscopic displays is mainly influenced by the mismatch between the optical apparatus setups and image generation algorithms. In this paper, we take the optical apparatus setups into consideration and present an accurate 3D autostereoscopic display method using optimized parameters obtained through quantitative calibration. Rotational and translational alignments are performed quantitatively to rectify the optical apparatus. In addition, the main parameters in a 3D display are evaluated for accurate 3D image rendering. Using the proposed method, the 3D autostereoscopic display can be calibrated quantitatively and provide 3D images with accurate spatial information. Experiments verified the validity and feasibility of the proposed method.

18.
Opt Express ; 23(8): 9812-23, 2015 Apr 20.
Article in English | MEDLINE | ID: mdl-25969022

ABSTRACT

In this paper, we present a polyhedron-shaped floating autostereoscopic display viewable from 360 degrees using integral photography (IP) and multiple semitransparent mirrors. IP combined with polyhedron-shaped multiple semitransparent mirrors is used to achieve a 360 degree viewable floating three-dimensional (3D) autostereoscopic display, which has the advantage of being viewable by several observers from various viewpoints simultaneously. IP is adopted to generate a 3D autostereoscopic image with full parallax. Multiple semitransparent mirrors reflect the corresponding IP images, and the reflected IP images are situated around the center of the polyhedron-shaped display device to produce the floating display. The spatially reflected IP images reconstruct a floating autostereoscopic image viewable from 360 degrees. We manufactured two prototypes and performed two sets of experiments to evaluate the feasibility of the method described above. The results of our experiments showed that our approach can achieve a floating autostereoscopic display viewable from the surrounding area. Moreover, the results show that the proposed method can provide continuous viewpoints over the whole 360 degree display without flipping.

19.
BMC Med Imaging ; 15: 51, 2015 Nov 02.
Article in English | MEDLINE | ID: mdl-26525142

ABSTRACT

BACKGROUND: This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. METHOD: A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. RESULTS: Accurate registration of the volunteer's anatomy with the IV stereoscopic images via image matching was achieved using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. CONCLUSION: Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications.
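A short sketch of how a target registration error (TRE) like the sub-millimetre figure above can be computed once a rigid registration (R, t) has been estimated; the points and transform here are synthetic, not the study's markerless registration:

    import numpy as np

    def target_registration_error(R, t, targets_image, targets_true):
        """Mean distance between registered target points and their true positions."""
        mapped = targets_image @ R.T + t
        return float(np.mean(np.linalg.norm(mapped - targets_true, axis=1)))

    # Synthetic example: three incisal-edge points, a known transform, and a small
    # residual error added to the image-space points.
    rng = np.random.default_rng(7)
    R = np.eye(3)
    t = np.array([0.3, -0.2, 0.1])                                   # mm
    targets_true = np.array([[10.0, 2.0, 5.0], [12.0, 2.5, 5.1], [14.0, 3.0, 5.2]])
    targets_image = targets_true - t + rng.normal(scale=0.2, size=(3, 3))
    print("TRE =", round(target_registration_error(R, t, targets_image, targets_true), 3), "mm")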


Subjects
Imaging, Three-Dimensional ; Oral Surgical Procedures/instrumentation ; Surgery, Computer-Assisted ; Tomography, X-Ray Computed ; Calibration ; Feasibility Studies ; Humans ; Phantoms, Imaging ; Pilot Projects ; User-Computer Interface ; Video Recording
20.
Int J Comput Assist Radiol Surg ; 19(2): 331-344, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37603164

ABSTRACT

PURPOSE: White light imaging (WLI) is a commonly used examination mode in endoscopy. The particular light in compound band imaging (CBI) can highlight delicate structures, such as capillaries and tiny structures on the mucosal surface. These two modes complement each other, and doctors switch between them manually to complete the examination. This paper proposes an endoscopy image fusion system to combine WLI and CBI. METHODS: We add a real-time rotatable color wheel in the light source device of the AQ-200 endoscopy system to achieve rapid imaging of the two modes at the same position of living tissue. The two images correspond at the pixel level, which avoids registration and lays the foundation for image fusion. We propose a multi-scale image fusion framework, which involves a Laplacian pyramid (LP) and convolutional sparse representation (CSR) and strengthens the details in the fusion rule. RESULTS: Volunteer experiments and ex vivo pig stomach trials were conducted to verify the feasibility of our proposed system. We also conducted comparative experiments with other image fusion methods, evaluated the quality of the fused images, and verified the effectiveness of our fusion framework. The results show that our fused image has rich details, high color contrast, apparent structures, and clear lesion boundaries. CONCLUSION: An endoscopy image fusion system is proposed, which does not change the doctor's operation and makes the fusion of WLI and CBI optical staining technology a reality. We changed the light source device of the endoscope, proposed an image fusion framework, and verified the feasibility and effectiveness of our scheme. Our method fully integrates the advantages of WLI and CBI, which can help doctors make more accurate judgments than before. The endoscopy image fusion system is of great significance for improving the detection rate of early lesions and has broad application prospects.
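A sketch of the Laplacian-pyramid half of such a fusion framework (the convolutional sparse representation stage is omitted): detail levels keep the stronger coefficient per pixel and the low-pass base is averaged; wli and cbi are placeholder grayscale arrays, not endoscopy frames:

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        gp = [img.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels)]
        return lp + [gp[-1]]                      # detail levels + low-pass base

    def fuse(wli, cbi, levels=4):
        pa, pb = laplacian_pyramid(wli, levels), laplacian_pyramid(cbi, levels)
        fused = []
        for i, (a, b) in enumerate(zip(pa, pb)):
            if i < levels:                        # detail: keep the stronger response
                fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
            else:                                 # base: average the two modes
                fused.append((a + b) / 2.0)
        out = fused[-1]
        for lvl in reversed(fused[:-1]):          # collapse the pyramid
            out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
        return np.clip(out, 0, 255).astype(np.uint8)

    wli = np.random.default_rng(5).integers(0, 256, (256, 256)).astype(np.uint8)
    cbi = np.random.default_rng(6).integers(0, 256, (256, 256)).astype(np.uint8)
    print(fuse(wli, cbi).shape)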


Subjects
Endoscopy, Gastrointestinal ; Endoscopy ; Humans ; Animals ; Swine ; Light ; Narrow Band Imaging/methods