Results 1 - 13 of 13
1.
Opt Express ; 29(22): 35022-35037, 2021 Oct 25.
Article in English | MEDLINE | ID: mdl-34808947

ABSTRACT

When the input colors of the left and right eyes differ, binocular rivalry may occur. According to Hering's theory, opponent colors have the strongest tendency toward rivalry. However, binocular color fusion still occurs provided that each eye's opponent chromatic responses do not exceed a specific chromatic fusion limit (CFL). This paper measures the binocular chromatic fusion limit for opponent colors within a conventional 3D display color gamut. We conducted a psychophysical experiment to quantitatively measure the CFL along four opponent color directions in the CIELAB color space. Because color inconsistency between the eyes may affect binocular color fusion, the experiment was divided into two sessions by swapping the stimulus colors of the left and right eyes. Five subjects each completed 320 trials. From the results, we used ellipses to quantify the chromatic fusion limits for opponent colors. The average semi-major axis of the ellipses is 27.55 ΔE*ab, and the average semi-minor axis is 16.98 ΔE*ab. We observed that the chromatic fusion limit varies with the opponent color direction: the CFL along the RedBlue-GreenYellow direction is greater than that along the Red-Green direction, the latter is greater than that along the Yellow-Blue direction, and the CFL along the RedYellow-GreenBlue direction is the smallest. Furthermore, we suggest that the chromatic fusion limit is independent of the distribution of cells, and that the fusion ellipse boundaries do not change significantly after swapping the left- and right-eye colors.
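As a rough illustration of how such an elliptical fusion limit can be applied, the sketch below (not from the paper) tests whether the chromatic offset between the two eyes' colors falls inside an ellipse in the CIELAB a*-b* plane; the axis lengths are the averages reported above, while the orientation angle theta_deg and the helper names are hypothetical.

    import numpy as np

    def delta_e_ab(lab1, lab2):
        """Euclidean CIELAB color difference (delta-E*ab)."""
        return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

    def within_fusion_limit(lab_left, lab_right, semi_major=27.55, semi_minor=16.98,
                            theta_deg=0.0):
        """Test whether the inter-ocular chromatic offset (a*, b* components only)
        lies inside an elliptical fusion limit; theta_deg orients the major axis."""
        da = lab_right[1] - lab_left[1]          # a* difference between the eyes
        db = lab_right[2] - lab_left[2]          # b* difference between the eyes
        t = np.deg2rad(theta_deg)
        u =  np.cos(t) * da + np.sin(t) * db     # rotate into the ellipse frame
        v = -np.sin(t) * da + np.cos(t) * db
        return (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0

    # Example: a red/green pair separated by 20 delta-E*ab along the a* axis
    print(delta_e_ab([50, 35, 0], [50, 15, 0]))           # 20.0
    print(within_fusion_limit([50, 35, 0], [50, 15, 0]))  # True under these defaults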


Subjects
Color Perception/physiology , Binocular Vision/physiology , Adult , Equipment Design , Humans , Three-Dimensional Imaging , Psychophysics , Vision Disparity/physiology , Ocular Vision/physiology , Visual Cortex/physiology , Young Adult
2.
IEEE Trans Industr Inform ; 17(9): 6519-6527, 2021 Sep.
Article in English | MEDLINE | ID: mdl-37981912

ABSTRACT

A novel intelligent navigation technique for accurate image-guided COVID-19 lung biopsy is presented, which systematically combines augmented reality (AR), customized haptic-enabled surgical tools, and deep neural networks to achieve customized surgical navigation. Clinical data from 341 COVID-19-positive patients and a negative control group of 1598 were collected for model development and evaluation. Biomechanical force data from the experiment were fed into a WPD-CNN-LSTM (WCL) network to learn a new patient-specific COVID-19 surgical model, and ResNet was employed for intraoperative force classification. To increase immersion and improve the user experience, intraoperative guiding images were combined with the haptic-AR navigational view. Furthermore, a 3-D user interface (3DUI) containing all requisite surgical details was developed with a guaranteed real-time response. Twenty-four thoracic surgeons were invited to objective and subjective experiments for performance evaluation. The root-mean-square error of our proposed WCL model is 0.0128 and the classification accuracy is 97%, demonstrating that the innovative AR and deep learning (DL) intelligent model outperforms existing perception navigation techniques with significantly higher performance. This article presents a novel framework for interventional surgical integration for COVID-19 and opens new research on the integration of AR, haptic rendering, and deep learning for surgical navigation.
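For readers unfamiliar with the CNN-LSTM family mentioned above, the following is a minimal generic sketch of a 1-D CNN followed by an LSTM for classifying force signals; it is not the authors' WPD-CNN-LSTM (WCL) architecture, and the layer sizes, window length, and class count are assumptions.

    import torch
    import torch.nn as nn

    class ForceCNNLSTM(nn.Module):
        def __init__(self, n_classes=4, hidden=64):
            super().__init__()
            self.conv = nn.Sequential(              # local feature extraction
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                  # x: (batch, 1, time)
            feats = self.conv(x)               # (batch, 32, time/4)
            feats = feats.transpose(1, 2)      # (batch, time/4, 32) for the LSTM
            out, _ = self.lstm(feats)
            return self.head(out[:, -1, :])    # classify from the last time step

    model = ForceCNNLSTM()
    logits = model(torch.randn(8, 1, 256))     # 8 example force windows of length 256
    print(logits.shape)                        # torch.Size([8, 4])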

3.
Biomed Opt Express ; 15(6): 3914-3931, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38867769

ABSTRACT

Virtual surgical training is crucial for enhancing minimally invasive surgical skills. Traditional geometric reconstruction methods based on medical CT/MRI images often fall short in providing color information, which is typically generated through pseudo-coloring or artistic rendering. To simultaneously reconstruct both the geometric shape and the appearance of organs, we propose a novel organ model reconstruction network called Endoscope-NeSRF. This network jointly leverages neural radiance fields and a Signed Distance Function (SDF) to reconstruct a textured geometric model of the organ of interest from multi-view photometric images acquired by an endoscope. Prior knowledge of the inverse correlation between radiance and the distance from the light source to the object improves the physical fidelity of the reconstructed organ. A dilated mask further refines the appearance and geometry at the organ's edges. We also propose a highlight-adaptive optimization strategy to remove highlights caused by the light source during acquisition, preventing the reconstruction in areas previously affected by highlights from turning white. Finally, real-time realistic rendering of the organ model is achieved by combining inverse rendering with Bidirectional Reflectance Distribution Function (BRDF) rendering. Experimental results show that our method closely matches the Instant-NGP method in appearance reconstruction, outperforms other state-of-the-art methods, and is the best method in terms of geometric reconstruction. Our method yields a detailed geometric model and realistic appearance, providing a realistic visual experience for virtual surgical simulation, which is important for medical training.
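The distance-radiance prior mentioned above can be illustrated with a simple point-light shading sketch; the inverse-square form, the function name, and the numbers below are assumptions for illustration rather than the paper's exact formulation.

    import numpy as np

    def shade_point(albedo_rgb, light_intensity, light_pos, surface_pos):
        """Radiance reflected toward the endoscope camera, attenuated by the
        squared distance from the (co-located) light source to the surface point."""
        d = np.linalg.norm(np.asarray(light_pos, float) - np.asarray(surface_pos, float))
        return np.asarray(albedo_rgb, float) * light_intensity / max(d * d, 1e-6)

    # The same tissue patch looks dimmer when the endoscope tip moves away from it.
    near = shade_point([0.8, 0.3, 0.3], 1.0, light_pos=[0, 0, 0], surface_pos=[0, 0, 2])
    far  = shade_point([0.8, 0.3, 0.3], 1.0, light_pos=[0, 0, 0], surface_pos=[0, 0, 4])
    print(near, far)   # far is one quarter of near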

4.
Int J Comput Assist Radiol Surg ; 19(5): 951-960, 2024 May.
Article in English | MEDLINE | ID: mdl-38413491

ABSTRACT

PURPOSE: In virtual surgery, the appearance of 3D models constructed from CT images lacks realism, leading to potential misunderstandings among residents. It is therefore crucial to reconstruct realistic endoscopic scenes from multi-view images captured by an endoscope. METHODS: We propose an Endoscope-NeRF network for implicit radiance field reconstruction of endoscopic scenes under a non-fixed light source, and synthesize novel views using volume rendering. The Endoscope-NeRF network, consisting of multiple MLP networks and a ray transformer network, represents the endoscopic scene as an implicit field function that maps continuous 5D vectors (3D position and 2D viewing direction) to color and volume density. The final synthesized image is obtained by aggregating all sampling points along each ray of the target camera using volume rendering. Our method accounts for the effect of the distance from the light source to each sampling point on the scene radiance. RESULTS: Our network was validated on the lung, liver, kidney, and heart of a pig, collected with our device. The results show that the novel views of endoscopic scenes synthesized by our method outperform existing methods (NeRF and IBRNet) in terms of the PSNR, SSIM, and LPIPS metrics. CONCLUSION: Our network can effectively learn a radiance field function that generalizes to new scenes. Fine-tuning the pre-trained model on a new endoscopic scene further optimizes the neural radiance field of that scene, providing more realistic, high-resolution rendered images for surgical simulation.
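The ray aggregation step described in METHODS follows standard NeRF-style volume rendering; the sketch below shows that generic compositing of sampled colors and densities along one camera ray and is not the authors' implementation (sample counts and spacing are arbitrary).

    import numpy as np

    def composite_ray(colors, sigmas, deltas):
        """colors: (N, 3) RGB at the samples; sigmas: (N,) volume densities;
        deltas: (N,) spacing between consecutive samples along the ray."""
        alphas = 1.0 - np.exp(-sigmas * deltas)                          # segment opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance
        weights = trans * alphas
        return (weights[:, None] * colors).sum(axis=0)                   # final pixel color

    rng = np.random.default_rng(0)
    pixel = composite_ray(rng.random((64, 3)), rng.random(64) * 5.0, np.full(64, 0.02))
    print(pixel)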


Subjects
Endoscopy , Three-Dimensional Imaging , Swine , Animals , Three-Dimensional Imaging/methods , Endoscopy/methods , Neural Networks (Computer) , X-Ray Computed Tomography/methods , Humans , Computer Simulation , Computer-Assisted Surgery/methods , Liver/surgery , Liver/diagnostic imaging , Lung/surgery , Lung/diagnostic imaging
5.
Bioengineering (Basel) ; 10(11)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-38002426

ABSTRACT

The rapid development of computers and robots has brought robotic minimally invasive surgery (RMIS) gradually into public view. RMIS can effectively eliminate surgeons' hand tremors and further reduce wounds and bleeding. However, suitable RMIS and virtual-reality-based digital-twin surgical trainers are still in the early stages of development, and extensive training is required for surgeons to adapt to operating modes that differ from traditional MIS. A virtual-reality-based digital-twin robotic minimally invasive surgery (VRDT-RMIS) simulator was developed in this study and its effectiveness evaluated. Twenty-five volunteers were divided into two groups for the experiment, the Expert Group and the Novice Group. The use of the VRDT-RMIS simulator for face, content, and construct validation training, including the peg transfer module and the soft tissue cutting module, was evaluated. Through subjective and objective evaluations, the potential roles of vision and haptics in robotic surgery training were explored. The simulator can effectively distinguish surgical skill proficiency between experts and novices.

6.
Front Oncol ; 12: 811279, 2022.
Article in English | MEDLINE | ID: mdl-35494066

ABSTRACT

Microbes and microbiota dysbiosis are correlated with the development of lung cancer; however, the airway taxa characteristics and bacterial topography in synchronous multiple primary lung cancer (sMPLC) are not fully understood. The present study aimed to investigate the distribution and characteristics of microbiota in the airways of patients with sMPLC and to clarify specimen acquisition modalities in these patients. Using the precise positioning of electromagnetic navigation bronchoscopy (ENB), we analyzed the characteristics of respiratory microbiome samples collected from different sites with different sampling methods. The microbiome predictor variables were bacterial DNA burden and bacterial community composition based on 16S rRNA sequencing. Eight non-smoking patients with sMPLC in the same pulmonary lobe were included in this study. Bacterial burden and diversity were higher in areas sampled by bronchoalveolar lavage (BAL) than with other sampling methods. Bacterial topography data provided evidence of specific colonizing bacteria in the segments containing sMPLC lesions. After taxonomic annotation, we identified 4863 phylotypes belonging to 185 genera and 10 different phyla. The four most abundant specific bacterial community members detected in the airways containing sMPLC lesions were Clostridium, Actinobacteria, Fusobacterium, and Rothia, all of which peaked at the segments with sMPLC lesions. This study begins to define the bacterial topography of the respiratory tract in patients with sMPLC and provides an approach to specimen acquisition for sMPLC, namely BAL fluid obtained from the segments where lesions are located.

7.
IEEE Internet Things J ; 8(21): 15965-15976, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-35782175

ABSTRACT

This article presents a novel extended reality (XR) and deep-learning-based Internet of Medical Things (IoMT) solution for COVID-19 telemedicine diagnostics, which systematically combines virtual reality/augmented reality (AR) remote surgical planning/rehearsal hardware, customized 5G cloud computing, and deep learning algorithms to provide real-time COVID-19 treatment scheme clues. Compared to existing perception therapy techniques, the new technique significantly improves performance and security. The system collected 25 clinical data items from 347 positive and 2270 negative COVID-19 patients in the Red Zone via 5G transmission. A novel auxiliary classifier generative adversarial network-based intelligent prediction algorithm was then used to train the new COVID-19 prediction model. Furthermore, the Copycat network was employed for model-stealing attacks on the IoMT to evaluate and improve its security performance. To simplify the user interface and achieve an excellent user experience, the Red Zone's guiding images were combined with the Green Zone's view through AR navigation cues over 5G. The XR surgical planning/rehearsal framework, including all requisite COVID-19 surgical details, was designed with a guaranteed real-time response. The accuracy, recall, F1-score, and area under the ROC curve (AUC) of the new IoMT solution were 0.92, 0.98, 0.95, and 0.98, respectively, outperforming existing perception techniques with significantly higher accuracy. The model stealing also performed well, with the Copycat AUC of 0.90 slightly lower than that of the original model. This study suggests a new framework for COVID-19 diagnostic integration and opens new research on the integration of XR and deep learning for IoMT implementation.
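As a reminder of how the reported metrics are obtained from a classifier's outputs, the following sketch computes accuracy, recall, F1-score, and AUC with scikit-learn on dummy labels and scores (not the study's data; the 0.5 threshold is an assumption).

    from sklearn.metrics import accuracy_score, recall_score, f1_score, roc_auc_score

    y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                      # ground-truth COVID-19 status
    y_score = [0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.6, 0.3]      # model probabilities
    y_pred  = [1 if s >= 0.5 else 0 for s in y_score]       # thresholded predictions

    print(accuracy_score(y_true, y_pred),
          recall_score(y_true, y_pred),
          f1_score(y_true, y_pred),
          roc_auc_score(y_true, y_score))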

8.
IEEE J Biomed Health Inform ; 25(5): 1495-1507, 2021 05.
Article in English | MEDLINE | ID: mdl-33684049

ABSTRACT

Chronic kidney disease has become one of the kidney diseases with the highest morbidity and mortality, and problems remain in the associated surgery. During the operation, the surgeon can only work from two-dimensional ultrasound images and cannot determine the spatial relationship between the lesion and the puncture needle in real time. The average number of punctures per patient reaches 3 to 4, increasing the incidence of post-puncture complications. This article addresses ultrasound-guided renal biopsy navigation training, optimizing puncture path planning and puncture training assistance, and studies augmented reality technology combined with renal puncture surgery training. We developed a prototype ultrasound-guided renal biopsy surgery training system, which improves the accuracy and reliability of training. The system was compared with a VR training system; the results show that the augmented reality platform is more suitable as a surgical training platform because it requires little training time and produces good training outcomes.


Subjects
Augmented Reality , Biopsy , Computer-Assisted Surgery , Humans , Reproducibility of Results , Interventional Ultrasonography
9.
Int J Med Robot ; : e2160, 2020 Sep 05.
Article in English | MEDLINE | ID: mdl-32890440

ABSTRACT

BACKGROUND: Neurosurgery has exceptionally high requirements for minimal invasiveness and safety. This survey analyzes the practical application of AR in neurosurgical navigation and describes future trends in augmented reality neurosurgical navigation systems. METHODS: We searched the keywords "augmented reality", "virtual reality", "neurosurgery", "surgical simulation", "brain tumor surgery", "neurovascular surgery", "temporal bone surgery", and "spinal surgery" in Google Scholar, World Neurosurgery, PubMed, and Science Direct, and collected 85 articles published over the past five years in areas related to this survey. RESULTS: A detailed study of the application of AR in neurosurgery found that AR is steadily improving the overall efficiency of physician training and treatment and can help neurosurgeons learn and practice surgical procedures with zero risk. CONCLUSIONS: Neurosurgical navigation is essential in neurosurgery. Despite certain technical limitations, it remains a necessary tool in the pursuit of maximum safety and minimal invasiveness.

10.
J Phys Condens Matter ; 32(23): 235701, 2020 May 27.
Article in English | MEDLINE | ID: mdl-32079005

ABSTRACT

We investigate the topological supersolid states of dipolar Fermi gases trapped in a spin-dependent 2D optical lattice. Our results show that topological supersolid states can be achieved by combining topological superfluid states with stripe order. In contrast to the generally held belief that a supersolid state in a fermionic system can only survive when repulsive and attractive dipolar interactions coexist, we demonstrate that it can be maintained when the dipolar interaction is attractive in both the x and y directions. By adjusting the ratio of the hopping amplitudes between different directions and the dipolar interaction strength U, the system undergoes phase transitions among the p_x + ip_y superfluid state, the p_y-wave superfluid state, and the topological supersolid state. The supersolid state in the attractive regime is shown to be stable by the positive sign of the inverse compressibility. We also design an experimental protocol to realize the staggered next-next-nearest-neighbor hopping via the laser-assisted tunneling technique, which is the key to simulating the spin-dependent potential.

11.
Gland Surg ; 9(6): 1933-1944, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33447544

ABSTRACT

BACKGROUND: Myasthenia gravis (MG) is a chronic autoimmune neuromuscular disorder causing muscle weakness and characterized by a defect in synaptic transmission at the neuromuscular junction. The pathogenesis of this disease remains unclear. We aimed to predict the key signaling pathways of genetic variants and miRNAs in the pathogenesis of MG and to identify the key genes among them. METHODS: We searched the literature for published single nucleotide polymorphisms (SNPs) and differentially expressed miRNAs associated with MG, and used bioinformatic tools to predict the target genes of the miRNAs. Functional enrichment analysis of the key genes was carried out using the Cytoscape plug-in ClueGO. The key genes were mapped to the STRING database to construct a protein-protein interaction (PPI) network, and a miRNA-target gene regulatory network was then established to screen the key genes. RESULTS: Five genes containing SNPs associated with MG risk were involved in the inflammatory bowel disease (IBD) signaling pathway, with FoxP3 as the key gene. MAPK1, SMAD4, SMAD2, and BCL2 were predicted to be targeted by the 18 miRNAs and to act as key genes in adherens junction, apoptosis, or cancer-related pathways, respectively. These five key genes containing SNPs or targeted by miRNAs were found to be involved in the negative regulation of T cell differentiation. CONCLUSIONS: We speculate that SNPs cause the genes to be defective, or that the miRNAs downregulate factors that negatively regulate regulatory T cells, thereby triggering the onset of MG.

12.
Appl Bionics Biomech ; 2019: 9756842, 2019.
Article in English | MEDLINE | ID: mdl-31341513

ABSTRACT

Realistic tool-tissue interaction modeling has been recognized as an essential requirement in virtual surgery training. A virtual basic surgical training framework integrated with real-time force rendering is one of the most immersive implementations in medical education. Yet it has long been argued that, compared with the original intraoperative data, these interactions are represented with lower fidelity in virtual surgical training. In this paper, a dynamic biomechanics experimental framework is designed to achieve a highly immersive haptic sensation during biopsy under human respiratory motion; it is the first time the idea of periodic extension has been introduced into dynamic percutaneous force modeling. A clinical evaluation conducted at the Yunnan First People's Hospital not only demonstrated a higher fitting degree with the intraoperative data (AVG: 99.36%) than previous algorithms (AVG: 87.83%, 72.07%, and 66.70%) but also showed a universal fitting range over multilayer tissue. Twenty-seven urologists, comprising 18 novices and 9 professors, were invited to a VR-based training evaluation based on the proposed haptic rendering solution. Subjective and objective results demonstrated higher performance than the existing benchmark training simulator. Combined in a systematic approach and tuned to specific fidelity requirements, haptically enabled medical simulation systems would be able to provide a more immersive and effective training environment.
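The periodic-extension idea can be sketched as tiling a force profile measured over a single respiratory cycle so that the simulator can render forces over an arbitrarily long breathing sequence; the sampling rate, cycle length, and toy force profile below are hypothetical, not the paper's data.

    import numpy as np

    def periodic_extension(one_cycle, n_cycles):
        """Repeat a sampled single-cycle force profile to cover n_cycles cycles."""
        return np.tile(np.asarray(one_cycle, float), n_cycles)

    fs = 100.0                                              # Hz, assumed sampling rate
    t = np.arange(0, 4.0, 1.0 / fs)                         # one 4-second breathing cycle
    one_cycle = 0.5 + 0.3 * np.sin(2 * np.pi * t / 4.0)     # toy force profile (N)
    long_profile = periodic_extension(one_cycle, n_cycles=5)
    print(long_profile.shape)                               # (2000,) -> 20 s of samples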

13.
J Healthc Eng ; 2019: 6813719, 2019.
Article in English | MEDLINE | ID: mdl-30723539

ABSTRACT

The aim of this study is to develop a peg transfer training module and assess its face, content, and construct validity across box, virtual reality (VR), cognitive virtual reality (CVR), augmented reality (AR), and mixed reality (MR) trainers, and thereby compare the advantages and disadvantages of these simulators. The training system (VatsSim-XR) design includes customized haptic-enabled thoracoscopic instruments, a virtual reality headset, an endoscope kit with navigation, and the corresponding patient-specific training environment. A cohort of 32 trainees, comprising 24 novices and 8 experts, used the real and virtual simulators in the department of thoracic surgery of Yunnan First People's Hospital. Both subjective and objective evaluations were developed to explore the potential visual and haptic improvements in peg transfer education. Experiments and evaluations conducted by both expert and novice thoracic surgeons show that, overall, the surgical skills of experts exceed those of novices; the AR trainer provides the most balanced training environment in terms of visuohaptic fidelity and accuracy; the box trainer and the MR trainer demonstrated the best realism of 3D perception and surgical immersion, respectively; and the CVR trainer shows a better clinical effect than the traditional VR trainer. Combined in a systematic approach and tuned to specific fidelity requirements, medical simulation systems would be able to provide a more immersive and effective training environment.


Subjects
Video-Assisted Thoracic Surgery/education , Adult , Augmented Reality , Clinical Competence , Computer Simulation , Computer-Assisted Instruction/methods , Computer-Assisted Instruction/statistics & numerical data , Female , Humans , Lung Neoplasms/surgery , Male , Middle Aged , Software , Video-Assisted Thoracic Surgery/statistics & numerical data , User-Computer Interface , Virtual Reality , Young Adult