Results 1 - 3 of 3
1.
J Formos Med Assoc ; 121(12): 2457-2464, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35667953

ABSTRACT

BACKGROUND: The accuracy of histopathology diagnosis depends largely on the pathologist's experience. It usually takes more than 10 years to train a senior pathologist, and their small number leads to a high workload for those available. Meanwhile, because morphology-based diagnosis is subjective, different pathologists may reach inconsistent results, especially in complex cases. Computerized analysis based on deep learning has shown potential benefits as a diagnostic strategy. METHODS: This research aims to automatically determine the location of gastric cancer (GC) in images of GC slides through artificial intelligence (AI). Image data for training came from a regional teaching hospital in Taiwan: we collected images of patients diagnosed with GC from January 1, 2019 to December 31, 2020. Hematoxylin-and-eosin-stained (H&E) GC sections from 50 patients were digitized with a whole-slide scanner, and the scanned slides were dissected into 13,600 images. The 50 patients were split 80/20: 2200 images from 40 patients were used for training, and the remaining 10 patients provided a test set of 550 images for validation. RESULTS: Validation showed that 91% of images were correctly interpreted as GC through deep learning. Sensitivity, specificity, PPV, and NPV were 84.9%, 94%, 87.7%, and 92.5%, respectively. After creating a 3D model from the grayscale values, the position of the GC was completely marked by the 3D model.
CONCLUSION: For AI to assist pathologists in daily practice, helping a pathologist reach a definite diagnosis is not the prime purpose at present. The benefits could instead come from cancer screening and double-check quality control under a heavy workload, which can distract a pathologist's attention during the time-constrained examination process. We propose a two-step method to identify cancerous areas in endoscopic gastric biopsy slices via deep learning; a 3D model is then used to mark all positions of GC in the picture, overcoming the problem that deep learning alone cannot catch all GC.
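The patch-classification and marking pipeline summarized above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the per-patch classifier outputs, the `mark_gc_regions` helper, and the grid layout are hypothetical, and simple heatmap thresholding stands in for the paper's 3D grayscale model.

```python
import numpy as np

def mark_gc_regions(patch_probs, grid_shape, threshold=0.5):
    """Assemble per-patch GC probabilities (from a deep-learning
    classifier, assumed here) into a grayscale heatmap over the slide,
    then mark every position whose probability exceeds the threshold."""
    heatmap = np.asarray(patch_probs, dtype=float).reshape(grid_shape)
    mask = heatmap >= threshold  # boolean map of positions marked as GC
    return heatmap, mask

# Toy example: a 2x3 grid of hypothetical patch scores.
heatmap, mask = mark_gc_regions([0.1, 0.9, 0.4, 0.8, 0.2, 0.6], (2, 3))
```

In the study's second step, the thresholded map would feed the 3D model that marks every GC position, compensating for patches the classifier alone misses.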


Subjects
Deep Learning, Stomach Neoplasms, Humans, Artificial Intelligence, Pathologists, Biopsy
2.
Sensors (Basel) ; 22(21)2022 Oct 30.
Article in English | MEDLINE | ID: mdl-36366028

ABSTRACT

BACKGROUND: Climate change causes devastating impacts through extreme weather conditions, such as flooding, melting polar ice caps, sea-level rise, and droughts. Environmental conservation education is now an important, ongoing undertaking for governments worldwide. In this paper, a novel 3D virtual reality architecture in the metaverse (VRAM) is proposed to foster water resources education using modern information technology. METHODS: A quasi-experimental study compared learning with VRAM to learning without it. The 3D VRAM multimedia content comes from a picture book for learning environmental conservation concepts, based on the cognitive theory of multimedia learning to enhance human cognition. Learners wear VRAM helmets running VRAM Android apps, entering an immersive environment to play and/or interact with 3D VRAM multimedia content in the metaverse. They shake their heads to move an interaction sign that triggers interactive actions, such as replaying, advancing to consecutive video clips, displaying text annotations, and answering questions while learning soil-and-water conservation course materials. Portfolios of the triggered actions are transferred immediately to a cloud-computing database by the app. RESULTS: Participants who received instruction involving VRAM showed significant improvement in flow experience, learning motivation, learning interaction, self-efficacy, and presence when learning environmental conservation concepts. CONCLUSIONS: The novel VRAM is highly suitable for multimedia educational systems. Moreover, learners' interactive VRAM portfolios can be analyzed with big-data analytics to understand how VRAM is used and, in the future, to improve the quality of environmental conservation education.
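The interaction-portfolio logging described above (head-shake-triggered actions streamed to a cloud database) can be sketched as follows. This is a hypothetical sketch, not the VRAM app's code: the event fields, action names, and `log_interaction` helper are all assumptions.

```python
import json
import time

def log_interaction(portfolio, learner_id, action, clip_id):
    """Append one triggered action (replay, next clip, show annotation,
    answer question) to a learner's portfolio and return the JSON
    payload that would be sent to the cloud database. All field names
    are illustrative, not taken from the VRAM paper."""
    event = {
        "learner": learner_id,
        "action": action,          # e.g. "replay", "next_clip"
        "clip": clip_id,           # which video clip was active
        "timestamp": time.time(),  # when the head-shake triggered it
    }
    portfolio.append(event)
    return json.dumps(event)

# Toy session: two actions from one learner on clip 3.
portfolio = []
log_interaction(portfolio, "s01", "replay", 3)
log_interaction(portfolio, "s01", "show_annotation", 3)
```

Collected portfolios of this shape are what the CONCLUSIONS envisage feeding into big-data analytics.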


Subjects
Computer-Assisted Instruction, Virtual Reality, Humans, Computer-Assisted Instruction/methods, Learning, Cognition
3.
Neural Netw ; 166: 313-325, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37541163

ABSTRACT

This paper proposes an unsupervised image-to-image (UI2I) translation model, called the Perceptual Contrastive Generative Adversarial Network (PCGAN), which mitigates the distortion problem and enhances the performance of traditional UI2I methods. PCGAN is designed as a two-stage UI2I model. The first stage leverages a novel image warping to transform the shapes of objects in the input (source) images; the second stage devises residual prediction to refine the outputs of the first stage. To improve the image warping, a loss function called Perceptual Patch-Wise InfoNCE is developed in PCGAN to effectively memorize the visual correspondences between warped images and refined images. Quantitative evaluation and visual comparison on UI2I benchmarks show that PCGAN is superior to the other existing methods considered here.
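The patch-wise InfoNCE idea — matching each refined-image patch to its warped-image counterpart while treating all other patches as negatives — can be illustrated with a generic numpy sketch. This is a simplified assumption-laden version of the general InfoNCE loss, not the paper's exact Perceptual Patch-Wise variant (which operates on perceptual features); the feature shapes, the temperature `tau`, and the one-to-one matching scheme are illustrative.

```python
import numpy as np

def patchwise_infonce(query, keys, tau=0.07):
    """Patch-wise InfoNCE: patch i of the refined image (query) treats
    patch i of the warped image (keys) as its positive and every other
    patch as a negative. query, keys: (N, D) L2-normalized features."""
    logits = query @ keys.T / tau                # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on diagonal

# Sanity check: correctly corresponding patches give a lower loss
# than mismatched ones.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
loss_matched = patchwise_infonce(feats, feats)        # aligned pairs
loss_random = patchwise_infonce(feats, feats[::-1])   # shuffled pairs
```

Minimizing such a loss pulls each warped patch toward its refined counterpart in feature space, which is how the paper's loss "memorizes" visual correspondences.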


Subjects
Benchmarking, Image Processing, Computer-Assisted