Results 1 - 3 of 3
1.
Anal Chem ; 95(26): 9959-9966, 2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37351568

ABSTRACT

Characterized by self-adaptation and high accuracy, deep learning-based models have been widely applied to 1D spectroscopy. However, the "black-box", end-to-end nature of deep learning leads to low interpretability, so reliable visualization is in high demand. Although well-developed visualization methods exist for 2D image data, such as Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM), they do not correctly reflect the weights of the model when applied to 1D spectral data, because the importance of position information is not considered. Here, aiming at the visualization of Convolutional Neural Network (CNN)-based models for the qualitative and quantitative analysis of 1D spectroscopy, we developed a novel visualization algorithm (1D Grad-CAM) that more accurately displays the decision-making process of CNN-based models. Unlike classical Grad-CAM, 1D Grad-CAM removes the gradient averaging (global average pooling, GAP) and ReLU operations, yielding a significantly improved correlation between the gradient and the spectral location and a more comprehensive capture of spectral features. Furthermore, the introduction of difference (purity or linearity) and feature contribution in the CNN output allows 1D Grad-CAM to reliably evaluate the qualitative accuracy and quantitative precision of CNN-based models. For the qualitative analysis and adulteration quantification of vegetable oils by Raman spectroscopy combined with ResNet, the 1D Grad-CAM visualization clearly reflected the origin of the high accuracy and precision achieved by ResNet. In general, 1D Grad-CAM provides a clear view of the judgment criteria of a CNN and paves the way for broad CNN application in the field of 1D spectroscopy.
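The key algorithmic change described above, dropping the GAP and ReLU steps so that every spectral position keeps its own gradient weight and negative contributions survive, can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' code; the array shapes and function names are assumptions.

```python
import numpy as np

def grad_cam_classic(activations, gradients):
    """Classical Grad-CAM on a 1D feature map (channels x positions):
    gradients are averaged over positions (GAP) into a single weight
    per channel, and the weighted sum is passed through ReLU."""
    weights = gradients.mean(axis=1, keepdims=True)  # (C, 1)
    cam = (weights * activations).sum(axis=0)        # (P,)
    return np.maximum(cam, 0.0)                      # ReLU

def grad_cam_1d(activations, gradients):
    """1D Grad-CAM sketch: no GAP, no ReLU. Each position keeps its
    own gradient, preserving the link between gradient and spectral
    location and retaining negative (suppressive) contributions."""
    return (activations * gradients).sum(axis=0)     # (P,)

# Toy feature map with 2 channels and 2 spectral positions.
A = np.array([[1.0, 2.0], [3.0, 4.0]])  # activations
G = np.array([[0.5, -1.0], [1.0, 0.0]])  # gradients w.r.t. A
classic = grad_cam_classic(A, G)  # values [1.25, 1.5]
onedim = grad_cam_1d(A, G)        # values [3.5, -2.0]
```

Note how the position-wise variant keeps the negative value at the second position, which GAP-averaged, ReLU-clipped Grad-CAM would erase.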

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1663-1666, 2022 07.
Article in English | MEDLINE | ID: mdl-36086459

ABSTRACT

Automatic surgical phase recognition plays a key role in surgical workflow analysis and overall optimization in clinical work. In complicated surgical procedures, similar inter-class appearance and drastic variability in phase duration make this a challenging task. In this paper, a spatio-temporal transformer is proposed for online surgical phase recognition at different granularities. To extract rich spatial information, a spatial transformer models the global spatial dependencies of each time index. To overcome the variability in phase duration, a temporal transformer captures the multi-scale temporal context of different time indexes with a dual pyramid pattern. Our method is thoroughly validated on the public Cholec80 dataset with 7 coarse-grained phases and the CATARACTS2020 dataset with 19 fine-grained phases, outperforming state-of-the-art approaches with 91.4% and 84.2% accuracy, respectively, using only 24.5M parameters.
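The multi-scale temporal-context idea above (each time index aggregates information over windows of several temporal scales, causally, so it works online) can be illustrated with a simple pooling sketch. The real model uses transformer attention in a dual pyramid; everything here, including the function name and the choice of scales, is an assumption for illustration only.

```python
import numpy as np

def causal_multiscale_context(features, scales=(1, 2, 4)):
    """For each time index t, average the features over a causal
    window of each scale and concatenate the results, so every step
    sees both short- and long-range temporal context without looking
    into the future (a requirement for online recognition)."""
    T, D = features.shape
    pooled = []
    for s in scales:
        pooled.append(np.stack([
            features[max(0, t - s + 1):t + 1].mean(axis=0)
            for t in range(T)
        ]))
    return np.concatenate(pooled, axis=1)  # shape (T, D * len(scales))

frames = np.arange(10, dtype=float).reshape(5, 2)  # 5 steps, 2-D features
ctx = causal_multiscale_context(frames)
# ctx has shape (5, 6); at t=1 the scale-2 slice is the mean of steps 0 and 1
```

Attention over such multi-scale windows, rather than plain averaging, is what lets the model cope with phases whose durations differ by orders of magnitude.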


Subjects
Algorithms, Workflow
3.
Med Image Anal ; 70: 101920, 2021 05.
Article in English | MEDLINE | ID: mdl-33676097

ABSTRACT

Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer- and robot-assisted interventions. While numerous methods for detecting, segmenting and tracking medical instruments based on endoscopic video images have been proposed in the literature, key limitations remain to be addressed. Firstly, robustness: the reliable performance of state-of-the-art methods when run on challenging images (e.g. in the presence of blood, smoke or motion artifacts). Secondly, generalization: algorithms trained for a specific intervention in a specific hospital should generalize to other interventions or institutions. In an effort to promote solutions for these limitations, we organized the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an international benchmarking competition with a specific focus on the robustness and generalization capabilities of algorithms. For the first time in the field of endoscopic image processing, our challenge included a task on binary segmentation and also addressed multi-instance detection and segmentation. The challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures from three different types of surgery. The validation of the competing methods for the three tasks (binary segmentation, multi-instance detection and multi-instance segmentation) was performed in three stages with an increasing domain gap between the training and the test data. The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap. While the average detection and segmentation quality of the best-performing algorithms is high, future research should concentrate on the detection and segmentation of small, crossing, moving and transparent instruments and instrument parts.
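Segmentation quality in benchmarks like this is typically scored with overlap metrics between predicted and annotated masks. The following is a minimal intersection-over-union sketch for binary masks, offered only as an illustration of the idea; it is not the challenge's official multi-instance metric.

```python
import numpy as np

def binary_iou(pred, gt):
    """Intersection over union of two boolean masks. Returns 1.0 when
    both masks are empty: predicting no instrument in a frame that
    contains none counts as perfect agreement."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 0, 0],
               [0, 1, 1]])
score = binary_iou(pred, gt)  # intersection 2, union 4, so 0.5
```

Multi-instance variants additionally require matching each predicted instrument to an annotated one before averaging per-instance scores, which is what makes the crossing and overlapping instruments mentioned above so hard to evaluate and to segment.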


Subjects
Image Processing, Computer-Assisted, Laparoscopy, Algorithms, Artifacts