1.
ArXiv; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38855539

ABSTRACT

Knowledge distillation (KD) has demonstrated remarkable success across various domains, but its application to medical imaging tasks, such as kidney and liver tumor segmentation, has encountered challenges. Many existing KD methods are not specifically tailored for these tasks. Moreover, prevalent KD methods often lack a careful consideration of 'what' and 'from where' to distill knowledge from the teacher to the student. This oversight may lead to issues like the accumulation of training bias within shallower student layers, potentially compromising the effectiveness of KD. To address these challenges, we propose Hierarchical Layer-selective Feedback Distillation (HLFD). HLFD strategically distills knowledge from a combination of middle layers to earlier layers and transfers final layer knowledge to intermediate layers at both the feature and pixel levels. This design allows the model to learn higher-quality representations from earlier layers, resulting in a robust and compact student model. Extensive quantitative evaluations reveal that HLFD outperforms existing methods by a significant margin. For example, in the kidney segmentation task, HLFD surpasses the student model (without KD) by over 10%, significantly improving its focus on tumor-specific features. From a qualitative standpoint, the student model trained using HLFD excels at suppressing irrelevant information and can focus sharply on tumor-specific details, which opens a new pathway for more efficient and accurate diagnostic tools. Code is available here.
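As a rough illustration of the layer-selective feedback idea described above, the sketch below pairs earlier student feature maps with deeper teacher feature maps and adds a pixel-level term on the output maps. The module name `HLFDLoss`, the 1x1 projection layers, and the loss weighting are assumptions made for illustration; they do not reproduce the paper's exact formulation.

```python
# Minimal sketch of a hierarchical layer-selective feedback distillation loss,
# assuming teacher and student expose lists of intermediate feature maps.
# Layer pairing and weighting are illustrative, not the paper's exact scheme.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HLFDLoss(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # 1x1 convolutions project student features to teacher channel widths
        self.proj = nn.ModuleList(
            nn.Conv2d(sc, tc, kernel_size=1)
            for sc, tc in zip(student_channels, teacher_channels)
        )

    @staticmethod
    def _feat_loss(s, t):
        # Match spatial size, then compare normalized feature maps
        s = F.interpolate(s, size=t.shape[-2:], mode="bilinear", align_corners=False)
        return F.mse_loss(F.normalize(s, dim=1), F.normalize(t, dim=1))

    def forward(self, student_feats, teacher_feats, student_logits, teacher_logits):
        # "Feedback" pairing: earlier student layers learn from deeper teacher layers.
        loss = 0.0
        n = len(student_feats)
        for i, proj in enumerate(self.proj):
            deeper = min(i + 1, n - 1)  # shift knowledge one stage toward the output
            loss = loss + self._feat_loss(proj(student_feats[i]),
                                          teacher_feats[deeper].detach())
        # Pixel-level distillation on the final segmentation maps
        loss = loss + F.kl_div(
            F.log_softmax(student_logits, dim=1),
            F.softmax(teacher_logits.detach(), dim=1),
            reduction="batchmean",
        )
        return loss
```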

2.
Article in English | MEDLINE | ID: mdl-38015680

ABSTRACT

Learning representations from unlabeled time series data is a challenging problem. Most existing self-supervised and unsupervised approaches in the time-series domain fall short of capturing low- and high-frequency features at the same time. As a result, the generalization ability of the learned representations remains limited. Furthermore, some of these methods employ large-scale models like transformers or rely on computationally expensive techniques such as contrastive learning. To tackle these problems, we propose a non-contrastive self-supervised learning (SSL) approach that efficiently captures low- and high-frequency features in a cost-effective manner. The proposed framework comprises a Siamese configuration of a deep neural network with two weight-sharing branches, which are followed by low- and high-frequency feature extraction modules. The two branches of the proposed network allow bootstrapping of the latent representation by taking two different augmented views of raw time series data as input. The augmented views are created by applying random transformations sampled from a single set of augmentations. The low- and high-frequency feature extraction modules of the proposed network contain a combination of multilayer perceptron (MLP) and temporal convolutional network (TCN) heads, respectively, which capture the temporal dependencies from the raw input data at various scales due to their varying receptive fields. To demonstrate the robustness of our model, we performed extensive experiments and ablation studies on five real-world time-series datasets. Our method achieves state-of-the-art performance on all the considered datasets.
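A minimal sketch of the described Siamese, non-contrastive setup is given below, assuming a simple 1D convolutional encoder shared across both augmented views, an MLP head as a coarse (low-frequency) summary, a TCN head for fine (high-frequency) temporal structure, and a stop-gradient objective in the style of BYOL/SimSiam. All sizes, layer choices, and augmentations are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch of a non-contrastive Siamese setup for time series with an
# MLP head (low-frequency summary) and a TCN head (high-frequency temporal detail).
# Architecture sizes and the stop-gradient scheme are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNHead(nn.Module):
    def __init__(self, ch, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(ch, dim),
        )
    def forward(self, x):
        return self.net(x)

class SiameseSSL(nn.Module):
    def __init__(self, in_ch=1, ch=64, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(            # shared weights across both views
            nn.Conv1d(in_ch, ch, 7, padding=3), nn.ReLU(),
            nn.Conv1d(ch, ch, 5, padding=2), nn.ReLU(),
        )
        self.mlp_head = nn.Sequential(           # low-frequency summary
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(ch, dim), nn.ReLU(), nn.Linear(dim, dim),
        )
        self.tcn_head = TCNHead(ch, dim)         # high-frequency temporal detail

    def embed(self, x):
        h = self.encoder(x)
        return self.mlp_head(h), self.tcn_head(h)

    def forward(self, view1, view2):
        lo1, hi1 = self.embed(view1)
        lo2, hi2 = self.embed(view2)
        # Non-contrastive objective: pull the two views together, no negatives;
        # stop-gradient on one branch helps avoid representational collapse.
        def sim(a, b):
            return -F.cosine_similarity(a, b.detach(), dim=-1).mean()
        return 0.5 * (sim(lo1, lo2) + sim(lo2, lo1)) + 0.5 * (sim(hi1, hi2) + sim(hi2, hi1))

# Usage: two random augmentations of the same raw series pass through shared weights.
model = SiameseSSL()
x = torch.randn(8, 1, 256)                       # (batch, channels, time)
loss = model(x + 0.05 * torch.randn_like(x), x.roll(5, dims=-1))
```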

3.
Comput Biol Med; 167: 107569, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37865984

ABSTRACT

Early diagnosis plays a pivotal role in effectively treating numerous diseases, especially in healthcare scenarios where prompt and accurate diagnoses are essential. Contrastive learning (CL) has emerged as a promising approach for medical tasks, offering advantages over traditional supervised learning methods. However, in healthcare, patient metadata contains valuable clinical information that can enhance representations, yet existing CL methods often overlook this data. In this study, we propose a novel approach that leverages both clinical information and imaging data in contrastive learning to enhance model generalization and interpretability. Furthermore, existing contrastive methods may be prone to sampling bias, which can lead to the model capturing spurious relationships and exhibiting unequal performance across protected subgroups frequently encountered in medical settings. To address these limitations, we introduce Patient-aware Contrastive Learning (PaCL), featuring an inter-class separability objective (IeSO) and an intra-class diversity objective (IaDO). IeSO harnesses rich clinical information to refine samples, while IaDO ensures the necessary diversity among samples to prevent class collapse. We demonstrate the effectiveness of PaCL both theoretically through causal refinements and empirically across six real-world medical imaging tasks spanning three imaging modalities: ophthalmology, radiology, and dermatology. Notably, PaCL outperforms previous techniques across all six tasks.


Subject(s)
Metadata, Radiology, Humans, Early Diagnosis, Health Facilities
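The sketch below illustrates, under stated assumptions, how patient metadata could refine a contrastive objective (an inter-class separability term weighted by metadata affinity) while a diversity term discourages same-class embeddings from collapsing onto a single point. The function `pacl_style_loss` and both loss forms are hypothetical stand-ins rather than the exact IeSO/IaDO objectives.

```python
# Rough sketch of a metadata-aware contrastive loss with a diversity regularizer.
# The specific loss forms are assumptions for illustration, not the PaCL objectives.
import torch
import torch.nn.functional as F

def pacl_style_loss(z, labels, meta, temperature=0.1, diversity_weight=0.1):
    """z: (N, D) embeddings, labels: (N,) class ids, meta: (N, M) clinical metadata."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                        # pairwise similarities
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)

    # Separability-style term: positives are same-class pairs, weighted by metadata
    # affinity, so clinically similar patients are pulled together more strongly.
    meta_affinity = F.normalize(meta.float(), dim=1) @ F.normalize(meta.float(), dim=1).t()
    pos_mask = same_class & ~eye
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    weights = (meta_affinity * pos_mask).clamp(min=0)
    sep_loss = -(weights * log_prob).sum() / weights.sum().clamp(min=1e-8)

    # Diversity-style term: penalize same-class embeddings collapsing onto one point.
    intra_sim = (z @ z.t()).masked_fill(~pos_mask, 0.0)
    div_loss = intra_sim.sum() / pos_mask.sum().clamp(min=1)

    return sep_loss + diversity_weight * div_loss

# Usage with random stand-in data
z = torch.randn(16, 64)
labels = torch.randint(0, 2, (16,))
meta = torch.randn(16, 5)
print(pacl_style_loss(z, labels, meta))
```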